[Pulp-list] artifact creation using chunk upload of big package
daviddavis at redhat.com
Fri Jul 5 18:41:27 UTC 2019
You'll want to edit /usr/lib/systemd/system/pulp-api.service and set the
timeout option. This worked for me:
ExecStart=/usr/local/lib/pulp/bin/gunicorn pulpcore.app.wsgi:application \
--bind 'localhost:24817' \
--access-logfile - \
Make sure you run 'systemctl daemon-reload'.
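The ExecStart snippet above is cut off in the archive; a complete version with gunicorn's --timeout flag might look like the following (the 300-second value is illustrative, not from the thread):

```ini
# /usr/lib/systemd/system/pulp-api.service (excerpt)
ExecStart=/usr/local/lib/pulp/bin/gunicorn pulpcore.app.wsgi:application \
          --bind 'localhost:24817' \
          --access-logfile - \
          --timeout 300
```

After editing, run 'systemctl daemon-reload' and restart the service so gunicorn picks up the new worker timeout.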
On Fri, Jul 5, 2019 at 12:12 PM Juan Cabrera <juan.cabrera at unamur.be> wrote:
> Hi David,
> On 5/07/19 16:27, David Davis wrote:
> I tested and confirmed that the files are not being deleted. Mind opening
> a bug for that?
> I created an issue https://pulp.plan.io/issues/5092
> For the failed chunk uploads, httpie is timing out. You can set a higher
> timeout (like you did) or use smaller chunks.
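Smaller chunks can be produced with coreutils split before uploading. A minimal sketch (the package name is a placeholder, and a tiny dummy file with 4-byte chunks stands in for a real package with e.g. 50MB chunks):

```shell
# Illustrative only: split a package into fixed-size chunks for upload.
# 'bigpackage.rpm' is a placeholder; the dummy content lets the sketch run.
printf '0123456789' > bigpackage.rpm
split -b 4 bigpackage.rpm chunk_   # a real upload would use something like -b 50M
ls chunk_*
```

Each resulting chunk_* file can then be uploaded individually, keeping every request well under the worker timeout.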
> In my case, uploading the 300MB chunks works fine. It's the artifact
> creation that fails.
> I looked into why artifact creation is failing for files < 2GB. The reason
> is that it's taking too long to calculate the checksums. There are 6
> checksum types and each one takes about 4-8 seconds from the command line
> in my test environment. Calculating the digests in Python seems to add
> about 1-2 seconds. The default timeout in gunicorn is 30 seconds:
> Jul 05 14:21:56 pulp3 gunicorn: [2019-07-05 14:21:56 +0000] [CRITICAL] WORKER TIMEOUT (pid:29843)
> Jul 05 14:21:57 pulp3 gunicorn: [2019-07-05 14:21:57 +0000] [INFO] Booting worker with pid: 30031
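Since the six digests are the bottleneck, one mitigation is to read the file once and feed every hasher from the same pass, rather than hashing the file six times. A sketch using Python's hashlib (the function name and chunk size are my own, not from Pulp):

```python
import hashlib

# The six digest types mentioned in the thread.
DIGESTS = ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")

def all_digests(path, chunk_size=1024 * 1024):
    """Compute all six digests in a single pass over the file."""
    hashers = {name: hashlib.new(name) for name in DIGESTS}
    with open(path, "rb") as f:
        # Read the file once; update every hasher with each chunk.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}
```

This avoids repeated disk reads, though the per-byte hashing cost itself remains, so a long-running file may still need a larger worker timeout.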
> I see the same error. I have a VM with 2 CPUs and 2 workers. Do I need to
> add more workers?
> You can raise this timeout, or you can pass in the checksums when
> creating the artifact. I think the best solution, though, might be to make
> artifact creation a background task.
>  http POST :24817/pulp/api/v3/artifacts/ upload=$UPLOAD sha256=abc...
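Computing the sha256 locally before the POST might look like the following sketch (the file name is a placeholder and a dummy file stands in for the real package; the POST needs a live Pulp API, so the command is only printed here):

```shell
# Placeholder file; a real run would hash the actual package.
FILE=bigpackage.rpm
printf 'example payload' > "$FILE"
# Take the first field of sha256sum's output (the hex digest).
SHA256=$(sha256sum "$FILE" | awk '{print $1}')
# $UPLOAD would hold the upload href returned by the chunked-upload API.
echo "http POST :24817/pulp/api/v3/artifacts/ upload=\$UPLOAD sha256=$SHA256"
```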
> I passed the sha256 but still get a WORKER TIMEOUT. I suppose it still
> has to calculate md5, sha1, sha224, sha384, and sha512, which takes time.
> Where can I increase the WORKER TIMEOUT ?
> On Fri, Jul 5, 2019 at 9:08 AM Juan Cabrera <juan.cabrera at unamur.be>
>> Hi David
>> This morning I ran a test sequence and opened a ticket.
>> Yes, I see that the uploaded file is deleted when the artifact is created.
>> When I get the artifact creation error, the upload HREF is not deleted,
>> which could be normal since there was an error.
>> But when I clean all the server uploads using the API:
>> for u in $(http $PORT/pulp/api/v3/uploads/ | jq -r '.results[]._href'); do
>>   echo $u
>>   http DELETE "$PORT$u"
>> done
>> The files are not deleted.
>> On 5/07/19 12:44, David Davis wrote:
>> There is in fact a 2GB limit currently on artifact size. I consider this
>> a bug and I filed this issue:
>> The file in /var/lib/pulp/upload should be deleted once it's imported as
>> an artifact. I'm guessing that's not happening because the server is
>> throwing an error.
>> On Thu, Jul 4, 2019 at 12:31 PM Juan Cabrera <juan.cabrera at unamur.be>
>>> In my previous mail I forgot to say that I updated the Pulp version to
>>> pulp_source_dir: "git+https://firstname.lastname@example.org"
>>> app_label: "rpm"
>>> source_dir: "git+https://email@example.com"
>>> Pulp-list mailing list
>>> Pulp-list at redhat.com
>> Juan CABRERA
>> Correspondant informatique
>> Département de Mathématiques
>> T. 081724919
>> juan.cabrera at unamur.be
>> Université de Namur ASBL
>> Rue de Bruxelles 61 - 5000 Namur
>> Let’s respect the environment together.
>> Only print this message if necessary!