[Pulp-list] Content server Performance

Daniel Alley dalley at redhat.com
Mon Jun 28 13:39:45 UTC 2021


Sorry Bin, this ended up in my spam folder somehow, so I missed your update
until just now.

Realistically, it's probably getting bottlenecked on the database.  You can
definitely try increasing the workers further (beyond 50), but I'm not sure
how much that will help.  A lot of the improvements in 3.14 are oriented
around reducing the load on the database, so upgrading should help quite a bit.
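
If you want to confirm that before adding more workers, you can break the
`pulp` user's Postgres sessions down by state with something like this
(assuming you can run psql on the database host; the query itself is
standard Postgres):

```
# Many 'active' or 'idle in transaction' sessions suggest the database,
# not the content workers, is the bottleneck.
psql -c "SELECT state, count(*)
         FROM pg_stat_activity
         WHERE usename = 'pulp'
         GROUP BY state
         ORDER BY count(*) DESC;"
```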

On Tue, Jun 22, 2021 at 12:35 PM Bin Li (BLOOMBERG/ 120 PARK) <
bli111 at bloomberg.net> wrote:

> We will look into upgrading from 3.7.3 to 3.14.
> For now, I have updated the number of workers a few times; we currently
> have 50 workers running. I no longer see the timed-out messages, but the
> TIME_WAIT count is still around 5k.
>
> # netstat -an | grep -i TIME_WAIT |grep 24816 | wc -l
> 5473
>
> I also noticed that the database connection count is over 60.
> => select count(*) from pg_stat_activity where usename = 'pulp';
> count
> -------
> 63
> (1 row)
>
> Should I keep adding workers until the queue comes down? We still have
> plenty of CPU and memory on the host.
>
>
> From: bmbouter at redhat.com At: 06/22/21 12:01:30 UTC-4:00
> To: danny.sauer at konghq.com
> Cc: Bin Li (BLOOMBERG/ 120 PARK ) <bli111 at bloomberg.net>,
> pulp-list at redhat.com
> Subject: Re: [Pulp-list] Content server Performance
>
>
>
> On Tue, Jun 22, 2021 at 11:56 AM Danny Sauer <danny.sauer at konghq.com>
> wrote:
>
>> You can certainly run multiple instances of the content server.  It just
>> needs a connection to the database and access to the storage.
>>
> Agreed, you could deploy additional content servers and have your
> nginx/apache load balance them.
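
For illustration, the nginx side of that could be as small as something like
this (the second port and the names here are placeholders, not values from
your deployment; both instances need the same database settings and storage):

```
upstream pulp_content {
    server 127.0.0.1:24816;
    server 127.0.0.1:24817;
}

server {
    location /pulp/content/ {
        proxy_pass http://pulp_content;
    }
}
```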
>
>
>> Have you tuned the number of worker processes in Gunicorn?  It defaults
>> to 1, but should almost certainly be increased for any sort of volume.
>> https://docs.gunicorn.org/en/stable/settings.html#worker-processes
>>
> Pulp changed the default gunicorn worker processes to 8 maybe a release or
> two ago. See the `pulp_content_workers` variable in the installer here
> https://pulp-installer.readthedocs.io/en/latest/roles/pulp_content/#role-variables
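
If you are not using the installer, you can also raise the gunicorn worker
count through a systemd drop-in, similar to the LimitNOFILE override
mentioned later in this thread. `GUNICORN_CMD_ARGS` is a standard gunicorn
setting, but whether the unit picks it up depends on how ExecStart is
written in your deployment, so treat this as a sketch:

```
# /etc/systemd/system/pulpcore-content.service.d/override.conf
[Service]
Environment="GUNICORN_CMD_ARGS=--workers=8"
```

followed by `systemctl daemon-reload` and a restart of pulpcore-content.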
>
>>
>> There are several moving pieces, but that's really all I had to touch
>> here.
>>
>> --Danny
>>
> With pulpcore==3.14 there is a significant performance improvement being
> reviewed now: https://pulp.plan.io/issues/8805. In addition to resolving
> it with methods like the above, when 3.14 comes out (scheduled for June
> 29th) it would be great if you could report on whether the improvements
> helped you.
>
>>
>> On Tue, Jun 22, 2021 at 10:34 AM Bin Li (BLOOMBERG/ 120 PARK) <
>> bli111 at bloomberg.net> wrote:
>>
>>> We recently added more clients that use the pulp content server. The
>>> processes ran out of file descriptors first. We then increased the
>>> limits for both nginx and pulp-content by creating an override.conf:
>>> /etc/systemd/system/pulpcore-content.service.d # cat override.conf
>>> [Service]
>>> LimitNOFILE=65536
>>>
>>> and updated nginx.conf
>>> # Gunicorn docs suggest this value.
>>> worker_processes 1;
>>> events {
>>>     worker_connections 10000;  # increase if you have lots of clients
>>>     accept_mutex off;  # set to 'on' if nginx worker_processes > 1
>>> }
>>>
>>> worker_rlimit_nofile 20000;
>>>
>>>
>>> Now we keep getting this error:
>>> 2021/06/22 11:26:36 [error] 78373#0: *112823 upstream timed out (110:
>>> Connection timed out) while connecting to upstream, client:
>>>
>>> It looks like the pulp-content server cannot keep up with the requests.
>>> Is there anything we could do to increase the performance of the content
>>> server?
>>> _______________________________________________
>>> Pulp-list mailing list
>>> Pulp-list at redhat.com
>>> https://listman.redhat.com/mailman/listinfo/pulp-list
>>
>
>

