[Pulp-list] Tuning Pulpcore Worker Count in Katello

Brian Bouterse bmbouter at redhat.com
Mon Jun 15 19:10:56 UTC 2020


On Mon, Jun 15, 2020 at 2:27 PM Eric Helms <ehelms at redhat.com> wrote:

> Are Pulp 3 workers running in a threaded manner? There is a related
> concern, more acute for Katello installations, around the number of
> database connections needed to prevent starvation and to ensure PostgreSQL
> is tuned correctly to handle the load. For Foreman/Katello this means we
> need to count connections across Foreman, Foreman's task handler,
> Candlepin, and Pulp.
>
Generally there are no threads, but there are caveats. First, I don't know
of any Pulp code that uses threads, though it's possible future code would;
I know of no current plans. Second, aiofiles (a dependency of Pulp) does use
threads to move files around, but those threads should not make db
connections. Third, plugins can ship their own tasks that can be run, so
this isn't entirely in our control.
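
For the curious, here is a minimal sketch of that aiofiles behavior (the
file paths are illustrative): each call is awaitable from the event loop,
but under the hood aiofiles runs the blocking I/O in a thread pool, and
none of those threads touch the database.

    import asyncio
    import aiofiles

    async def copy_file(src, dst):
        # The reads/writes below are awaited on the event loop, but
        # aiofiles dispatches the actual blocking syscalls to worker
        # threads. No db connections are made by these threads.
        async with aiofiles.open(src, "rb") as fin:
            data = await fin.read()
        async with aiofiles.open(dst, "wb") as fout:
            await fout.write(data)

    asyncio.run(copy_file("/tmp/example-src.bin", "/tmp/example-dst.bin"))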



>
> If you can also speak to the requirements for API and Content app that
> would be helpful as well rounding out the information.
>
Sure. The Content App and the API are not expected to load system resources
significantly, so for those it's mainly about how much capacity you want.
For example, if you want to serve binary data to lots of clients yourself
with the content app (i.e. when not using the S3 or Azure storage backends
Pulp supports), then you'll want "more" content app workers. For the API,
capacity is about API operations per second: the more workers, the more
concurrent API operations per second you can do. Generally I expect you'll
want a small number of API workers and a variable (potentially large)
number of content app workers. You'll also need to make sure your reverse
proxy configs can handle the desired connection throughput.
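
To make the capacity math concrete, here is a rough back-of-envelope
sketch. The per-worker throughput numbers are assumptions for illustration
only, not measured Pulp figures; you'd replace them with numbers observed
on your own hardware.

    # Assumed (not benchmarked) per-worker capacities:
    API_OPS_PER_WORKER = 20          # API requests/sec one worker handles
    CLIENTS_PER_CONTENT_WORKER = 50  # concurrent downloads per worker

    def workers_needed(target, per_worker):
        # Ceiling division: round up so capacity meets the target.
        return -(-target // per_worker)

    print(workers_needed(100, API_OPS_PER_WORKER))           # API workers
    print(workers_needed(2000, CLIENTS_PER_CONTENT_WORKER))  # content workers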

Workers in these cases are typically gunicorn workers in a single gunicorn
process, but to really scale a system we'll need both more gunicorn workers
per process to vertically scale on a single node, and more gunicorn
processes themselves (each with N gunicorn workers) to horizontally scale.
Both vertical and horizontal scaling can be deployed manually today. The
installer can currently scale the number of content and API gunicorn
workers per gunicorn process vertically, but we cannot yet perform
clustered installations for horizontal scaling well. We are actually
working on that clustered install capability now.
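
As a sketch of what vertical scaling looks like today, something like the
following starts both services with more gunicorn workers per process.
Treat the module paths, ports, and worker counts as assumptions to verify
against your own installation's systemd units before relying on them.

    import subprocess

    # API service (WSGI): a small number of sync workers.
    subprocess.Popen([
        "gunicorn", "pulpcore.app.wsgi:application",
        "--bind", "127.0.0.1:24817",
        "--workers", "4",
    ])

    # Content app (aiohttp): scale with your expected client count.
    subprocess.Popen([
        "gunicorn", "pulpcore.content:server",
        "--bind", "127.0.0.1:24816",
        "--worker-class", "aiohttp.GunicornWebWorker",
        "--workers", "12",
    ])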

More questions are welcome; this topic is important and can be a bit
complicated given our lack of docs. :/

>
> On Mon, Jun 15, 2020 at 2:14 PM Brian Bouterse <bmbouter at redhat.com>
> wrote:
>
>>
>>
>> On Mon, Jun 15, 2020 at 1:09 PM William Clark <wclark at redhat.com> wrote:
>>
>>> Hello Pulp Community!
>>>
>>> I'm working on a feature to allow the foreman-installer to set the
>>> number of Pulpcore workers deployed on Katello or Content Proxy, and I
>>> require some assistance from the Pulp community in setting sane defaults
>>> and limits.
>>>
>>> With Pulp 2 in Katello, the default behavior was that the worker count
>>> would match the number of logical CPUs up to a soft limit of 8. We advised
>>> that users could tune the worker count higher but it was expected to cause
>>> performance degradation in most cases due to I/O blocking. The largest
>>> scale installation to my knowledge uses a Pulp 2 worker count of 16.
>>>
>> I believe having one pulpcore-worker per CPU is still a good practice. We
>> haven't gotten a lot of feedback on right-sizing an installation, so I
>> won't claim it's the absolute best practice, but it's what I recommend
>> currently. We have an issue to document sizing recommendations, which also
>> has some similar/more info: https://pulp.plan.io/issues/6856
>>
>> The I/O blocking concern is roughly the same as in Pulp 2; during sync
>> operations the workload could be I/O bound.
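
To restate that sizing heuristic in code form (a starting point, not a
hard rule; the Pulp 2 soft cap is shown for comparison):

    import os

    cpus = os.cpu_count() or 1

    # Pulp 2 default: match logical CPUs, soft-capped at 8.
    pulp2_workers = min(cpus, 8)

    # Pulp 3 starting point: one pulpcore-worker per CPU.
    pulp3_workers = cpus

    print(pulp2_workers, pulp3_workers)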
>>
>> A lot more CPU processing has been moved into postgresql, which forks a
>> postgresql process per client connection; in this case that is a 1:1
>> pairing with each pulpcore-worker. So when it's under heavy load I expect
>> postgresql to scale out processes, and the workload could become
>> constrained on the postgresql CPU itself. In that case, lowering the
>> worker count to half of the available processors would likely improve
>> throughput. Moving postgresql to another, dedicated box is also an option.
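
Since Eric asked about connection counts, here is a hedged sketch of that
budgeting math on a combined Foreman/Katello box. Every per-service number
is a placeholder to fill in from your own pool configs; the only rule taken
from this thread is the 1:1 postgresql-connection-per-pulpcore-worker
pairing.

    # Placeholder pool sizes; substitute your real Foreman, Dynflow,
    # and Candlepin settings.
    pulpcore_workers = 8
    connections = {
        "foreman": 30,                         # assumed app pool size
        "foreman_tasks": 10,                   # assumed Dynflow pool size
        "candlepin": 20,                       # assumed Candlepin pool size
        "pulpcore_workers": pulpcore_workers,  # 1:1 with workers
        "pulp_api_and_content": 10,            # assumed gunicorn workers total
    }

    needed = sum(connections.values())
    max_connections = 100  # postgresql.conf default
    if needed >= max_connections:
        print(f"raise max_connections above {needed}")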
>>
>> If you'd be willing to share your findings with the Pulp community that
>> would be really great.
>>
>>
>>> With Pulp 2 being replaced and rebuilt as Pulpcore, I'm looking to
>>> understand the tuning best practices for the new technology so that we
>>> can apply them to Katello.
>>>
>> Please let us know what other questions we can help answer.
>>
>>
>>> I am looking forward to hearing from you,
>>>
>> Thank you for reaching out.
>>
>>
>>> --
>>>
>>> William Clark, RHCA
>>>
>>> He/Him/His
>>>
>>> Software Engineer
>>>
>>> Red Hat <https://www.redhat.com>
>>>
>>> IM: wclark
>>> <https://www.redhat.com>
>>> _______________________________________________
>>> Pulp-list mailing list
>>> Pulp-list at redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-list
>>
>> _______________________________________________
>> Pulp-list mailing list
>> Pulp-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-list
>
>
>
> --
> Eric Helms
> Principal Software Engineer
> Satellite and Cloud Services
>