[Pulp-list] Celery memory usage during RPM repo copy

Michael Hrivnak mhrivnak at redhat.com
Sun Feb 19 19:43:41 UTC 2017


Jiri,

There's no particularly good reason. It would just increase the complexity
of that command on the client side by a fair amount. But you're right that
it would be an improvement for most users.

Right now the "copy all" command makes one HTTP request, gets back a task
ID, and then runs the standard task polling routine until the task
finishes. Simple stuff.
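That submit-then-poll flow can be sketched roughly as below. This is an illustrative sketch, not Pulp's actual client code: the function names, task states, and the simulated server are all assumptions made for the example.

```python
import time

# Illustrative terminal states; Pulp's real task states may differ.
TERMINAL_STATES = {"finished", "error", "canceled"}

def poll_task(get_task_state, task_id, interval=0.0, max_polls=100):
    """Poll a task until it reaches a terminal state; return that state.

    get_task_state is a callable standing in for the HTTP call that
    fetches the task's current state from the server.
    """
    for _ in range(max_polls):
        state = get_task_state(task_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval)
    raise TimeoutError("task %s did not reach a terminal state" % task_id)

# Simulated server responses: the task finishes on the fourth poll.
states = iter(["waiting", "running", "running", "finished"])
result = poll_task(lambda tid: next(states), "task-123")
```

The "copy all" command does essentially this once: one request, one task ID, one polling loop.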

The other option would be to queue one task for every content type, and
then track progress on each separately until they're all done. It would
need to use the same list of fields as the individual type commands, so
there would be a bit of shared code, and that would add to the complexity
of implementation.
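The fan-out alternative might look something like this sketch. The content-type list and the `queue_copy` helper are hypothetical stand-ins for the per-type copy requests; they are not Pulp's actual API.

```python
import uuid

# Illustrative RPM content types; the real list comes from the plugin.
CONTENT_TYPES = ["rpm", "srpm", "drpm", "erratum",
                 "package_group", "package_category", "distribution"]

def queue_copy(content_type):
    # Hypothetical: submit one copy task for this content type and
    # return the task ID the server hands back.
    return "%s-%s" % (content_type, uuid.uuid4().hex[:8])

def copy_all():
    # Queue one task per content type; the client would then track
    # progress on each task separately until all are done.
    return {ctype: queue_copy(ctype) for ctype in CONTENT_TYPES}

tasks = copy_all()
```

The extra complexity is on the tracking side: instead of one polling loop, the client juggles several tasks, each with its own progress fields and its own chance to fail.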

There may also be some users who want to ensure that all modification to
the destination repo happens in one task, which either fails or succeeds,
rather than needing to track success and failure of several tasks. That's
probably a minority use case though.

So your idea is probably a good one. We just haven't been able to get that
area high enough on the priority list to do this kind of refinement. If
anyone is interested in making a PR though, we would welcome that. ;)

Michael

On Fri, Feb 17, 2017 at 11:35 AM, Jiri Tyr <jiri.tyr at gmail.com> wrote:

> Thanks for the blog post, Michael. I have just one question: why doesn't
> "rpm repo copy all" walk through all the available content types to
> keep the program from consuming all memory and failing?
>
> On Fri, Feb 17, 2017 at 4:25 PM, Michael Hrivnak <mhrivnak at redhat.com>
> wrote:
>
>> Since this is a relatively common issue, I decided to respond via blog
>> post:
>>
>> http://pulpproject.org/2017/02/17/why-does-copy-use-lots-of-memory/
>>
>>

