
[Pulp-list] Celery memory usage during RPM repo copy


I have a local RPM repository that syncs from an internet mirror of the CentOS 7.3 updates repository. When I try to create a local copy of this mirror, the celery processes balloon, using all available memory until the job eventually fails as follows:

$ pulp-admin rpm repo copy all --from-repo-id centos-7-updates-x86_64-live --to-repo-id centos-7-updates-x86_64-snapshots-20170216a
This command may be exited via ctrl+c without affecting the request.

An internal error occurred on the Pulp server:

RequestException: GET request on
/pulp/api/v2/tasks/776894cc-b524-4f2f-bd30-c8c4d8237c31/ failed with 500 -
[Errno 12] Cannot allocate memory

As often as not, this renders my Pulp VM inaccessible, requiring a hard reset before I can get back in. This is Pulp 2.12.0 on CentOS 7.2.1511. I presume this is the same issue described in https://pulp.plan.io/issues/1779, but given that 10 months have passed since that ticket was last updated, I figured I'd ask here.

I've tried setting PULP_MAX_TASKS_PER_CHILD=2 in /etc/default/pulp_workers, but it doesn't seem to make any difference.
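For reference, this is the relevant fragment of my /etc/default/pulp_workers. The PULP_CONCURRENCY line is only another knob I'm aware of, not something I've tested; the value shown is a guess at what might cap peak memory:

```shell
# /etc/default/pulp_workers -- fragment showing the settings in question.

# Recycle each Celery worker after this many tasks so any leaked memory
# is returned to the OS. Requires restarting pulp_workers to take effect.
PULP_MAX_TASKS_PER_CHILD=2

# Untested guess: fewer concurrent workers should at least bound how
# much memory is resident at once during a large repo copy.
PULP_CONCURRENCY=2
```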

Is there a fix or workaround for this problem, or even a means to estimate how much memory a server needs to be able to copy a repo of a given size? This VM has 8GB, and I could add more, but it seems faintly ridiculous that it should be required.


Richard Gray | Operations Technical Lead
DDI: +64 9 950 2196 Fax: +64 9 302 0518
Mobile: +64 21 050 8178 Freephone: 0800 SMX SMX (769 769)
SMX Limited: Level 15, 19 Victoria Street West, Auckland, New Zealand
Web: http://smxemail.com
            Cloud Email Hosting & Security

