[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[Bug 459700] New: With MaxJobs=0, PreserveJobFiles=On cupsd uses 100% CPU

Please do not reply directly to this email. All additional
comments should be made in the comments box of this bug.

Summary: With MaxJobs=0, PreserveJobFiles=On cupsd uses 100% CPU


           Summary: With MaxJobs=0, PreserveJobFiles=On cupsd uses 100% CPU
           Product: Red Hat Enterprise Linux 5
           Version: 5.1
          Platform: i386
        OS/Version: Linux
            Status: NEW
          Severity: high
          Priority: high
         Component: cups
        AssignedTo: twaugh redhat com
        ReportedBy: vgaikwad redhat com
                CC: fedora-triage-list redhat com
        Depends on: 421671
   Estimated Hours: 0.0
    Classification: Red Hat

+++ This bug was initially created as a clone of Bug #421671 +++

Description of problem:
On our production system we use a dedicated server with two dual-core Xeons and
2 GB of RAM for printing (i.e. the server runs no service except CUPS).
In cupsd.conf we have:
   MaxJobs 0
   PreserveJobFiles On
and a daily cron script that purges jobs older than 14 days.
On average we have 17,000 jobs in history.
Using the web interface, if we click "Show all jobs" the cupsd process will hog 
100% of the CPU (1 core out of four) for 260 seconds (~4.5 minutes)!
During this time clients submitting print jobs (via lp) hang (the lp command 
doesn't fail), print jobs are not sent to idle/ready printers.
The same behavior is exhibited with commands such as:
   lpstat -Wall -u
   lpstat -Wall -o MyPrinter
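
For reference, the daily purge described above might look something like this
(a hypothetical sketch; the actual cron script is not included in this report,
and the paths assume the standard CUPS spool layout):

```shell
#!/bin/sh
# Hypothetical sketch of a daily purge cron job: delete CUPS job
# control (c*) and data (d*) files older than N days from the spool.
# After purging, cupsd should be told to reload so it drops the purged
# jobs from its history, e.g.:
#   purge_old_jobs /var/spool/cups 14 && service cups reload
purge_old_jobs() {
    spool="$1"
    days="$2"
    find "$spool" -maxdepth 1 -type f \
        \( -name 'c*' -o -name 'd*' \) -mtime +"$days" -delete
}
```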

Opher Shachar,

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

--- Additional comment from twaugh redhat com on 2007-12-14 11:42:06 EDT ---

I have not yet been able to reproduce the problem here.

There is a large delay when fetching job information with which-jobs=all,
because CUPS must load the completed jobs from disk each time this operation is
performed.  However, during this delay cupsd is not processor-bound but
disc-bound (i.e. usually state 'D' in the output of ps, not 'R').

May I ask what MIME file types your print jobs are?  For instance, what output
does this command give, when run as root?:

ls -1 /var/spool/cups/d* | head -n1 | xargs file

The reason I ask is that CUPS seems to perform automatic file-type detection in
some cases when it loads the completed jobs, and I wonder if that's the case
here.
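
To survey the detected types across the whole spool, rather than only the first
job file, something like the following could be used (a sketch, assuming GNU
findutils and file(1) are available):

```shell
# Count the distinct file types among all job data files (d*) in a
# spool directory, most common first.
job_type_summary() {
    find "$1" -maxdepth 1 -type f -name 'd*' -exec file -b {} + \
        | sort | uniq -c | sort -rn
}

# e.g.: job_type_summary /var/spool/cups
```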

--- Additional comment from ophers ladpc co il on 2007-12-16 05:21:53 EDT ---

The vast majority of print files are:
prd:/var/spool/cups# find . -name "d*" | sort | head -n2 | xargs file
./d96486-001: ISO-8859 text
./d96487-001: ISO-8859 text, with escape sequences
If it's relevant, some of the files are large (7 MB+).

    ls -1 d* | head -n1 | xargs file 
    d96486-001:   ERROR: cannot open `d96486-001' (No such file or directory)

We've tried lowering the number of files by deleting files older than 7, then 
6, 5... days from /var/spool/cups, running after each pass:
   service cups reload
We noticed the time the CPU spent at 100% decrease from 130 seconds for 9600 
jobs to 17 seconds for 1309 jobs.
Also, as we lowered the number of files, cupsd was sending ready print jobs to 
printers (top showed backend entries) but new job submissions - using lp - 
still hung while cupsd was at 100%.

We set MaxJobs=1000 in cupsd.conf and restarted cups (after work hours)
   service cups restart
the response to lpstat -Wall -o was now instantaneous (I had expected a 4 
second delay). It has been running for 2.5 days now (with MaxJobs=1000) and the 
problem has not reappeared.
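
For reference, the workaround amounts to capping the retained history in
/etc/cups/cupsd.conf (a sketch of the relevant directives only):

```
# /etc/cups/cupsd.conf
MaxJobs 1000          # cap the retained job history (0 = unlimited)
PreserveJobFiles On   # keep job data files after completion
```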

Obviously we can't have our production system lagging, but neither is limiting 
the history to 1000 jobs a great option. So we're still in need of a solution.

Opher Shachar,

--- Additional comment from twaugh redhat com on 2007-12-17 09:51:16 EDT ---

Changing version to 7.  Fedora Core 6 has reached its end-of-life; however this
problem remains in (at least) Fedora 7.

--- Additional comment from fedora-triage-list redhat com on 2008-05-14
11:09:27 EDT ---

This message is a reminder that Fedora 7 is nearing the end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining and
issuing updates for Fedora 7. It is Fedora's policy to close all bug reports
from releases that are no longer maintained. At that time this bug will be
closed as WONTFIX if it remains open with a Fedora 'version' of '7'.

Package Maintainer: If you wish for this bug to remain open because you plan to
fix it in a currently maintained version, simply change the 'version' to a
later Fedora version prior to Fedora 7's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may
not be able to fix it before Fedora 7 is end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version of
Fedora please change the 'version' of this bug. If you are unable to change the
version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a more recent
Fedora release includes newer upstream software that fixes bugs or makes them
obsolete. If possible, it is recommended that you try the newest available
Fedora distribution to see if your bug still exists.

Please read the Release Notes for the newest Fedora distribution to make sure
it will meet your needs:

The process we are following is described here:

--- Additional comment from twaugh redhat com on 2008-05-14 11:25:23 EDT ---

Changing version to '8'.

Configure bugmail: https://bugzilla.redhat.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.
