[rhn-users] Ulimit and cron scheduler - too many open files error
Serge Bianda
serge.bianda at appiancorp.com
Tue May 24 16:17:11 UTC 2005
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit
# - maxlogins - max number of logins for this user
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
#
#<domain> <type> <item> <value>
#
* soft nofile 63536
* hard nofile 63536
testuser1 hard nofile 63536
testuser1 soft nofile 63536
# End of file
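One thing worth checking alongside this file: /etc/security/limits.conf is enforced by PAM's pam_limits module, so a daemon such as crond only honors these values if its PAM stack loads that module. A hedged sketch of how to check and enable it (file paths vary by distribution; /etc/pam.d/crond is typical on Red Hat systems):

```shell
# limits.conf is applied by the pam_limits PAM module, not read
# directly by crond. Check whether crond's PAM stack loads it:
grep pam_limits.so /etc/pam.d/crond

# If nothing matches, adding this line to /etc/pam.d/crond (as root)
# makes crond apply limits.conf to the jobs it spawns:
#     session    required     pam_limits.so

# Restart cron afterwards so the change takes effect:
service crond restart
```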
________________________________
From: rhn-users-bounces at redhat.com [mailto:rhn-users-bounces at redhat.com]
On Behalf Of Doerbeck, Christoph
Sent: Tuesday, May 24, 2005 10:38 AM
To: Red Hat Network Users List
Subject: RE: [rhn-users] Ulimit and cron scheduler - too many open files error
please post your limits.conf
Christoph
-----Original Message-----
From: Serge Bianda [mailto:serge.bianda at appiancorp.com]
Sent: Tuesday, May 24, 2005 10:28 AM
To: rhn-users at redhat.com
Subject: [rhn-users] Ulimit and cron scheduler - too many open files error
We are trying to run a script using the cron scheduler. For the
script to run, we had to increase the number of open file descriptors
from 1024 to 63000 using "ulimit -n". We raised the file descriptor
limit in /etc/security/limits.conf, but the cron scheduler does not
read the increased value and always fails with a "too many open files"
error. We also tried raising the limit inside the scheduled script
itself, but cron does not appear to pick up the new value even when it
is set there. Are you aware of any global setting for the cron
scheduler that would give it more open file descriptors? Note that when
the script is run from the command line rather than via cron, it works
fine and does not report "too many open files".
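One workaround worth trying, sketched below: a shell started by cron inherits crond's limits, but a ulimit set inside the job's own command line propagates to the job's children, so the limit can be adjusted in the crontab entry itself (only up to whatever hard limit crond runs with). The crontab line and script path shown are hypothetical placeholders:

```shell
# A limit set by a parent shell is inherited by its children; here the
# inner shell reports the soft nofile limit the outer shell just set:
sh -c 'ulimit -S -n 512; sh -c "ulimit -S -n"'   # prints 512

# Hypothetical crontab entry applying the same idea (the script path is
# a placeholder). Raising to 63000 succeeds only if crond's hard limit
# already allows it:
# 0 2 * * * /bin/sh -c 'ulimit -n 63000 && /path/to/script.sh'
```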