[rhn-users] Ulimit and cron scheduler - too many open files error

Doerbeck, Christoph Christoph.Doerbeck at FMR.COM
Tue May 24 17:48:45 UTC 2005


That looks OK.  Have you rebooted since these settings were put in
place, or at least shut cron down and restarted it?  Have you tried
creating a cron job that simply writes the ulimits to an output file for
testing?
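
Something along these lines would do it (a quick sketch; the output path
is just an example):

    # temporary test entry, added with "crontab -e"
    * * * * * ulimit -a > /tmp/cron-limits.txt 2>&1

If the "open files" value in /tmp/cron-limits.txt still reads 1024, then
crond is not seeing your limits.conf changes at all.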
 
My last hints might be:
 
    /etc/pam.d/login needs a "required pam_limits.so" entry (I usually
place it in front of the pam_console line)
and
    /etc/ssh/sshd_config needs "UsePrivilegeSeparation no" set
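
For reference, those lines would look roughly like this (placement and
exact syntax from memory, so double-check against your own files):

    # /etc/pam.d/login -- before the pam_console line
    session    required     pam_limits.so

    # /etc/ssh/sshd_config
    UsePrivilegeSeparation no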
 
But I'm not sure how this would affect crond... I'll have to ponder it
for a bit.



Christoph  

-----Original Message-----
From: Serge Bianda [mailto:serge.bianda at appiancorp.com] 
Sent: Tuesday, May 24, 2005 12:17 PM
To: Red Hat Network Users List
Cc: Doerbeck, Christoph
Subject: RE: [rhn-users] Ulimit and cron scheduler - too many open files error



	# /etc/security/limits.conf
	#
	#Each line describes a limit for a user in the form:
	#
	#<domain>        <type>  <item>  <value>
	#
	#Where:
	#<domain> can be:
	#        - a user name
	#        - a group name, with @group syntax
	#        - the wildcard *, for default entry
	#
	#<type> can have the two values:
	#        - "soft" for enforcing the soft limits
	#        - "hard" for enforcing hard limits
	#
	#<item> can be one of the following:
	#        - core - limits the core file size (KB)
	#        - data - max data size (KB)
	#        - fsize - maximum filesize (KB)
	#        - memlock - max locked-in-memory address space (KB)
	#        - nofile - max number of open files
	#        - rss - max resident set size (KB)
	#        - stack - max stack size (KB)
	#        - cpu - max CPU time (MIN)
	#        - nproc - max number of processes
	#        - as - address space limit
	#        - maxlogins - max number of logins for this user
	#        - priority - the priority to run user process with
	#        - locks - max number of file locks the user can hold
	#
	#<domain>      <type>  <item>         <value>
	#

	*               soft    nofile           63536
	*               hard    nofile           63536
	testuser1       hard    nofile           63536
	testuser1       soft    nofile           63536
	# End of file
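
	One quick way to confirm these values actually take effect for a
	login session (a sketch; this assumes pam_limits is enabled for
	su -- otherwise log in as the user directly):

	    # run as root; should print 63536 if the limits are applied
	    su - testuser1 -c 'ulimit -n'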

	 

	


	From: rhn-users-bounces at redhat.com [mailto:rhn-users-bounces at redhat.com] On Behalf Of Doerbeck, Christoph
	Sent: Tuesday, May 24, 2005 10:38 AM
	To: Red Hat Network Users List
	Subject: RE: [rhn-users] Ulimit and cron scheduler - too many open files error

	 

	please post your limits.conf

	Christoph

	 

	-----Original Message-----
	From: Serge Bianda [mailto:serge.bianda at appiancorp.com] 
	Sent: Tuesday, May 24, 2005 10:28 AM
	To: rhn-users at redhat.com
	Subject: [rhn-users] Ulimit and cron scheduler - too many open files error

		 

		We are trying to run a script via the cron scheduler. For the
script to run, we had to increase the number of open file descriptors
from 1024 to 63000 using "ulimit -n", so we raised the file descriptor
value in /etc/security/limits.conf. But the cron scheduler does not read
the new, increased value and always fails with a "too many open files"
error. We tried to increase the value inside the script that the cron
job runs, but cron does not appear to pick up the new value even when it
is set there. Are you aware of any global setting for the cron scheduler
that will force it to allow more open file descriptors? Note that when
the script is run from the command line, outside the cron scheduler, it
works fine and does not report any "too many open files" errors.
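
		One workaround we have been considering (a sketch only; the
exact init script layout may differ on your release): a ulimit call
inside the job can only raise the soft limit up to the hard limit the
job inherited from crond, and crond itself never goes through a PAM
login, so raising the daemon's own limit before it starts and then
restarting it might help:

		    # added near the top of start() in /etc/init.d/crond (sketch)
		    ulimit -n 63536

		    # then restart crond so new jobs inherit the higher limit
		    service crond restart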
