opinions on /etc/security/limits.conf

Russell Coker russell at coker.com.au
Fri Nov 25 00:59:12 UTC 2005


On Friday 25 November 2005 09:23, Roland McGrath <roland at redhat.com> wrote:
> I don't think it makes sense that this configuration file be the means of
> resetting limits to their boot-time defaults, since that is what you really
> want.  If what you want is to reset the base limits to those inherited from
> init, you should do that explicitly.

How would you do that?  As far as I am aware it's not possible to read the 
limits of another process, and even if it were possible it probably wouldn't 
be desirable to permit it in the SE Linux policy.
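
To illustrate the point: the only interface I know of is getrlimit(2), which 
reports the limits of the calling process itself.  A minimal sketch using 
Python's resource module (the specific limit queried here is just an example):

```python
import resource

# getrlimit(2) only reports the limits of the *calling* process;
# there is no equivalent call that takes another process's PID.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%d hard=%d" % (soft, hard))
```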

> i.e. have runuser or su, or SELinux 
> transition on exec, reset the limits to init's before applying whatever
> configuration you want.  What you propose will wind up with drift between
> the configuration file and the kernel defaults.  Some of the kernel default
> limits are based on the size of RAM and such, so a default value in the
> config file will never be right across the board.

The kernel defaults may be based on RAM etc., but my observation is that they 
are not going to help you.

Below is the output of "ulimit -a" on a rawhide system with 768M of RAM and no 
entries in limits.conf:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
max nice                        (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 12278
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
max rt priority                 (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 12278
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The only real limits are 1024 open files and 12278 processes.  The 1024 open 
files are per-process, and /proc/sys/fs/file-max on that machine defaults to 
76660.  So if a hostile user (or a buggy program) were to fork 77 processes 
that each use the maximum number of file handles, or the maximum number of 
processes that each use 7 file handles, then the system would run out of file 
handles.
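
The arithmetic behind that scenario, using the figures quoted above for that 
rawhide machine (the numbers are specific to that system, not universal 
defaults):

```python
# Per-process and system-wide file handle limits from the quoted system.
per_process_fds = 1024   # ulimit -n
max_processes = 12278    # ulimit -u
file_max = 76660         # /proc/sys/fs/file-max

# 77 processes each holding the per-process maximum:
print(77 * per_process_fds)             # 78848, which exceeds file-max
print(77 * per_process_fds > file_max)  # True

# Or the maximum number of processes each holding 7 handles:
print(max_processes * 7)                # 85946, also exceeds file-max
print(max_processes * 7 > file_max)     # True
```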

It seems to me that setting limits to lower values would do some good.  A 
default of, say, 500 processes for a non-root user would be useful and would 
generally not get in the way of regular operations.  Even a limit of 1000 
processes would be lower than what the kernel is likely to assign on any 
machine in a supported configuration (the minimum supported configuration for 
RHEL is 256M of RAM) while still being much larger than anyone is really 
likely to need.
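
For concreteness, entries along those lines in /etc/security/limits.conf 
might look like this (the values are illustrative, not recommendations; note 
that pam_limits does not apply the "*" wildcard to root):

```
# Hypothetical example entries: cap non-root users at 500 processes
# and keep the 1024 per-process open file limit explicit.
# <domain>  <type>  <item>   <value>
*           hard    nproc    500
*           hard    nofile   1024
```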

The only fields whose default values seem to change based on RAM are the 
number of pending signals (I don't even know the purpose of this one) and the 
number of processes.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page
