[redhat-lspp] NetLabel performance numbers

Paul Moore paul.moore at hp.com
Thu Jul 13 18:25:07 UTC 2006


Valdis.Kletnieks at vt.edu wrote:
> On Thu, 13 Jul 2006 09:05:18 EDT, Paul Moore said:
>>>>  C_FlCat    625.46          935.52          9110.29      9262.92
>>>>  C_F_LxV    686.46          935.53          9325.37      9484.93
>>>
>>>Any idea why the tcp_rr only dropped about 14%, but tcp_stream dropped 30%?
>>>I'd expect the rate to be more sensitive to it, because the testing is
>>>per-packet, not per-KB?
>  
> Given a 1-byte payload on the tcp_rr and udp_rr tests, the added 40 bytes
> of IP headers explains the drop down to the low 9K, and the udp_stream
> numbers line up fine too.
> 
> There's still something unexplained about that 625 for tcp_stream on C_FlCat.
> Was either box hitting CPU saturation at that point?

Don't know for certain, I wasn't watching CPU usage since I wanted all
the numbers to be as unmolested as possible - I just kicked off the
script and had a cookie.  Although I can say there is a lot of work that
needs to be done in the "s0:c0.c239", i.e. full category set, case and I
wouldn't be surprised if the receive thread was maxing out a CPU core;
look at the validation code in cipso_ipv4.c and the ebitmap_import()
routine up in the SELinux code.  In both cases I tried to write code that
didn't suck too badly, but I haven't done any serious refinement either.
I suspect there is probably more speed to be gained, but it is always
going to be inherently painful.
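
Roughly, the per-packet work in the full category case has this shape;
this is just a simplified userspace sketch with made-up names
(cipso_cat_import, doi_def, cat_set), not the actual cipso_ipv4.c or
ebitmap code:

#include <stddef.h>
#include <string.h>

#define CIPSO_BM_LEN 30	/* "s0:c0.c239" => 240 category bits = 30 octets */

/* hypothetical stand-in for the per-DOI definition */
struct doi_def {
	unsigned char valid[CIPSO_BM_LEN];	/* categories this DOI allows */
};

/* hypothetical stand-in for the LSM-side category set */
struct cat_set {
	unsigned char map[CIPSO_BM_LEN];
};

/*
 * Validate and import every category bit carried in the packet's CIPSO
 * option.  With the full "c0.c239" range all 240 bits are set, so the
 * inner loop does its worst-case amount of work on every received
 * packet unless a label cache short-circuits the translation.  (The
 * real code also has to deal with network bit ordering, which is
 * skipped here.)
 */
static int cipso_cat_import(const struct doi_def *doi,
			    const unsigned char *wire, size_t wire_len,
			    struct cat_set *local)
{
	size_t octet;
	unsigned int bit;

	if (wire_len > CIPSO_BM_LEN)
		return -1;

	memset(local->map, 0, sizeof(local->map));
	for (octet = 0; octet < wire_len; octet++)
		for (bit = 0; bit < 8; bit++) {
			unsigned char mask = 0x80 >> bit;

			if (!(wire[octet] & mask))
				continue;
			if (!(doi->valid[octet] & mask))
				return -1;		/* category not allowed */
			local->map[octet] |= mask;	/* import the category */
		}
	return 0;
}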

>>If people really feel that detailed analysis of this test is important for 
>>acceptance let me know and I'll see what I can do.
> 
> 
> Probably don't need to be *much* more detailed - it's a good coverage of
> tests, and most of the numbers are within statistical noise of "the best
> we can possibly do while carrying a CIPSO header".  I think once we figure
> out what happened on C_FlCat, just saying "Performance has been tested and
> found not an issue" and add a URL pointing to this thread should be good
> enough.

Okay, I'll probably leave it alone for now and re-run the tests when I
get closer to having something everybody will ACK.  Hopefully that
should be soon ... <crosses fingers> ...

>>>>  C_F_NoC    328.69          935.53          6258.61      6415.35
>>>
>>>I tuned in late - are there any real configurations where a site would
>>>actually want cipso_cache_enable=0 set?  Or is this an indication that
>>>the option needs to be nailed to 1?
>>
>>It was more for my own curiosity rather than anything else, I just thought I 
>>would throw it in here in case others were curious too.  Basically, I have 
>>always asserted that a CIPSO label cache would have a huge benefit in terms 
>>of receive side performance but I never had any numbers to back it up - now I
>>do.
> 
> This one, we're obviously getting CPU bound or something and that's why
> the numbers fell through the floor...

Yeah, you know all that ugly stuff I was talking about earlier - it gets
much worse with the cache off.
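
To put it another way, a cache hit replaces all of that per-bit
translation with a bucket walk and a memcmp() on the raw option bytes.
A rough sketch of the hit path, again with made-up names rather than
the actual cipso_ipv4.c code:

#include <stddef.h>
#include <string.h>

/* hypothetical cache entry: raw CIPSO option bytes -> translated label */
struct cache_entry {
	struct cache_entry *next;
	unsigned char *key;	/* copy of the on-the-wire CIPSO option */
	size_t key_len;
	void *lsm_data;		/* whatever the LSM cached, e.g. a SID */
};

/*
 * If the exact same CIPSO option has been seen before, hand back the
 * previously translated label and skip the validate/import pass
 * entirely.
 */
static void *cache_lookup(struct cache_entry *bucket,
			  const unsigned char *opt, size_t opt_len)
{
	struct cache_entry *iter;

	for (iter = bucket; iter != NULL; iter = iter->next)
		if (iter->key_len == opt_len &&
		    memcmp(iter->key, opt, opt_len) == 0)
			return iter->lsm_data;
	return NULL;	/* miss: fall back to the full translation path */
}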

> OK.. that wouldn't be the first debugging knob with crud performance the
> kernel has sprouted.  How much memory does the cache use on a per-connection
> basis?  We're already carrying a number of slab entries and rcvbufs and
> the like around per connection - unless I'm insufficiently caffeinated, it
> looks to be impossible to open an IPv4 TCP connection without burning around
> 128K of memory (assuming sane buffer sizes for anything over 10mbit).

The cache is not done per-connection, but rather per-label.  As it
stands right now the number of buckets is set by a #define, which
defaults to 128, and the number of entries in each bucket is limited by
a sysctl variable, "net.ipv4.cipso_cache_bucket_size", which defaults
to 10.  The exact size of each cache entry is determined by the LSM; in
the case of SELinux the cache entry is defined as:

struct netlbl_cache {
	u32 type;				/* which union member is valid */
	union {
		u32 sid;			/* SELinux security ID */
		struct mls_level mls_label;	/* MLS level: sensitivity + categories */
	} data;
};

... plus a function pointer to do cleanup.  I don't expect the memory
requirements to be tremendous, but it is easy enough to allow users to
tweak it so I thought I would expose the knob.  In your opinion, is using
the slab mechanism a requirement?  It doesn't use it currently ...
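
For a ballpark on the memory question, the overall layout looks roughly
like this; again a simplified sketch with made-up names (label_cache,
lsm_cache_blob, cache_add), not the code as posted:

#include <stddef.h>

#define CACHE_BUCKETS 128	/* compile-time bucket count in this sketch */

/* hypothetical LSM-owned blob: the small struct above plus a cleanup hook */
struct lsm_cache_blob {
	void (*free_fn)(void *data);	/* called when the entry is dropped */
	void *data;			/* e.g. a struct netlbl_cache */
};

/* the cache entry from the lookup sketch, with the LSM data spelled out */
struct cache_entry {
	struct cache_entry *next;
	unsigned char *key;	/* copy of the raw CIPSO option (~10-40 bytes) */
	size_t key_len;
	struct lsm_cache_blob *blob;
};

struct cache_bucket {
	struct cache_entry *list;
	unsigned int size;	/* capped by the bucket-size sysctl (default 10),
				 * so the worst case is roughly 128 * 10 entries */
};

static struct cache_bucket label_cache[CACHE_BUCKETS];

/*
 * Insert at the head of a bucket; if the bucket is already at its
 * sysctl-imposed limit, walk to the end and drop the tail entry,
 * letting the LSM clean up its piece via the callback.
 */
static void cache_add(unsigned int hash, struct cache_entry *entry,
		      unsigned int bucket_limit)
{
	struct cache_bucket *bkt = &label_cache[hash % CACHE_BUCKETS];
	struct cache_entry *iter;

	entry->next = bkt->list;
	bkt->list = entry;
	if (bkt->size < bucket_limit) {
		bkt->size++;
		return;
	}
	for (iter = entry; iter->next && iter->next->next; iter = iter->next)
		;
	if (iter->next) {
		iter->next->blob->free_fn(iter->next->blob->data);
		/* freeing of the entry itself and its key is omitted here */
		iter->next = NULL;
	}
}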

-- 
paul moore
linux security @ hp



