[PATCH] Add auditd listener and remote audit protocol

LC Bruzenak lenny at magitekltd.com
Thu Aug 14 23:26:49 UTC 2008


On Thu, 2008-08-14 at 18:16 -0400, DJ Delorie wrote:
> > What I'm getting is that in addition to kernel-generated local events
> > the auditd would also receive signals as well as tcp-based events from
> > other sources. Would this be the way of implementing multi-source audit
> > aggregation or is it something different?
> 
> The net result is to aggregate audit logs from many systems onto one
> central audit server.  Remote audit messages have the new "node=" tag
> on them so you know where they came from.
> 
> I.e. you configure audisp-remote.conf like this:
> 
> remote_server = 10.2.3.4
> port = 1237
> 
> And the central server (10.2.3.4 in this example) like this:
> 
> tcp_listen_port = 1237
> 
> And then the client sends all audit messages to the server, where
> they're logged to disk.
> 
> This is similar to centralized syslog logging.
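To make the flow concrete, here's a rough sketch (mine, not the patch's actual code) of what the client side quoted above might do: tag a record with "node=" and push it to remote_server:port over TCP. The function names, the newline framing, and whether the tag is applied client- or server-side are all my assumptions for illustration.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Prefix a record with the "node=" tag so the aggregator knows its
 * origin. Returns 0 on success, -1 if the buffer is too small. */
static int tag_record(char *dst, size_t dstlen,
                      const char *node, const char *rec)
{
    int n = snprintf(dst, dstlen, "node=%s %s", node, rec);
    return (n < 0 || (size_t)n >= dstlen) ? -1 : 0;
}

/* Connect to remote_server:port and write one record; 0 on success.
 * A real client would keep the connection open, not redial per record. */
static int send_record(const char *server, unsigned short port,
                       const char *rec)
{
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, server, &sa.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        close(fd);
        return -1;
    }

    ssize_t len = (ssize_t)strlen(rec);
    int ok = (write(fd, rec, len) == len) ? 0 : -1;
    close(fd);
    return ok;
}
```

With the quoted config, this would amount to send_record("10.2.3.4", 1237, buf) after tag_record() has stamped buf.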

Maybe in theory...but a couple of differences matter a bit:
1: Yesterday I saved 95MB of audit data. The past 3 days' syslog is so
far under 3.5MB. I expect my audit data will grow even more as my
system matures and gets tested. I don't know whether those numbers are
anywhere close to representative, though.
2: Losing a portion of syslog data usually doesn't hurt me much; that's
not the case if I rely on my audit data being complete and accurate.
 
> 
> The event loop change I linked to is a necessary design change
> prerequisite to this one, since the listener adds (potentially) many
> descriptors which will need to be serviced.  The loop now services
> four types of events: local signals, local netlink, the listen socket
> (for new connections), and client sockets (for incoming audit
> messages).
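For anyone else trying to picture it, here's a minimal sketch (mine, not the patch's code) of one pass of such a dispatcher, using select(); the fd names and the bitmask return are illustrative only:

```c
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* One non-blocking pass over the four event sources the loop services:
 * a signal pipe, the kernel netlink socket, the TCP listen socket, and
 * connected client sockets. Pass -1 for any unused fd. Returns a
 * bitmask: 1=signal, 2=netlink, 4=new connection, 8=client data. */
static int poll_once(int sig_fd, int nl_fd, int listen_fd,
                     const int *clients, int nclients)
{
    fd_set rfds;
    struct timeval tv = { 0, 0 };  /* zero timeout: poll, don't block */
    int maxfd = -1, ready = 0, i;

    FD_ZERO(&rfds);
    const int fixed[3] = { sig_fd, nl_fd, listen_fd };
    for (i = 0; i < 3; i++) {
        if (fixed[i] >= 0) {
            FD_SET(fixed[i], &rfds);
            if (fixed[i] > maxfd)
                maxfd = fixed[i];
        }
    }
    for (i = 0; i < nclients; i++) {
        if (clients[i] >= 0) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd)
                maxfd = clients[i];
        }
    }

    if (select(maxfd + 1, &rfds, NULL, NULL, &tv) <= 0)
        return 0;

    if (sig_fd >= 0 && FD_ISSET(sig_fd, &rfds))
        ready |= 1;  /* local signal pending */
    if (nl_fd >= 0 && FD_ISSET(nl_fd, &rfds))
        ready |= 2;  /* kernel audit event to read */
    if (listen_fd >= 0 && FD_ISSET(listen_fd, &rfds))
        ready |= 4;  /* remote client waiting in accept queue */
    for (i = 0; i < nclients; i++)
        if (clients[i] >= 0 && FD_ISSET(clients[i], &rfds))
            ready |= 8;  /* remote audit record arrived */

    return ready;
}
```

The real daemon would block in select() (or poll()) and then accept() on bit 4 and read records on bit 8; this just shows the fan-in of the four source types.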

Thank you; I get it now.
This is about to get interesting! :)

Not certain why I didn't get it the first time, but for some reason I
had not considered sending the events into the auditd loop.
I was thinking of just aggregating the logfiles. Now it makes sense.

My one auditd machine gets very busy occasionally - I sometimes drop
events (configured to drop rather than abort, since it's a development
machine) even after ratcheting my event queue up to 8K. Often this is
due to an error I've introduced with too-general rules, so this is also
not definitive.

Now the question is what happens if the network hiccups and I cannot
send the events from a client? I could still write the events to the
local disk, but getting them onto the intended aggregator is now
tricky, right? Will the sender keep track of the last event sent and
recover once the connection is restored?

I'm not disputing the approach, just trying to look down the road,
knowing problems I've experienced myself. I also see some definite
benefits to this approach - for one, the log files are now "blended"
and you don't need any special directory hierarchy to accommodate the
other events.

OK, so - thanks again for the explanation and I look forward to testing
this out soon!

Thx,
LCB.

-- 
LC (Lenny) Bruzenak
lenny at magitekltd.com

