New draft standards

Burn Alting burn at swtf.dyndns.org
Sun Dec 27 00:30:59 UTC 2015


On Sat, 2015-12-26 at 11:38 -0500, Steve Grubb wrote:
> On Thursday, December 24, 2015 09:44:00 AM Burn Alting wrote:
> > On Fri, 2015-12-18 at 16:12 +1100, Burn Alting wrote:
> > > On Tue, 2015-12-15 at 08:46 -0500, Steve Grubb wrote:
> > > > On Tuesday, December 15, 2015 09:12:54 AM Burn Alting wrote:
> > > > > I use a proprietary ELK-like system based on ausearch's -i option.
> > > > > I would like to see some variant outputs from ausearch that
> > > > > "packages" events into parse-friendly formats (json, xml) that also
> > > > > incorporates the local transformations Steve proposes. I believe
> > > > > this would be the most generic solution to support centralised log
> > > > > management.
> > > > > 
> > > > > I am travelling now, but can write up a specification for review next
> > > > > week.
> > > > 
> > > > Yes, please do send something to the mail list for people to look at and
> > > > comment on.
> > > 
> > > All,
> > > 
> > > To reiterate, my need is to generate easy-to-parse events over which
> > > local interpretation has been applied, retaining the raw input for some
> > > of the interpretations if required. I then want to transmit the
> > > complete interpreted event to my central event repository.
> > > 
> > > My proposal is that ausearch gains the following 'interpreted output'
> > > options
> > > 
> > >         --Xo plain|json|xml
> > >         generate plain (cf. --interpret), XML or JSON formatted events
> > >         
> > >         --Xr key_a'+'key_b'+'key_c
> > >         include the raw value for the given keys under the new keys
> > >         __r_key_a, __r_key_b, etc. The special key __all__ is
> > >         interpreted to retain the complete raw record. If a raw value
> > >         has no interpretation, we will simply end up with two keys
> > >         holding the same value.
> > > 
> > > I have attached the XSD from which the XML and JSON formats could be
> > > defined.
> > 
> > Is there any interest in this? If it were available, would people make
> > use of it?

Steve,

I'll start by saying that I am happy to help enhance Linux's audit
capability in any way (read that as a direct offer to help).

> I'm somewhat interested in this. I'm just not sure where the best place to do 
> all this is. Should it be in ausearch? Should it be in auditd? Should it be in 
> the remote logging plugin? Should audit utilities be modified to accept this 
> new form of input?

I've concentrated on ausearch as it is the only tool that comprehensively
parses all existing audit records, both well-formed and malformed. As you
know, auparse() has difficulties with malformed events. Ausearch also has
the benefit of not affecting real-time performance - I'd not like auditd
to have to wait for an external DNS lookup to time out when attempting to
resolve an 'addr' field.
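
To make the quoted --Xo/--Xr proposal concrete, here is a rough sketch of
how a consumer might read the JSON form off a pipe. Everything below is
illustrative only - the flags are those proposed above, and the record
layout (one JSON object per event, interpreted keys plus __r_* raw keys)
is an assumption rather than a settled schema:

    #!/usr/bin/env python
    # Illustrative consumer of the proposed "ausearch --Xo json" output.
    # Assumes one JSON object per line; field names are examples only.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["ausearch", "--start", "today", "--Xo", "json", "--Xr", "auid+addr"],
        stdout=subprocess.PIPE)

    for line in proc.stdout:
        if not line.strip():
            continue
        event = json.loads(line)
        # e.g. {"node": "host1", "type": "USER_LOGIN", "auid": "fred",
        #       "__r_auid": "1000", "addr": "host2.example.com",
        #       "__r_addr": "10.1.1.2", ...}
        print(event)    # a real consumer would ship this to the repository

Whether ausearch should emit one object per line or a single enclosing
array is one of the details the specification would need to settle.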

Whatever is done, the code needs to be modular so that any utility - be it
ausearch, auditd, an audispd plugin, or an independent auparse()-based
tool - can make use of it.

Perhaps, to address the higher-level audit needs, we could add a further
output format to my proposed 'interpretive formatting' changes: that of
'descriptive statements'. This would be similar to Windows auditing, which
can include a 'Display Information' field providing a 'human readable'
description of the event data.
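
As a very rough sketch of what a 'descriptive statement' renderer might
produce (the field names and sentence template are illustrative only,
loosely following the node/time/subject/result ordering Steve lays out
further down):

    # Illustrative only: turn a dict of already-interpreted fields into a
    # one-line English description. The field names are assumptions.
    def describe(ev):
        parts = ["On", ev.get("node", "?"), "at", ev.get("time", "?"),
                 ev.get("auid", "?")]
        if ev.get("uid") and ev.get("uid") != ev.get("auid"):
            parts += ["acting as", ev["uid"]]
        parts.append("successfully" if ev.get("res") == "success"
                     else "failed to")
        parts += [ev.get("syscall", ev.get("op", "?")), ev.get("exe", "")]
        return " ".join(p for p in parts if p)

    print(describe({"node": "host1", "time": "2015-12-26 11:38",
                    "auid": "fred", "uid": "root", "res": "success",
                    "syscall": "execve", "exe": "/usr/bin/passwd"}))
    # -> On host1 at 2015-12-26 11:38 fred acting as root successfully
    #    execve /usr/bin/passwd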

Perhaps the thrust should be
a. address performance
b. ensure auparse() can better deal with malformed events
c. provide interpretive formatting

Regards
Burn

> Ultimately, I am wanting to be able to reduce the audit records down to 
> English sentences something like this:
> 
> On 1-node at 2-time 3-subj 4-acting-as 5-results 6-action 7-what 8-using
> 
> Which maps to
> 1) node
> 2) time
> 3) auid, failed logins=remote system
> 4) uid (only when uid != auid) or role (when not unconfined_t)
> 5) res - successfully / failed to
> 6) op, syscall, type, key - requires per type classification
> 7) path,system
> 8) exe,comm
> 
> So, what I was thinking about is looking at the whole event and picking out 
> the node, time, subject, object, action, and results. The subject and object 
> would be further broken down to primary identity, secondary identity, and 
> attributes. I was planning to put this into an extension of auparse so that 
> events could be dumped out using the classification system.
> 
> My thoughts had been to organize the event data to support something along 
> these lines. I want to make the events easier to understand.
> 
>  
> > If so I can modify ausearch and generate a proposed patch over the
> > Christmas break.
> 
> At the moment, I'm looking at auditd performance improvements to prepare for 
> the enrichment of audit records. You're one step ahead of where I am. I hope 
> to finish this performance work soon so that I can start thinking about the 
> problem you are.  :-)
> 
> Of course...we could look at the auditd performance issues together and then 
> move on to event formatting.




