[Freeipa-devel] timestamps

Dmitri Pal dpal at redhat.com
Wed Jul 15 15:07:01 UTC 2009


John Dennis wrote:
> Dmitri and I were discussing the storage of timestamps yesterday. My
> understanding of best practice (after much investigation) is to store
> timestamps in UTC paired with the current zoneinfo (important
> clarification, *not* the local offset, rather the *zoneinfo* [1]).
> This is the only unambiguous way to be able to convert to/from local
> in the past, now, and in the future. It also has the nice property of
> preserving the timestamp in a normalized form (UTC) which can be
> compared to other timestamps and/or presented in another zone (in the
> past, now, or in the future).
>
> However one problem I encountered as I tried to implement this was I
> could not find a standard library which returned the current zoneinfo.
> glibc and other posix libraries provide offsets and/or the TZ value
> (which is really just an offset in wolf's clothing). I have a very clear
> understanding of how to retrieve the zoneinfo in a Linux environment.
> In fact I found a number of examples of code which had their own
> private function to query the zoneinfo. The downside to this is it is
> OS specific (but it's not too terrible to tweak it to a particular OS).
>
> So I have a few questions:
>
> * Jakob, in the policy engine when you wrote the code which deals with
> time evaluation how did you handle the zone issue?
>
> * Have others needed to deal with timestamps and if so what has been
> your methodology?
>
> * Is anyone aware of a library which can retrieve the zoneinfo in a
> portable manner?
>
> [1] If you're wondering why offsets are evil and why zoneinfo was
> invented it's because timezones are notoriously fluid, they are the
> result of political decisions and vary in unpredictable ways (in both
> the geographic domain and the time domain). The only unambiguous way
> to determine the proper offset is to know the geographic location at a
> specific moment in history, hence the introduction of the zoneinfo
> database which allows one to query the offset based on location and
> moment in time.
>
Yes, John and I talked about this the other day, and he brought the
time-related issues to my attention. I promised to think about it more.
After some more thinking and discussions on IRC with Simo, here are some
of my thoughts.
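For concreteness, the UTC + zoneinfo scheme John describes could be
sketched in Python like this (the zone name here is just an illustrative
assumption, not anything ELAPI defines):

```python
# Sketch of storing a timestamp as UTC paired with a zoneinfo *name*
# (not a fixed offset), using Python's standard zoneinfo module.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# The event records a normalized UTC instant plus the zone name.
stamp_utc = datetime(2009, 7, 15, 15, 7, 1, tzinfo=timezone.utc)
zone_name = "America/New_York"  # illustrative tz database key

# Because the tz database knows the rules in force at that instant,
# the local time can be recovered unambiguously - even for past dates
# when different DST or offset rules applied.
local = stamp_utc.astimezone(ZoneInfo(zone_name))
print(local.isoformat())  # 2009-07-15T11:07:01-04:00
```

Note that storing the fixed offset (-04:00) instead of the zone name
would lose the ability to reconvert correctly once the zone's rules
change.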

I am looking at the problem from the point of view of the ELAPI
interface. When the event is created by the application it should have a
time stamp. But what should it be?
Let us look at what can happen with the event. The event can be:
a) Written to remote or local file
b) Sent to syslog
c) Sent over to central location (audit storage)
d) Inserted into a database
...
And one event can be sent to several of those destinations at the same time.

John suggests above that the event should carry two pieces of
information: the UTC time and the time zone info. This information is
enough to convert the time back and forth and to compare it. But in my
opinion one significant piece is missing for such a conversion - the locale.
I guess the question I have is: why would we need to convert it at all?
If the event has UTC and all other events have UTC, then they can easily
be compared and related to each other. Time zone info is not needed for
such a comparison. So the time zone info is needed only if we want to
convert the time on the central server to some local representation.
Hm... and why would we want to spend cycles doing this conversion on the
central server anyway? Why not just send, as a separate field, the local
time formatted according to the locale defined on the originating machine?
Such an approach has a lot of benefits:

a) You do not need platform dependent code to get time zone
b) You do not need to deliver locale to the central server
c) You do not need to spend cycles on the central server doing conversions
d) You can use the local time as is in the local logs (as syslog does)
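A minimal sketch of such a dual-field event in Python (the field names,
the helper function, and the ISO/"%c" formats are my own illustrative
choices, not ELAPI's actual format):

```python
# Sketch of an event carrying both a normalized UTC timestamp and a
# pre-formatted local time string produced on the originating machine.
import time

def make_event(message, now=None):
    # Hypothetical helper; the real ELAPI API will differ.
    if now is None:
        now = time.time()
    return {
        "message": message,
        # UTC, machine-readable: used for comparison and correlation
        # on the central server; no zone or locale needed there.
        "utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(now)),
        # Local, human-readable: formatted once, here, according to
        # this machine's locale; the server treats it as opaque text.
        "local": time.strftime("%c", time.localtime(now)),
    }

event = make_event("user logged in")
```

Events from different machines can then be ordered simply by comparing
their "utc" fields, since that fixed-width format sorts
lexicographically, while the "local" field is displayed as-is.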

The downside is that the local time takes some space in the event. Well,
it is not going to be significantly longer than the time zone
information, so I do not see it as a big issue.
One may argue it is redundant information. Well, yes, so what? It is
very convenient information, and it allows us to avoid a lot of
unnecessary complexity.

I took a look at the Windows event log - they do not store the time
zone; they store just UTC, which makes perfect sense to me.

I view the local time stamp as a very convenient field for
administrators to look at when they are analyzing the logs. There is no
need to parse it or convert it. It is just for presentation, but it
originates from the machine the event came from (factoring in its
locale). If admins find this field confusing or unnecessary, we always
have the option of not displaying it.

The RSA Authentication Manager logs also carry two time stamps like
that: local time and UTC time in one event record. In ten years I have
not heard any complaints from customers about this approach; on the
contrary, administrators find it sufficient and useful.

So based on this I do not see a real reason to over-engineer ELAPI by
trying to read the time zone information and do conversions of the
time - IMO it is not needed. The local time stamp and the UTC time
provide sufficient information for the use cases ELAPI has to deal with.
So unless there is a strong argument for doing it differently I would
continue with the "local" + "UTC" approach.

I am open to discussion though :)

-- 
Thank you,
Dmitri Pal

Engineering Manager IPA project,
Red Hat Inc.





