[Freeipa-devel] DNSSEC support design considerations

Simo Sorce simo at redhat.com
Tue May 21 18:30:23 UTC 2013


On Tue, 2013-05-21 at 18:32 +0200, Petr Spacek wrote:
> Hello,
> 
> I found that we (probably) misunderstood each other. A sky-high-level 
> overview of the proposal follows:
> 
> NO CHANGE:
> 1) LDAP stores all *unsigned* data.
> 
> 2)
> NO CHANGE:
> a) bind-dyndb-ldap *on each server* fetches all unsigned data from LDAP and 
> stores it in an *in-memory* database (we do this now)
> 
> THE DIFFERENCE:
> b) All data will be stored in BIND's native RBT-database (RBTDB) instead of 
> our own in-memory database.
> 
> NEW PIECES:
> 3)
> Mechanisms implemented in BIND's RBTDB will do DNSSEC signing etc. for us. This 
> BIND feature is called 'in-line signing', and it can do all key/signature 
> maintenance for us, including periodic zone re-signing.
> 
> 
> The whole point of this proposal is code reuse. I'm trying to avoid 
> re-inventing the wheel.
> 
> Note that the DNSSEC implementation in BIND is ~ 150 kiB of C code; the 
> stand-alone signing utilities add another ~ 200 kiB of code (~ 7000 lines). 
> I really don't want to rewrite all of that when it's not reasonable.
> 
> Further comments are in-line.

Ok putting some numbers on this topic really helps, thanks!

More inline.

[..]
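(As background for the discussion below: the 'in-line signing' feature Petr refers to is enabled per zone in named.conf, roughly along these lines - the zone name, paths, and exact option set here are illustrative, not anything agreed in this thread:)

```
zone "example.test" {
    type master;
    file "/var/named/example.test.db";   // unsigned zone data
    key-directory "/var/named/keys";     // DNSSEC keys live here
    inline-signing yes;                  // BIND maintains a signed copy internally
    auto-dnssec maintain;                // re-sign and refresh signatures automatically
};
```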

> > I haven't seen any reasoning from you why letting Bind do this work is
> > a better idea.
> Simply put - because all the code is already in BIND (the feature is called 
> 'in-line signing', as I mentioned above).
> 
> > I actually see some security reasons why putting this into a DS plugin
> > can have quite some advantages instead. Have you considered doing this
> It could improve the security a bit, I agree. But I don't think that it is such a 
> big advantage. BIND already has all the facilities for key material handling, 
> so the only thing we have to solve is how to distribute keys from LDAP to 
> running BIND.

Well, it would mean sticking the keys in LDAP and letting Bind pull them
from there based on ACIs ...
The main issue would be key changes, but with the persistent search
I guess that's also not a huge deal.
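(A sketch of what 'based on ACIs' could look like in 389 DS - note the attribute name, ACL label, and service-principal DN below are invented for illustration, not an agreed schema:)

```
aci: (targetattr = "idnsSecretKey")(version 3.0;
     acl "DNS servers may read DNSSEC key material";
     allow (read, search, compare)
     userdn = "ldap:///krbprincipalname=DNS/*,cn=services,cn=accounts,dc=example,dc=test";)
```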

> > work in a DS plugin at all ? If you have, and have discarded the idea,
> > can you say why ?
> 1) It would require pulling ~ 200 kiB (~ 7000 lines) of DNSSEC signing code 
> into 389.
> 
> 2) It would require pulling 'text->DNS wire format' parser into 389 (because 
> our LDAP stores plain text data but the signing process works with DNS wire 
> format).
> 
> 3) It simplifies bind-dyndb-ldap, but we would still need to re-implement the DNS 
> search algorithm, which has to take DNSSEC oddities into account. (Note that the 
> DNS search algorithm is part of the database implementation. Bugs/limitations in our 
> implementation are the reason why wildcard records are not supported...)
> 
> 4) I'm not sure how it will work with replication. How do we ensure that a new 
> record will not appear in the zone until the associated RRset is (re)computed 
> by DS? (BIND has a transaction mechanism built into its internal RBTDB.)

389ds has internal transactions, which is why I was thinking of doing the
signatures on any change coming into LDAP (direct or via replication),
within the transaction.

> >> The point is that you *can* do changes run-time, but you need to know about
> >> the changes as soon as possible because each change requires significant
> >> amount of work (and magic/mana :-).
> >>
> >> It opens a lot of opportunities for race condition problems.
> >
> > Yes, I am really concerned about the race conditions of course, however
> > I really wonder whether doing signing in bind is really a good idea.
> > We need to synchronize these signatures to all masters right ?
> No, because signatures are computed and stored only in memory - and forgotten 
> after BIND shutdown. Yes, it requires re-computing on each load, which is 
> definitely a disadvantage.

Ok I definitely need numbers here.
Can you do a test with a normal, text-based BIND zone with 10k entries
and see how much time it takes to re-sign everything ?

I suspect that will be way too much, so we will have the added problem
of having to maintain a local cache in order to be able to restart Bind
and have it actually serve results in a reasonable time w/o killing the
machine completely.
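(A quick way to run that benchmark: generate a throwaway 10k-record zone with a short script, then time BIND's standalone signer on it. The zone name, record layout, and key sizes below are arbitrary choices, not anything from this thread:)

```python
# Generate a BIND zone file with 10,000 A records for a signing benchmark.
zone = "example.test"
lines = [
    "$TTL 3600",
    f"{zone}. IN SOA ns1.{zone}. admin.{zone}. ( 2013052101 3600 900 1209600 3600 )",
    f"{zone}. IN NS ns1.{zone}.",
    f"ns1.{zone}. IN A 192.0.2.1",
]
for i in range(10000):
    lines.append(f"host{i}.{zone}. IN A 192.0.2.{i % 254 + 1}")

with open("db." + zone, "w") as f:
    f.write("\n".join(lines) + "\n")

# With BIND's tools installed, sign it and time the run (exact flags vary by version):
#   dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.test
#   dnssec-keygen -a RSASHA256 -b 1024 example.test
#   time dnssec-signzone -S -o example.test db.example.test
```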

> > Doesn't that mean we need to store this data back in LDAP ?
> No, only 'normal' DNS updates containing unsigned data will be written back to 
> LDAP. RRSIG and NSEC records will never reach LDAP.
> 
> > That means more round-trips before the data ends up being usable, and we
> > do not have transactions in LDAP, so I am worried that doing the signing
> > in Bind may not be the best way to go.
> I'm proposing to re-use BIND's transaction mechanism, built into its internal 
> database implementation.
> 
> >>>> => It should be possible to save old database to disk (during BIND shutdown
> >>>> or
> >>>> periodically) and re-use this old database during server startup. I.e. server
> >>>> will start replying immediately from 'old' database and then the server will
> >>>> switch to the new database when dump from LDAP is finished.
> >>>
> >>>
> >>> This look like an advantage ? Why is it a disadvantage ?
> >> It was mentioned as 'proposed remedy' for the disadvantage above.
> >
> > I think having dual authoritative data sources may not be a good thing.
> Consistency is a reason why I want to make persistent search mandatory.

A persistent search does not guarantee consistency, only that updates are
sent to you as soon as they come in; you may still end up with bugs in
the implementation where you fail to catch something and get out of date
with respect to the actual data in LDAP.

> IMHO persistent storage could save the day if LDAP is down for some reason. 
> Old data in DNS are much better than no data in DNS.

Depends how OLD :-)
But maybe we can bake some timeouts in there and simply dump any cached
data if it really is too old.
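(A minimal sketch of such a timeout, assuming the cached zone data were dumped to a JSON file - the file format and the cutoff value are made up for illustration:)

```python
import json
import os
import time

def save_cache(path, records):
    # Dump the in-memory zone data together with a timestamp.
    with open(path, "w") as f:
        json.dump({"saved": time.time(), "records": records}, f)

def load_cache(path, max_age=24 * 3600):
    # Serve stale data only up to max_age; past that, better no
    # answer than an answer that has been wrong for days.
    if not os.path.exists(path):
        return None
    with open(path) as f:
        blob = json.load(f)
    if time.time() - blob["saved"] > max_age:
        return None
    return blob["records"]
```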

> >>>> => As a side effect, BIND can start even if connection to LDAP server is down
> >>>> - this can improve infrastructure resiliency a lot!
> >>>
> >>> Same as above ?
> >> The same here, it was mentioned as 'proposed remedy' for the disadvantage above.
> >
> > When it comes to DNSSEC, starting w/o LDAP may just mean that you have
> > different signatures for the same records on different masters. Is that
> > 'legal' according to DNSSEC ?
> 1) You will have the same signatures as long as the records in LDAP and the 
> saved copy of the database (on the disk) are equal.

Well given an IPA infrastructure uses Dynamic Updates I expect data to
change frequently enough that if you have an outage that lasts more than
a handful of minutes the data in the saved copy will not match the data
in LDAP.

> 2) I didn't find any new limitation imposed by DNSSEC. AFAIK some 
> inconsistency between servers is a normal state in DNS, because zone transfers 
> take some time and the tree structure has many levels.

But in the bind case they assume a single master model, so they never
have inconsistencies, right ?
But in our case we have multiple masters, so I wonder if a client is
going to have issues ...
Are we guaranteed that if 2 servers have the exact same view they
generate the exact same signatures ? If that is the case then maybe we
are ok.
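(For what it's worth: the RSA-based DNSSEC algorithms such as RSASHA256 use PKCS#1 v1.5 padding, which is deterministic - the same RRset signed with the same key yields an identical signature, provided the RRSIG inception/expiration timestamps also match, since RSA signing is just a fixed modular exponentiation. A toy illustration with textbook RSA and deliberately tiny, insecure parameters:)

```python
import hashlib

# Classic textbook RSA demo key (insecure sizes, illustration only).
p, q, e = 61, 53, 17
n = p * q                          # modulus
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def sign(message: bytes) -> int:
    # Hash, reduce into the modulus, exponentiate - no randomness involved.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

rrset = b"host1.example.test. 300 IN A 192.0.2.50"
assert sign(rrset) == sign(rrset)  # same input and key -> same signature
```

(ECDSA signatures, by contrast, incorporate a random nonce, so two masters would generally produce different but equally valid RRSIGs.)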

> The problems arise when data *in a single database* (i.e. on one server) are 
> inconsistent (e.g. a signature != the data in the unsigned records). BIND solves 
> this with its built-in transaction mechanisms.

Understood.

> >>>> == Uncertain effects ==
> >>>> - Memory consumption will change, but I'm not sure in which direction.
> >>>> - SOA serial number maintenance is an open question.
> >>>
> >>> Why SOA serial is a problem ?
> >> It simply needs more investigation. BIND's RBTDB maintains the SOA serial
> >> internally (it is intertwined with transactions in the DB), so the write-back to
> >> LDAP could be a very delicate operation.
> >
> > It means all masters will often be out of sync, this is not very good.
> I don't think so. BIND can use timestamp-based serials in exactly the same way as 
> we do. The only problem is how to implement the 'read from internal DB'->'write to 
> LDAP' operation. It still needs more investigation.

Well, if we use timestamp-based serials ... why do we need to write
anything back ? :-) We can just let a DS plugin fill the serial in the
SOA with a timestamp for schema compatibility purposes, and assume
bind has the same or a close enough serial internally.
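(Timestamp serials also interact cleanly with how DNS compares serials - RFC 1982 serial arithmetic - so masters deriving serials independently from their clocks stay comparable. A sketch, assuming plain 32-bit UNIX-time serials:)

```python
import time

def timestamp_serial() -> int:
    # Each master derives the serial from its own clock; nothing
    # needs to be written back to LDAP for the serial itself.
    return int(time.time()) & 0xFFFFFFFF

def serial_newer(s1: int, s2: int) -> bool:
    # RFC 1982 serial arithmetic: is s1 "newer" than s2?
    return ((s1 > s2 and s1 - s2 < 2**31) or
            (s1 < s2 and s2 - s1 > 2**31))
```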

> >>>> Decision if persistent search is a 'requirement' or not will have significant
> >>>> impact on the design, so I will write the design document when this decision
> >>>> is made.
> >>>
> >>> I would like to know more details about the reasons before I can usefully comment.
> >>
> >> I forgot to mention one other 'Uncertain effect':
> >> - Support for dynamically generated '_location' records will be a big
> >> adventure. It probably means no change from the state without persistent
> >> search :-) After basic exploration it seems doable, but still a bit uncertain.
> >
> > I need more info here, does it mean you have to store _location records
> > when they are generated ?
> I tend towards doing _location record generation during zone load, so everything will 
> be prepared when the query comes in. As a benefit, it will allow zone transfers 
> even for signed zones. This still needs more investigation.

The idea is that _location is dynamic though, isn't it ?

Anyway what if we do not sign _location records ?
Will DNSSEC compliant clients fail in that case ?

>  > Maybe we can use the internal bind database
> > just for _location "zone" ?
> I don't think that it is possible.
> 
> If _location.client-a and _location.client-b reside in a single database, 
> then client-a and client-b have to reside in the same database. (The reason is 
> that _location.client-a and _location.client-b do not have an immediate common 
> ancestor.)

Uhmm right, nvm.
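(Concretely: the closest common ancestor of two DNS names is found by matching labels from the right, and for the two _location names that is the zone apex, which sits above both client nodes - so a database holding both _location subtrees necessarily spans the client nodes too. A small sketch with made-up names:)

```python
def closest_common_ancestor(name1: str, name2: str) -> str:
    # Compare DNS labels right-to-left; the shared suffix is the ancestor.
    l1 = name1.rstrip(".").split(".")
    l2 = name2.rstrip(".").split(".")
    shared = []
    while l1 and l2 and l1[-1] == l2[-1]:
        shared.insert(0, l1.pop())
        l2.pop()
    return ".".join(shared)

# The two _location subtrees only meet at the apex, above client-a/client-b:
print(closest_common_ancestor("_location.client-a.example.test",
                              "_location.client-b.example.test"))
```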

> >> My personal conclusion is that re-using of BIND's backend will save a huge
> >> amount of work/code to maintain/bugs.
> >
> > I can see that, unfortunately I fear it will make multi-master a lot
> > more difficult at the same time. And given we do want to have
> > multi-master properties we need to analyze that problem more carefully.
> I agree. It is a delicate change and we should not hurry.
> 
> > Also by welding ourselves to internal Bind infrastructure too much, it
> > will make it a lot more difficult for us to change the DNS
> > infrastructure. Bind10 will be completely different internally, and we
> > may simply decide to even not use bind10 at all and use a completely
> > different engine going forward. So I am quite wary of welding ourselves
> > even more to bind 9 internals.
> Ehm ... how to say this ... 'too late'. I wasn't around when the DNS design was 
> made, so I don't know all the reasons behind the decision, but IMHO we use 
> completely non-standard/obscure hacks all the time.
> 
> The proposal above doesn't extend our dependency on BIND, because we already 
> depend on BIND9 *completely*.

Not really, all the data is currently in LDAP, all we need is to write a
plugin for a different server and start serving data that way.

However, if DNSSEC is handled with bind, then rewriting the plugin will
not be sufficient, as we would not have all the data in LDAP anymore; we
would also need to figure out that part.
I am not against your proposal because of this, just pointing it out.

>  It is about replacing our own internal database 
> implementation (buggy, incomplete, standard non-compliant) with the code from 
> the original BIND (which is at least standard compliant).

Understood.

What changes are going to be required in bind-dyndb-ldap to use RBTDB
from Bind ? Do we have interfaces already ? Or will it require
additional changes to the glue code we currently use to load our plugin
into bind ?

> <sarcasm>
> Do you want to go back to 'light side of the force'? So we should start with 
> designing some LDAP->nsupdate gateway and use that for zone maintenance. It 
> doesn't solve adding/reconfiguring of zones on run-time, but it could be 
> handled by some stand-alone daemon with an abstraction layer at proper place.
> </sarcasm>

Well, the problem is the loading of zones; that is why nsupdate can't be
used. We'd have to dump zones on the fly at restart, pile up
nsupdates if bind is not available, and then handle the case where, for
some reason, nsupdate fails and bind and LDAP get out of sync.

It would also mean nsupdates made by clients would not be reported back
to LDAP.

Using nsupdate was considered, it just is not feasible.
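(For the record, such a gateway would have driven BIND with nsupdate batches along these lines - names and addresses here are illustrative:)

```
server 192.0.2.1
zone example.test
update delete host1.example.test. A
update add host1.example.test. 300 A 192.0.2.50
send
```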

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York
