[Freeipa-devel] DNSSEC support design considerations

Simo Sorce simo at redhat.com
Mon May 20 12:07:34 UTC 2013


On Wed, 2013-05-15 at 17:11 +0200, Petr Spacek wrote:
> On 15.5.2013 10:29, Simo Sorce wrote:
> >> I investigated various scenarios for DNSSEC integration and I would like to
> >> hear your opinions about the proposed approach and its effects.
> >>
> >>
> >> The most important finding is that bind-dyndb-ldap can't support DNSSEC
> >> without rewrite of the 'in-memory database' component.
> >
> > Can you elaborate on why a rewrite would be needed ? What constraints do we not meet ?
> 
> We have three main problems - partly with data structures and mostly with
> the way we work with the 'internal database':
> 
> 1) DNSSEC requires strict record ordering, i.e. each record in the database
> has to have a predecessor and a successor (ordering by name and then by
> record data). This can be done relatively simply, but it requires a full
> dump of the database.
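To make the ordering requirement concrete, here is a rough sketch of DNSSEC canonical name ordering (RFC 4034, section 6.1) in Python. The names are made-up examples, and a real implementation compares raw wire-format octets rather than Python strings:

```python
# Rough sketch of DNSSEC canonical name ordering (RFC 4034, section 6.1):
# names are compared label by label, starting from the rightmost label,
# case-insensitively. The names below are made-up examples; a real
# implementation compares raw wire-format octets, not Python strings.

def canonical_key(name):
    """Sort key putting owner names into DNSSEC canonical order."""
    labels = name.rstrip(".").lower().split(".")
    return tuple(reversed(labels))

names = ["z.example.com", "a.example.com", "example.com", "b.a.example.com"]
ordered = sorted(names, key=canonical_key)
# the parent name sorts first, then everything under it; each name now has
# a well-defined predecessor and successor, which is what NSEC chains need
```

Once the whole zone is sorted this way, finding the predecessor/successor of any name is trivial, which is exactly why a full dump is needed.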
> 
> 2) On-line record signing requires a lot of data stored 
> per-record+per-signature. This would require a bigger effort than point 1), 
> because many data structures and the respective APIs and locking protocols 
> would have to be re-designed.
> 
> 3) Our current 'internal database' acts as a 'cache', i.e. records can appear 
> and disappear dynamically and the 'cache' is not considered an authoritative 
> source of data: an LDAP search is conducted each time some data is not 
> found. The result is that the same data can disappear and then appear 
> again in the cache.
> 
> Typical update scenario, with persistent search enabled:
> a) DNS UPDATE from client is received by BIND
> b) New data are written to LDAP
> c) DN of modified object is received via persistent search
> d) All RRs under the *updated name* are discarded from the cache
> <-- now the cache is not consistent with data in LDAP
> e) The modified object is fetched from LDAP by the plugin
> <-- a query for the updated name will force an immediate cache refresh, because 
> we know that the cache is not authoritative
> f) All RRs in the object are updated (in cache)
> 
> The problem is that the cache in the intermediate states (between the <-- 
> marks) can't be used as an authoritative source and would produce incorrect 
> signatures. The text below contains more details.
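As a toy model of steps d)-f) above (the dict stands in for LDAP and all names are made up), the inconsistent window looks roughly like this:

```python
# Toy model of update steps d)-f): the persistent-search notification
# discards the cached RRs, and until the re-fetch completes the cache
# disagrees with LDAP. The dict stands in for LDAP; names are made up.

class ToyCache:
    def __init__(self):
        self.rrs = {}                      # owner name -> set of records

    def on_psearch_notify(self, name, ldap):
        self.rrs.pop(name, None)           # d) discard RRs under the name
        # <-- here the cache is not consistent with the data in LDAP;
        #     a signer reading it now would sign an incomplete view
        self.rrs[name] = set(ldap[name])   # e) + f) re-fetch and update

ldap = {"www.example.com": {"A 192.0.2.1", "A 192.0.2.2"}}
cache = ToyCache()
cache.on_psearch_notify("www.example.com", ldap)
```

In the real plugin the discard and the re-fetch are separated by an LDAP round-trip, so queries (and re-signing) can observe the gap.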
> 
> Databases in BIND have a concept of 'versions' ('transactions') which our 
> internal cache does not implement ... It could be solved by proper locking, of 
> course, but it will not be a piece of cake. We need to take care of many 
> parallel updates, parallel queries and parallel re-signing at the same time.
> 
> I don't say that it is impossible to implement our own backend with the same 
> properties as BIND's database, but I don't see the value (and I can see a lot 
> of bugs :-).

Well, we do not necessarily need all the same properties of bind's
database, only those that allow us to properly handle DNSSEC, so let's
try to uncover what those constraints are first, so I can understand why
you propose this solution as better than something else.

> >> Fortunately, it seems
> >> that we can drop our own implementation of the internal DNS database
> >> (ldap_driver.c and cache.c) and re-use the database from BIND (so called
> >> RBTDB).
> >>
> >> I'm trying to reach Adam Tkac with the question "Why did we decide to
> >> implement it again rather than re-use BIND's code?".
> >>
> >>
> >> Re-using BIND's implementation will have the following properties:
> >>
> >>
> >> == Advantages ==
> >> - A big part of the DNSSEC implementation from BIND9 can be reused.
> >> - The overall plugin implementation will be simpler - we can drop many
> >> lines of our code (and bugs).
> >> - Run-time performance could be much better.
> >>
> >> - We will get implementation for these tickets "for free":
> >> -- #95  wildcard CNAME does NOT work
> >> -- #64 	IXFR support (IMHO this is important!)
> >> -- #6 	Cache non-existing records
> >>
> >> And partially:
> >> -- #7 	Allow limiting of the cache
> >
> > Sounds very interesting.
> >
> >
> >> == Disadvantages ==
> >> - Support for configurations without persistent search will complicate things
> >> a lot.
> >> -- Proposal => Make persistent search obligatory. OpenLDAP supports LDAP
> >> SyncRepl, so it should be possible to make the plugin compatible with 389
> >> and OpenLDAP at the same time. I would defer this to somebody from the
> >> users/OpenLDAP community.
> >
> > Why would persistent search be required ?
> As I mentioned above - you need a database dump, because DNSSEC requires 
> strict name and record ordering.
> 
> It is possible to do incremental changes once the 'starting snapshot' is 
> established, but it means that we need information about each particular 
> change => that is what persistent search provides.
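The 'snapshot + incremental changes' idea can be sketched roughly like this (the dict stands in for LDAP, names are made up, and real entry/attribute handling is omitted):

```python
# Rough sketch of 'starting snapshot + incremental changes': a full dump
# establishes the snapshot, then a persistent-search-like feed delivers
# one changed entry at a time. The dict stands in for LDAP; names are
# made up and real entry/attribute handling is omitted.

def full_dump(ldap):
    """Starting snapshot: the complete view that DNSSEC ordering needs."""
    return {name: list(records) for name, records in ldap.items()}

def apply_change(snapshot, name, records):
    """One persistent-search notification = one changed entry."""
    if records:
        snapshot[name] = records       # entry added or modified
    else:
        snapshot.pop(name, None)       # entry deleted in LDAP

ldap = {"a.example.com": ["A 192.0.2.1"], "b.example.com": ["A 192.0.2.2"]}
view = full_dump(ldap)
apply_change(view, "b.example.com", [])                # deletion
apply_change(view, "c.example.com", ["A 192.0.2.3"])   # addition
```

Without the per-change feed, the only way to keep the complete ordered view correct would be to repeat the full dump.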

Ok, so it is to have a complete view of the database, I assume to reduce
the number of re-computations needed for DNSSEC.

> >> - Data from LDAP have to be dumped to memory (or to a file) before the
> >> server starts replying to queries.
> >> => This is not nice, but servers usually are not restarted often. IMHO it
> >> is a good compromise between complexity and performance.
> >
> > I am not sure I understand what this means. Does it mean you cannot change single
> > cache entries on the fly when a change happens in LDAP ? Or something else ?
> Sorry, I didn't explain this part in its full depth.
> 
> You can change everything at run-time, but there are small details which 
> complicate loading of the zone and run-time changes:
> 
> 1) A normal zone requires SOA + NS + A/AAAA records (for the NSes) to load. 
> It is (hypothetically) possible to create an empty zone, fill it with SOA, 
> NS and A records and then incrementally add the rest of the records.
> 
> The problem is that you need to re-implement the DNS resolution algorithm to 
> find which records you need at the beginning (SOA, NS, A/AAAA) and then load 
> the rest.
> 
> I would like to avoid this re-implementation. It is not possible to re-use 
> BIND's implementation because it is tied to the DB implementation ... but we 
> can't load the database because it is missing the SOA, NS and A/AAAA records. 
> A chicken-and-egg problem.

To be honest I am not sure I understand what your point is here.

> 2) The second reason why I want to make persistent search obligatory is that 
> each change in a DNSSEC-signed zone requires a lot of work, so it is not a 
> good idea to postpone that work until somebody asks for the particular record.
> 
> How it works without persistent search (now):
> 1) Query from a client is received by BIND
> 2) Internal cache is consulted
> 3) Record is not found in the cache - LDAP search is done
> 4) Fetched records are saved to the cache
> 5) Reply to client is constructed
> 
> It is hard to work in the same way when DNSSEC is in place. Each change 
> implies re-signing the particular RRset and its neighbours, i.e.:
> 1) Query from a client is received by BIND
> 2) Internal cache is consulted
> 3) Record is not found in the cache - LDAP search is done
> 4) Fetched records are saved to the cache
> * 4b) The new RRset is re-signed
> * 4c) Records neighbouring the new RR have to be updated and re-signed
> 5) Reply to client is constructed
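The extra work in steps 4b/4c can be sketched as follows; sign_rrset() is a stand-in for real RRSIG generation, and all names are made up:

```python
# Sketch of the extra work in steps 4b/4c: every cache fill now also has
# to (re)sign the RRset and fix up its neighbours. sign_rrset() is a
# stand-in for real RRSIG generation; the names are made up.

def sign_rrset(name, records):
    """Placeholder for real DNSSEC signing of one RRset."""
    return "RRSIG(%s)" % name

def neighbours(zone, name):
    """Predecessor and successor in canonical order (the NSEC chain)."""
    ordered = sorted(zone)
    i = ordered.index(name)
    return [ordered[i - 1], ordered[(i + 1) % len(ordered)]]

def cache_fill(cache, zone, name, records):
    cache[name] = {"rrs": records,
                   "rrsig": sign_rrset(name, records)}        # step 4b
    for n in neighbours(zone, name):                          # step 4c
        cache[n]["rrsig"] = sign_rrset(n, cache[n]["rrs"])

zone = ["a.example.com", "b.example.com", "c.example.com"]
cache = {n: {"rrs": [], "rrsig": None} for n in zone}
cache_fill(cache, zone, "b.example.com", ["A 192.0.2.5"])
```

Note that even this toy version has to touch three names for one change, which is why doing it lazily at query time is unattractive.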

Ok so the point here is that we want to do the signing at store time
rather than read time. That is understandable.
However we have 2 ways to look at it.
1. bind does the work
2. DS does the work

I haven't seen any reasoning from you why letting Bind do this work is
a better idea.
I actually see some security reasons why putting this into a DS plugin
could have quite some advantages instead. Have you considered doing this
work in a DS plugin at all ? If you have and have discarded the idea,
can you say why ?

> The point is that you *can* make changes at run-time, but you need to know 
> about the changes as soon as possible, because each change requires a 
> significant amount of work (and magic/mana :-).
> 
> It opens a lot of opportunities for race condition problems.

Yes, I am really concerned about the race conditions of course, however
I really wonder whether doing signing in bind is really a good idea.
We need to synchronize these signatures to all masters right ?
Doesn't that mean we need to store this data back in LDAP ?
That means more round-trips before the data ends up being usable, and we
do not have transactions in LDAP, so I am worried that doing the signing
in Bind may not be the best way to go.

> >> => It should be possible to save the old database to disk (during BIND
> >> shutdown or periodically) and re-use this old database during server
> >> startup. I.e. the server will start replying immediately from the 'old'
> >> database and then switch to the new database when the dump from LDAP is
> >> finished.
> >
> >
> > This looks like an advantage ? Why is it a disadvantage ?
> It was mentioned as a 'proposed remedy' for the disadvantage above.

I think having dual authoritative data sources may not be a good thing.

> >> => As a side effect, BIND can start even if the connection to the LDAP
> >> server is down - this can improve infrastructure resiliency a lot!
> >
> > Same as above ?
> The same here, it was mentioned as a 'proposed remedy' for the disadvantage above.

When it comes to DNSSEC, starting w/o LDAP may just mean that you have
different signatures for the same records on different masters. Is that
'legal' according to DNSSEC ?

> >> == Uncertain effects ==
> >> - Memory consumption will change, but I'm not sure in which direction.
> >> - SOA serial number maintenance is an open question.
> >
> > Why is the SOA serial a problem ?
> It simply needs more investigation. BIND's RBTDB maintains the SOA serial 
> internally (it is intertwined with transactions in the DB), so the write-back 
> to LDAP could be a very delicate operation.

It means all masters will often be out of sync, which is not very good.
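As a side note on why serial handling is delicate: SOA serials use RFC 1982 serial-number arithmetic, so 'newer' is a modular comparison, not a plain integer one. A write-back that loses an increment, or two masters bumping independently, can make comparisons ambiguous. A minimal sketch:

```python
# RFC 1982 serial-number arithmetic: 'newer' is a modular comparison,
# not a plain integer one, which is part of why careless write-back of
# serials between masters is risky.

def serial_gt(s1, s2, bits=32):
    """True if s1 is 'after' s2 in RFC 1982 serial arithmetic."""
    half = 2 ** (bits - 1)
    return s1 != s2 and ((s1 - s2) % (2 ** bits)) < half

serial_gt(2, 1)             # plain increment: s1 is newer
serial_gt(1, 2 ** 32 - 1)   # wrap-around: 1 is 'newer' than 4294967295
```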

> >> Decision if persistent search is a 'requirement' or not will have significant
> >> impact on the design, so I will write the design document when this decision
> >> is made.
> >
> > I would like to know more details about the reasons before I can usefully comment.
> 
> I forgot one other 'Uncertain effect':
> - Support for dynamically generated '_location' records will be a big 
> adventure. It probably means no change from the state without persistent 
> search :-) After basic exploration it seems doable, but still a bit uncertain.

I need more info here: does it mean you have to store _location records
when they are generated ? Maybe we can use the internal bind database
just for the _location "zone" ?

> My personal conclusion is that re-using BIND's backend will save a huge 
> amount of work, code to maintain, and bugs.

I can see that; unfortunately I fear it will make multi-master a lot
more difficult at the same time. And given that we do want multi-master
properties, we need to analyze that problem more carefully.

Also, by welding ourselves too much to internal Bind infrastructure, we
will make it a lot more difficult for us to change the DNS
infrastructure later. Bind10 will be completely different internally, and we
may simply decide not to use bind10 at all and use a completely
different engine going forward. So I am quite wary of welding ourselves
even more to bind 9 internals.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York



