[Freeipa-devel] DNSSEC support design considerations

Petr Spacek pspacek at redhat.com
Wed May 15 15:11:20 UTC 2013


On 15.5.2013 10:29, Simo Sorce wrote:
>> I investigated various scenarios for DNSSEC integration and I would like to
>> hear your opinions about the proposed approach and its effects.
>>
>>
>> The most important finding is that bind-dyndb-ldap can't support DNSSEC
>> without rewrite of the 'in-memory database' component.
>
> Can you elaborate why a rewrite would be needed? What constraint do we not meet?

We have three main problems - partially with data structures and mostly with 
the way we work with the 'internal database':

1) DNSSEC requires strict record ordering, i.e. each record in the database 
has to have a predecessor and a successor (ordering by name and then by record 
data). This can be done relatively simply, but it requires a full dump of the 
database.
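
For illustration, the canonical ordering DNSSEC relies on (RFC 4034, section 
6.1) compares owner names label by label, right to left, as case-folded 
strings. A minimal sketch in Python (the names are made up, and real code 
compares raw wire-format octets, not Python strings):

```python
# Sketch of DNSSEC canonical name ordering (RFC 4034, section 6.1):
# owner names are compared label by label, right to left, case-folded.
# A name with fewer labels sorts before any name that extends it.

def canonical_key(name: str):
    """Sort key: labels reversed and lowercased."""
    labels = name.rstrip(".").lower().split(".")
    return list(reversed(labels))

names = ["z.example.com", "a.example.com", "example.com", "www.example.com"]
ordered = sorted(names, key=canonical_key)
# 'example.com' sorts first (fewest labels), then a, www, z.
print(ordered)
```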

2) On-line record signing requires a lot of data stored 
per-record+per-signature. This would require a bigger effort than point 1), 
because many data structures and the respective APIs and locking protocols 
would have to be re-designed.

3) Our current 'internal database' acts as a 'cache', i.e. records can appear 
and disappear dynamically and the 'cache' is not considered an authoritative 
source of data: an LDAP search is conducted every time some data are not 
found, etc. The result is that the same data can disappear and then appear 
again in the cache.

Typical update scenario, with persistent search enabled:
a) A DNS UPDATE from a client is received by BIND
b) The new data are written to LDAP
c) The DN of the modified object is received via persistent search
d) All RRs under the *updated name* are discarded from the cache
<-- now the cache is not consistent with the data in LDAP
e) The object is fetched from LDAP by the plugin
<-- a query for the updated name will force an immediate cache refresh, 
because we know that the cache is not authoritative
f) All RRs in the object are updated (in the cache)
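
The steps above can be sketched as a toy model (plain dicts standing in for 
LDAP and the cache; none of this is the plugin's real API):

```python
# Toy model of the update scenario: the cache is not authoritative, so
# between the 'discard' and 'fetch' steps it is inconsistent with LDAP.

ldap = {"www.example.com": {"A": ["192.0.2.1"]}}   # authoritative store
cache = {"www.example.com": {"A": ["192.0.2.1"]}}  # non-authoritative cache

# a) + b) DNS UPDATE arrives and the new data are written to LDAP
ldap["www.example.com"]["A"] = ["192.0.2.2"]

# c) + d) persistent search reports the DN; all RRs under the name
#         are discarded from the cache
del cache["www.example.com"]
# <-- here the cache disagrees with LDAP: a query for the name would
#     miss and force an immediate LDAP lookup

# e) + f) the object is fetched from LDAP and the cache is refreshed
cache["www.example.com"] = dict(ldap["www.example.com"])

print(cache["www.example.com"]["A"])  # ['192.0.2.2']
```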

The problem is that the cache in the intermediate states (between the <-- 
marks) can't be used as an authoritative source and will produce incorrect 
signatures. The text below contains more details.

Databases in BIND have a concept of 'versions' ('transactions') which our 
internal cache does not implement ... It could be solved by proper locking, of 
course, but it would not be a piece of cake. We need to take care of many 
parallel updates, parallel queries and parallel re-signing at the same time.

I'm not saying that it is impossible to implement our own backend with the 
same properties as BIND's database, but I don't see the value (and I can see a 
lot of bugs :-).
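
A rough illustration of the 'versions' idea (just the principle, nothing like 
BIND's real rbtdb API, and with all real locking omitted): readers keep using 
a published snapshot while a writer prepares the next version and publishes it 
atomically, so no reader ever sees a half-applied change.

```python
# Principle of a versioned ('transactional') database: readers hold a
# snapshot that is immutable by convention; a writer copies it, modifies
# the copy, and publishes it with one atomic pointer swap.

class VersionedDB:
    def __init__(self):
        self.current = {}          # the published version

    def newversion(self):
        return dict(self.current)  # writer's private copy

    def closeversion(self, version):
        self.current = version     # atomic publish (pointer swap)

db = VersionedDB()
v = db.newversion()
v["www.example.com"] = {"A": ["192.0.2.1"], "RRSIG": ["(sig placeholder)"]}

reader_view = db.current           # a reader that started before the commit
db.closeversion(v)                 # commit

print(len(reader_view), len(db.current))  # 0 1
```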


>> Fortunately, it seems
>> that we can drop our own implementation of the internal DNS database
>> (ldap_driver.c and cache.c) and re-use the database from BIND (so called
>> RBTDB).
>>
>> I'm trying to reach Adam Tkac with the question "Why did we decide to
>> implement it again rather than re-use BIND's code?".
>>
>>
>> Re-using BIND's implementation will have the following properties:
>>
>>
>> == Advantages ==
>> - A big part of the DNSSEC implementation from BIND9 can be reused.
>> - The overall plugin implementation will be simpler - we can drop many
>> lines of our code and bugs.
>> - Run-time performance could be much, much better.
>>
>> - We will get implementation for these tickets "for free":
>> -- #95  wildcard CNAME does NOT work
>> -- #64 	IXFR support (IMHO this is important!)
>> -- #6 	Cache non-existing records
>>
>> And partially:
>> -- #7 	Allow limiting of the cache
>
> Sounds very interesting.
>
>
>> == Disadvantages ==
>> - Support for configurations without persistent search will complicate things
>> a lot.
>> -- Proposal => Make persistent search obligatory. OpenLDAP supports LDAP
>> SyncRepl, so it should be possible to make plugin compatible with 389 and
>> OpenLDAP at the same time. I would defer this to somebody from users/OpenLDAP
>> community.
>
> Why would persistent search be required?
As I mentioned above - you need a database dump, because DNSSEC requires 
strict name and record ordering.

It is possible to do incremental changes once the 'starting snapshot' is 
established, but that means we need information about each particular 
change => that is what persistent search provides.
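
Schematically (a sketch of the idea only, not the plugin's code): build the 
full snapshot once, then apply each change reported by the persistent search 
incrementally.

```python
# Schematic: full snapshot first, then incremental updates driven by
# change notifications (what persistent search / SyncRepl provides).

def full_dump(ldap_data):
    """Initial snapshot: dump everything."""
    return {name: dict(rrs) for name, rrs in ldap_data.items()}

def apply_change(snapshot, name, rrs):
    """Apply one notified change; rrs=None means the entry was deleted."""
    if rrs is None:
        snapshot.pop(name, None)
    else:
        snapshot[name] = dict(rrs)

ldap_data = {"example.com": {"SOA": ["(soa rdata)"],
                             "NS": ["ns1.example.com"]}}
db = full_dump(ldap_data)

# persistent search reports an added name, then its deletion
apply_change(db, "www.example.com", {"A": ["192.0.2.1"]})
apply_change(db, "www.example.com", None)
print(sorted(db))  # ['example.com']
```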

>> - Data from LDAP have to be dumped to memory (or to a file) before the
>> server will start replying to queries.
>> => This is not nice, but servers usually are not restarted often. IMHO it
>> is a good compromise between complexity and performance.
>
> I am not sure I understand what this means. Does it mean you cannot change single
> cache entries on the fly when a change happens in LDAP ? Or something else ?
Sorry, I didn't explain this part in its full depth.

You can change everything at run time, but there are small details which 
complicate loading of the zone and run-time changes:

1) A normal zone requires SOA + NS + A/AAAA records (for the NSs) to load. It 
is (hypothetically) possible to create an empty zone, fill it with the SOA, NS 
and A records and then incrementally add the rest of the records.

The problem is that you need to re-implement the DNS resolution algorithm to 
find which records you need at the beginning (SOA, NS, A/AAAA) and then load 
the rest.

I would like to avoid this re-implementation. It is not possible to re-use 
BIND's implementation because it is tied to the DB implementation ... but we 
can't load the database because it is missing the SOA, NS and A/AAAA records. 
A chicken-and-egg problem.
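
The precondition itself can be sketched like this (the check only, not BIND's 
loader; names and the helper are made up for illustration): a zone is loadable 
once the SOA, the NS set, and glue addresses for in-zone name servers are all 
present.

```python
# Sketch: a zone is loadable only when it has a SOA, at least one NS,
# and an A/AAAA record (glue) for every in-zone name server.

def zone_loadable(zone, records):
    apex = records.get(zone, {})
    if "SOA" not in apex or not apex.get("NS"):
        return False
    for ns in apex["NS"]:
        if ns == zone or ns.endswith("." + zone):  # in-zone NS needs glue
            rrs = records.get(ns, {})
            if not (rrs.get("A") or rrs.get("AAAA")):
                return False
    return True

records = {
    "example.com": {"SOA": ["(soa rdata)"], "NS": ["ns1.example.com"]},
}
print(zone_loadable("example.com", records))  # False: glue is missing
records["ns1.example.com"] = {"A": ["192.0.2.53"]}
print(zone_loadable("example.com", records))  # True
```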


2) The second reason why I want to make persistent search obligatory is that 
each change in a DNSSEC-signed zone requires a lot of work, so it is not a 
good idea to postpone that work until somebody asks for the particular record.

How it works without persistent search (now):
1) A query from a client is received by BIND
2) The internal cache is consulted
3) The record is not found in the cache - an LDAP search is done
4) The fetched records are saved to the cache
5) A reply to the client is constructed

It is hard to work the same way when DNSSEC is in place. Each change implies 
re-signing of the particular RRset and its neighbours, i.e.:
1) A query from a client is received by BIND
2) The internal cache is consulted
3) The record is not found in the cache - an LDAP search is done
4) The fetched records are saved to the cache
* 4b) The new RRset is re-signed
* 4c) Records neighbouring the new RR have to be updated and re-signed
5) A reply to the client is constructed
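
Sketched as code (a toy comparison of the two flows; sign() is a stand-in 
string formatter, not real DNSSEC crypto, and the function names are made up):

```python
# Toy comparison: with DNSSEC, a cache miss does not just store the
# fetched RRset - it also has to (re-)sign it and update the neighbours'
# chain. sign() is a placeholder, not real cryptography.

def sign(name, rrset):
    return "RRSIG(%s/%s)" % (name, ",".join(sorted(rrset)))

def cache_miss_plain(cache, ldap, name):
    cache[name] = dict(ldap[name])                 # step 4 and done

def cache_miss_dnssec(cache, ldap, name):
    rrset = dict(ldap[name])
    rrset["RRSIG"] = sign(name, rrset)             # step 4b: re-sign
    cache[name] = rrset
    # step 4c: neighbouring records (e.g. the NSEC chain) need updating
    # and re-signing too; omitted here, but this is where most of the
    # extra work - and the race-condition potential - lives.

ldap = {"www.example.com": {"A": ["192.0.2.1"]}}
cache = {}
cache_miss_dnssec(cache, ldap, "www.example.com")
print(cache["www.example.com"]["RRSIG"])
```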

The point is that you *can* do changes at run time, but you need to know about 
the changes as soon as possible because each change requires a significant 
amount of work (and magic/mana :-).

It opens a lot of opportunities for race conditions.

>> => It should be possible to save the old database to disk (during BIND
>> shutdown or periodically) and re-use this old database during server
>> startup. I.e. the server will start replying immediately from the 'old'
>> database and then switch to the new database when the dump from LDAP is
>> finished.
>
>
> This looks like an advantage? Why is it a disadvantage?
It was mentioned as a 'proposed remedy' for the disadvantage above.

>> => As a side effect, BIND can start even if connection to LDAP server is down
>> - this can improve infrastructure resiliency a lot!
>
> Same as above ?
The same here - it was mentioned as a 'proposed remedy' for the disadvantage 
above.

>> == Uncertain effects ==
>> - Memory consumption will change, but I'm not sure in which direction.
>> - SOA serial number maintenance is an open question.
>
> Why SOA serial is a problem ?
It simply needs more investigation. BIND's RBTDB maintains the SOA serial 
internally (it is intertwined with transactions in the DB), so the write-back 
to LDAP could be a very delicate operation.

>> The decision whether persistent search is a 'requirement' or not will have
>> a significant impact on the design, so I will write the design document
>> once this decision is made.
>
> I would like to know more details about the reasons before I can usefully comment.

I forgot one more 'Uncertain effect':
- Support for dynamically generated '_location' records will be a big 
adventure. It probably means no change from the state without persistent 
search :-) After basic exploration it seems doable, but still a bit uncertain.


My personal conclusion is that re-using BIND's backend will save a huge amount 
of work, code to maintain, and bugs.

-- 
Petr^2 Spacek



