[Freeipa-devel] Move replication topology to the shared tree
Rich Megginson
rmeggins at redhat.com
Mon Jun 2 18:02:48 UTC 2014
On 06/02/2014 02:46 AM, Ludwig Krispenz wrote:
> Ticket 4302 is a request for an enhancement: Move replication topology
> to the shared tree
>
>
> There has been some discussion in comments in the ticket, but I'd like
> to open the discussion to a wider audience to get an agreement on what
> should be implemented, before writing a design spec.
>
> The implementation requires a new IPA plugin for 389 DS and possibly
> an enhancement of the 389 replication plugin (depending on some
> decisions below). In the following I will use the term “topology
> plugin” for the new plugin and “replication plugin” for the existing
> 389 multimaster replication plugin.
>
>
> Let's start with the requirements: what should be achieved by this RFE?
>
> In my opinion there are three different levels of features to
> implement for this request:
>
> - providing all replication configuration information consistently on
> all deployed servers, e.g. to easily visualize the replication
> topology.
>
> - allowing sanity checks on the replication configuration, denying
> modifications that would break the replication topology, or issuing
> warnings.
>
> - using the information in the shared tree to trigger changes to the
> replication configuration on the corresponding servers; this means
> allowing the replication configuration to be controlled completely
> through modifications of entries in the shared tree.
>
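The sanity-check idea in the second point could be sketched as a plain graph-connectivity test over the replicas and their agreements. The function name and data shapes below are illustrative only, not a proposed API; the real plugin would read agreement entries from the shared tree.

```python
from collections import defaultdict, deque

def is_connected(replicas, agreements):
    """Return True if every replica can reach every other one via the
    replication agreements, treated here as undirected links for a
    basic reachability check."""
    if not replicas:
        return True
    graph = defaultdict(set)
    for origin, target in agreements:
        graph[origin].add(target)
        graph[target].add(origin)
    # Breadth-first search from an arbitrary replica.
    seen = {next(iter(replicas))}
    queue = deque(seen)
    while queue:
        node = queue.popleft()
        for peer in graph[node] - seen:
            seen.add(peer)
            queue.append(peer)
    return seen == set(replicas)

# A ring of three masters is connected; dropping two of the three
# agreements splits the topology, so the mod should be denied.
ring = [("A", "B"), ("B", "C"), ("C", "A")]
print(is_connected({"A", "B", "C"}, ring))      # True
print(is_connected({"A", "B", "C"}, ring[:1]))  # False
```

A check like this is what makes the independent list of all deployed replicas necessary: the agreements alone cannot reveal a server that has no agreements at all.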
>
> The main questions are
>
> 1] which information is needed in the shared tree (e.g. which
> parameters of the replication config should be modifiable)?
>
> 2] how is the information organized and stored (layout of the
> replication config information in the shared tree)?
>
> 3] how do the information in the shared tree and the configuration in
> cn=config interact, and how do the topology plugin and the
> replication plugin interact?
I apologize that I have not yet finished reading through all of this
thread and the comments/replies, so perhaps my following comment is out
of line:
Why not (selectively) replicate cn=config? We keep moving more and more
stuff out of cn=config and into the main tree (dna, automember, etc.),
to work around the problem that data underneath cn=config is not
replicated. We already have customers who have asked for things like
database configuration, index configuration, suffix configuration, and
many other configurations, to be replicated. And, for a bonus, if we do
this right, we might be able to leverage this work to do "real" schema
replication.
I will note that OpenLDAP syncrepl does allow cn=config to be replicated.
>
>
> ad 1] To verify the topology, connectivity information about all
> existing replication agreements is needed. A replication agreement
> only contains information about the target and the parameters for the
> connection to the target, but not about the origin. If the data are
> to be evaluated on any server, information about the origin has to be
> added, e.g. replicaID, serverID, ...
>
> In addition, if the agreement config has to be changed based on the
> shared tree, all required parameters need to be present, e.g.
> replicatedAttributeList, strippedAttrs, replicationEnabled, ...
>
> Replication agreements only provide information on connections where
> replication is configured; if connectivity is to be checked,
> independent information about all deployed servers/replicas is needed.
>
> If the topology should be validated, do we need parameters defining
> requirements, e.g. that each replica be connected to 1, 2, 3, ...
> others, or the type of topology (ring, mesh, star, ...)?
>
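To illustrate, the per-agreement data listed above, with the origin information added, might look roughly like this. The record and attribute names are invented for illustration, not a proposed schema.

```python
from dataclasses import dataclass, field

@dataclass
class ReplicaSegment:
    # Origin information that plain replication agreements lack.
    origin_server: str
    origin_replica_id: int
    # Target/connection information already present in the agreement.
    target_server: str
    # Parameters needed if agreements are to be rebuilt from the
    # shared tree.
    replication_enabled: bool = True
    replicated_attribute_list: list = field(default_factory=list)
    stripped_attrs: list = field(default_factory=list)

seg = ReplicaSegment("master1.example.com", 4, "master2.example.com",
                     stripped_attrs=["modifiersName", "modifyTimestamp"])
print(seg.origin_replica_id)  # 4
```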
>
> ad 2] The required data are available in the replicationAgreement
> (and possibly replica) entries, but the question is whether there
> should be a 1:1 relationship to entries in the shared tree or a
> condensed representation, and whether there should be a server- or
> connection-oriented view.
>
> In my opinion a 1:1 relation is straightforward, easy to handle and
> easy to extend (not all the data of a replication agreement need to
> be present, and other attributes are possible). The downside may be a
> larger number of entries, but this is no problem for the directory
> server and replication, and the utilities, e.g. to visualize a
> topology, will handle this.
>
> If the number of entries should be reduced, information on multiple
> replication agreements would have to be stored in one entry, and the
> problem arises how to group the data belonging to one agreement. LDAP
> does not provide a simple way to group attribute values within one
> entry, so all the information related to one agreement (origin,
> target, replicated attributes and other replication configuration)
> would have to be packed into a single attribute value, which would
> make the attribute about as readable and manageable as ACIs.
>
> If topology verification and connectivity checking are an integral
> part of the feature, I think a connection-oriented view is not
> sufficient, as it might be incomplete; a server view is required, and
> the server entry would then have the connection information as
> subentries or as attributes.
>
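A rough sketch of such a 1:1 mapping: copy a subset of the real cn=config agreement attributes (nsDS5ReplicaHost etc.) into a subentry of the origin server's entry. The cn=topology container, the originServer attribute, and the DN layout are assumptions made up for this example.

```python
def to_shared_tree_entry(origin_server, agreement):
    """Map a cn=config replication agreement (given as a dict of
    attributes) to a 1:1 shared-tree representation stored below the
    origin server's topology entry."""
    shared_attrs = ("nsDS5ReplicaHost", "nsDS5ReplicaPort",
                    "nsDS5ReplicatedAttributeList", "nsds5ReplicaStripAttrs")
    entry = {attr: agreement[attr] for attr in shared_attrs
             if attr in agreement}
    entry["originServer"] = origin_server  # hypothetical addition
    dn = ("cn=%s,cn=%s,cn=topology,cn=etc,dc=example,dc=com"
          % (agreement["cn"], origin_server))
    return dn, entry

dn, entry = to_shared_tree_entry("master1", {
    "cn": "meTomaster2",
    "nsDS5ReplicaHost": "master2.example.com",
    "nsDS5ReplicaPort": "389",
})
print(dn)
```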
>
> ad 3] The replication configuration is stored under cn=config and can
> be modified either by LDAP operations or by editing the dse.ldif.
> With the topology plugin, another source of configuration changes
> comes into play.
>
> The first question is: which information has precedence? I think that
> if there is information in the shared tree it should be used, and the
> information in cn=config should be updated to match. This also means
> that the topology plugin needs to intercept all mods to the entries
> in cn=config and have them ignored, and to handle all updates to the
> shared tree and trigger changes to the cn=config entries, which then
> would trigger rebuilds of the in-memory replication objects.
>
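As a sketch of that precedence rule: the real implementation would live in a 389 DS pre-operation plugin callback, but the decision itself is simple. The function name, return values and DN suffix check below are invented for illustration.

```python
REPL_CONFIG_SUFFIX = "cn=mapping tree,cn=config"

def preop_modify(target_dn, shared_tree_has_entry):
    """Decide what to do with an incoming MOD: direct edits to the
    replication entries under cn=config are refused once the shared
    tree is authoritative for them; edits to the shared tree are
    accepted and later pushed down into cn=config by the topology
    plugin."""
    if (target_dn.endswith(REPL_CONFIG_SUFFIX)
            and shared_tree_has_entry(target_dn)):
        return "REJECT"  # the shared tree has precedence
    return "ALLOW"

agmt_dn = "cn=meTomaster2,cn=replica,cn=mapping tree,cn=config"
print(preop_modify(agmt_dn, lambda dn: True))   # REJECT
print(preop_modify(agmt_dn, lambda dn: False))  # ALLOW
```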
> Next question: how to handle changes made directly in the dse.ldif?
> If everything should be done by the topology plugin, it would have to
> verify and compare the information in cn=config and in the shared
> tree at every startup of the directory server, which might be
> complicated by the fact that the replication plugin might already be
> started, and replication agreements active, before the topology
> plugin is started and could do its work (plugin startup order and
> dependencies need to be checked).
>
> Next next question: should there be a “bootstrapping” of the config
> information in the shared tree?
>
> I think yes: the topology plugin could check at startup whether there
> is a representation of the config information in the shared tree and,
> if not, construct it, so that after deployment and enabling of the
> topology plugin the information in the shared tree would be
> initialized.
>
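The startup behaviour from the last two paragraphs, seeding the shared tree on first start and otherwise flagging dse.ldif drift, could be sketched as follows, with both configurations reduced to plain dictionaries for illustration.

```python
def bootstrap_or_verify(shared, local):
    """At topology-plugin startup: seed the shared tree from the
    local cn=config agreements if it is still empty, otherwise return
    the DNs where direct dse.ldif edits diverged from the shared
    tree (which has precedence)."""
    if not shared:
        # Bootstrap: copy the local config into the shared tree.
        shared.update({dn: dict(attrs) for dn, attrs in local.items()})
        return []
    return [dn for dn in shared if local.get(dn) != shared[dn]]

shared, local = {}, {"cn=meTomaster2": {"nsDS5ReplicaHost": "master2"}}
print(bootstrap_or_verify(shared, local))  # [] (seeded)
local["cn=meTomaster2"]["nsDS5ReplicaHost"] = "edited-by-hand"
print(bootstrap_or_verify(shared, local))  # ['cn=meTomaster2']
```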
>
> I think that not every part of the feature has to be handled in the
> topology plugin; we could also ask for enhancements in the 389
> replication plugin itself. There could be an extension to the replica
> and replication agreement entries to reference an entry in the shared
> tree. The replication plugin could check at startup whether these
> entries contain replication configuration attributes and, if so, use
> them, otherwise use the values in cn=config. The presence of the
> reference indicates to the topology plugin that initialization is
> done.
>
> In my opinion this would simplify the coordination at startup and
> avoid unnecessary re-evaluations, and other deployments could benefit
> from this new feature in the directory server (one could e.g. have
> one entry for replication agreements containing the fractional
> replication configuration – and it would be identical on all servers).
>
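A minimal sketch of that fallback behaviour, assuming the referenced shared-tree entry has been fetched into a dictionary (or is None when the reference is absent); the function name is hypothetical.

```python
def effective_config(cn_config_entry, shared_entry):
    """Sketch of the proposed replication-plugin behaviour: if the
    agreement references a shared-tree entry carrying replication
    configuration attributes, those win; otherwise the values from
    cn=config are used unchanged."""
    merged = dict(cn_config_entry)
    if shared_entry:
        merged.update(shared_entry)  # shared tree takes precedence
    return merged

base = {"nsDS5ReplicaHost": "master2.example.com",
        "nsds5ReplicaStripAttrs": "modifiersName"}
shared = {"nsds5ReplicaStripAttrs": "modifiersName modifyTimestamp"}
print(effective_config(base, shared)["nsds5ReplicaStripAttrs"])
print(effective_config(base, None) == base)  # True
```

This is also where the "one shared entry for the fractional configuration" idea pays off: every server merging the same shared entry ends up with identical effective settings.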
>
> So my proposal would contain the following components
>
> 1] Store the replication configuration in the shared tree in a
> combination of server and connection view (I think we need both) and
> map the replication configuration to these entries. I would prefer a
> direct mapping (with a subset of the cn=config attributes and the
> required additions).
>
> 2] Provide a topology plugin to do consistency checks and topology
> verification, handle updates to trigger modification changes in
> cn=config, and intercept and reject direct mods to cn=config entries.
> At startup, verify whether the shared tree objects are present,
> initialize them if not, and apply them to cn=config if required.
>
> 3] Enhance the replication plugin to handle config information in the
> shared tree. This would allow config changes to be handled
> consistently, whether applied to the shared config, via cn=config
> mods, or by dse.ldif changes. This feature might also be interesting
> for other DS deployments.
>
> _______________________________________________
> Freeipa-devel mailing list
> Freeipa-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/freeipa-devel