<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<br>
<div class="moz-cite-prefix">On 03/08/2017 11:39 AM, Christopher
Young wrote:<br>
</div>
<blockquote
cite="mid:CAC1p532GyUHeRU3s-Bkrcba_U2PUC6ynwktttTREbXKV3jTAcg@mail.gmail.com"
type="cite">
<div dir="auto">My replication scheme has things like so:<br>
<br>
orldc-prod-ipa01 <--> orldc-prod-ipa02 <-->
bohdc-prod-ipa01<br>
<br>
I had run re-initialize on orldc-prod-ipa02 (--from
orldc-prod-ipa01) AND<br>
re-initialize on bohdc-prod-ipa01 (--from orldc-prod-ipa02).<br>
</div>
</blockquote>
Yeah, if you still see "The remote replica has a different database
generation ID than the local database" messages, then the replicas are
out of sync.<br>
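You can also confirm the generation ID mismatch directly before re-exporting anything. This is a sketch using plain DS tooling, not IPA commands: each 389-ds replica stores its RUV in a tombstone entry whose "{replicageneration}" value carries the generation ID. The base DN dc=passur,dc=local is assumed from the hostnames (adjust to the backend you are checking; the agreements erroring in this thread reference o=ipaca), and the two ID values below are placeholder samples, not real output.<br>

```shell
# Query the RUV tombstone on each server (sketch; run once per replica):
#   ldapsearch -x -D "cn=Directory Manager" -W -b "dc=passur,dc=local" \
#     '(&(nsuniqueid=ffffffff-ffffffff-ffffffff-ffffffff)(objectclass=nstombstone))' \
#     nsds50ruv

# Pull the generation ID out of the ldapsearch output:
extract_genid() {
  grep '{replicageneration}' | head -n 1 | awk '{print $NF}'
}

# Placeholder output as if captured from two servers (sample values):
ruv_a='nsds50ruv: {replicageneration} 58bdf8f5000000040000'
ruv_b='nsds50ruv: {replicageneration} 58c1a2b3000000040000'

ga=$(printf '%s\n' "$ruv_a" | extract_genid)
gb=$(printf '%s\n' "$ruv_b" | extract_genid)

if [ "$ga" != "$gb" ]; then
  echo "generation IDs differ: reinit required"
fi
```

Run the query on all three servers and compare the IDs before deciding which replica to export from.<br>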
<br>
Okay, I'm not an IPA guy, just DS. So let's do this the DS way...<br>
<br>
Export the replica database from orldc-prod-ipa01:<br>
<br>
# stop-dirsrv<br>
# db2ldif -r -n userroot -a /tmp/replica.ldif<br>
<br>
Copy this LDIF to the other two servers and import it on each:<br>
<br>
# stop-dirsrv<br>
# ldif2db -n userroot -i /tmp/replica.ldif<br>
<br>
** If you get permission errors, temporarily disable SELinux and
re-enable it after all the exports/imports are complete.<br>
<br>
Once this is done, go back and start all the servers: start-dirsrv<br>
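The steps above can be sketched as one script. This is a dry-run outline: commands are echoed, not executed. The hostnames and the /tmp/replica.ldif path come from this thread; copying via scp and running everything as root over ssh are assumptions to adapt to your setup.<br>

```shell
# Dry-run sketch of the export/copy/import cycle described above.
# Swap run() to actually execute (e.g. via ssh) once the plan looks right.
GOOD=orldc-prod-ipa01                      # replica judged most up to date
OTHERS="orldc-prod-ipa02 bohdc-prod-ipa01"
LDIF=/tmp/replica.ldif
CMDS=""

run() {
  echo "+ $*"
  CMDS="$CMDS$*
"
}

# 1. Export from the good replica (-r keeps the replication metadata)
run "$GOOD" stop-dirsrv
run "$GOOD" db2ldif -r -n userroot -a "$LDIF"

# 2. Copy the LDIF out and import it on the other replicas
for h in $OTHERS; do
  run scp "$GOOD:$LDIF" "$h:$LDIF"
  run "$h" stop-dirsrv
  run "$h" ldif2db -n userroot -i "$LDIF"
done

# 3. Start every server again once all imports are done
for h in $GOOD $OTHERS; do
  run "$h" start-dirsrv
done
```

Reviewing the echoed command list before switching run() over to real execution keeps a typo from taking down the wrong instance.<br>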
<br>
Done!<br>
<br>
<br>
<br>
<blockquote
cite="mid:CAC1p532GyUHeRU3s-Bkrcba_U2PUC6ynwktttTREbXKV3jTAcg@mail.gmail.com"
type="cite">
<div dir="auto"><br>
That is where i'm currently at with the same errors.<br>
<br>
Any additional thoughts beyond just destroying
'orldc-prod-ipa02' and bohdc-prod-ipa01 and re-installing them
as new replicas?<br>
<br>
As always, many thanks.<br>
<br>
On Tue, Mar 7, 2017 at 7:40 PM, Mark Reynolds <<a
moz-do-not-send="true" href="mailto:mareynol@redhat.com">mareynol@redhat.com</a>>
wrote:<br>
><br>
><br>
> On 03/07/2017 06:08 PM, Christopher Young wrote:<br>
>> I had attempted to do _just_ a re-initialize on
orldc-prod-ipa02<br>
>> (using --from orldc-prod-ipa01), but after it
completes, I still end<br>
>> up with the same errors. What would be my next course
of action?<br>
> I have no idea. Sounds like the reinit did not work if you
keep getting<br>
> the same errors, or you reinited the wrong server (or the
wrong<br>
> direction) Remember you have to reinit ALL the replicas -
not just one.<br>
>><br>
>> To clarify the error(s) on orldc-prod-ipa01 are:<br>
>> -----<br>
>> Mar 7 18:04:53 orldc-prod-ipa01 ns-slapd:<br>
>> [07/Mar/2017:18:04:53.549127059 -0500]
NSMMReplicationPlugin -<br>
>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>> (orldc-prod-ipa02:389): The remote replica has a
different database<br>
>> generation ID than the local database. You may have to
reinitialize<br>
>> the remote replica, or the local replica.<br>
>> ....<br>
>> -----<br>
>><br>
>><br>
>> On orldc-prod-ipa02, I get:<br>
>> -----<br>
>> Mar 7 18:06:00 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:00.290853165 -0500]
NSMMReplicationPlugin -<br>
>>
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>> (orldc-prod-ipa01:389): The remote replica has a
different database<br>
>> generation ID than the local database. You may have to
reinitialize<br>
>> the remote replica, or the local replica.<br>
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:01.715691409 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:01.720475590 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> Mar 7 18:06:01 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:01.728588145 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> Mar 7 18:06:04 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:04.286539164 -0500]
NSMMReplicationPlugin -<br>
>>
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>> (orldc-prod-ipa01:389): The remote replica has a
different database<br>
>> generation ID than the local database. You may have to
reinitialize<br>
>> the remote replica, or the local replica.<br>
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:05.328239468 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:05.330429534 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> Mar 7 18:06:05 orldc-prod-ipa02 ns-slapd:<br>
>> [07/Mar/2017:18:06:05.333270479 -0500] attrlist_replace
- attr_replace<br>
>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>> failed.<br>
>> -----<br>
>><br>
>><br>
>> I'm trying to figure out what my next step(s) would be
in this<br>
>> situation. Would that be to completely remove those
systems as<br>
>> replicas (orldc-prod-ipa02 and bohdc-prod-ipa01) and
then completely<br>
>> recreate the replicas?<br>
>><br>
>> I appreciate all the responses. I'm still trying to
figure out what<br>
>> options to use for db2ldif, but I'm looking that up to
at least try<br>
>> and look at the DBs.<br>
>><br>
>> Thanks,<br>
>><br>
>> Chris<br>
>><br>
>> On Tue, Mar 7, 2017 at 4:23 PM, Mark Reynolds <<a
moz-do-not-send="true" href="mailto:mareynol@redhat.com">mareynol@redhat.com</a>>
wrote:<br>
>>><br>
>>> On 03/07/2017 11:29 AM, Christopher Young wrote:<br>
>>>> Thank you very much for the response!<br>
>>>><br>
>>>> To start:<br>
>>>> ----<br>
>>>> [root@orldc-prod-ipa01 ~]# rpm -qa 389-ds-base<br>
>>>> 389-ds-base-1.3.5.10-18.el7_3.x86_64<br>
>>>> ----<br>
>>> You are on the latest version with the latest
replication fixes.<br>
>>>> So, I believe a good part of my problem is that
I'm not _positive_<br>
>>>> which replica is good at this point (though my
directory really isn't<br>
>>>> that huge).<br>
>>>><br>
>>>> Do you have any pointers on a good method of
comparing the directory<br>
>>>> data between them? I was wondering if anyone
knows of any tools to<br>
>>>> facilitate that. I was thinking that it might
make sense for me to<br>
>>>> dump the DB and restore, but I really don't
know that procedure. As I<br>
>>>> mentioned, my directory really isn't that large
at all, however I'm<br>
>>>> not positive the best bullet-item listed method
to proceed. (I know<br>
>>>> I'm not helping things :) )<br>
>>> Heh, well only you know what your data should be.
You can always do a<br>
>>> <a moz-do-not-send="true" href="http://db2ldif.pl">db2ldif.pl</a>
on each server and compare the ldif files that are<br>
>>> generated. Then pick the one you think is the most
up to date.<br>
>>><br>
>>> <a moz-do-not-send="true"
href="https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif">https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/10/html/Administration_Guide/Populating_Directory_Databases-Exporting_Data.html#Exporting-db2ldif</a><br>
>>><br>
>>> Once you decide on a server, then you need to
reinitialize all the other<br>
>>> servers/replicas from the "good" one. Use "
ipa-replica-manage<br>
>>> re-initialize" for this.<br>
>>><br>
>>> <a moz-do-not-send="true"
href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/ipa-replica-manage.html#initialize">https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/ipa-replica-manage.html#initialize</a><br>
>>><br>
>>> That's it.<br>
>>><br>
>>> Good luck,<br>
>>> Mark<br>
>>><br>
>>>> Would it be acceptable to just 'assume' one of
the replicas is good<br>
>>>> (taking the risk of whatever missing pieces
I'll have to deal with),<br>
>>>> completely removing the others, and then
rebuilding the replicas from<br>
>>>> scratch?<br>
>>>><br>
>>>> If I go that route, what are the potential
pitfalls?<br>
>>>><br>
>>>><br>
>>>> I want to decide on an approach and try and
resolve this once and for all.<br>
>>>><br>
>>>> Thanks again! It really is appreciated as I've
been frustrated with<br>
>>>> this for a while now.<br>
>>>><br>
>>>> -- Chris<br>
>>>><br>
>>>> On Tue, Mar 7, 2017 at 8:45 AM, Mark Reynolds
<<a moz-do-not-send="true" href="mailto:mareynol@redhat.com">mareynol@redhat.com</a>>
wrote:<br>
>>>>> What version of 389-ds-base are you using?<br>
>>>>><br>
>>>>> rpm -qa | grep 389-ds-base<br>
>>>>><br>
>>>>><br>
>>>>> comments below..<br>
>>>>><br>
>>>>> On 03/06/2017 02:37 PM, Christopher Young
wrote:<br>
>>>>><br>
>>>>> I've seen similar posts, but in the
interest of asking fresh and<br>
>>>>> trying to understand what is going on, I
thought I would ask for<br>
>>>>> advice on how best to handle this
situation.<br>
>>>>><br>
>>>>> In the interest of providing some history:<br>
>>>>> I have three (3) FreeIPA servers.
Everything is running 4.4.0 now.<br>
>>>>> The originals (orldc-prod-ipa01,
orldc-prod-ipa02) were upgraded from<br>
>>>>> the 3.x branch quite a while back.
Everything had been working fine,<br>
>>>>> however I ran into a replication issue
(that I _think_ may have been a<br>
>>>>> result of IPv6 being disabled by my default
Ansible roles). I thought<br>
>>>>> I had resolved that by reinitializing the
2nd replica,<br>
>>>>> orldc-prod-ipa02.<br>
>>>>><br>
>>>>> In any case, I feel like the replication
has never been fully stable<br>
>>>>> since then, and I have all types of errors
in messages that indicate<br>
>>>>> something is off. I had single introduced a
3rd replica such that the<br>
>>>>> agreements would look like so:<br>
>>>>><br>
>>>>> orldc-prod-ipa01 -> orldc-prod-ipa02
---> bohdc-prod-ipa01<br>
>>>>><br>
>>>>> It feels like orldc-prod-ipa02 &
bohdc-prod-ipa01 are out of sync.<br>
>>>>> I've tried reinitializing them in order but
with no positive results.<br>
>>>>> At this point, I feel like I'm ready to
'bite the bullet' and tear<br>
>>>>> them down quickly (remove them from IPA,
delete the local<br>
>>>>> DBs/directories) and rebuild them from
scratch.<br>
>>>>><br>
>>>>> I want to minimize my impact as much as
possible (which I can somewhat<br>
>>>>> do by redirecting LDAP/DNS request via my
load-balancers temporarily)<br>
>>>>> and do this right.<br>
>>>>><br>
>>>>> (Getting to the point...)<br>
>>>>><br>
>>>>> I'd like advice on the order of operations
to do this. Give the<br>
>>>>> errors (I'll include samples at the bottom
of this message), does it<br>
>>>>> make sense for me to remove the replicas on
bohdc-prod-ipa01 &<br>
>>>>> orldc-prod-ipa02 (in that order), wipe out
any directories/residual<br>
>>>>> pieces (I'd need some idea of what to do
there), and then create new<br>
>>>>> replicas? -OR- Should I export/backup the
LDAP DB and rebuild<br>
>>>>> everything from scratch.<br>
>>>>><br>
>>>>> I need advice and ideas. Furthermore, if
there is someone with<br>
>>>>> experience in this that would be interested
in making a little money<br>
>>>>> on the side, let me know, because having an
extra brain and set of<br>
>>>>> hands would be welcome.<br>
>>>>><br>
>>>>> DETAILS:<br>
>>>>> =================<br>
>>>>><br>
>>>>><br>
>>>>> ERRORS I see on orldc-prod-ipa01 (the one
whose LDAP DB seems the most<br>
>>>>> up-to-date since my changes are usually
directed at it):<br>
>>>>> ------<br>
>>>>> Mar 6 14:36:24 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:24.434956575 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:25 orldc-prod-ipa01
ipa-dnskeysyncd: ipa : INFO<br>
>>>>> LDAP bind...<br>
>>>>> Mar 6 14:36:25 orldc-prod-ipa01
ipa-dnskeysyncd: ipa : INFO<br>
>>>>> Commencing sync process<br>
>>>>> Mar 6 14:36:26 orldc-prod-ipa01
ipa-dnskeysyncd:<br>
>>>>> ipa.ipapython.dnssec.keysyncer.KeySyncer:
INFO Initial LDAP dump<br>
>>>>> is done, sychronizing with ODS and BIND<br>
>>>>> Mar 6 14:36:27 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:27.799519203 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:30 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:30.994760069 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:34 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:34.940115481 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:35 orldc-prod-ipa01
named-pkcs11[32134]: client<br>
>>>>> 10.26.250.66#49635 (56.10.in-addr.arpa):
transfer of<br>
>>>>> '56.10.in-addr.arpa/IN': AXFR-style IXFR
started<br>
>>>>> Mar 6 14:36:35 orldc-prod-ipa01
named-pkcs11[32134]: client<br>
>>>>> 10.26.250.66#49635 (56.10.in-addr.arpa):
transfer of<br>
>>>>> '56.10.in-addr.arpa/IN': AXFR-style IXFR
ended<br>
>>>>> Mar 6 14:36:37 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:37.977875463 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:40 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:40.999275184 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:36:45 orldc-prod-ipa01 ns-slapd:<br>
>>>>> [06/Mar/2017:14:36:45.211260414 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=cloneAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa02:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> ------<br>
>>>>><br>
>>>>> These messages indicate that the replica
does not have the same database as<br>
>>>>> the master. So either the master or the
replica needs to be reinitialized.<br>
>>>>> More on this below...<br>
>>>>><br>
>>>>><br>
>>>>> Errors on orldc-prod-ipa02:<br>
>>>>> ------<br>
>>>>> r 6 14:16:04 orldc-prod-ipa02
ipa-dnskeysyncd: ipa : INFO<br>
>>>>> Commencing sync process<br>
>>>>> Mar 6 14:16:04 orldc-prod-ipa02
ipa-dnskeysyncd:<br>
>>>>> ipa.ipapython.dnssec.keysyncer.KeySyncer:
INFO Initial LDAP dump<br>
>>>>> is done, sychronizing with ODS and BIND<br>
>>>>> Mar 6 14:16:05 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:05.934405274 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:05 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:05.937278142 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:05 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:05.939434025 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>><br>
>>>>> These are harmless "errors" which have been
removed in newer versions of<br>
>>>>> 389-ds-base.<br>
>>>>><br>
>>>>> Mar 6 14:16:06 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:06.882795654 -0500]<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389) -<br>
>>>>> Can't locate CSN 58bdf8f5000200070000 in
the changelog (DB rc=-30988).<br>
>>>>> If replication stops, the consumer may need
to be reinitialized.<br>
>>>>> Mar 6 14:16:06 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:06.886029272 -0500]
NSMMReplicationPlugin -<br>
>>>>> changelog program -
agmt="cn=meTobohdc-prod-ipa01.passur.local"<br>
>>>>> (bohdc-prod-ipa01:389): CSN
58bdf8f5000200070000 not found, we aren't<br>
>>>>> as up to date, or we purged<br>
>>>>><br>
>>>>> This "could" also be a known issue that is
fixed in newer versions of<br>
>>>>> 389-ds-base. Or this is a valid error
message due to the replica being<br>
>>>>> stale for a very long time and records
actually being purged from the<br>
>>>>> changelog before they were replicated.<br>
>>>>><br>
>>>>> Mar 6 14:16:06 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:06.888679268 -0500]
NSMMReplicationPlugin -<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389):<br>
>>>>> Data required to update replica has been
purged from the changelog.<br>
>>>>> The replica must be reinitialized.<br>
>>>>> Mar 6 14:16:06 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:06.960804253 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa01:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>><br>
>>>>> Okay, so your replication
agreements/servers are not in sync. I suspect you<br>
>>>>> created a new replica and used that to
initialize a valid replica which<br>
>>>>> broke things. Something like that. You need
to find a "good" replica<br>
>>>>> server and reinitialize the other replicas
from that server. These errors<br>
>>>>> need to be addressed ASAP, as it's halting
replication for those agreements<br>
>>>>> which explains the "instability" you are
describing.<br>
>>>>><br>
>>>>> Mark<br>
>>>>><br>
>>>>> Mar 6 14:16:08 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:08.960622608 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:08 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:08.968927168 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:08 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:08.976952118 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:09 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:09.972315877 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa01:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:16:10 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:10.034810948 -0500]<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389) -<br>
>>>>> Can't locate CSN 58bdf8f5000200070000 in
the changelog (DB rc=-30988).<br>
>>>>> If replication stops, the consumer may need
to be reinitialized.<br>
>>>>> Mar 6 14:16:10 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:10.040020359 -0500]
NSMMReplicationPlugin -<br>
>>>>> changelog program -
agmt="cn=meTobohdc-prod-ipa01.passur.local"<br>
>>>>> (bohdc-prod-ipa01:389): CSN
58bdf8f5000200070000 not found, we aren't<br>
>>>>> as up to date, or we purged<br>
>>>>> Mar 6 14:16:10 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:10.042846879 -0500]
NSMMReplicationPlugin -<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389):<br>
>>>>> Data required to update replica has been
purged from the changelog.<br>
>>>>> The replica must be reinitialized.<br>
>>>>> Mar 6 14:16:13 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:13.013253769 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:13 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:13.021514225 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:13 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:13.027521508 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:13 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:13.110566247 -0500]
NSMMReplicationPlugin -<br>
>>>>>
agmt="cn=masterAgreement1-orldc-prod-ipa01.passur.local-pki-tomcat"<br>
>>>>> (orldc-prod-ipa01:389): The remote replica
has a different database<br>
>>>>> generation ID than the local database. You
may have to reinitialize<br>
>>>>> the remote replica, or the local replica.<br>
>>>>> Mar 6 14:16:14 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:14.179819300 -0500]<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389) -<br>
>>>>> Can't locate CSN 58bdf8f5000200070000 in
the changelog (DB rc=-30988).<br>
>>>>> If replication stops, the consumer may need
to be reinitialized.<br>
>>>>> Mar 6 14:16:14 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:14.188353328 -0500]
NSMMReplicationPlugin -<br>
>>>>> changelog program -
agmt="cn=meTobohdc-prod-ipa01.passur.local"<br>
>>>>> (bohdc-prod-ipa01:389): CSN
58bdf8f5000200070000 not found, we aren't<br>
>>>>> as up to date, or we purged<br>
>>>>> Mar 6 14:16:14 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:14.196463928 -0500]
NSMMReplicationPlugin -<br>
>>>>> agmt="cn=meTobohdc-prod-ipa01.passur.local"
(bohdc-prod-ipa01:389):<br>
>>>>> Data required to update replica has been
purged from the changelog.<br>
>>>>> The replica must be reinitialized.<br>
>>>>> Mar 6 14:16:17 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:17.068292919 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:17 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:17.071241757 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> Mar 6 14:16:17 orldc-prod-ipa02 ns-slapd:<br>
>>>>> [06/Mar/2017:14:16:17.073793922 -0500]
attrlist_replace - attr_replace<br>
>>>>> (nsslapd-referral,
<a class="moz-txt-link-freetext" href="ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca">ldap://orldc-prod-ipa01.passur.local:389/o%3Dipaca</a>)<br>
>>>>> failed.<br>
>>>>> ------<br>
>>>>><br>
>>>>><br>
>>>>> Thanks in advance!!!<br>
>>>>><br>
>>>>> -- Chris<br>
>>>>><br>
>>>>><br>
><br>
</div>
</blockquote>
<br>
</body>
</html>