[Freeipa-users] Cleanly removing replication agreement

Dominik Korittki d.korittki at mittwald.de
Wed Oct 14 08:55:33 UTC 2015


I was able to remove the replication, but when I try to re-add ipa02 to 
the replication agreement (the commands I used are shown after the log 
below) I get errors in /var/log/dirsrv/slapd-INTERNAL/errors on ipa02:

[11/Oct/2015:17:17:48 +0200] - 389-Directory/1.3.1.6 B2014.219.1825 
starting up
[11/Oct/2015:17:17:48 +0200] - WARNING: userRoot: entry cache size 
10485760B is less than db size 86450176B; We recommend to increase the 
entry cache size nsslapd-cachememsize.
[11/Oct/2015:17:17:48 +0200] schema-compat-plugin - warning: no entries 
set up under cn=computers, cn=compat,dc=internal
[11/Oct/2015:17:17:53 +0200] set_krb5_creds - Could not get initial 
credentials for principal [ldap/ipa02.internal@INTERNAL] in keytab 
[FILE:/etc/dirsrv/ds.keytab]: -1765328324 (Generic error (see e-text))
[11/Oct/2015:17:17:53 +0200] - slapd started.  Listening on All 
Interfaces port 389 for LDAP requests
[11/Oct/2015:17:17:53 +0200] - Listening on All Interfaces port 636 for 
LDAPS requests
[11/Oct/2015:17:17:53 +0200] - Listening on 
/var/run/slapd-INTERNAL.socket for LDAPI requests
[11/Oct/2015:17:17:53 +0200] slapd_ldap_sasl_interactive_bind - Error: 
could not perform interactive bind for id [] mech [GSSAPI]: LDAP error 
-2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified 
GSS failure.  Minor code may provide more information (No Kerberos 
credentials available)) errno 0 (Success)
[11/Oct/2015:17:17:53 +0200] slapi_ldap_bind - Error: could not perform 
interactive bind for id [] authentication mechanism [GSSAPI]: error -2 
(Local error)
[11/Oct/2015:17:17:53 +0200] NSMMReplicationPlugin - 
agmt="cn=meToipa01.internal" (ipa01:389): Replication bind with GSSAPI 
auth failed: LDAP error -2 (Local error) (SASL(-1): generic failure: 
GSSAPI Error: Unspecified GSS failure.  Minor code may provide more 
information (No Kerberos credentials available))
[11/Oct/2015:17:17:56 +0200] NSMMReplicationPlugin - 
agmt="cn=meToipa01.internal" (ipa01:389): Replication bind with GSSAPI 
auth resumed
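
For completeness, the re-add itself was basically the standard 3.3 
procedure; roughly the following, with the replica file path as 
generated on my setup:

root@ipa01:~ > ipa-replica-prepare ipa02.internal
root@ipa01:~ > scp /var/lib/ipa/replica-info-ipa02.internal.gpg ipa02:/var/lib/ipa/
root@ipa02:~ > ipa-replica-install --setup-ca /var/lib/ipa/replica-info-ipa02.internal.gpg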

It looks like slapd can't get its tickets from the KDC. This is the krb5kdc.log:
Okt 11 17:17:53 ipa02.internal krb5kdc[5766](info): AS_REQ (6 etypes {18 
17 16 23 25 26}) 172.16.0.42: LOOKING_UP_CLIENT: 
ldap/ipa02.internal@INTERNAL for krbtgt/INTERNAL@INTERNAL, Server error
Okt 11 17:17:56 ipa02.internal krb5kdc[5767](info): AS_REQ (6 etypes {18 
17 16 23 25 26}) 172.16.0.42: NEEDED_PREAUTH: 
ldap/ipa02.internal@INTERNAL for krbtgt/INTERNAL@INTERNAL, Additional 
pre-authentication required
Okt 11 17:17:56 ipa02.internal krb5kdc[5766](info): AS_REQ (6 etypes {18 
17 16 23 25 26}) 172.16.0.42: ISSUE: authtime 1444576676, etypes {rep=18 
tkt=18 ses=18}, ldap/ipa02.internal@INTERNAL for krbtgt/INTERNAL@INTERNAL
Okt 11 17:17:56 ipa02.internal krb5kdc[5767](info): TGS_REQ (6 etypes 
{18 17 16 23 25 26}) 172.16.0.42: ISSUE: authtime 1444576676, etypes 
{rep=18 tkt=18 ses=18}, ldap/ipa02.internal@INTERNAL for 
ldap/ipa01.internal@INTERNAL
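
This is roughly how I would double-check that slapd is pointed at the 
right keytab (I assume the installer still sets KRB5_KTNAME in 
/etc/sysconfig/dirsrv on CentOS 7):

root@ipa02:~ > klist -kt /etc/dirsrv/ds.keytab
root@ipa02:~ > grep KRB5_KTNAME /etc/sysconfig/dirsrv*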

Strangely, everything works fine when I request the ticket manually:
root@ipa02:~ > kinit ldap/ipa02.internal@INTERNAL -kt /etc/dirsrv/ds.keytab
root@ipa02:~ > klist
Ticket cache: KEYRING:persistent:0:0
Default principal: ldap/ipa02.internal@INTERNAL

Valid starting       Expires              Service principal
11.10.2015 17:27:43  12.10.2015 17:27:43  krbtgt/INTERNAL@INTERNAL

This is the KDC log from the kinit command; everything looks normal:
Okt 11 17:27:43 ipa02.internal krb5kdc[5767](info): AS_REQ (6 etypes {18 
17 16 23 25 26}) 172.16.0.42: NEEDED_PREAUTH: 
ldap/ipa02.internal@INTERNAL for krbtgt/INTERNAL@INTERNAL, Additional 
pre-authentication required
Okt 11 17:27:43 ipa02.internal krb5kdc[5767](info): AS_REQ (6 etypes {18 
17 16 23 25 26}) 172.16.0.42: ISSUE: authtime 1444577263, etypes {rep=18 
tkt=18 ses=18}, ldap/ipa02.internal@INTERNAL for krbtgt/INTERNAL@INTERNAL
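
If it helps, I can re-run the same kinit with client-side tracing turned 
on to compare it with what slapd does at startup; something like:

root@ipa02:~ > KRB5_TRACE=/dev/stdout kinit -kt /etc/dirsrv/ds.keytab ldap/ipa02.internal@INTERNAL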

Any suggestions on how to resolve this issue?
Many thanks!


- Dominik



On 08.10.2015 at 17:47, Dominik Korittki wrote:
> Hello folks,
>
> I have two FreeIPA 3.3 machines running on CentOS 7: ipa01.internal and
> ipa02.internal. Both have a CA installed.
> Initially, ipa02 was set up as a replica of ipa01. Recently ipa01 had some
> trouble while ipa02 was running fine (see "FreeIPA 3.3 performance
> issues with many hosts" on this mailing list).
>
> So what I did was uninstall ipa01 via "ipa-server-install
> --uninstall" and recreate ipa01 as a replica of ipa02 via
> "ipa-replica-install --setup-ca". Since then I have been having trouble
> with replication. It seems there is still some RUV information about
> the old ipa01 in the database.
>
> Well, long story short: I want to completely delete ipa02 from the
> replication agreement on host ipa01, so that I can re-add ipa02 later.
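>
> My plan was roughly the following; I am not entirely sure whether
> --force is needed, but the agreement already seems half broken:
>
> root@ipa01:~ > ipa-replica-manage del ipa02.internal --force
> root@ipa01:~ > ipa-csreplica-manage del ipa02.internal --force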
>
> Currently the situation on ipa01 is as follows:
>
> root@ipa01:~ > ipa-replica-manage list
> Directory Manager password:
>
> ipa01.internal: master
> ipa02.internal: master
>
> root@ipa01:~ > ipa-replica-manage list-ruv
> Directory Manager password:
>
> ipa01.internal:389: 6
> ipa02.internal:389: 5
>
> root@ipa01:~ > ipa-csreplica-manage list
> Directory Manager password:
>
> ipa01.internal: master
> ipa02.internal: master
>
> root@ipa01:~ > ldapsearch -D "cn=directory manager" -W -b "cn=mapping tree,cn=config" 'objectClass=nsDS5ReplicationAgreement' nsds50ruv -LLL
> Enter LDAP Password:
> dn: cn=cloneAgreement1-ipa01.internal-pki-tomcat,cn=replica,cn=o\3Dipaca,cn=mapping tree,cn=config
> nsds50ruv: {replicageneration} 54748540000000600000
> nsds50ruv: {replica 97 ldap://ipa02.internal:389} 54748548000000610000 56139e18000200610000
> nsds50ruv: {replica 1095 ldap://ipa01.internal:389} 56139e17000004470000 56139e1e000204470000
> nsds50ruv: {replica 96 ldap://ipa01.internal:389}
>
>
> I'm a bit worried about the ldapsearch output. There is an nsds50ruv
> element for replica 1095. It appeared after I re-added ipa01 to the
> replication agreement. Do I need to get rid of it, and if yes, how?
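>
> In case one of these elements really has to go, I would guess (from the
> 389-ds documentation) that the way to do it for the o=ipaca suffix is a
> CLEANALLRUV task created directly under cn=tasks,cn=config; a sketch,
> using replica 96 (the element without any CSNs) purely as an example:
>
> root@ipa01:~ > ldapmodify -D "cn=directory manager" -W -a <<EOF
> dn: cn=clean 96, cn=cleanallruv, cn=tasks, cn=config
> objectclass: extensibleObject
> replica-base-dn: o=ipaca
> replica-id: 96
> replica-force-cleaning: no
> cn: clean 96
> EOF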
>
> Another question is: ipa02 is no longer available, so the
> CLEANALLRUV task started on ipa01 by "ipa-replica-manage del ..." would
> not be able to connect to ipa02. According to the 389-ds documentation it
> would stay active for a long time trying to connect to the other host. Is
> it safe to abort the task via "ipa-replica-manage abort-clean-ruv ..."
> after a while?
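>
> What I would probably do is keep an eye on the task and abort it if it
> never finishes, roughly like this (replica id 5 taken from the list-ruv
> output above; please correct me if abort works differently):
>
> root@ipa01:~ > ipa-replica-manage list-clean-ruv
> root@ipa01:~ > ipa-replica-manage abort-clean-ruv 5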
>
> Thanks in advance!
>
>
> Kind regards,
> Dominik
>



