From mkosek at redhat.com Mon Apr 2 07:27:07 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Apr 2012 09:27:07 +0200 Subject: [Freeipa-devel] [PATCH] 993 disable UPG for migration In-Reply-To: <4F75AF96.9010205@redhat.com> References: <4F6B7B9F.4080105@redhat.com> <1332493947.17059.24.camel@balmora.brq.redhat.com> <4F738281.3080505@redhat.com> <1333010191.20236.16.camel@balmora.brq.redhat.com> <4F747F51.7060009@redhat.com> <1333090082.2366.1.camel@priserak> <4F75AF96.9010205@redhat.com> Message-ID: <1333351627.10403.2.camel@balmora.brq.redhat.com> On Fri, 2012-03-30 at 09:05 -0400, Rob Crittenden wrote: > Martin Kosek wrote: > > On Thu, 2012-03-29 at 11:27 -0400, Rob Crittenden wrote: > >> Martin Kosek wrote: > >>> On Wed, 2012-03-28 at 17:28 -0400, Rob Crittenden wrote: > >>>> Martin Kosek wrote: > >>>>> On Thu, 2012-03-22 at 15:21 -0400, Rob Crittenden wrote: > >>>>>> We don't want to create private groups automatically for migrated users, > >>>>>> there could be namespace overlap (and race conditions prevent us from > >>>>>> trying to check in advance). > >>>>>> > >>>>>> Check the sanity of groups in general, warn if the group for the > >>>>>> gidnumber doesn't exist at least on the remote server. > >>>>>> > >>>>>> Fill in the user's shell as well. > >>>>>> > >>>>>> This will migrate users that are non-POSIX on the remote server. > >>>>>> > >>>>>> rob > >>>>> > >>>>> This patch fixes the problem of creating UPGs for migrated users, but > >>>>> there are several parts which confused me. > >>>>> > >>>>> 1) Confusing defaults > >>>>> > >>>>> + if 'def_group_gid' in ctx: > >>>>> + entry_attrs.setdefault('gidnumber', ctx['def_group_gid']) > >>>>> > >>>>> This statement seems redundant, because the account either is posix and > >>>>> has both uidnumber and gidnumber or it is non-posix and does not have > >>>>> any. > >>>>> > >>>>> Now, we don't touch gidNumber for posix user, and non-posix users are > >>>>> made posix with this statement: > >>>>> > >>>>> + # migrated user is not already POSIX, make it so > >>>>> + if 'uidnumber' not in entry_attrs: > >>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] > >>>>> > >>>>> > >>>>> 2) Missing UPG > >>>>> When UPG is disabled, the following statement will result in a user with > >>>>> a GID pointing to non-existent group. > >>>>> > >>>>> + # migrated user is not already POSIX, make it so > >>>>> + if 'uidnumber' not in entry_attrs: > >>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] > >>>>> > >>>>> We may want to run ldap.has_upg() and report a add this user to "failed > >>>>> users" with appropriate error. > >>>>> > >>>>> 3) Check for GID > >>>>> The patch implements a check if a group with the gidNumber exists on a > >>>>> remote LDAP server and the result is a warning: > >>>>> > >>>>> - (g_dn, g_attrs) = ldap.get_entry(ctx['def_group_dn'], ['gidnumber']) > >>>>> + (remote_dn, remote_entry) = ds_ldap.find_entry_by_attr( > >>>>> + 'gidnumber', entry_attrs['gidnumber'][0], 'posixgroup', [''], > >>>>> + search_bases['group'] > >>>>> + ) > >>>>> > >>>>> I have few (minor-ish) questions there: > >>>>> a) Is the warning in a apache log enough? Maybe it should be included in > >>>>> migrate-ds output. 
> >>>>> b) This will be a one more remote LDAP call for every user, we may want > >>>>> to optimize it with something like that: > >>>>> > >>>>> valid_gids = [] > >>>>> if user.gidnumber not in valid_gids: > >>>>> run the check in remote LDAP > >>>>> valid_gids.append(user.gidnumber) > >>>>> > >>>>> That would save us 999 LDAP queries for 1000 migrated with the same > >>>>> primary group. > >>>>> > >>>>> 4) Extraneous Warning: > >>>>> When non-posix user is migrated and thus we make it a posix user, we > >>>>> still produce a warning for non-existent group: > >>>>> > >>>>> [Fri Mar 23 04:21:36 2012] [error] ipa: WARNING: Migrated user's GID > >>>>> number 999 does not point to a known group. > >>>>> > >>>>> 5) Extraneous LDAP call > >>>>> > >>>>> Isn't the following call to LDAP to get a description redundant? We > >>>>> already have the description in entry_attrs. > >>>>> > >>>>> + (dn, desc_attr) = ldap.get_entry(dn, ['description']) > >>>>> + entry_attrs.update(desc_attr) > >>>>> + if 'description' in entry_attrs and NO_UPG_MAGIC in > >>>>> entry_attrs['description']: > >>>>> > >>>>> > >>>>> Martin > >>>>> > >>>> > >>>> I think this covers your concerns. > >>>> > >>>> I can't do anything but log warnings at this point in order to maintain > >>>> backwards compatibility. I looked into returning a warning entry and it > >>>> blew up in validate_output() on older clients. > >>>> > >>>> rob > >>>> > >>> > >>> This patch is much better and covers my previous concerns. I just find > >>> an issue with UPG. It is not created for non-posix users when UPGs are > >>> enabled: > >>> > >>> # echo "Secret123" | ipa migrate-ds ldap://ldap.example.com > >>> --with-compat --base-dn="dc=greyoak,dc=com" > >>> ----------- > >>> migrate-ds: > >>> ----------- > >>> Migrated: > >>> user: darcee_leeson, ayaz_kreiger, mnonposix, mollee_weisenberg > >>> group: ipagroup > >>> Failed user: > >>> Failed group: > >>> ---------- > >>> Passwords have been migrated in pre-hashed format. > >>> IPA is unable to generate Kerberos keys unless provided > >>> with clear text passwords. All migrated users need to > >>> login at https://your.domain/ipa/migration/ before they > >>> can use their Kerberos accounts. > >>> > >>> # ipa user-show mnonposix > >>> User login: mnonposix > >>> First name: Mister > >>> Last name: Nonposix > >>> Home directory: /home/mnonposix > >>> Login shell: /bin/sh > >>> UID: 328000195 > >>> GID: 328000195 > >>> Org. Unit: Product Testing > >>> Job Title: Test User > >>> Account disabled: False > >>> Password: True > >>> Member of groups: ipausers > >>> Kerberos keys available: False > >>> > >>> # ipa group-show mnonposix > >>> ipa: ERROR: mnonposix: group not found > >> > >> Yes, I was always disabling UPG. I now allow it when migrating a > >> non-POSIX user. > >> > >>> I am also thinking if we need to ask if UPG is enabled for every > >>> migrated user - every ldap.has_upg() call means one query to host LDAP. > >>> Maybe we should ask just in the beginning and store the setting in > >>> "ctx['upg_enabled']. I don't think that anyone needs to switch UPG > >>> status during migration process. > >> > >> I agree, nice optimization. > >> > >> rob > > > > This patch is OK, everything worked as expected. We just now need to > > specify if we want a special option/flag to enable non-posix -> posix > > user conversion. If not, then ACK. > > > > Martin > > > > Simo and I had a brief discussion about this in IRC and I'm going to > punt on non-POSIX user conversion for now. 
There could be specific > reasons for this, like having a shared user (nss_ldap) or some sort of > system user that they don't want to be a full POSIX user. > > Adding a new option this late in 2.2 doesn't seem like a good idea so > I'll patch it to simply fail non-POSIX users for now. I've opened a > ticket to optionally allow them to be converted. > > rob Ok. Your patch 993 is still useful and fixes the creation of UPGs, we just need to remove nonposix->posix conversion and make sure non-posix users are migrated successfully as non-posix users. Martin From mkosek at redhat.com Mon Apr 2 09:01:09 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Apr 2012 11:01:09 +0200 Subject: [Freeipa-devel] [PATCH] 1000 fix upgrade crash when updating replication agreements In-Reply-To: <4F75F7CD.5010309@redhat.com> References: <4F75F7CD.5010309@redhat.com> Message-ID: <1333357269.10403.3.camel@balmora.brq.redhat.com> On Fri, 2012-03-30 at 14:13 -0400, Rob Crittenden wrote: > We check existing agreements to see if they are missing memberof in the > EXCLUDE list. It would crash if this list wasn't present at all. > > So we need to catch this and add in the missing exclusions if they > aren't there at all. > > rob ACK, the attribute is now correctly fixed even when it's not present in the agreement. Pushed to master, ipa-2-2. Martin From rcritten at redhat.com Mon Apr 2 13:15:32 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 09:15:32 -0400 Subject: [Freeipa-devel] [PATCH 69] Use indexed format specifiers in i18n strings In-Reply-To: <201203300136.q2U1a0E5030640@int-mx12.intmail.prod.int.phx2.redhat.com> References: <201203300136.q2U1a0E5030640@int-mx12.intmail.prod.int.phx2.redhat.com> Message-ID: <4F79A674.2060104@redhat.com> John Dennis wrote: > Translators need to reorder messages to suit the needs of the target > language. The conventional positional format specifiers (e.g. %s %d) > do not permit reordering because their order is tied to the ordering > of the arguments to the printf function. The fix is to use indexed > format specifiers. I guess this looks ok but all of these errors are of the format: string error, error number (and inconsistently, sometimes the reverse). Do those really need to be re-orderable? rob From rcritten at redhat.com Mon Apr 2 13:29:29 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 09:29:29 -0400 Subject: [Freeipa-devel] [PATCH] 0032 Move DNS test skipping to class setup In-Reply-To: <4F759FFE.1010303@redhat.com> References: <4F72D1EB.7020007@redhat.com> <4F74C3B0.7020102@redhat.com> <4F759FFE.1010303@redhat.com> Message-ID: <4F79A9B9.3040703@redhat.com> Petr Viktorin wrote: > On 03/29/2012 10:18 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> >>> Currently, each DNS test case first checks if DNS is configured >>> by creating and deleting a test zone. This takes quite a lot of time. >>> >>> This patch moves the check to the setUpClass method, so the check is >>> only done once for all the tests. >>> >>> >>> >>> On my VM, this makes the DNS plugin tests 50% faster, saving about half >>> a minute for each test run. >>> >> >> This fails if the test XML-RPC server is not running. While working on >> that issue I found a few other places that weren't handling this as >> well. Here is my working patch on top of yours. >> >> rob > > Thank you! I see the other place is one I added recently. > > This updated patch includes your diff, and also makes sure a context is > created for the DNS test skipping. 
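For anyone skimming the archive, the class-level check being discussed boils down to roughly the sketch below. It is an illustration only -- the class name, zone parameters and error handling here are assumptions, not the code from the attached patch:

    from ipalib import api, errors
    import nose

    class test_dns(object):
        @classmethod
        def setUpClass(cls):
            # Probe DNS support once for the whole class instead of once per
            # test case: try to create and remove a throw-away zone and skip
            # every test in the class if the server has no DNS configured.
            try:
                api.Command['dnszone_add'](u'dnszone.test',
                                           idnssoamname=u'ns1.dnszone.test.',
                                           force=True)
                api.Command['dnszone_del'](u'dnszone.test')
            except errors.NotFound:
                raise nose.SkipTest('DNS is not configured')
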
> > ACK, pushed to master and ipa-2-2 rob From mkosek at redhat.com Mon Apr 2 13:47:20 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Apr 2012 15:47:20 +0200 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F7233D9.2080405@redhat.com> References: <4F7233D9.2080405@redhat.com> Message-ID: <1333374440.10403.18.camel@balmora.brq.redhat.com> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > Certmonger will currently automatically renew server certificates but > doesn't restart the services so you can still end up with expired > certificates if your services never restart. > > This patch registers a restart command with certmonger so the IPA > services will automatically be restarted to get the updated cert. > > Easy to test. Install IPA then resubmit the current server certs and > watch the services restart: > > # ipa-getcert list > > Find the ID for either your dirsrv or httpd instance > > # ipa-getcert resubmit -i > > Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors > to see the service restart. > > rob What about current instances - can we/do we want to update certmonger tracking so that their instances are restarted as well? Anyway, I found a few SELinux issues with the patch: 1) # rpm -Uvh freeipa-* Preparing... ########################################### [100%] 1:freeipa-python ########################################### [ 20%] 2:freeipa-client ########################################### [ 40%] 3:freeipa-admintools ########################################### [ 60%] 4:freeipa-server ########################################### [ 80%] /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger/restart_dirsrv' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger/restart_httpd' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) scriptlet failed, exit status 1 5:freeipa-server-selinux ########################################### [100%] certmonger_unconfined_exec_t type was unknown with my selinux policy: selinux-policy-3.10.0-80.fc16.noarch selinux-policy-targeted-3.10.0-80.fc16.noarch If we need a higher SELinux version, we should bump the required package version in the spec file. 2) Change of SELinux context with /usr/bin/chcon is temporary until restorecon or system relabel occurs. I think we should make it persistent and enforce this type in our SELinux policy and rather call restorecon instead of chcon Martin From rcritten at redhat.com Mon Apr 2 14:03:02 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 10:03:02 -0400 Subject: [Freeipa-devel] [PATCH] 349 Fixed boot.ldif permission. In-Reply-To: <4F63C937.4030301@redhat.com> References: <4F63C937.4030301@redhat.com> Message-ID: <4F79B196.5060208@redhat.com> Endi Sukma Dewata wrote: > The server installation failed on F17 due to a permission problem. > The /var/lib/dirsrv/boot.ldif was previously owned and only readable > by root. It is now owned by DS user dirsrv. 
> > Ticket #2544 ACK, pushed to master rob From rcritten at redhat.com Mon Apr 2 14:09:18 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 10:09:18 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1333374440.10403.18.camel@balmora.brq.redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> Message-ID: <4F79B30E.1060804@redhat.com> Martin Kosek wrote: > On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >> Certmonger will currently automatically renew server certificates but >> doesn't restart the services so you can still end up with expired >> certificates if you services never restart. >> >> This patch registers are restart command with certmonger so the IPA >> services will automatically be restarted to get the updated cert. >> >> Easy to test. Install IPA then resubmit the current server certs and >> watch the services restart: >> >> # ipa-getcert list >> >> Find the ID for either your dirsrv or httpd instance >> >> # ipa-getcert resubmit -i >> >> Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors >> to see the service restart. >> >> rob > > What about current instances - can we/do we want to update certmonger > tracking so that their instances are restarted as well? > > Anyway, I found few issues SELinux issues with the patch: > > 1) # rpm -Uvh freeipa-* > Preparing... ########################################### [100%] > 1:freeipa-python ########################################### [ 20%] > 2:freeipa-client ########################################### [ 40%] > 3:freeipa-admintools ########################################### [ 60%] > 4:freeipa-server ########################################### [ 80%] > /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger/restart_dirsrv' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > /usr/bin/chcon: failed to change context of `/usr/lib64/ipa/certmonger/restart_httpd' to `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) scriptlet failed, exit status 1 > 5:freeipa-server-selinux ########################################### [100%] > > certmonger_unconfined_exec_t type was unknown with my selinux policy: > > selinux-policy-3.10.0-80.fc16.noarch > selinux-policy-targeted-3.10.0-80.fc16.noarch > > If we need a higher SELinux version, we should bump the required package > version spec file. Yeah, waiting on it to be backported. > > 2) Change of SELinux context with /usr/bin/chcon is temporary until > restorecon or system relabel occurs. 
I think we should make it > persistent and enforce this type in our SELinux policy and rather call > restorecon instead of chcon That's a good idea, why didn't I think of that :-( rob From nalin at redhat.com Mon Apr 2 14:18:29 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 2 Apr 2012 10:18:29 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1333374440.10403.18.camel@balmora.brq.redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> Message-ID: <20120402141828.GA889@redhat.com> On Mon, Apr 02, 2012 at 03:47:20PM +0200, Martin Kosek wrote: > On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > > Certmonger will currently automatically renew server certificates but > > doesn't restart the services so you can still end up with expired > > certificates if you services never restart. > > > > This patch registers are restart command with certmonger so the IPA > > services will automatically be restarted to get the updated cert. > > > > Easy to test. Install IPA then resubmit the current server certs and > > watch the services restart: > > > > # ipa-getcert list > > > > Find the ID for either your dirsrv or httpd instance > > > > # ipa-getcert resubmit -i > > > > Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors > > to see the service restart. > > What about current instances - can we/do we want to update certmonger > tracking so that their instances are restarted as well? You can use the not-exactly-well-named start-tracking command to add a post-save command: ipa-getcert start-tracking \ -d /etc/dirsrv/slapd-PKI-IPA -n Server-Cert \ -C "/usr/bin/logger BeenThereDoneThat" Or use the ID, as Rob did above. HTH, Nalin From rcritten at redhat.com Mon Apr 2 15:05:34 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 11:05:34 -0400 Subject: [Freeipa-devel] [PATCHES] 0025-26 Test improvements In-Reply-To: <4F717902.1060304@redhat.com> References: <4F5DE9A6.90704@redhat.com> <4F70D52E.5070101@redhat.com> <4F717902.1060304@redhat.com> Message-ID: <4F79C03E.8090301@redhat.com> Petr Viktorin wrote: > On 03/26/2012 10:44 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> Patch 25 fixes errors I found by running pylint on the testsuite. They >>> were in code that was unused, either by error or because it only runs on >>> errors. >>> >>> Patch 26 adds a test for the batch plugin. >> >> In patch 25 the second test_internal_error should really be >> test_unauthorized_error. I think that is a clearer name. Otherwise looks >> good. >> >> Patch 26 needs a very minor rebase to fix an error caused by improved >> error code handling: >> >> expected = Fuzzy(u"invalid 'gidnumber'.*", , None) >> got = u"invalid 'gid': Gettext('must be an integer', domain='ipa', >> localedir=None)" >> >> I tested this: >> >> diff --git a/tests/test_xmlrpc/test_batch_plugin.py >> b/tests/test_xmlrpc/test_bat >> ch_plugin.py >> index e4280ed..d69bfd9 100644 >> --- a/tests/test_xmlrpc/test_batch_plugin.py >> +++ b/tests/test_xmlrpc/test_batch_plugin.py >> @@ -186,7 +186,7 @@ class test_batch(Declarative): >> dict(error=u"'params' is required"), >> dict(error=u"'givenname' is required"), >> dict(error=u"'description' is required"), >> - dict(error=Fuzzy(u"invalid 'gidnumber'.*")), >> + dict(error=Fuzzy(u"invalid 'gid'.*")), >> ), >> ), >> ), >> >> rob > > Thank you! Fixed, attaching updated patches. 
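As a side note for readers not familiar with the test helpers: a Fuzzy value in an expected structure compares equal to any string that matches the given regular expression, which is why loosening the pattern is enough to accept the new Gettext-wrapped error text. A minimal illustration -- the import path is an assumption, not taken from the patch:

    from tests.util import Fuzzy   # assumed location of the helper

    expected = Fuzzy(u"invalid 'gid'.*")
    # This equality check is roughly what the declarative test machinery
    # performs; it should hold for the error string quoted earlier.
    assert expected == u"invalid 'gid': Gettext('must be an integer', domain='ipa', localedir=None)"
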
> These look ok but it is baffling to me why tuple needs to be added to the Output format in batch. Do you know when it is being converted into a tuple? rob From mkosek at redhat.com Mon Apr 2 15:16:13 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Apr 2012 17:16:13 +0200 Subject: [Freeipa-devel] [PATCH] 245 Forbid public access to DNS tree Message-ID: <1333379773.10403.19.camel@balmora.brq.redhat.com> Test instructions are attached to ticket. -- With a publicly accessible DNS tree in LDAP, anyone with an access to the LDAP server can get all DNS data as with a zone transfer which is already restricted with ACL. Making DNS tree not readable to public is a common security practice and should be applied in FreeIPA as well. This patch adds a new deny rule to forbid access to DNS tree to users or hosts without an appropriate permission or users which are not members of admins group. The new permission/aci is applied both for new installs and upgraded servers. bind-dyndb-ldap plugin is allowed to read DNS tree without any change because its principal is already a member of "DNS Servers" privilege. https://fedorahosted.org/freeipa/ticket/2569 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-245-forbid-public-access-to-dns-tree.patch Type: text/x-patch Size: 7069 bytes Desc: not available URL: From sbose at redhat.com Mon Apr 2 15:50:10 2012 From: sbose at redhat.com (Sumit Bose) Date: Mon, 2 Apr 2012 17:50:10 +0200 Subject: [Freeipa-devel] [PATCH] (master) Support case-insensitive searches for principals during TGS request processing In-Reply-To: <1333054951.22628.169.camel@willson.li.ssimo.org> References: <20120329133035.GH3500@redhat.com> <1333054951.22628.169.camel@willson.li.ssimo.org> Message-ID: <20120402155010.GD2377@localhost.localdomain> On Thu, Mar 29, 2012 at 05:02:31PM -0400, Simo Sorce wrote: > On Thu, 2012-03-29 at 16:30 +0300, Alexander Bokovoy wrote: > > This is due to some krbtgt/realm at REALM searches performed in KDC > > without > > allowing for principal aliases and therefore no chance to our > > case-insensitive searches to kick in. Additional discussion is needed, > > I > > think, if we want to support case-insensitive realms. > > > I do not think we want to support case-insensitive realm lookups just > yet. So what you have now seem sufficient to me. Patch looks good and is passing my testing. ACK bye, Sumit > > Simo. > > -- > Simo Sorce * Red Hat, Inc * New York > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel From rcritten at redhat.com Mon Apr 2 18:26:11 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 14:26:11 -0400 Subject: [Freeipa-devel] [PATCH] 245 Forbid public access to DNS tree In-Reply-To: <1333379773.10403.19.camel@balmora.brq.redhat.com> References: <1333379773.10403.19.camel@balmora.brq.redhat.com> Message-ID: <4F79EF43.90103@redhat.com> Martin Kosek wrote: > Test instructions are attached to ticket. > -- > With a publicly accessible DNS tree in LDAP, anyone with an access > to the LDAP server can get all DNS data as with a zone transfer > which is already restricted with ACL. Making DNS tree not readable > to public is a common security practice and should be applied > in FreeIPA as well. 
> > This patch adds a new deny rule to forbid access to DNS tree to > users or hosts without an appropriate permission or users which > are not members of admins group. The new permission/aci is > applied both for new installs and upgraded servers. > > bind-dyndb-ldap plugin is allowed to read DNS tree without any > change because its principal is already a member of "DNS > Servers" privilege. > > https://fedorahosted.org/freeipa/ticket/2569 ACK, pushed to master and ipa-2-2 From rcritten at redhat.com Mon Apr 2 19:18:36 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 15:18:36 -0400 Subject: [Freeipa-devel] [PATCH] 993 disable UPG for migration In-Reply-To: <1333351627.10403.2.camel@balmora.brq.redhat.com> References: <4F6B7B9F.4080105@redhat.com> <1332493947.17059.24.camel@balmora.brq.redhat.com> <4F738281.3080505@redhat.com> <1333010191.20236.16.camel@balmora.brq.redhat.com> <4F747F51.7060009@redhat.com> <1333090082.2366.1.camel@priserak> <4F75AF96.9010205@redhat.com> <1333351627.10403.2.camel@balmora.brq.redhat.com> Message-ID: <4F79FB8C.6000503@redhat.com> Martin Kosek wrote: > On Fri, 2012-03-30 at 09:05 -0400, Rob Crittenden wrote: >> Martin Kosek wrote: >>> On Thu, 2012-03-29 at 11:27 -0400, Rob Crittenden wrote: >>>> Martin Kosek wrote: >>>>> On Wed, 2012-03-28 at 17:28 -0400, Rob Crittenden wrote: >>>>>> Martin Kosek wrote: >>>>>>> On Thu, 2012-03-22 at 15:21 -0400, Rob Crittenden wrote: >>>>>>>> We don't want to create private groups automatically for migrated users, >>>>>>>> there could be namespace overlap (and race conditions prevent us from >>>>>>>> trying to check in advance). >>>>>>>> >>>>>>>> Check the sanity of groups in general, warn if the group for the >>>>>>>> gidnumber doesn't exist at least on the remote server. >>>>>>>> >>>>>>>> Fill in the user's shell as well. >>>>>>>> >>>>>>>> This will migrate users that are non-POSIX on the remote server. >>>>>>>> >>>>>>>> rob >>>>>>> >>>>>>> This patch fixes the problem of creating UPGs for migrated users, but >>>>>>> there are several parts which confused me. >>>>>>> >>>>>>> 1) Confusing defaults >>>>>>> >>>>>>> + if 'def_group_gid' in ctx: >>>>>>> + entry_attrs.setdefault('gidnumber', ctx['def_group_gid']) >>>>>>> >>>>>>> This statement seems redundant, because the account either is posix and >>>>>>> has both uidnumber and gidnumber or it is non-posix and does not have >>>>>>> any. >>>>>>> >>>>>>> Now, we don't touch gidNumber for posix user, and non-posix users are >>>>>>> made posix with this statement: >>>>>>> >>>>>>> + # migrated user is not already POSIX, make it so >>>>>>> + if 'uidnumber' not in entry_attrs: >>>>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] >>>>>>> >>>>>>> >>>>>>> 2) Missing UPG >>>>>>> When UPG is disabled, the following statement will result in a user with >>>>>>> a GID pointing to non-existent group. >>>>>>> >>>>>>> + # migrated user is not already POSIX, make it so >>>>>>> + if 'uidnumber' not in entry_attrs: >>>>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] >>>>>>> >>>>>>> We may want to run ldap.has_upg() and report a add this user to "failed >>>>>>> users" with appropriate error. 
>>>>>>> >>>>>>> 3) Check for GID >>>>>>> The patch implements a check if a group with the gidNumber exists on a >>>>>>> remote LDAP server and the result is a warning: >>>>>>> >>>>>>> - (g_dn, g_attrs) = ldap.get_entry(ctx['def_group_dn'], ['gidnumber']) >>>>>>> + (remote_dn, remote_entry) = ds_ldap.find_entry_by_attr( >>>>>>> + 'gidnumber', entry_attrs['gidnumber'][0], 'posixgroup', [''], >>>>>>> + search_bases['group'] >>>>>>> + ) >>>>>>> >>>>>>> I have few (minor-ish) questions there: >>>>>>> a) Is the warning in a apache log enough? Maybe it should be included in >>>>>>> migrate-ds output. >>>>>>> b) This will be a one more remote LDAP call for every user, we may want >>>>>>> to optimize it with something like that: >>>>>>> >>>>>>> valid_gids = [] >>>>>>> if user.gidnumber not in valid_gids: >>>>>>> run the check in remote LDAP >>>>>>> valid_gids.append(user.gidnumber) >>>>>>> >>>>>>> That would save us 999 LDAP queries for 1000 migrated with the same >>>>>>> primary group. >>>>>>> >>>>>>> 4) Extraneous Warning: >>>>>>> When non-posix user is migrated and thus we make it a posix user, we >>>>>>> still produce a warning for non-existent group: >>>>>>> >>>>>>> [Fri Mar 23 04:21:36 2012] [error] ipa: WARNING: Migrated user's GID >>>>>>> number 999 does not point to a known group. >>>>>>> >>>>>>> 5) Extraneous LDAP call >>>>>>> >>>>>>> Isn't the following call to LDAP to get a description redundant? We >>>>>>> already have the description in entry_attrs. >>>>>>> >>>>>>> + (dn, desc_attr) = ldap.get_entry(dn, ['description']) >>>>>>> + entry_attrs.update(desc_attr) >>>>>>> + if 'description' in entry_attrs and NO_UPG_MAGIC in >>>>>>> entry_attrs['description']: >>>>>>> >>>>>>> >>>>>>> Martin >>>>>>> >>>>>> >>>>>> I think this covers your concerns. >>>>>> >>>>>> I can't do anything but log warnings at this point in order to maintain >>>>>> backwards compatibility. I looked into returning a warning entry and it >>>>>> blew up in validate_output() on older clients. >>>>>> >>>>>> rob >>>>>> >>>>> >>>>> This patch is much better and covers my previous concerns. I just find >>>>> an issue with UPG. It is not created for non-posix users when UPGs are >>>>> enabled: >>>>> >>>>> # echo "Secret123" | ipa migrate-ds ldap://ldap.example.com >>>>> --with-compat --base-dn="dc=greyoak,dc=com" >>>>> ----------- >>>>> migrate-ds: >>>>> ----------- >>>>> Migrated: >>>>> user: darcee_leeson, ayaz_kreiger, mnonposix, mollee_weisenberg >>>>> group: ipagroup >>>>> Failed user: >>>>> Failed group: >>>>> ---------- >>>>> Passwords have been migrated in pre-hashed format. >>>>> IPA is unable to generate Kerberos keys unless provided >>>>> with clear text passwords. All migrated users need to >>>>> login at https://your.domain/ipa/migration/ before they >>>>> can use their Kerberos accounts. >>>>> >>>>> # ipa user-show mnonposix >>>>> User login: mnonposix >>>>> First name: Mister >>>>> Last name: Nonposix >>>>> Home directory: /home/mnonposix >>>>> Login shell: /bin/sh >>>>> UID: 328000195 >>>>> GID: 328000195 >>>>> Org. Unit: Product Testing >>>>> Job Title: Test User >>>>> Account disabled: False >>>>> Password: True >>>>> Member of groups: ipausers >>>>> Kerberos keys available: False >>>>> >>>>> # ipa group-show mnonposix >>>>> ipa: ERROR: mnonposix: group not found >>>> >>>> Yes, I was always disabling UPG. I now allow it when migrating a >>>> non-POSIX user. >>>> >>>>> I am also thinking if we need to ask if UPG is enabled for every >>>>> migrated user - every ldap.has_upg() call means one query to host LDAP. 
>>>>> Maybe we should ask just in the beginning and store the setting in >>>>> "ctx['upg_enabled']. I don't think that anyone needs to switch UPG >>>>> status during migration process. >>>> >>>> I agree, nice optimization. >>>> >>>> rob >>> >>> This patch is OK, everything worked as expected. We just now need to >>> specify if we want a special option/flag to enable non-posix -> posix >>> user conversion. If not, then ACK. >>> >>> Martin >>> >> >> Simo and I had a brief discussion about this in IRC and I'm going to >> punt on non-POSIX user conversion for now. There could be specific >> reasons for this, like having a shared user (nss_ldap) or some sort of >> system user that they don't to be a full POSIX user. >> >> Adding a new option this late in 2.2 doesn't seem like a good idea so >> I'll patch it to simply fail non-POSIX users for now. I've opened a >> ticket to optionally allow them to be converted. >> >> rob > > Ok. Your patch 993 is still useful and fixes the creation of UPGs, we > just need to remove nonposix->posix conversion and make sure non-posix > users are migrated successfully as non-posix users. > > Martin > I decided to not migrate non-POSIX users at all. They are a-typical in IPA and won't show at all using our tools so I don't see the point in migrating them. non-POSIX users in IPA are typically stored in cn=sysconfig rather than cn=users. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-993-4-migration.patch Type: text/x-patch Size: 10043 bytes Desc: not available URL: From rcritten at redhat.com Mon Apr 2 19:36:29 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Apr 2012 15:36:29 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F79B30E.1060804@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> Message-ID: <4F79FFBD.1030408@redhat.com> Rob Crittenden wrote: > Martin Kosek wrote: >> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>> Certmonger will currently automatically renew server certificates but >>> doesn't restart the services so you can still end up with expired >>> certificates if you services never restart. >>> >>> This patch registers are restart command with certmonger so the IPA >>> services will automatically be restarted to get the updated cert. >>> >>> Easy to test. Install IPA then resubmit the current server certs and >>> watch the services restart: >>> >>> # ipa-getcert list >>> >>> Find the ID for either your dirsrv or httpd instance >>> >>> # ipa-getcert resubmit -i >>> >>> Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors >>> to see the service restart. >>> >>> rob >> >> What about current instances - can we/do we want to update certmonger >> tracking so that their instances are restarted as well? >> >> Anyway, I found few issues SELinux issues with the patch: >> >> 1) # rpm -Uvh freeipa-* >> Preparing... 
########################################### [100%] >> 1:freeipa-python ########################################### [ 20%] >> 2:freeipa-client ########################################### [ 40%] >> 3:freeipa-admintools ########################################### [ 60%] >> 4:freeipa-server ########################################### [ 80%] >> /usr/bin/chcon: failed to change context of >> `/usr/lib64/ipa/certmonger' to >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >> /usr/bin/chcon: failed to change context of >> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >> /usr/bin/chcon: failed to change context of >> `/usr/lib64/ipa/certmonger/restart_httpd' to >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >> scriptlet failed, exit status 1 >> 5:freeipa-server-selinux ########################################### >> [100%] >> >> certmonger_unconfined_exec_t type was unknown with my selinux policy: >> >> selinux-policy-3.10.0-80.fc16.noarch >> selinux-policy-targeted-3.10.0-80.fc16.noarch >> >> If we need a higher SELinux version, we should bump the required package >> version spec file. > > Yeah, waiting on it to be backported. > >> >> 2) Change of SELinux context with /usr/bin/chcon is temporary until >> restorecon or system relabel occurs. I think we should make it >> persistent and enforce this type in our SELinux policy and rather call >> restorecon instead of chcon > > That's a good idea, why didn't I think of that :-( Ah, now I remember, it will be handled by selinux-policy. I would have used restorecon here but since the policy isn't there yet this seemed like a good idea. I'm trying to find out the status of this new policy, it may only make it into F-17. rob From mkosek at redhat.com Tue Apr 3 07:10:19 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 09:10:19 +0200 Subject: [Freeipa-devel] [PATCH] 993 disable UPG for migration In-Reply-To: <4F79FB8C.6000503@redhat.com> References: <4F6B7B9F.4080105@redhat.com> <1332493947.17059.24.camel@balmora.brq.redhat.com> <4F738281.3080505@redhat.com> <1333010191.20236.16.camel@balmora.brq.redhat.com> <4F747F51.7060009@redhat.com> <1333090082.2366.1.camel@priserak> <4F75AF96.9010205@redhat.com> <1333351627.10403.2.camel@balmora.brq.redhat.com> <4F79FB8C.6000503@redhat.com> Message-ID: <1333437019.23102.6.camel@balmora.brq.redhat.com> On Mon, 2012-04-02 at 15:18 -0400, Rob Crittenden wrote: > Martin Kosek wrote: > > On Fri, 2012-03-30 at 09:05 -0400, Rob Crittenden wrote: > >> Martin Kosek wrote: > >>> On Thu, 2012-03-29 at 11:27 -0400, Rob Crittenden wrote: > >>>> Martin Kosek wrote: > >>>>> On Wed, 2012-03-28 at 17:28 -0400, Rob Crittenden wrote: > >>>>>> Martin Kosek wrote: > >>>>>>> On Thu, 2012-03-22 at 15:21 -0400, Rob Crittenden wrote: > >>>>>>>> We don't want to create private groups automatically for migrated users, > >>>>>>>> there could be namespace overlap (and race conditions prevent us from > >>>>>>>> trying to check in advance). > >>>>>>>> > >>>>>>>> Check the sanity of groups in general, warn if the group for the > >>>>>>>> gidnumber doesn't exist at least on the remote server. > >>>>>>>> > >>>>>>>> Fill in the user's shell as well. > >>>>>>>> > >>>>>>>> This will migrate users that are non-POSIX on the remote server. 
> >>>>>>>> > >>>>>>>> rob > >>>>>>> > >>>>>>> This patch fixes the problem of creating UPGs for migrated users, but > >>>>>>> there are several parts which confused me. > >>>>>>> > >>>>>>> 1) Confusing defaults > >>>>>>> > >>>>>>> + if 'def_group_gid' in ctx: > >>>>>>> + entry_attrs.setdefault('gidnumber', ctx['def_group_gid']) > >>>>>>> > >>>>>>> This statement seems redundant, because the account either is posix and > >>>>>>> has both uidnumber and gidnumber or it is non-posix and does not have > >>>>>>> any. > >>>>>>> > >>>>>>> Now, we don't touch gidNumber for posix user, and non-posix users are > >>>>>>> made posix with this statement: > >>>>>>> > >>>>>>> + # migrated user is not already POSIX, make it so > >>>>>>> + if 'uidnumber' not in entry_attrs: > >>>>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] > >>>>>>> > >>>>>>> > >>>>>>> 2) Missing UPG > >>>>>>> When UPG is disabled, the following statement will result in a user with > >>>>>>> a GID pointing to non-existent group. > >>>>>>> > >>>>>>> + # migrated user is not already POSIX, make it so > >>>>>>> + if 'uidnumber' not in entry_attrs: > >>>>>>> + entry_attrs['uidnumber'] = entry_attrs['gidnumber'] = [999] > >>>>>>> > >>>>>>> We may want to run ldap.has_upg() and report a add this user to "failed > >>>>>>> users" with appropriate error. > >>>>>>> > >>>>>>> 3) Check for GID > >>>>>>> The patch implements a check if a group with the gidNumber exists on a > >>>>>>> remote LDAP server and the result is a warning: > >>>>>>> > >>>>>>> - (g_dn, g_attrs) = ldap.get_entry(ctx['def_group_dn'], ['gidnumber']) > >>>>>>> + (remote_dn, remote_entry) = ds_ldap.find_entry_by_attr( > >>>>>>> + 'gidnumber', entry_attrs['gidnumber'][0], 'posixgroup', [''], > >>>>>>> + search_bases['group'] > >>>>>>> + ) > >>>>>>> > >>>>>>> I have few (minor-ish) questions there: > >>>>>>> a) Is the warning in a apache log enough? Maybe it should be included in > >>>>>>> migrate-ds output. > >>>>>>> b) This will be a one more remote LDAP call for every user, we may want > >>>>>>> to optimize it with something like that: > >>>>>>> > >>>>>>> valid_gids = [] > >>>>>>> if user.gidnumber not in valid_gids: > >>>>>>> run the check in remote LDAP > >>>>>>> valid_gids.append(user.gidnumber) > >>>>>>> > >>>>>>> That would save us 999 LDAP queries for 1000 migrated with the same > >>>>>>> primary group. > >>>>>>> > >>>>>>> 4) Extraneous Warning: > >>>>>>> When non-posix user is migrated and thus we make it a posix user, we > >>>>>>> still produce a warning for non-existent group: > >>>>>>> > >>>>>>> [Fri Mar 23 04:21:36 2012] [error] ipa: WARNING: Migrated user's GID > >>>>>>> number 999 does not point to a known group. > >>>>>>> > >>>>>>> 5) Extraneous LDAP call > >>>>>>> > >>>>>>> Isn't the following call to LDAP to get a description redundant? We > >>>>>>> already have the description in entry_attrs. > >>>>>>> > >>>>>>> + (dn, desc_attr) = ldap.get_entry(dn, ['description']) > >>>>>>> + entry_attrs.update(desc_attr) > >>>>>>> + if 'description' in entry_attrs and NO_UPG_MAGIC in > >>>>>>> entry_attrs['description']: > >>>>>>> > >>>>>>> > >>>>>>> Martin > >>>>>>> > >>>>>> > >>>>>> I think this covers your concerns. > >>>>>> > >>>>>> I can't do anything but log warnings at this point in order to maintain > >>>>>> backwards compatibility. I looked into returning a warning entry and it > >>>>>> blew up in validate_output() on older clients. > >>>>>> > >>>>>> rob > >>>>>> > >>>>> > >>>>> This patch is much better and covers my previous concerns. 
I just find > >>>>> an issue with UPG. It is not created for non-posix users when UPGs are > >>>>> enabled: > >>>>> > >>>>> # echo "Secret123" | ipa migrate-ds ldap://ldap.example.com > >>>>> --with-compat --base-dn="dc=greyoak,dc=com" > >>>>> ----------- > >>>>> migrate-ds: > >>>>> ----------- > >>>>> Migrated: > >>>>> user: darcee_leeson, ayaz_kreiger, mnonposix, mollee_weisenberg > >>>>> group: ipagroup > >>>>> Failed user: > >>>>> Failed group: > >>>>> ---------- > >>>>> Passwords have been migrated in pre-hashed format. > >>>>> IPA is unable to generate Kerberos keys unless provided > >>>>> with clear text passwords. All migrated users need to > >>>>> login at https://your.domain/ipa/migration/ before they > >>>>> can use their Kerberos accounts. > >>>>> > >>>>> # ipa user-show mnonposix > >>>>> User login: mnonposix > >>>>> First name: Mister > >>>>> Last name: Nonposix > >>>>> Home directory: /home/mnonposix > >>>>> Login shell: /bin/sh > >>>>> UID: 328000195 > >>>>> GID: 328000195 > >>>>> Org. Unit: Product Testing > >>>>> Job Title: Test User > >>>>> Account disabled: False > >>>>> Password: True > >>>>> Member of groups: ipausers > >>>>> Kerberos keys available: False > >>>>> > >>>>> # ipa group-show mnonposix > >>>>> ipa: ERROR: mnonposix: group not found > >>>> > >>>> Yes, I was always disabling UPG. I now allow it when migrating a > >>>> non-POSIX user. > >>>> > >>>>> I am also thinking if we need to ask if UPG is enabled for every > >>>>> migrated user - every ldap.has_upg() call means one query to host LDAP. > >>>>> Maybe we should ask just in the beginning and store the setting in > >>>>> "ctx['upg_enabled']. I don't think that anyone needs to switch UPG > >>>>> status during migration process. > >>>> > >>>> I agree, nice optimization. > >>>> > >>>> rob > >>> > >>> This patch is OK, everything worked as expected. We just now need to > >>> specify if we want a special option/flag to enable non-posix -> posix > >>> user conversion. If not, then ACK. > >>> > >>> Martin > >>> > >> > >> Simo and I had a brief discussion about this in IRC and I'm going to > >> punt on non-POSIX user conversion for now. There could be specific > >> reasons for this, like having a shared user (nss_ldap) or some sort of > >> system user that they don't to be a full POSIX user. > >> > >> Adding a new option this late in 2.2 doesn't seem like a good idea so > >> I'll patch it to simply fail non-POSIX users for now. I've opened a > >> ticket to optionally allow them to be converted. > >> > >> rob > > > > Ok. Your patch 993 is still useful and fixes the creation of UPGs, we > > just need to remove nonposix->posix conversion and make sure non-posix > > users are migrated successfully as non-posix users. > > > > Martin > > > > I decided to not migrate non-POSIX users at all. They are a-typical in > IPA and won't show at all using our tools so I don't see the point in > migrating them. non-POSIX users in IPA are typically stored in > cn=sysconfig rather than cn=users. > > rob ACK. Pushed to master, ipa-2-2. 
Martin From mkosek at redhat.com Tue Apr 3 07:26:34 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 09:26:34 +0200 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F79FFBD.1030408@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> Message-ID: <1333437994.23102.13.camel@balmora.brq.redhat.com> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: > Rob Crittenden wrote: > > Martin Kosek wrote: > >> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > >>> Certmonger will currently automatically renew server certificates but > >>> doesn't restart the services so you can still end up with expired > >>> certificates if you services never restart. > >>> > >>> This patch registers are restart command with certmonger so the IPA > >>> services will automatically be restarted to get the updated cert. > >>> > >>> Easy to test. Install IPA then resubmit the current server certs and > >>> watch the services restart: > >>> > >>> # ipa-getcert list > >>> > >>> Find the ID for either your dirsrv or httpd instance > >>> > >>> # ipa-getcert resubmit -i > >>> > >>> Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors > >>> to see the service restart. > >>> > >>> rob > >> > >> What about current instances - can we/do we want to update certmonger > >> tracking so that their instances are restarted as well? > >> > >> Anyway, I found few issues SELinux issues with the patch: > >> > >> 1) # rpm -Uvh freeipa-* > >> Preparing... ########################################### [100%] > >> 1:freeipa-python ########################################### [ 20%] > >> 2:freeipa-client ########################################### [ 40%] > >> 3:freeipa-admintools ########################################### [ 60%] > >> 4:freeipa-server ########################################### [ 80%] > >> /usr/bin/chcon: failed to change context of > >> `/usr/lib64/ipa/certmonger' to > >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > >> /usr/bin/chcon: failed to change context of > >> `/usr/lib64/ipa/certmonger/restart_dirsrv' to > >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > >> /usr/bin/chcon: failed to change context of > >> `/usr/lib64/ipa/certmonger/restart_httpd' to > >> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument > >> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) > >> scriptlet failed, exit status 1 > >> 5:freeipa-server-selinux ########################################### > >> [100%] > >> > >> certmonger_unconfined_exec_t type was unknown with my selinux policy: > >> > >> selinux-policy-3.10.0-80.fc16.noarch > >> selinux-policy-targeted-3.10.0-80.fc16.noarch > >> > >> If we need a higher SELinux version, we should bump the required package > >> version spec file. > > > > Yeah, waiting on it to be backported. > > > >> > >> 2) Change of SELinux context with /usr/bin/chcon is temporary until > >> restorecon or system relabel occurs. I think we should make it > >> persistent and enforce this type in our SELinux policy and rather call > >> restorecon instead of chcon > > > > That's a good idea, why didn't I think of that :-( > > Ah, now I remember, it will be handled by selinux-policy. I would have > used restorecon here but since the policy isn't there yet this seemed > like a good idea. 
> > I'm trying to find out the status of this new policy, it may only make > it into F-17. > > rob Ok. But if this policy does not go in F-16 and if we want this fix in F16 release too, I guess we would have to implement both approaches in our spec file: 1) When on F16, include SELinux policy for restart scripts + run restorecon 2) When on F17, do not include the SELinux policy (+ run restorecon) Martin From mkosek at redhat.com Tue Apr 3 08:53:34 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 10:53:34 +0200 Subject: [Freeipa-devel] [PATCH] 246 Configure SELinux for httpd during upgrades Message-ID: <1333443214.23102.14.camel@balmora.brq.redhat.com> SELinux configuration for httpd instance was set for new installations only. Upgraded IPA servers (namely 2.1.x -> 2.2.x upgrade) missed the configuration. This lead to AVCs when httpd tries to contact ipa_memcached and user not being able to log in. This patch updates ipa-upgradeconfig to configure SELinux in the same way as ipa-server-install does. https://fedorahosted.org/freeipa/ticket/2603 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-246-configure-selinux-for-httpd-during-upgrades.patch Type: text/x-patch Size: 5107 bytes Desc: not available URL: From jcholast at redhat.com Tue Apr 3 09:58:16 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Apr 2012 11:58:16 +0200 Subject: [Freeipa-devel] [PATCH] 73 Check whether the default user group is POSIX when adding new user with --noprivate Message-ID: <4F7AC9B8.3090604@redhat.com> https://fedorahosted.org/freeipa/ticket/2572 Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-73-default-group-posix-check.patch Type: text/x-patch Size: 2821 bytes Desc: not available URL: From ohamada at redhat.com Tue Apr 3 10:22:59 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 03 Apr 2012 12:22:59 +0200 Subject: [Freeipa-devel] [PATCH] 20 Fix empty external member processing Message-ID: <4F7ACF83.3080202@redhat.com> https://fedorahosted.org/freeipa/ticket/2447 Validation of external member was failing for empty strings because of wrong condition. -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-ohamada-20-Fix-empty-external-member-processing.patch Type: text/x-patch Size: 1051 bytes Desc: not available URL: From abokovoy at redhat.com Tue Apr 3 10:41:35 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 3 Apr 2012 13:41:35 +0300 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) Message-ID: <20120403104135.GD23171@redhat.com> Hi! Attached are the current patches for adding support for Active Directory trusts for FreeIPA v3 (master). These are tested and working with samba4 build available in ipa-devel@ repo. You have to use --delegate until we'll get all the parts of the Heimdal puzzle untangled and solved, and Simo patch 490 (s4u2proxy fix) is committed as well. Sumit asked me to send patches for review and commit to master so that he can proceed with his changes (removal of kadmin.local use, SID population task for 389-ds, etc). Without kadmin.local use fix these patches are not working with SELinux enabled. Patches have [../9] mark because they were generated out of my adwork tree. 
I have merged two patches together for obvious change reason and have left out Simo's s4u2proxy patch out, thus there are seven patches proposed for commit. -- / Alexander Bokovoy -------------- next part -------------- >From 7a769c59c9de3ac27577557ae66f5ae874ec167f Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Tue, 28 Feb 2012 13:22:49 +0200 Subject: [PATCH 1/9] Add separate attribute to store trusted domain SID We need two attributes in the ipaNTTrustedDomain objectclass to store different kind of SID. Currently ipaNTSecurityIdentifier is used to store the Domain-SID of the trusted domain. A second attribute is needed to store the SID for the trusted domain user. Since it cannot be derived safely from other values and since it does not make sense to create a separate object for the user a new attribute is needed. https://fedorahosted.org/freeipa/ticket/2191 --- daemons/ipa-sam/ipa_sam.c | 9 +++++---- install/share/60basev3.ldif | 3 ++- install/share/bootstrap-template.ldif | 8 ++++++++ install/share/replica-s4u2proxy.ldif | 8 ++++++++ install/updates/60-trusts.update | 26 ++++++++++++++++++++++++++ install/updates/61-trusts-s4u2proxy.update | 12 ++++++++++++ install/updates/Makefile.am | 2 ++ 7 files changed, 63 insertions(+), 5 deletions(-) create mode 100644 install/updates/60-trusts.update create mode 100644 install/updates/61-trusts-s4u2proxy.update diff --git a/daemons/ipa-sam/ipa_sam.c b/daemons/ipa-sam/ipa_sam.c index be97cb7..c362988 100644 --- a/daemons/ipa-sam/ipa_sam.c +++ b/daemons/ipa-sam/ipa_sam.c @@ -123,6 +123,7 @@ do { \ #define LDAP_PAGE_SIZE 1024 #define LDAP_OBJ_SAMBASAMACCOUNT "ipaNTUserAttrs" #define LDAP_OBJ_TRUSTED_DOMAIN "ipaNTTrustedDomain" +#define LDAP_ATTRIBUTE_TRUST_SID "ipaNTTrustedDomainSID" #define LDAP_ATTRIBUTE_SID "ipaNTSecurityIdentifier" #define LDAP_OBJ_GROUPMAP "ipaNTGroupAttrs" @@ -1674,7 +1675,7 @@ static bool get_trusted_domain_by_sid_int(struct ldapsam_privates *ldap_state, filter = talloc_asprintf(mem_ctx, "(&(objectClass=%s)(%s=%s))", LDAP_OBJ_TRUSTED_DOMAIN, - LDAP_ATTRIBUTE_SECURITY_IDENTIFIER, sid); + LDAP_ATTRIBUTE_TRUST_SID, sid); if (filter == NULL) { return false; } @@ -1734,10 +1735,10 @@ static bool fill_pdb_trusted_domain(TALLOC_CTX *mem_ctx, /* All attributes are MAY */ dummy = get_single_attribute(NULL, priv2ld(ldap_state), entry, - LDAP_ATTRIBUTE_SECURITY_IDENTIFIER); + LDAP_ATTRIBUTE_TRUST_SID); if (dummy == NULL) { DEBUG(9, ("Attribute %s not present.\n", - LDAP_ATTRIBUTE_SECURITY_IDENTIFIER)); + LDAP_ATTRIBUTE_TRUST_SID)); ZERO_STRUCT(td->security_identifier); } else { res = string_to_sid(&td->security_identifier, dummy); @@ -2021,7 +2022,7 @@ static NTSTATUS ipasam_set_trusted_domain(struct pdb_methods *methods, if (!is_null_sid(&td->security_identifier)) { smbldap_make_mod(priv2ld(ldap_state), entry, &mods, - LDAP_ATTRIBUTE_SECURITY_IDENTIFIER, + LDAP_ATTRIBUTE_TRUST_SID, sid_string_talloc(tmp_ctx, &td->security_identifier)); } diff --git a/install/share/60basev3.ldif b/install/share/60basev3.ldif index 40412b5..2c24137 100644 --- a/install/share/60basev3.ldif +++ b/install/share/60basev3.ldif @@ -6,6 +6,7 @@ dn: cn=schema attributeTypes: (2.16.840.1.113730.3.8.11.1 NAME 'ipaExternalMember' DESC 'External Group Member Identifier' EQUALITY caseIgnoreMatch ORDERING caseIgnoreOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'IPA v3' ) attributeTypes: (2.16.840.1.113730.3.8.11.2 NAME 'ipaNTSecurityIdentifier' DESC 'NT Security ID' EQUALITY caseIgnoreIA5Match OREDRING caseIgnoreIA5OrderingMatch SUBSTR 
caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +attributeTypes: (2.16.840.1.113730.3.8.11.23 NAME 'ipaNTTrustedDomainSID' DESC 'NT Trusted Domain Security ID' EQUALITY caseIgnoreIA5Match OREDRING caseIgnoreIA5OrderingMatch SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'IPA v3' ) attributeTypes: (2.16.840.1.113730.3.8.11.3 NAME 'ipaNTFlatName' DESC 'Flat/Netbios Name' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) attributeTypes: (2.16.840.1.113730.3.8.11.4 NAME 'ipaNTFallbackPrimaryGroup' DESC 'Fallback Group to set the Primary group Security Identifier for users with UPGs' SUP distinguishedName X-ORIGIN 'IPA v3' ) attributeTypes: (2.16.840.1.113730.3.8.11.5 NAME 'ipaNTHash' DESC 'NT Hash of user password' EQUALITY octetStringMatch OREDRING octetStringOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE X-ORIGIN 'IPA v3' ) @@ -32,7 +33,7 @@ objectClasses: (2.16.840.1.113730.3.8.12.1 NAME 'ipaExternalGroup' SUP top STRUC objectClasses: (2.16.840.1.113730.3.8.12.2 NAME 'ipaNTUserAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier ) MAY ( ipaNTHash $ ipaNTLogonScript $ ipaNTProfilePath $ ipaNTHomeDirectory $ ipaNTHomeDirectoryDrive ) X-ORIGIN 'IPA v3' ) objectClasses: (2.16.840.1.113730.3.8.12.3 NAME 'ipaNTGroupAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier ) X-ORIGIN 'IPA v3' ) objectClasses: (2.16.840.1.113730.3.8.12.4 NAME 'ipaNTDomainAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier $ ipaNTFlatName $ ipaNTDomainGUID ) MAY ( ipaNTFallbackPrimaryGroup ) X-ORIGIN 'IPA v3' ) -objectClasses: (2.16.840.1.113730.3.8.12.5 NAME 'ipaNTTrustedDomain' SUP top STRUCTURAL DESC 'Trusted Domain Object' MUST ( cn ) MAY ( ipaNTTrustType $ ipaNTTrustAttributes $ ipaNTTrustDirection $ ipaNTTrustPartner $ ipaNTFlatName $ ipaNTTrustAuthOutgoing $ ipaNTTrustAuthIncoming $ ipaNTSecurityIdentifier $ ipaNTTrustForestTrustInfo $ ipaNTTrustPosixOffset $ ipaNTSupportedEncryptionTypes) ) +objectClasses: (2.16.840.1.113730.3.8.12.5 NAME 'ipaNTTrustedDomain' SUP top STRUCTURAL DESC 'Trusted Domain Object' MUST ( cn ) MAY ( ipaNTTrustType $ ipaNTTrustAttributes $ ipaNTTrustDirection $ ipaNTTrustPartner $ ipaNTFlatName $ ipaNTTrustAuthOutgoing $ ipaNTTrustAuthIncoming $ ipaNTTrustedDomainSID $ ipaNTTrustForestTrustInfo $ ipaNTTrustPosixOffset $ ipaNTSupportedEncryptionTypes) ) objectClasses: (2.16.840.1.113730.3.8.12.6 NAME 'groupOfPrincipals' SUP top AUXILIARY MUST ( cn ) MAY ( memberPrincipal ) X-ORIGIN 'IPA v3' ) objectClasses: (2.16.840.1.113730.3.8.12.7 NAME 'ipaKrb5DelegationACL' SUP groupOfPrincipals STRUCTURAL MAY ( ipaAllowToImpersonate $ ipaAllowedTarget ) X-ORIGIN 'IPA v3' ) objectClasses: (2.16.840.1.113730.3.8.12.10 NAME 'ipaSELinuxUserMap' SUP ipaAssociation STRUCTURAL MUST ipaSELinuxUser MAY ( accessTime $ seeAlso ) X-ORIGIN 'IPA v3') diff --git a/install/share/bootstrap-template.ldif b/install/share/bootstrap-template.ldif index 468a43b..149b6c9 100644 --- a/install/share/bootstrap-template.ldif +++ b/install/share/bootstrap-template.ldif @@ -175,6 +175,7 @@ objectClass: top cn: ipa-http-delegation memberPrincipal: HTTP/$HOST@$REALM ipaAllowedTarget: cn=ipa-ldap-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX +ipaAllowedTarget: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX dn: cn=ipa-ldap-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX changetype: add @@ 
-183,6 +184,13 @@ objectClass: top cn: ipa-ldap-delegation-targets memberPrincipal: ldap/$HOST@$REALM +dn: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX +changetype: add +objectClass: groupOfPrincipals +objectClass: top +cn: ipa-cifs-delegation-targets +memberPrincipal: cifs/$HOST@$REALM + dn: uid=admin,cn=users,cn=accounts,$SUFFIX changetype: add objectClass: top diff --git a/install/share/replica-s4u2proxy.ldif b/install/share/replica-s4u2proxy.ldif index 3cafa46..55c984a 100644 --- a/install/share/replica-s4u2proxy.ldif +++ b/install/share/replica-s4u2proxy.ldif @@ -2,8 +2,16 @@ dn: cn=ipa-http-delegation,cn=s4u2proxy,cn=etc,$SUFFIX changetype: modify add: memberPrincipal memberPrincipal: HTTP/$FQDN@$REALM +add: ipaAllowedTarget +ipaAllowedTarget: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX dn: cn=ipa-ldap-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX changetype: modify add: memberPrincipal memberPrincipal: ldap/$FQDN@$REALM + +dn: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX +changetype: modify +add: memberPrincipal +memberPrincipal: cifs/$FQDN@$REALM + diff --git a/install/updates/60-trusts.update b/install/updates/60-trusts.update new file mode 100644 index 0000000..9a320fc --- /dev/null +++ b/install/updates/60-trusts.update @@ -0,0 +1,26 @@ +dn: cn=schema +add:attributeTypes: (2.16.840.1.113730.3.8.11.2 NAME 'ipaNTSecurityIdentifier' DESC 'NT Security ID' EQUALITY caseIgnoreIA5Match OREDRING caseIgnoreIA5OrderingMatch SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.23 NAME 'ipaNTTrustedDomainSID' DESC 'NT Trusted Domain Security ID' EQUALITY caseIgnoreIA5Match OREDRING caseIgnoreIA5OrderingMatch SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.3 NAME 'ipaNTFlatName' DESC 'Flat/Netbios Name' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.4 NAME 'ipaNTFallbackPrimaryGroup' DESC 'Fallback Group to set the Primary group Security Identifier for users with UPGs' SUP distinguishedName X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.5 NAME 'ipaNTHash' DESC 'NT Hash of user password' EQUALITY octetStringMatch OREDRING octetStringOrderingMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.6 NAME 'ipaNTLogonScript' DESC 'User Logon Script Name' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.7 NAME 'ipaNTProfilePath' DESC 'User Profile Path' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.8 NAME 'ipaNTHomeDirectory' DESC 'User Home Directory Path' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.9 NAME 'ipaNTHomeDirectoryDrive' DESC 'User Home Drive Letter' EQUALITY caseIgnoreMatch OREDRING caseIgnoreOrderingMatch SUBSTR 
caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.10 NAME 'ipaNTDomainGUID' DESC 'NT Domain GUID' EQUALITY caseIgnoreIA5Match OREDRING caseIgnoreIA5OrderingMatch SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE X-ORIGIN 'IPA v3' ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.11 NAME 'ipaNTTrustType' DESC 'Type of trust' EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.12 NAME 'ipaNTTrustAttributes' DESC 'Trust attributes for a trusted domain' EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.13 NAME 'ipaNTTrustDirection' DESC 'Direction of a trust' EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.14 NAME 'ipaNTTrustPartner' DESC 'Fully qualified name of the domain with which a trust exists' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{128} ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.15 NAME 'ipaNTTrustAuthOutgoing' DESC 'Authentication information for the outgoing portion of a trust' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.16 NAME 'ipaNTTrustAuthIncoming' DESC 'Authentication information for the incoming portion of a trust' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.17 NAME 'ipaNTTrustForestTrustInfo' DESC 'Forest trust information for a trusted domain object' EQUALITY octetStringMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.18 NAME 'ipaNTTrustPosixOffset' DESC 'POSIX offset of a trust' EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) +add:attributeTypes: ( 2.16.840.1.113730.3.8.11.19 NAME 'ipaNTSupportedEncryptionTypes' DESC 'Supported encryption types of a trust' EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE ) +add:objectClasses: (2.16.840.1.113730.3.8.12.2 NAME 'ipaNTUserAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier ) MAY ( ipaNTHash $$ ipaNTLogonScript $$ ipaNTProfilePath $$ ipaNTHomeDirectory $$ ipaNTHomeDirectoryDrive ) X-ORIGIN 'IPA v3' ) +add:objectClasses: (2.16.840.1.113730.3.8.12.3 NAME 'ipaNTGroupAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier ) X-ORIGIN 'IPA v3' ) +add:objectClasses: (2.16.840.1.113730.3.8.12.4 NAME 'ipaNTDomainAttrs' SUP top AUXILIARY MUST ( ipaNTSecurityIdentifier $$ ipaNTFlatName $$ ipaNTDomainGUID ) MAY ( ipaNTFallbackPrimaryGroup ) X-ORIGIN 'IPA v3' ) +replace:objectClasses: (2.16.840.1.113730.3.8.12.5 NAME 'ipaNTTrustedDomain' SUP top STRUCTURAL DESC 'Trusted Domain Object' MUST ( cn ) MAY ( ipaNTTrustType $$ ipaNTTrustAttributes $$ ipaNTTrustDirection $$ ipaNTTrustPartner $$ ipaNTFlatName $$ ipaNTTrustAuthOutgoing $$ ipaNTTrustAuthIncoming $$ ipaNTSecurityIdentifier $$ ipaNTTrustForestTrustInfo $$ ipaNTTrustPosixOffset $$ ipaNTSupportedEncryptionTypes) )::objectClasses: (2.16.840.1.113730.3.8.12.5 NAME 'ipaNTTrustedDomain' SUP top STRUCTURAL DESC 'Trusted Domain Object' MUST ( cn ) MAY ( ipaNTTrustType $$ ipaNTTrustAttributes $$ ipaNTTrustDirection $$ ipaNTTrustPartner $$ ipaNTFlatName $$ ipaNTTrustAuthOutgoing $$ ipaNTTrustAuthIncoming $$ ipaNTTrustedDomainSID $$ ipaNTTrustForestTrustInfo $$ ipaNTTrustPosixOffset $$ 
ipaNTSupportedEncryptionTypes) ) +add:objectClasses: (2.16.840.1.113730.3.8.12.5 NAME 'ipaNTTrustedDomain' SUP top STRUCTURAL DESC 'Trusted Domain Object' MUST ( cn ) MAY ( ipaNTTrustType $$ ipaNTTrustAttributes $$ ipaNTTrustDirection $$ ipaNTTrustPartner $$ ipaNTFlatName $$ ipaNTTrustAuthOutgoing $$ ipaNTTrustAuthIncoming $$ ipaNTTrustedDomainSID $$ ipaNTTrustForestTrustInfo $$ ipaNTTrustPosixOffset $$ ipaNTSupportedEncryptionTypes) ) + diff --git a/install/updates/61-trusts-s4u2proxy.update b/install/updates/61-trusts-s4u2proxy.update new file mode 100644 index 0000000..4a71148 --- /dev/null +++ b/install/updates/61-trusts-s4u2proxy.update @@ -0,0 +1,12 @@ +dn: cn=ipa-http-delegation,cn=s4u2proxy,cn=etc,$SUFFIX +add: ipaAllowedTarget: 'cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX' + +dn: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX +default: objectClass: groupOfPrincipals +default: objectClass: top +default: cn: ipa-cifs-delegation-targets +default: memberPrincipal: cifs/$FQDN@$REALM + +dn: cn=ipa-cifs-delegation-targets,cn=s4u2proxy,cn=etc,$SUFFIX +add: memberPrincipal: cifs/$FQDN@$REALM + diff --git a/install/updates/Makefile.am b/install/updates/Makefile.am index e1eb35a..412630e 100644 --- a/install/updates/Makefile.am +++ b/install/updates/Makefile.am @@ -33,6 +33,8 @@ app_DATA = \ 50-nis.update \ 50-ipaconfig.update \ 55-pbacmemberof.update \ + 60-trusts.update \ + 61-trusts-s4u2proxy.update \ $(NULL) EXTRA_DIST = \ -- 1.7.9.3 -------------- next part -------------- >From 0ca47cd69125ed2718ef1b6dca7bf86d35d05170 Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Tue, 28 Feb 2012 13:23:51 +0200 Subject: [PATCH 2/9] Use dedicated keytab for Samba Samba just needs the cifs/ key on the ipa server. Configure samba to use a different keytab file so that we do not risk samba commands (net, or similar) to mess up the system keytab. 
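To make the keytab handling concrete, the following minimal sketch (not taken from the patch; the host name, realm and use of subprocess are assumptions) shows the provisioning flow the commit message describes: drop any stale cifs/ keys from the dedicated Samba keytab, then fetch fresh ones with ipa-getkeytab.

# Hypothetical sketch of provisioning the dedicated Samba keytab;
# host and realm names are examples only.
import os
import subprocess

SAMBA_KEYTAB = "/etc/samba/samba.keytab"
principal = "cifs/ipa.example.com@EXAMPLE.COM"

if os.path.exists(SAMBA_KEYTAB):
    # ipa-rmkeytab exits with 5 when there is no key to remove, which is harmless here
    rc = subprocess.call(["ipa-rmkeytab", "--principal", principal,
                          "-k", SAMBA_KEYTAB])
    if rc not in (0, 5):
        print "warning: could not remove old keys for %s" % principal

subprocess.check_call(["ipa-getkeytab", "--server", "ipa.example.com",
                       "--principal", principal, "-k", SAMBA_KEYTAB])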
https://fedorahosted.org/freeipa/ticket/2168 --- install/share/smb.conf.template | 4 +++- ipaserver/install/adtrustinstance.py | 27 ++++++++++++++++----------- 2 files changed, 19 insertions(+), 12 deletions(-) diff --git a/install/share/smb.conf.template b/install/share/smb.conf.template index 4ab79da..8ed521b 100644 --- a/install/share/smb.conf.template +++ b/install/share/smb.conf.template @@ -1,7 +1,8 @@ [global] workgroup = $NETBIOS_NAME realm = $REALM -kerberos method = system keytab +kerberos method = dedicated keytab +dedicated keytab file = FILE:/etc/samba/samba.keytab create krb5 conf = no security = user domain master = yes @@ -10,6 +11,7 @@ log level = 1 max log size = 100000 log file = /var/log/samba/log.%m passdb backend = ipasam:ldapi://$LDAPI_SOCKET +disable spoolss = yes ldapsam:trusted=yes ldap ssl = off ldap admin dn = $SMB_DN diff --git a/ipaserver/install/adtrustinstance.py b/ipaserver/install/adtrustinstance.py index f437901..b978146 100644 --- a/ipaserver/install/adtrustinstance.py +++ b/ipaserver/install/adtrustinstance.py @@ -255,7 +255,10 @@ class ADTRUSTInstance(service.Service): conf_fd.close() def __add_cldap_module(self): - self._ldap_mod("ipa-cldap-conf.ldif", self.sub_dict) + try: + self._ldap_mod("ipa-cldap-conf.ldif", self.sub_dict) + except: + pass def __write_smb_registry(self): template = os.path.join(ipautil.SHARE_DIR, "smb.conf.template") @@ -279,21 +282,23 @@ class ADTRUSTInstance(service.Service): def __setup_principal(self): cifs_principal = "cifs/" + self.fqdn + "@" + self.realm_name - installutils.kadmin_addprinc(cifs_principal) - self.move_service(cifs_principal) + api.Command.service_add(unicode(cifs_principal)) - try: - ipautil.run(["ipa-rmkeytab", "--principal", cifs_principal, - "-k", "/etc/krb5.keytab"]) - except ipautil.CalledProcessError, e: - if e.returncode != 5: - root_logger.critical("Failed to remove old key for %s" % cifs_principal) + samba_keytab = "/etc/samba/samba.keytab" + if os.path.exists(samba_keytab): + try: + ipautil.run(["ipa-rmkeytab", "--principal", cifs_principal, + "-k", samba_keytab]) + except ipautil.CalledProcessError, e: + root_logger.critical("Result of removing old key: %d" % e.returncode) + if e.returncode != 5: + root_logger.critical("Failed to remove old key for %s" % cifs_principal) try: ipautil.run(["ipa-getkeytab", "--server", self.fqdn, "--principal", cifs_principal, - "-k", "/etc/krb5.keytab"]) + "-k", samba_keytab]) except ipautil.CalledProcessError, e: root_logger.critical("Failed to add key for %s" % cifs_principal) @@ -368,7 +373,7 @@ class ADTRUSTInstance(service.Service): try: self.ldap_enable('ADTRUST', self.fqdn, self.dm_password, \ self.suffix) - except ldap.ALREADY_EXISTS: + except (ldap.ALREADY_EXISTS, errors.DuplicateEntry), e: root_logger.critical("ADTRUST Service startup entry already exists.") pass -- 1.7.9.3 -------------- next part -------------- >From 3c4cdcbf2fb55016da044ad41b6d19e8ee081d2c Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Tue, 28 Feb 2012 13:24:41 +0200 Subject: [PATCH 3/9] Add trust management for Active Directory trusts https://fedorahosted.org/freeipa/ticket/1821 --- API.txt | 57 ++++++ freeipa.spec.in | 20 ++- install/share/Makefile.am | 1 + install/share/smb.conf.empty | 2 + install/tools/ipa-adtrust-install | 1 + ipalib/constants.py | 5 +- ipalib/plugins/trust.py | 220 +++++++++++++++++++++++ ipaserver/dcerpc.py | 325 ++++++++++++++++++++++++++++++++++ ipaserver/install/adtrustinstance.py | 14 +- 9 files changed, 638 insertions(+), 7 deletions(-) create mode 
100644 install/share/smb.conf.empty create mode 100644 ipalib/plugins/trust.py create mode 100644 ipaserver/dcerpc.py diff --git a/API.txt b/API.txt index e9eb1e1..1a6e5a1 100644 --- a/API.txt +++ b/API.txt @@ -3066,6 +3066,63 @@ option: Str('version?', exclude='webui') output: Output('summary', (, ), None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) output: Output('value', , None) +command: trust_add_ad +args: 1,7,3 +arg: Str('cn', attribute=True, cli_name='realm', multivalue=False, primary_key=True, required=True) +option: Str('realm_admin?', cli_name='admin') +option: Password('realm_passwd?', cli_name='password', confirm=False) +option: Str('realm_server?', cli_name='server') +option: Password('trust_passwd?', cli_name='trust_password', confirm=False) +option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') +option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('version?', exclude='webui') +output: Output('summary', (, ), None) +output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) +output: Output('value', , None) +command: trust_del +args: 1,1,3 +arg: Str('cn', attribute=True, cli_name='realm', multivalue=True, primary_key=True, query=True, required=True) +option: Flag('continue', autofill=True, cli_name='continue', default=False) +output: Output('summary', (, ), None) +output: Output('result', , None) +output: Output('value', , None) +command: trust_find +args: 1,7,4 +arg: Str('criteria?', noextrawhitespace=False) +option: Str('cn', attribute=True, autofill=False, cli_name='realm', multivalue=False, primary_key=True, query=True, required=False) +option: Int('timelimit?', autofill=False, minvalue=0) +option: Int('sizelimit?', autofill=False, minvalue=0) +option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') +option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('version?', exclude='webui') +option: Flag('pkey_only?', autofill=True, default=False) +output: Output('summary', (, ), None) +output: ListOfEntries('result', (, ), Gettext('A list of LDAP entries', domain='ipa', localedir=None)) +output: Output('count', , None) +output: Output('truncated', , None) +command: trust_mod +args: 1,7,3 +arg: Str('cn', attribute=True, cli_name='realm', multivalue=False, primary_key=True, query=True, required=True) +option: Str('setattr*', cli_name='setattr', exclude='webui') +option: Str('addattr*', cli_name='addattr', exclude='webui') +option: Str('delattr*', cli_name='delattr', exclude='webui') +option: Flag('rights', autofill=True, default=False) +option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') +option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('version?', exclude='webui') +output: Output('summary', (, ), None) +output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) +output: Output('value', , None) +command: trust_show +args: 1,4,3 +arg: Str('cn', attribute=True, cli_name='realm', multivalue=False, primary_key=True, query=True, required=True) +option: Flag('rights', autofill=True, default=False) +option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') +option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('version?', exclude='webui') +output: 
Output('summary', (, ), None) +output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) +output: Output('value', , None) command: user_add args: 1,33,3 arg: Str('uid', attribute=True, cli_name='login', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.$-]?$', pattern_errmsg='may only include letters, numbers, _, -, . and $', primary_key=True, required=True) diff --git a/freeipa.spec.in b/freeipa.spec.in index e7eb7d4..7746c98 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -188,6 +188,19 @@ user, virtual machines, groups, authentication credentials), Policy (configuration settings, access control information) and Audit (events, logs, analysis thereof). This package provides SELinux rules for the daemons included in freeipa-server + +%package server-trust-ad +Summary: Virtual package to install packages required for Active Directory trusts +Group: System Environment/Base +Requires: %{name}-server = %version-%release +Requires: python-crypto +Requires: samba4-python +Requires: samba4 + +%description server-trust-ad +Cross-realm trusts with Active Directory in IPA require working Samba 4 installation. +This package is provided for convenience to install all required dependencies at once. + %endif @@ -278,7 +291,6 @@ user, virtual machines, groups, authentication credentials), Policy logs, analysis thereof). If you are using IPA you need to install this package. - %prep %setup -n freeipa-%{version} -q @@ -630,6 +642,9 @@ fi %doc COPYING README Contributors.txt %{_usr}/share/selinux/targeted/ipa_httpd.pp %{_usr}/share/selinux/targeted/ipa_dogtag.pp + +%files server-trust-ad +%{_usr}/share/ipa/smb.conf.empty %endif %files client @@ -681,6 +696,9 @@ fi %ghost %attr(0644,root,apache) %config(noreplace) %{_sysconfdir}/ipa/ca.crt %changelog +* Mon Mar 12 2012 Alexander Bokovoy - 2.99.0-23 +- Add freeipa-server-trust-ad virtual package to capture all required dependencies + for Active Directory trust management * Tue Mar 27 2012 Rob Crittenden - 2.99.0-26 - Add python-krbV Requires on client package diff --git a/install/share/Makefile.am b/install/share/Makefile.am index 243fc2a..81fd0dc 100644 --- a/install/share/Makefile.am +++ b/install/share/Makefile.am @@ -33,6 +33,7 @@ app_DATA = \ krbrealm.con.template \ preferences.html.template \ smb.conf.template \ + smb.conf.empty \ referint-conf.ldif \ dna.ldif \ master-entry.ldif \ diff --git a/install/share/smb.conf.empty b/install/share/smb.conf.empty new file mode 100644 index 0000000..6d1e626 --- /dev/null +++ b/install/share/smb.conf.empty @@ -0,0 +1,2 @@ +[global] + diff --git a/install/tools/ipa-adtrust-install b/install/tools/ipa-adtrust-install index 248ea35..0f3e473 100755 --- a/install/tools/ipa-adtrust-install +++ b/install/tools/ipa-adtrust-install @@ -131,6 +131,7 @@ def main(): break # Check we have a public IP that is associated with the hostname + ip = None try: if options.ip_address: ip = ipautil.CheckedIPAddress(options.ip_address, match_local=True) diff --git a/ipalib/constants.py b/ipalib/constants.py index dc32533..3376d30 100644 --- a/ipalib/constants.py +++ b/ipalib/constants.py @@ -101,7 +101,10 @@ DEFAULT_CONFIG = ( ('container_automember', 'cn=automember,cn=etc'), ('container_selinux', 'cn=usermap,cn=selinux'), ('container_s4u2proxy', 'cn=s4u2proxy,cn=etc'), - + ('container_cifsdomains', 'cn=ad,cn=etc'), + ('container_trusts', 'cn=trusts'), + ('container_adtrusts', 'cn=ad,cn=trusts'), + # Ports, hosts, and URIs: # FIXME: let's 
renamed xmlrpc_uri to rpc_xml_uri ('xmlrpc_uri', 'http://localhost:8888/ipa/xml'), diff --git a/ipalib/plugins/trust.py b/ipalib/plugins/trust.py new file mode 100644 index 0000000..08cb79a --- /dev/null +++ b/ipalib/plugins/trust.py @@ -0,0 +1,220 @@ +# Authors: +# Alexander Bokovoy +# +# Copyright (C) 2011 Red Hat +# see file 'COPYING' for use and warranty information +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +from ipalib.plugins.baseldap import * +from ipalib import api, Str, Password, DefaultFrom, _, ngettext, Object +from ipalib.parameters import Enum +from ipalib import Command +from ipalib import errors +from ipapython import ipautil +from ipalib import util + +__doc__ = _(""" +Manage trust relationship between realms +""") + +trust_output_params = ( + Str('ipantflatname', + label='Domain NetBIOS name'), + Str('ipantsecurityidentifier', + label='Domain Security Identifier'), + Str('trustdirection', + label='Trust direction'), + Str('trusttype', + label='Trust type'), +) + +def trust_type_string(level): + return int(level) in (1,2,3) and (u'Forest',u'Cross-Forest',u'MIT')[int(level)-1] + +def trust_direction_string(level): + return int(level) in (1,2,3) and (u'Downlevel',u'Uplevel',u'Both directions')[int(level)-1] + +class trust(LDAPObject): + """ + Trust object. 
+ """ + trust_types = ('ad', 'ipa') + container_dn = api.env.container_trusts + object_name = _('trust') + object_name_plural = _('trusts') + object_class = ['ipaNTTrustedDomain'] + default_attributes = ['cn', 'ipantflatname', 'ipantsecurityidentifier', + 'ipanttrusttype', 'ipanttrustattributes', 'ipanttrustdirection', 'ipanttrustpartner', + 'ipantauthtrustoutgoing', 'ipanttrustauthincoming', 'ipanttrustforesttrustinfo', + 'ipanttrustposixoffset', 'ipantsupportedencryptiontypes' ] + + label = _('Trusts') + label_singular = _('Trust') + + takes_params = ( + Str('cn', + cli_name='realm', + label=_('Realm name'), + primary_key=True, + ), + ) + +def make_trust_dn(env, trust_type, dn): + if trust_type in trust.trust_types: + container_dn = DN(('cn', trust_type), env.container_trusts, env.basedn) + return unicode(DN(DN(dn)[0], container_dn)) + return dn + +class trust_add_ad(LDAPCreate): + __doc__ = _('Add new trust to use against Active Directory Services.') + + takes_options = ( + Str('realm_admin?', + cli_name='admin', + label=_("Name of the realm's administrator"), + ), + Password('realm_passwd?', + cli_name='password', + label=_("Password of the realm's administrator"), + confirm=False, + ), + Str('realm_server?', + cli_name='server', + label=_('Domain controller for the realm'), + ), + Password('trust_passwd?', + cli_name='trust_password', + label=_('Shared password for the trust'), + confirm=False, + ), + ) + + + msg_summary = _('Added Active Directory trust for realm "%(value)s"') + + def execute(self, *keys, **options): + # Join domain using full credentials and with random trustdom + # secret (will be generated by the join method) + trustinstance = None + try: + import ipaserver.dcerpc + except Exception, e: + raise errors.NotFound(name=_('AD Trust setup'), + reason=_('Cannot perform join operation without Samba 4 python bindings installed')) + + if 'realm_server' not in options: + realm_server = None + else: + realm_server = options['realm_server'] + + trustinstance = ipaserver.dcerpc.TrustDomainJoins(self.api) + + # 1. Full access to the remote domain. Use admin credentials and + # generate random trustdom password to do work on both sides + if 'realm_admin' in options: + realm_admin = options['realm_admin'] + + if 'realm_passwd' not in options: + raise errors.ValidationError(name=_('AD Trust setup'), reason=_('Realm administrator password should be specified')) + realm_passwd = options['realm_passwd'] + + result = trustinstance.join_ad_full_credentials(keys[-1], realm_server, realm_admin, realm_passwd) + + if result is None: + raise errors.ValidationError(name=_('AD Trust setup'), reason=_('Unable to verify write permissions to the AD')) + + return dict(result=dict(), value=trustinstance.remote_domain.info['dns_domain']) + + # 2. We don't have access to the remote domain and trustdom password + # is provided. Do the work on our side and inform what to do on remote + # side. 
+ if 'trust_passwd' in options: + result = trustinstance.join_ad_ipa_half(keys[-1], realm_server, options['trust_passwd']) + return dict(result=dict(), value=trustinstance.remote_domain.info['dns_domain']) + +class trust_del(LDAPDelete): + __doc__ = _('Delete a trust.') + + msg_summary = _('Deleted trust "%(value)s"') + + def pre_callback(self, ldap, dn, *keys, **options): + try: + result = self.api.Command.trust_show(keys[-1]) + except errors.NotFound, e: + self.obj.handle_not_found(*keys) + return result['result']['dn'] + +class trust_mod(LDAPUpdate): + __doc__ = _('Modify a trust.') + + msg_summary = _('Modified trust "%(value)s"') + + def pre_callback(self, ldap, dn, *keys, **options): + result = None + try: + result = self.api.Command.trust_show(keys[-1]) + except errors.NotFound, e: + self.obj.handle_not_found(*keys) + + # TODO: we found the trust object, now modify it + return result['result']['dn'] + +class trust_find(LDAPSearch): + __doc__ = _('Search for trusts.') + + msg_summary = ngettext( + '%(count)d trust matched', '%(count)d trusts matched', 0 + ) + + # Since all trusts types are stored within separate containers under 'cn=trusts', + # search needs to be done on a sub-tree scope + def pre_callback(self, ldap, filters, attrs_list, base_dn, scope, *args, **options): + return (filters, base_dn, ldap.SCOPE_SUBTREE) + +class trust_show(LDAPRetrieve): + __doc__ = _('Display information about a trust.') + has_output_params = LDAPRetrieve.has_output_params + trust_output_params + + def execute(self, *keys, **options): + error = None + result = None + for trust_type in trust.trust_types: + options['trust_show_type'] = trust_type + try: + result = super(trust_show, self).execute(*keys, **options) + except errors.NotFound, e: + result = None + error = e + if result: + result['result']['trusttype'] = [trust_type_string(result['result']['ipanttrusttype'][0])] + result['result']['trustdirection'] = [trust_direction_string(result['result']['ipanttrustdirection'][0])] + break + if error or not result: + self.obj.handle_not_found(*keys) + + return result + + def pre_callback(self, ldap, dn, entry_attrs, *keys, **options): + if 'trust_show_type' in options: + return make_trust_dn(self.env, options['trust_show_type'], dn) + return dn + +api.register(trust) +api.register(trust_add_ad) +api.register(trust_mod) +api.register(trust_del) +api.register(trust_find) +api.register(trust_show) + diff --git a/ipaserver/dcerpc.py b/ipaserver/dcerpc.py new file mode 100644 index 0000000..3f20fad --- /dev/null +++ b/ipaserver/dcerpc.py @@ -0,0 +1,325 @@ +# Authors: +# Alexander Bokovoy +# +# Copyright (C) 2011 Red Hat +# see file 'COPYING' for use and warranty information +# +# Portions (C) Andrew Tridgell, Andrew Bartlett +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation, either version 3 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . 
+ +# Make sure we only run this module at the server where samba4-python +# package is installed to avoid issues with unavailable modules + +from ipalib.plugins.baseldap import * +from ipalib import api, Str, Password, DefaultFrom, _, ngettext, Object +from ipalib.parameters import Enum +from ipalib import Command +from ipalib import errors +from ipapython import ipautil +from ipalib import util + +import os, string, struct, copy +import uuid +from samba import param +from samba import credentials +from samba.dcerpc import security, lsa, drsblobs, nbt +from samba.ndr import ndr_pack +from samba import net +import samba +import random +import ldap as _ldap + +__doc__ = _(""" +Classes to manage trust joins using DCE-RPC calls + +The code in this module relies heavily on samba4-python package +and Samba4 python bindings. +""") + + +class ExtendedDNControl(_ldap.controls.RequestControl): + def __init__(self): + self.controlType = "1.2.840.113556.1.4.529" + self.criticality = False + self.integerValue = 1 + + def encodeControlValue(self): + return '0\x03\x02\x01\x01' + +class TrustDomainInstance(object): + + def __init__(self, hostname, creds=None): + self.parm = param.LoadParm() + self.parm.load(os.path.join(ipautil.SHARE_DIR,"smb.conf.empty")) + if len(hostname) > 0: + self.parm.set('netbios name', hostname) + self.creds = creds + self.hostname = hostname + self.info = {} + self._pipe = None + self._policy_handle = None + self.read_only = False + + def __gen_lsa_connection(self, binding): + if self.creds is None: + raise errors.RequirementError(name='CIFS credentials object') + try: + result = lsa.lsarpc(binding, self.parm, self.creds) + return result + except: + return None + + def __init_lsa_pipe(self, remote_host): + """ + Try to initialize connection to the LSA pipe at remote host. + This method tries consequently all possible transport options + and selects one that works. See __gen_lsa_bindings() for details. + + The actual result may depend on details of existing credentials. + For example, using signing causes NO_SESSION_KEY with Win2K8 and + using kerberos against Samba with signing does not work. + """ + # short-cut: if LSA pipe is initialized, skip completely + if self._pipe: + return + + bindings = self.__gen_lsa_bindings(remote_host) + for binding in bindings: + self._pipe = self.__gen_lsa_connection(binding) + if self._pipe: + break + if self._pipe is None: + raise errors.RequirementError(name='Working LSA pipe') + + def __gen_lsa_bindings(self, remote_host): + """ + There are multiple transports to issue LSA calls. However, depending on a + system in use they may be blocked by local operating system policies. + Generate all we can use. __init_lsa_pipe() will try them one by one until + there is one working. + + We try NCACN_NP before NCACN_IP_TCP and signed sessions before unsigned. 
+ """ + transports = (u'ncacn_np', u'ncacn_ip_tcp') + options = ( u',', u'') + binding_template=lambda x,y,z: u'%s:%s[%s]' % (x, y, z) + return [binding_template(t, remote_host, o) for t in transports for o in options] + + def retrieve_anonymously(self, remote_host, discover_srv=False): + """ + When retrieving DC information anonymously, we can't get SID of the domain + """ + netrc = net.Net(creds=self.creds, lp=self.parm) + if discover_srv: + result = netrc.finddc(domain=remote_host, flags=nbt.NBT_SERVER_LDAP | nbt.NBT_SERVER_DS) + else: + result = netrc.finddc(address=remote_host, flags=nbt.NBT_SERVER_LDAP | nbt.NBT_SERVER_DS) + if not result: + return False + self.info['name'] = unicode(result.domain_name) + self.info['dns_domain'] = unicode(result.dns_domain) + self.info['dns_forest'] = unicode(result.forest) + self.info['guid'] = unicode(result.domain_uuid) + + # Netlogon response doesn't contain SID of the domain. + # We need to do rootDSE search with LDAP_SERVER_EXTENDED_DN_OID control to reveal the SID + ldap_uri = 'ldap://%s' % (result.pdc_dns_name) + conn = _ldap.initialize(ldap_uri) + conn.set_option(_ldap.OPT_SERVER_CONTROLS, [ExtendedDNControl()]) + result = None + try: + (objtype, res) = conn.search_s('', _ldap.SCOPE_BASE)[0] + result = res['defaultNamingContext'][0] + self.info['dns_hostname'] = res['dnsHostName'][0] + except _ldap.LDAPError, e: + print "LDAP error when connecting to %s: %s" % (unicode(result.pdc_name), str(e)) + + if result: + self.info['sid'] = self.parse_naming_context(result) + return True + + def parse_naming_context(self, context): + naming_ref = re.compile('.*.*') + return naming_ref.match(context).group(1) + + def retrieve(self, remote_host): + self.__init_lsa_pipe(remote_host) + + objectAttribute = lsa.ObjectAttribute() + objectAttribute.sec_qos = lsa.QosInfo() + self._policy_handle = self._pipe.OpenPolicy2(u"", objectAttribute, security.SEC_FLAG_MAXIMUM_ALLOWED) + result = self._pipe.QueryInfoPolicy2(self._policy_handle, lsa.LSA_POLICY_INFO_DNS) + self.info['name'] = unicode(result.name.string) + self.info['dns_domain'] = unicode(result.dns_domain.string) + self.info['dns_forest'] = unicode(result.dns_forest.string) + self.info['guid'] = unicode(result.domain_guid) + self.info['sid'] = unicode(result.sid) + + def generate_auth(self, trustdom_secret): + def arcfour_encrypt(key, data): + from Crypto.Cipher import ARC4 + c = ARC4.new(key) + return c.encrypt(data) + def string_to_array(what): + blob = [0] * len(what) + + for i in range(len(what)): + blob[i] = ord(what[i]) + return blob + + password_blob = string_to_array(trustdom_secret.encode('utf-16-le')) + + clear_value = drsblobs.AuthInfoClear() + clear_value.size = len(password_blob) + clear_value.password = password_blob + + clear_authentication_information = drsblobs.AuthenticationInformation() + clear_authentication_information.LastUpdateTime = samba.unix2nttime(int(time.time())) + clear_authentication_information.AuthType = lsa.TRUST_AUTH_TYPE_CLEAR + clear_authentication_information.AuthInfo = clear_value + + authentication_information_array = drsblobs.AuthenticationInformationArray() + authentication_information_array.count = 1 + authentication_information_array.array = [clear_authentication_information] + + outgoing = drsblobs.trustAuthInOutBlob() + outgoing.count = 1 + outgoing.current = authentication_information_array + + confounder = [3]*512 + for i in range(512): + confounder[i] = random.randint(0, 255) + + trustpass = drsblobs.trustDomainPasswords() + trustpass.confounder = confounder 
+ + trustpass.outgoing = outgoing + trustpass.incoming = outgoing + + trustpass_blob = ndr_pack(trustpass) + + encrypted_trustpass = arcfour_encrypt(self._pipe.session_key, trustpass_blob) + + auth_blob = lsa.DATA_BUF2() + auth_blob.size = len(encrypted_trustpass) + auth_blob.data = string_to_array(encrypted_trustpass) + + auth_info = lsa.TrustDomainInfoAuthInfoInternal() + auth_info.auth_blob = auth_blob + self.auth_info = auth_info + + + + def establish_trust(self, another_domain, trustdom_secret): + """ + Establishes trust between our and another domain + Input: another_domain -- instance of TrustDomainInstance, initialized with #retrieve call + trustdom_secret -- shared secred used for the trust + """ + self.generate_auth(trustdom_secret) + + info = lsa.TrustDomainInfoInfoEx() + info.domain_name.string = another_domain.info['dns_domain'] + info.netbios_name.string = another_domain.info['name'] + info.sid = security.dom_sid(another_domain.info['sid']) + info.trust_direction = lsa.LSA_TRUST_DIRECTION_INBOUND | lsa.LSA_TRUST_DIRECTION_OUTBOUND + info.trust_type = lsa.LSA_TRUST_TYPE_UPLEVEL + info.trust_attributes = lsa.LSA_TRUST_ATTRIBUTE_FOREST_TRANSITIVE | lsa.LSA_TRUST_ATTRIBUTE_USES_RC4_ENCRYPTION + + try: + dname = lsa.String() + dname.string = another_domain.info['dns_domain'] + res = self._pipe.QueryTrustedDomainInfoByName(self._policy_handle, dname, lsa.LSA_TRUSTED_DOMAIN_INFO_FULL_INFO) + self._pipe.DeleteTrustedDomain(self._policy_handle, res.info_ex.sid) + except: + pass + self._pipe.CreateTrustedDomainEx2(self._policy_handle, info, self.auth_info, security.SEC_STD_DELETE) + +class TrustDomainJoins(object): + ATTR_FLATNAME = 'ipantflatname' + + def __init__(self, api): + self.api = api + self.local_domain = None + self.remote_domain = None + + self.ldap = self.api.Backend.ldap2 + cn_trust_local = DN(('cn', self.api.env.domain), self.api.env.container_cifsdomains, self.api.env.basedn) + (dn, entry_attrs) = self.ldap.get_entry(unicode(cn_trust_local), [self.ATTR_FLATNAME]) + self.local_flatname = entry_attrs[self.ATTR_FLATNAME][0] + self.local_dn = dn + + self.__populate_local_domain() + + def __populate_local_domain(self): + # Initialize local domain info using kerberos only + ld = TrustDomainInstance(self.local_flatname) + ld.creds = credentials.Credentials() + ld.creds.set_kerberos_state(credentials.MUST_USE_KERBEROS) + ld.creds.guess(ld.parm) + ld.creds.set_workstation(ld.hostname) + ld.retrieve(util.get_fqdn()) + self.local_domain = ld + + def __populate_remote_domain(self, realm, realm_server=None, realm_admin=None, realm_passwd=None): + def get_instance(self): + # Fetch data from foreign domain using password only + rd = TrustDomainInstance('') + rd.parm.set('workgroup', self.local_domain.info['name']) + rd.creds = credentials.Credentials() + rd.creds.set_kerberos_state(credentials.DONT_USE_KERBEROS) + rd.creds.guess(rd.parm) + return rd + + rd = get_instance(self) + rd.creds.set_anonymous() + rd.creds.set_workstation(self.local_domain.hostname) + if realm_server is None: + rd.retrieve_anonymously(realm, discover_srv=True) + else: + rd.retrieve_anonymously(realm_server, discover_srv=False) + rd.read_only = True + if realm_admin and realm_passwd: + if 'name' in rd.info: + auth_string = u"%s\%s%%%s" % (rd.info['name'], realm_admin, realm_passwd) + td = get_instance(self) + td.creds.parse_string(auth_string) + td.creds.set_workstation(self.local_domain.hostname) + if realm_server is None: + # we must have rd.info['dns_hostname'] then, part of anonymous discovery + 
td.retrieve(rd.info['dns_hostname']) + else: + td.retrieve(realm_server) + td.read_only = False + self.remote_domain = td + return + # Otherwise, use anonymously obtained data + self.remote_domain = rd + + def join_ad_full_credentials(self, realm, realm_server, realm_admin, realm_passwd): + self.__populate_remote_domain(realm, realm_server, realm_admin, realm_passwd) + if not self.remote_domain.read_only: + trustdom_pass = samba.generate_random_password(128, 128) + self.remote_domain.establish_trust(self.local_domain, trustdom_pass) + self.local_domain.establish_trust(self.remote_domain, trustdom_pass) + return dict(local=self.local_domain, remote=self.remote_domain) + return None + + def join_ad_ipa_half(self, realm, realm_server, trustdom_passwd): + self.__populate_remote_domain(realm, realm_server, realm_passwd=None) + self.local_domain.establish_trust(self.remote_domain, trustdom_passwd) + return dict(local=self.local_domain, remote=self.remote_domain) + + diff --git a/ipaserver/install/adtrustinstance.py b/ipaserver/install/adtrustinstance.py index b978146..38f9eae 100644 --- a/ipaserver/install/adtrustinstance.py +++ b/ipaserver/install/adtrustinstance.py @@ -114,7 +114,7 @@ class ADTRUSTInstance(service.Service): print "The user for Samba is %s" % self.smb_dn try: self.admin_conn.getEntry(self.smb_dn, ldap.SCOPE_BASE) - print "Samba user entry exists, resetting password" + root_logger.info("Samba user entry exists, resetting password") self.admin_conn.modify_s(self.smb_dn, \ [(ldap.MOD_REPLACE, "userPassword", self.smb_dn_pwd)]) @@ -215,7 +215,7 @@ class ADTRUSTInstance(service.Service): try: self.admin_conn.getEntry(self.smb_dom_dn, ldap.SCOPE_BASE) - print "Samba domain object already exists" + root_logger.info("Samba domain object already exists") return except errors.NotFound: pass @@ -283,7 +283,12 @@ class ADTRUSTInstance(service.Service): def __setup_principal(self): cifs_principal = "cifs/" + self.fqdn + "@" + self.realm_name - api.Command.service_add(unicode(cifs_principal)) + try: + api.Command.service_add(unicode(cifs_principal)) + except errors.DuplicateEntry, e: + # CIFS principal already exists, it is not the first time adtrustinstance is managed + # That's fine, we we'll re-extract the key again. 
+ pass samba_keytab = "/etc/samba/samba.keytab" if os.path.exists(samba_keytab): @@ -291,7 +296,6 @@ class ADTRUSTInstance(service.Service): ipautil.run(["ipa-rmkeytab", "--principal", cifs_principal, "-k", samba_keytab]) except ipautil.CalledProcessError, e: - root_logger.critical("Result of removing old key: %d" % e.returncode) if e.returncode != 5: root_logger.critical("Failed to remove old key for %s" % cifs_principal) @@ -374,7 +378,7 @@ class ADTRUSTInstance(service.Service): self.ldap_enable('ADTRUST', self.fqdn, self.dm_password, \ self.suffix) except (ldap.ALREADY_EXISTS, errors.DuplicateEntry), e: - root_logger.critical("ADTRUST Service startup entry already exists.") + root_logger.info("ADTRUST Service startup entry already exists.") pass def __setup_sub_dict(self): -- 1.7.9.3 -------------- next part -------------- >From 4b1e18a82521405ac04f0814fa5acc93e6f05b22 Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Wed, 21 Mar 2012 15:03:14 +0200 Subject: [PATCH 5/9] Update specfile with proper changelog --- freeipa.spec.in | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/freeipa.spec.in b/freeipa.spec.in index 7746c98..6b3240f 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -696,7 +696,7 @@ fi %ghost %attr(0644,root,apache) %config(noreplace) %{_sysconfdir}/ipa/ca.crt %changelog -* Mon Mar 12 2012 Alexander Bokovoy - 2.99.0-23 +* Mon Mar 26 2012 Alexander Bokovoy - 2.99.0-27 - Add freeipa-server-trust-ad virtual package to capture all required dependencies for Active Directory trust management -- 1.7.9.3 -------------- next part -------------- >From 27d757f5aae874b52115de4a646e577e3161e6c6 Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Mon, 26 Mar 2012 14:23:42 +0300 Subject: [PATCH 6/9] Perform case-insensitive searches for principals on TGS requests We want to always resolve TGS requests even if the user mistakenly sends a request for a service ticket where the fqdn part contain upper case letters. The actual implementation follows hints set by KDC. When AP_REQ is done, KDC sets KRB5_FLAG_ALIAS_OK and we obey it when looking for principals on TGS requests. 
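The rule the patch applies is easier to see outside the C code: the KDC only accepts aliases on TGS lookups when it passes the ALIAS_OK flag, and only then does the principal comparison become case-insensitive. A rough Python sketch of that rule follows; the flag value is an assumption for illustration only, the real constant comes from the KDB headers.

KRB5_KDB_FLAG_ALIAS_OK = 0x4  # assumed value, for illustration only

def principal_matches(stored, requested, flags):
    # aliases are accepted only on TGS lookups; only then compare case-insensitively
    if flags & KRB5_KDB_FLAG_ALIAS_OK:
        return stored.lower() == requested.lower()
    return stored == requested

# a service ticket request with a mis-cased fqdn still resolves:
assert principal_matches("HTTP/server.example.com@EXAMPLE.COM",
                         "HTTP/SERVER.Example.COM@EXAMPLE.COM",
                         KRB5_KDB_FLAG_ALIAS_OK)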
https://fedorahosted.org/freeipa/ticket/1577 --- daemons/ipa-kdb/ipa_kdb_principals.c | 73 ++++++++++++++++++++++++---------- install/share/61kerberos-ipav3.ldif | 3 ++ install/share/Makefile.am | 1 + install/updates/10-60basev3.update | 2 + ipalib/plugins/service.py | 7 +++- ipaserver/install/dsinstance.py | 1 + 6 files changed, 65 insertions(+), 22 deletions(-) create mode 100644 install/share/61kerberos-ipav3.ldif diff --git a/daemons/ipa-kdb/ipa_kdb_principals.c b/daemons/ipa-kdb/ipa_kdb_principals.c index 1432619..2e190a1 100644 --- a/daemons/ipa-kdb/ipa_kdb_principals.c +++ b/daemons/ipa-kdb/ipa_kdb_principals.c @@ -22,6 +22,16 @@ #include "ipa_kdb.h" +/* + * During TGS request search by ipaKrbPrincipalName (case-insensitive) + * and krbPrincipalName (case-sensitive) + */ +#define PRINC_TGS_SEARCH_FILTER "(&(|(objectclass=krbprincipalaux)" \ + "(objectclass=krbprincipal)" \ + "(objectclass=ipakrbprincipal))" \ + "(|(ipakrbprincipalalias=%s)" \ + "(krbprincipalname=%s)))" + #define PRINC_SEARCH_FILTER "(&(|(objectclass=krbprincipalaux)" \ "(objectclass=krbprincipal))" \ "(krbprincipalname=%s))" @@ -29,6 +39,7 @@ static char *std_principal_attrs[] = { "krbPrincipalName", "krbCanonicalName", + "ipaKrbPrincipalAlias", "krbUPEnabled", "krbPrincipalKey", "krbTicketPolicyReference", @@ -73,6 +84,7 @@ static char *std_principal_obj_classes[] = { "krbprincipal", "krbprincipalaux", "krbTicketPolicyAux", + "ipakrbprincipal", NULL }; @@ -636,13 +648,14 @@ done: } static krb5_error_code ipadb_fetch_principals(struct ipadb_context *ipactx, - char *search_expr, + unsigned int flags, + char *principal, LDAPMessage **result) { krb5_error_code kerr; char *src_filter = NULL; - char *esc_search_expr = NULL; - int ret; + char *esc_original_princ = NULL; + int ret, i; if (!ipactx->lcontext) { ret = ipadb_get_connection(ipactx); @@ -654,13 +667,19 @@ static krb5_error_code ipadb_fetch_principals(struct ipadb_context *ipactx, /* escape filter but do not touch '*' as this function accepts * wildcards in names */ - esc_search_expr = ipadb_filter_escape(search_expr, false); - if (!esc_search_expr) { + esc_original_princ = ipadb_filter_escape(principal, false); + if (!esc_original_princ) { kerr = KRB5_KDB_INTERNAL_ERROR; goto done; } - ret = asprintf(&src_filter, PRINC_SEARCH_FILTER, esc_search_expr); + if (flags & KRB5_KDB_FLAG_ALIAS_OK) { + ret = asprintf(&src_filter, PRINC_TGS_SEARCH_FILTER, + esc_original_princ, esc_original_princ); + } else { + ret = asprintf(&src_filter, PRINC_SEARCH_FILTER, esc_original_princ); + } + if (ret == -1) { kerr = KRB5_KDB_INTERNAL_ERROR; goto done; @@ -673,7 +692,7 @@ static krb5_error_code ipadb_fetch_principals(struct ipadb_context *ipactx, done: free(src_filter); - free(esc_search_expr); + free(esc_original_princ); return kerr; } @@ -713,9 +732,12 @@ static krb5_error_code ipadb_find_principal(krb5_context kcontext, /* we need to check for a strict match as a '*' in the name may have * caused the ldap server to return multiple entries */ for (i = 0; vals[i]; i++) { - /* FIXME: use case insensitive compare and tree as alias ?? 
*/ - if (strcmp(vals[i]->bv_val, (*principal)) == 0) { - found = true; + /* KDC will accept aliases when doing TGT lookup (ref_tgt_again in do_tgs_req.c */ + /* Use case-insensitive comparison in such cases */ + if ((flags & KRB5_KDB_FLAG_ALIAS_OK) != 0) { + found = (strcasecmp(vals[i]->bv_val, (*principal)) == 0); + } else { + found = (strcmp(vals[i]->bv_val, (*principal)) == 0); } } @@ -731,11 +753,15 @@ static krb5_error_code ipadb_find_principal(krb5_context kcontext, continue; } - /* FIXME: use case insensitive compare and treat as alias ?? */ - if (strcmp(vals[0]->bv_val, (*principal)) != 0 && - !(flags & KRB5_KDB_FLAG_ALIAS_OK)) { + /* Again, if aliases are accepted by KDC, use case-insensitive comparison */ + if ((flags & KRB5_KDB_FLAG_ALIAS_OK) != 0) { + found = (strcasecmp(vals[0]->bv_val, (*principal)) == 0); + } else { + found = (strcmp(vals[0]->bv_val, (*principal)) == 0); + } + + if (!found) { /* search does not allow aliases */ - found = false; ldap_value_free_len(vals); continue; } @@ -883,7 +909,7 @@ krb5_error_code ipadb_get_principal(krb5_context kcontext, goto done; } - kerr = ipadb_fetch_principals(ipactx, principal, &res); + kerr = ipadb_fetch_principals(ipactx, flags, principal, &res); if (kerr != 0) { goto done; } @@ -1398,6 +1424,11 @@ static krb5_error_code ipadb_entry_to_mods(krb5_context kcontext, if (kerr) { goto done; } + kerr = ipadb_get_ldap_mod_str(imods, "ipaKrbPrincipalAlias", + principal, mod_op); + if (kerr) { + goto done; + } } /* KADM5_PRINC_EXPIRE_TIME */ @@ -1735,13 +1766,13 @@ static krb5_error_code ipadb_add_principal(krb5_context kcontext, goto done; } - kerr = ipadb_entry_to_mods(kcontext, imods, - entry, principal, LDAP_MOD_ADD); + kerr = ipadb_entry_default_attrs(imods); if (kerr != 0) { goto done; } - kerr = ipadb_entry_default_attrs(imods); + kerr = ipadb_entry_to_mods(kcontext, imods, + entry, principal, LDAP_MOD_ADD); if (kerr != 0) { goto done; } @@ -1779,7 +1810,7 @@ static krb5_error_code ipadb_modify_principal(krb5_context kcontext, goto done; } - kerr = ipadb_fetch_principals(ipactx, principal, &res); + kerr = ipadb_fetch_principals(ipactx, 0, principal, &res); if (kerr != 0) { goto done; } @@ -1930,7 +1961,7 @@ krb5_error_code ipadb_delete_principal(krb5_context kcontext, goto done; } - kerr = ipadb_fetch_principals(ipactx, principal, &res); + kerr = ipadb_fetch_principals(ipactx, 0, principal, &res); if (kerr != 0) { goto done; } @@ -1982,7 +2013,7 @@ krb5_error_code ipadb_iterate(krb5_context kcontext, } /* fetch list of principal matching filter */ - kerr = ipadb_fetch_principals(ipactx, match_entry, &res); + kerr = ipadb_fetch_principals(ipactx, 0, match_entry, &res); if (kerr != 0) { goto done; } diff --git a/install/share/61kerberos-ipav3.ldif b/install/share/61kerberos-ipav3.ldif new file mode 100644 index 0000000..dcdaa5d --- /dev/null +++ b/install/share/61kerberos-ipav3.ldif @@ -0,0 +1,3 @@ +dn: cn=schema +attributeTypes: (2.16.840.1.113730.3.8.11.32 NAME 'ipaKrbPrincipalAlias' DESC 'IPA principal alias' EQUALITY caseIgnoreMatch ORDERING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3') +objectClasses: (2.16.840.1.113730.3.8.12.8 NAME 'ipaKrbPrincipal' SUP krbPrincipalAux AUXILIARY MUST ( krbPrincipalName $ ipaKrbPrincipalAlias ) X-ORIGIN 'IPA v3' ) diff --git a/install/share/Makefile.am b/install/share/Makefile.am index 81fd0dc..68c98e0 100644 --- a/install/share/Makefile.am +++ b/install/share/Makefile.am @@ -9,6 +9,7 @@ app_DATA = \ 60basev2.ldif 
\ 60basev3.ldif \ 60ipadns.ldif \ + 61kerberos-ipav3.ldif \ 65ipasudo.ldif \ anonymous-vlv.ldif \ bootstrap-template.ldif \ diff --git a/install/updates/10-60basev3.update b/install/updates/10-60basev3.update index 796eb16..96d012c 100644 --- a/install/updates/10-60basev3.update +++ b/install/updates/10-60basev3.update @@ -4,3 +4,5 @@ add:attributeTypes: ( 2.16.840.1.113730.3.8.11.21 NAME 'ipaAllowToImpersonate' D add:attributeTypes: ( 2.16.840.1.113730.3.8.11.22 NAME 'ipaAllowedTarget' DESC 'Target principals alowed to get a ticket for' SUP distinguishedName X-ORIGIN 'IPA-v3') add:objectClasses: (2.16.840.1.113730.3.8.12.6 NAME 'groupOfPrincipals' SUP top AUXILIARY MUST ( cn ) MAY ( memberPrincipal ) X-ORIGIN 'IPA v3' ) add:objectClasses: (2.16.840.1.113730.3.8.12.7 NAME 'ipaKrb5DelegationACL' SUP groupOfPrincipals STRUCTURAL MAY ( ipaAllowToImpersonate $$ ipaAllowedTarget ) X-ORIGIN 'IPA v3' ) +add:attributeTypes: (2.16.840.1.113730.3.8.11.32 NAME 'ipaKrbPrincipalAlias' DESC 'IPA principal alias' EQUALITY caseIgnoreMatch ORDERING caseIgnoreOrderingMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'IPA v3') +add:objectClasses: (2.16.840.1.113730.3.8.12.8 NAME 'ipaKrbPrincipal' SUP krbPrincipalAux AUXILIARY MUST ( krbPrincipalName $$ ipaKrbPrincipalAlias ) X-ORIGIN 'IPA v3' ) diff --git a/ipalib/plugins/service.py b/ipalib/plugins/service.py index 7c563b3..d2ddd4e 100644 --- a/ipalib/plugins/service.py +++ b/ipalib/plugins/service.py @@ -221,7 +221,7 @@ class service(LDAPObject): object_name_plural = _('services') object_class = [ 'krbprincipal', 'krbprincipalaux', 'krbticketpolicyaux', 'ipaobject', - 'ipaservice', 'pkiuser' + 'ipaservice', 'pkiuser', 'ipakrbprincipal' ] search_attributes = ['krbprincipalname', 'managedby'] default_attributes = ['krbprincipalname', 'usercertificate', 'managedby'] @@ -293,6 +293,11 @@ class service_add(LDAPCreate): if not 'managedby' in entry_attrs: entry_attrs['managedby'] = hostresult['dn'] + # Enforce ipaKrbPrincipalAlias to aid case-insensitive searches + # as krbPrincipalName/krbCanonicalName are case-sensitive in Kerberos + # schema + entry_attrs['ipakrbprincipalalias'] = keys[-1] + return dn api.register(service_add) diff --git a/ipaserver/install/dsinstance.py b/ipaserver/install/dsinstance.py index d82454d..ca2c3e6 100644 --- a/ipaserver/install/dsinstance.py +++ b/ipaserver/install/dsinstance.py @@ -362,6 +362,7 @@ class DsInstance(service.Service): "60basev2.ldif", "60basev3.ldif", "60ipadns.ldif", + "61kerberos-ipav3.ldif", "65ipasudo.ldif"): target_fname = schema_dirname(self.serverid) + schema_fname shutil.copyfile(ipautil.SHARE_DIR + schema_fname, target_fname) -- 1.7.9.3 -------------- next part -------------- >From 5c5ae521bd584f30a36739cb3d87c9e47b31aa01 Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Tue, 27 Mar 2012 12:50:48 +0300 Subject: [PATCH 7/9] Properly handle multiple IP addresses per host when installing trust support resolve_host() function returns a list of IP addresses. Handle it all rather than expecting that there is a single address. It wouldn't hurt to make a common function that takes --ip-address into account when resolving host addresses and use it everywhere. 
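The selection logic the patch implements can be sketched in a few lines of standalone Python. In the real installer resolve_host() and CheckedIPAddress come from ipapython, so the helper below is only an illustration of the decision flow, not the actual code.

def pick_server_address(resolved, requested=None):
    """resolved: list of addresses the server hostname resolves to."""
    if not resolved:
        raise ValueError("hostname does not resolve to any address")
    if requested is not None:
        # an explicitly requested --ip-address must match one of the resolved addresses
        if requested not in resolved:
            raise ValueError("--ip-address does not match any resolved address")
        return requested
    # otherwise fall back to the first resolved address
    return resolved[0]

print pick_server_address(["192.0.2.10", "192.0.2.11"], requested="192.0.2.11")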
--- install/tools/ipa-adtrust-install | 38 +++++++++++++++++++++---------------- 1 file changed, 22 insertions(+), 16 deletions(-) diff --git a/install/tools/ipa-adtrust-install b/install/tools/ipa-adtrust-install index 0f3e473..41e20f3 100755 --- a/install/tools/ipa-adtrust-install +++ b/install/tools/ipa-adtrust-install @@ -133,25 +133,31 @@ def main(): # Check we have a public IP that is associated with the hostname ip = None try: - if options.ip_address: - ip = ipautil.CheckedIPAddress(options.ip_address, match_local=True) + hostaddr = resolve_host(api.env.host) + if len(hostaddr) > 1: + print >> sys.stderr, "The server hostname resolves to more than one address:" + for addr in hostaddr: + print >> sys.stderr, " %s" % addr + + if options.ip_address: + if str(options.ip_address) not in hostaddr: + print >> sys.stderr, "Address passed in --ip-address did not match any resolved" + print >> sys.stderr, "address!" + sys.exit(1) + print "Selected IP address:", str(options.ip_address) + ip = options.ip_address + else: + if options.unattended: + print >> sys.stderr, "Please use --ip-address option to specify the address" + sys.exit(1) + else: + ip = read_ip_address(api.env.host, fstore) else: - hostaddr = resolve_host(api.env.host) - ip = hostaddr and ipautil.CheckedIPAddress(hostaddr, match_local=True) + ip = hostaddr and ipautil.CheckedIPAddress(hostaddr[0], match_local=True) except Exception, e: print "Error: Invalid IP Address %s: %s" % (ip, e) - ip = None - - if not ip: - if options.unattended: - sys.exit("Unable to resolve IP address for host name") - else: - read_ip = read_ip_address(api.env.host, fstore) - try: - ip = ipautil.CheckedIPAddress(read_ip, match_local=True) - except Exception, e: - print "Error: Invalid IP Address %s: %s" % (ip, e) - sys.exit("Aborting installation.") + print "Aborting installation" + sys.exit(1) ip_address = str(ip) root_logger.debug("will use ip_address: %s\n", ip_address) -- 1.7.9.3 -------------- next part -------------- >From b491e43fd21af281013cc9b90197bdc2e1becabc Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Tue, 27 Mar 2012 13:06:33 +0300 Subject: [PATCH 8/9] Restart KDC after installing trust support to allow MS PAC generation Also make sure all exceptions are captured when creating CIFS service record. The one we care about is duplicate entry and we do nothing in that case anyway. Also make uniform use of action descriptors. --- ipaserver/install/adtrustinstance.py | 23 ++++++++++++++++------- 1 file changed, 16 insertions(+), 7 deletions(-) diff --git a/ipaserver/install/adtrustinstance.py b/ipaserver/install/adtrustinstance.py index 38f9eae..d9609f4 100644 --- a/ipaserver/install/adtrustinstance.py +++ b/ipaserver/install/adtrustinstance.py @@ -32,6 +32,7 @@ from ipalib import errors, api from ipapython import sysrestore from ipapython import ipautil from ipapython.ipa_log_manager import * +from ipapython import services as ipaservices import string import struct @@ -285,7 +286,7 @@ class ADTRUSTInstance(service.Service): try: api.Command.service_add(unicode(cifs_principal)) - except errors.DuplicateEntry, e: + except Exception, e: # CIFS principal already exists, it is not the first time adtrustinstance is managed # That's fine, we we'll re-extract the key again. 
pass @@ -369,6 +370,12 @@ class ADTRUSTInstance(service.Service): except: pass + def __restart_kdc(self): + try: + ipaservices.knownservices.krb5kdc.restart() + except: + pass + def __enable(self): self.backup_state("enabled", self.is_enabled()) # We do not let the system start IPA components on its own, @@ -418,20 +425,22 @@ class ADTRUSTInstance(service.Service): self.ldap_connect() self.step("stopping smbd", self.__stop) - self.step("create samba user", self.__create_samba_user) - self.step("create samba domain object", \ + self.step("creating samba user", self.__create_samba_user) + self.step("creating samba domain object", \ self.__create_samba_domain_object) - self.step("create samba config registry", self.__write_smb_registry) + self.step("creating samba config registry", self.__write_smb_registry) self.step("writing samba config file", self.__write_smb_conf) self.step("setting password for the samba user", \ self.__set_smb_ldap_password) - self.step("Adding cifs Kerberos principal", self.__setup_principal) - self.step("Adding admin(group) SIDs", self.__add_admin_sids) - self.step("Activation CLDAP plugin", self.__add_cldap_module) + self.step("adding cifs Kerberos principal", self.__setup_principal) + self.step("adding admin(group) SIDs", self.__add_admin_sids) + self.step("activating CLDAP plugin", self.__add_cldap_module) self.step("configuring smbd to start on boot", self.__enable) if not self.no_msdcs: self.step("adding special DNS service records", \ self.__add_dns_service_records) + self.step("restarting KDC to take MS PAC changes into account", \ + self.__restart_kdc) self.step("starting smbd", self.__start) self.start_creation("Configuring smbd:") -- 1.7.9.3 From mkosek at redhat.com Tue Apr 3 11:02:29 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 13:02:29 +0200 Subject: [Freeipa-devel] [PATCH] 73 Check whether the default user group is POSIX when adding new user with --noprivate In-Reply-To: <4F7AC9B8.3090604@redhat.com> References: <4F7AC9B8.3090604@redhat.com> Message-ID: <1333450949.23102.16.camel@balmora.brq.redhat.com> On Tue, 2012-04-03 at 11:58 +0200, Jan Cholasta wrote: > https://fedorahosted.org/freeipa/ticket/2572 > > Honza > NACK. This creates a regression: # ipa group-show foogroup Group name: foogroup Description: foo GID: 358800017 # ipa user-add --first=Foo --last=Bar fbar5 --gidnumber=358800017 --noprivate ------------------ Added user "fbar5" ------------------ User login: fbar5 First name: Foo Last name: Bar Full name: Foo Bar Display name: Foo Bar Initials: FB Home directory: /home/fbar5 GECOS field: Foo Bar Login shell: /bin/sh Kerberos principal: fbar5 at IDM.LAB.BOS.REDHAT.COM UID: 358800019 GID: 358800012 Password: False Member of groups: ipausers Kerberos keys available: False # id fbar5 uid=358800019(fbar5) gid=358800012(ipausers) groups=358800012(ipausers) Custom user group (GID) was overwritten. I think we also want a test case for this situation. 
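A regression check along these lines could capture the transcript above; the GID and user names are illustrative, and a real test would likely live in the project's test suite rather than drive the command-line client.

# Hypothetical check that --gidnumber survives user-add --noprivate
import subprocess

def ipa(*args):
    return subprocess.check_output(("ipa",) + args)

gid = "358800017"  # GID of an existing POSIX group, as in the transcript above
out = ipa("user-add", "--first=Foo", "--last=Bar",
          "--noprivate", "--gidnumber=%s" % gid, "fbar5")
assert "GID: %s" % gid in out, "custom GID was silently overwritten"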
Martin From mkosek at redhat.com Tue Apr 3 11:04:39 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 13:04:39 +0200 Subject: [Freeipa-devel] [PATCH] 73 Check whether the default user group is POSIX when adding new user with --noprivate In-Reply-To: <1333450949.23102.16.camel@balmora.brq.redhat.com> References: <4F7AC9B8.3090604@redhat.com> <1333450949.23102.16.camel@balmora.brq.redhat.com> Message-ID: <1333451079.23102.17.camel@balmora.brq.redhat.com> On Tue, 2012-04-03 at 13:02 +0200, Martin Kosek wrote: > On Tue, 2012-04-03 at 11:58 +0200, Jan Cholasta wrote: > > https://fedorahosted.org/freeipa/ticket/2572 > > > > Honza > > > > NACK. > > This creates a regression: > > # ipa group-show foogroup > Group name: foogroup > Description: foo > GID: 358800017 > > # ipa user-add --first=Foo --last=Bar fbar5 --gidnumber=358800017 --noprivate > ------------------ > Added user "fbar5" > ------------------ > User login: fbar5 > First name: Foo > Last name: Bar > Full name: Foo Bar > Display name: Foo Bar > Initials: FB > Home directory: /home/fbar5 > GECOS field: Foo Bar > Login shell: /bin/sh > Kerberos principal: fbar5 at IDM.LAB.BOS.REDHAT.COM > UID: 358800019 > GID: 358800012 > Password: False > Member of groups: ipausers > Kerberos keys available: False > > # id fbar5 > uid=358800019(fbar5) gid=358800012(ipausers) groups=358800012(ipausers) > > Custom user group (GID) was overwritten. > > I think we also want a test case for this situation. > > Martin > ... and we also want to have the new error message(s) i18n-able. Martin From pspacek at redhat.com Tue Apr 3 13:06:31 2012 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 03 Apr 2012 15:06:31 +0200 Subject: [Freeipa-devel] [PATCH] 0015 Don't try to remove auxiliary nodes from internal RBT Message-ID: <4F7AF5D7.6010100@redhat.com> Hello, this patch slightly optimizes the code for removing deleted zones from the BIND instance. In some cases there are auxiliary zones (= zones that are not really served) in the internal Red-Black tree. The current code tries to remove these auxiliary zones on each zone_refresh attempt. Everything works fine, because auxiliary zones are detected deeper in the zone deletion code. However, the plugin prints the very confusing message "Zone '%s' has been removed from database." every 'zone_refresh' seconds, again and again. This patch prevents that. I think it's very confusing. I spent a lot of time debugging before I realized where the problem was. Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed...
Name: bind-dyndb-ldap-pspacek-0015-Don-t-try-to-remove-auxilitary-nodes.patch Type: text/x-patch Size: 1206 bytes Desc: not available URL: From rcritten at redhat.com Tue Apr 3 13:19:56 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 03 Apr 2012 09:19:56 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1333437994.23102.13.camel@balmora.brq.redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> Message-ID: <4F7AF8FC.5050000@redhat.com> Martin Kosek wrote: > On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >> Rob Crittenden wrote: >>> Martin Kosek wrote: >>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>> Certmonger will currently automatically renew server certificates but >>>>> doesn't restart the services so you can still end up with expired >>>>> certificates if you services never restart. >>>>> >>>>> This patch registers are restart command with certmonger so the IPA >>>>> services will automatically be restarted to get the updated cert. >>>>> >>>>> Easy to test. Install IPA then resubmit the current server certs and >>>>> watch the services restart: >>>>> >>>>> # ipa-getcert list >>>>> >>>>> Find the ID for either your dirsrv or httpd instance >>>>> >>>>> # ipa-getcert resubmit -i >>>>> >>>>> Watch /var/log/httpd/error_log or /var/log/dirsrv/slapd-INSTANCE/errors >>>>> to see the service restart. >>>>> >>>>> rob >>>> >>>> What about current instances - can we/do we want to update certmonger >>>> tracking so that their instances are restarted as well? >>>> >>>> Anyway, I found few issues SELinux issues with the patch: >>>> >>>> 1) # rpm -Uvh freeipa-* >>>> Preparing... ########################################### [100%] >>>> 1:freeipa-python ########################################### [ 20%] >>>> 2:freeipa-client ########################################### [ 40%] >>>> 3:freeipa-admintools ########################################### [ 60%] >>>> 4:freeipa-server ########################################### [ 80%] >>>> /usr/bin/chcon: failed to change context of >>>> `/usr/lib64/ipa/certmonger' to >>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >>>> /usr/bin/chcon: failed to change context of >>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >>>> /usr/bin/chcon: failed to change context of >>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid argument >>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>> scriptlet failed, exit status 1 >>>> 5:freeipa-server-selinux ########################################### >>>> [100%] >>>> >>>> certmonger_unconfined_exec_t type was unknown with my selinux policy: >>>> >>>> selinux-policy-3.10.0-80.fc16.noarch >>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>> >>>> If we need a higher SELinux version, we should bump the required package >>>> version spec file. >>> >>> Yeah, waiting on it to be backported. >>> >>>> >>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>> restorecon or system relabel occurs. 
I think we should make it >>>> persistent and enforce this type in our SELinux policy and rather call >>>> restorecon instead of chcon >>> >>> That's a good idea, why didn't I think of that :-( >> >> Ah, now I remember, it will be handled by selinux-policy. I would have >> used restorecon here but since the policy isn't there yet this seemed >> like a good idea. >> >> I'm trying to find out the status of this new policy, it may only make >> it into F-17. >> >> rob > > Ok. But if this policy does not go in F-16 and if we want this fix in > F16 release too, I guess we would have to implement both approaches in > our spec file: > > 1) When on F16, include SELinux policy for restart scripts + run > restorecon > 2) When on F17, do not include the SELinux policy (+ run restorecon) > > Martin > Won't work without updated selinux-policy. Without the permission for certmonger to execute the commands things will still fail (just in really non-obvious and far in the future ways). It looks like this is fixed in F-17 selinux-policy-3.10.0-107. rob From ohamada at redhat.com Tue Apr 3 13:22:09 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 03 Apr 2012 15:22:09 +0200 Subject: [Freeipa-devel] [PATCH] 20 Fix empty external member processing In-Reply-To: <4F7ACF83.3080202@redhat.com> References: <4F7ACF83.3080202@redhat.com> Message-ID: <4F7AF981.2050303@redhat.com> On 04/03/2012 12:22 PM, Ondrej Hamada wrote: > https://fedorahosted.org/freeipa/ticket/2447 > > Validation of external member was failing for empty strings because of > wrong condition. > > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel Used clearer solution. Thanks to Rob for advice. -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-ohamada-20-2-Fix-empty-external-member-processing.patch Type: text/x-patch Size: 1035 bytes Desc: not available URL: From simo at redhat.com Tue Apr 3 13:29:59 2012 From: simo at redhat.com (Simo Sorce) Date: Tue, 03 Apr 2012 09:29:59 -0400 Subject: [Freeipa-devel] [PATCH] 490 Fix s4u2proxy handling when a MS-PAC is available In-Reply-To: <20120328093627.GG2369@localhost.localdomain> References: <1332875826.22628.51.camel@willson.li.ssimo.org> <20120328093627.GG2369@localhost.localdomain> Message-ID: <1333459799.22628.259.camel@willson.li.ssimo.org> On Wed, 2012-03-28 at 11:36 +0200, Sumit Bose wrote: > On Tue, Mar 27, 2012 at 03:17:06PM -0400, Simo Sorce wrote: > > This patch fixes #2504, the logic to choose the client principal to use > > was basically reversed, and we ended up using the wrong principal to > > verify the PAC owner. > > > > This patch fixes it. Tested and s4u2proxy keeps working both with and > > without a PAC attached. > > > > It also keeps working with normal TGS requests of course. > > ACK, '--delegate' is not neede anymore. Pushed to master. Simo. 
-- Simo Sorce * Red Hat, Inc * New York From jcholast at redhat.com Tue Apr 3 14:42:11 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Apr 2012 16:42:11 +0200 Subject: [Freeipa-devel] [PATCH] 74 Check configured maximum user login length on user rename Message-ID: <4F7B0C43.3000201@redhat.com> https://fedorahosted.org/freeipa/ticket/2587 Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-74-user-rename-length-check.patch Type: text/x-patch Size: 1399 bytes Desc: not available URL: From rcritten at redhat.com Tue Apr 3 14:45:24 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 03 Apr 2012 10:45:24 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F7AF8FC.5050000@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> Message-ID: <4F7B0D04.6060704@redhat.com> Rob Crittenden wrote: > Martin Kosek wrote: >> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >>> Rob Crittenden wrote: >>>> Martin Kosek wrote: >>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>>> Certmonger will currently automatically renew server certificates but >>>>>> doesn't restart the services so you can still end up with expired >>>>>> certificates if you services never restart. >>>>>> >>>>>> This patch registers are restart command with certmonger so the IPA >>>>>> services will automatically be restarted to get the updated cert. >>>>>> >>>>>> Easy to test. Install IPA then resubmit the current server certs and >>>>>> watch the services restart: >>>>>> >>>>>> # ipa-getcert list >>>>>> >>>>>> Find the ID for either your dirsrv or httpd instance >>>>>> >>>>>> # ipa-getcert resubmit -i >>>>>> >>>>>> Watch /var/log/httpd/error_log or >>>>>> /var/log/dirsrv/slapd-INSTANCE/errors >>>>>> to see the service restart. >>>>>> >>>>>> rob >>>>> >>>>> What about current instances - can we/do we want to update certmonger >>>>> tracking so that their instances are restarted as well? >>>>> >>>>> Anyway, I found few issues SELinux issues with the patch: >>>>> >>>>> 1) # rpm -Uvh freeipa-* >>>>> Preparing... 
########################################### [100%] >>>>> 1:freeipa-python ########################################### [ 20%] >>>>> 2:freeipa-client ########################################### [ 40%] >>>>> 3:freeipa-admintools ########################################### [ >>>>> 60%] >>>>> 4:freeipa-server ########################################### [ 80%] >>>>> /usr/bin/chcon: failed to change context of >>>>> `/usr/lib64/ipa/certmonger' to >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>> argument >>>>> /usr/bin/chcon: failed to change context of >>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>> argument >>>>> /usr/bin/chcon: failed to change context of >>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>> argument >>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>>> scriptlet failed, exit status 1 >>>>> 5:freeipa-server-selinux ########################################### >>>>> [100%] >>>>> >>>>> certmonger_unconfined_exec_t type was unknown with my selinux policy: >>>>> >>>>> selinux-policy-3.10.0-80.fc16.noarch >>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>>> >>>>> If we need a higher SELinux version, we should bump the required >>>>> package >>>>> version spec file. >>>> >>>> Yeah, waiting on it to be backported. >>>> >>>>> >>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>>> restorecon or system relabel occurs. I think we should make it >>>>> persistent and enforce this type in our SELinux policy and rather call >>>>> restorecon instead of chcon >>>> >>>> That's a good idea, why didn't I think of that :-( >>> >>> Ah, now I remember, it will be handled by selinux-policy. I would have >>> used restorecon here but since the policy isn't there yet this seemed >>> like a good idea. >>> >>> I'm trying to find out the status of this new policy, it may only make >>> it into F-17. >>> >>> rob >> >> Ok. But if this policy does not go in F-16 and if we want this fix in >> F16 release too, I guess we would have to implement both approaches in >> our spec file: >> >> 1) When on F16, include SELinux policy for restart scripts + run >> restorecon >> 2) When on F17, do not include the SELinux policy (+ run restorecon) >> >> Martin >> > > Won't work without updated selinux-policy. Without the permission for > certmonger to execute the commands things will still fail (just in > really non-obvious and far in the future ways). > > It looks like this is fixed in F-17 selinux-policy-3.10.0-107. > > rob Updated patch which works on F-17. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-998-2-certmonger.patch Type: text/x-diff Size: 11019 bytes Desc: not available URL: From sbose at redhat.com Tue Apr 3 14:57:50 2012 From: sbose at redhat.com (Sumit Bose) Date: Tue, 3 Apr 2012 16:57:50 +0200 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <20120403104135.GD23171@redhat.com> References: <20120403104135.GD23171@redhat.com> Message-ID: <20120403145749.GD8996@localhost.localdomain> On Tue, Apr 03, 2012 at 01:41:35PM +0300, Alexander Bokovoy wrote: > Hi! > > Attached are the current patches for adding support for Active Directory > trusts for FreeIPA v3 (master). > > These are tested and working with samba4 build available in ipa-devel@ > repo. 
You have to use --delegate until we'll get all the parts of the > Heimdal puzzle untangled and solved, and Simo patch 490 (s4u2proxy fix) > is committed as well. > > Sumit asked me to send patches for review and commit to master so that > he can proceed with his changes (removal of kadmin.local use, SID > population task for 389-ds, etc). Without kadmin.local use fix these > patches are not working with SELinux enabled. > > Patches have [../9] mark because they were generated out of my adwork > tree. I have merged two patches together for obvious change reason and > have left out Simo's s4u2proxy patch out, thus there are seven patches > proposed for commit. I have tested the patches and they worked fine for me. They currently only work in F17, because they rely on the version of python-ldap shipped with F17. So it is an ACK from my side. It would be nice if someone else could have a look at the Python parts to check that they are in agreement with the IPA standards (I expect they are :-). bye, Sumit > > -- > / Alexander Bokovoy From ohamada at redhat.com Tue Apr 3 16:45:43 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 03 Apr 2012 18:45:43 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F5E910D.3080604@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> Message-ID: <4F7B2937.4060300@redhat.com> On 03/13/2012 01:13 AM, Dmitri Pal wrote: > On 03/12/2012 06:10 PM, Simo Sorce wrote: >> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: >>> On 03/12/2012 04:16 PM, Simo Sorce wrote: >>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: >>>>> USER'S operations when connection is OK: >>>>> ------------------------------------------------------- >>>>> read data -> local >>>>> write data -> forwarding to master >>>>> authentication: >>>>> -credentials cached -- authenticate against credentials in local cache >>>>> -on failure: log failure locally, update >>>>> data >>>>> about failures only on lock-down of account >>>>> -credentials not cached -- forward request to master, on success >>>>> cache >>>>> the credentials >>>>> >>>> This scheme doesn't work with Kerberos. >>>> Either you have a copy of the user's keys locally or you don't, there is >>>> nothing you can really cache if you don't. >>>> >>>> Simo. >>>> >>> Yes this is what we are talking about here - the cache would have to >>> contain user Kerberos key but there should be some expiration on the >>> cache so that fetched and stored keys periodically cleaned following the >>> policy an admin has defined. >> We would need a mechanism to transfer Kerberos keys, but that would not >> be sufficient, you'd have to give read-only servers also the realm >> krbtgt in order to be able to do anything with those keys. >> >> The way MS solves hits (I think) is by giving a special RODC krbtgt to >> each RODC, and then replicating all RODC krbtgt's with full domain >> controllers.
Full domain controllers have logic to use RODC's krbtgt >> keys instead of the normal krbtgt to perform operations when user's >> krbtgt are presented to a different server. This is a lot of work and >> changes in the KDC, not something we can implement easily. >> >> As a first implementation I would restrict read-only replicas to not do >> Kerberos at all, only LDAP for all the lookup stuff necessary. to add a >> RO KDC we will need to plan a lot of changes in the KDC. >> >> We will also need intelligent partial replication where the rules about >> which object (and which attributes in the object) need/can be replicated >> are established based on some grouping+filter mechanism. This also is a >> pretty important change to 389ds. >> >> Simo. >> > I agree. I am just trying to structure the discussion a bit so that all > what you are saying can be captured in the design document and then we > can pick a subset of what Ondrej will actually implement. So let us > capture all the complexity and then do a POC for just LDAP part. > Sorry for inactivity, I was struggling with a lot of school stuff. I've summed up the main goals, do you agree on them or should I add/remove any? GOALS =========================================== Create Hub and Consumer types of replica with following features: * Hub is read-only * Hub interconnects Masters with Consumers or Masters with Hubs or Hubs with other Hubs * Hub is hidden in the network topology * Consumer is read-only * Consumer interconnects Masters/Hubs with clients * Write operations should be forwarded to Master * Consumer should be able to log users into system without communication with master * Consumer should cache user's credentials * Caching of credentials should be configurable * CA server should not be allowed on Hubs and Consumers -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From mkosek at redhat.com Tue Apr 3 16:57:27 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 18:57:27 +0200 Subject: [Freeipa-devel] [PATCH] 20 Fix empty external member processing In-Reply-To: <4F7AF981.2050303@redhat.com> References: <4F7ACF83.3080202@redhat.com> <4F7AF981.2050303@redhat.com> Message-ID: <1333472247.10841.14.camel@balmora.brq.redhat.com> On Tue, 2012-04-03 at 15:22 +0200, Ondrej Hamada wrote: > On 04/03/2012 12:22 PM, Ondrej Hamada wrote: > > https://fedorahosted.org/freeipa/ticket/2447 > > > > Validation of external member was failing for empty strings because > > of > > wrong condition. > > > > > > > > _______________________________________________ > > Freeipa-devel mailing list > > Freeipa-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/freeipa-devel > > Used clearer solution. Thanks to Rob for advice. ACK for this patch fixing empty --hosts, --users, etc. options. We just need to triage the second issue found during testing - an ability to set invalid external* attribute value with --setattr or --addattr options. I see 2 ways to fix that: 1) Ugly fix: Call a similar precallback in all affected *-mod commands where --addattr or --setattr could be used (netgroup-mod, sudorule-mod, etc.) which would specifically validate external* attribute values. 2) Nice fix: - create a param for external hosts, users to the respective LDAPOobjects - netgroup, sudorule, etc. and implement proper validators for them. These params would not be visible for users or cloned for Commands. 
Most code from Ondra's original patch 16 could be re-used - update Ondra's precallback to use these params for validation - update --setattr and --addattr param processing to consider also these params that exist only in LDAPObject and not in Command I think it would be OK to just create a ticket for the second issue and close ticket #2447 with Ondra's patch 20-2 as is. The new ticket could be targeted for next release as there are more changes needed, including fixes in --setattr and --addattr processing. I don't think this issue has a high impact, setting external* attribute values via --setattr is not really a standard procedure. Martin From mkosek at redhat.com Tue Apr 3 18:43:21 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Apr 2012 20:43:21 +0200 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F7B0D04.6060704@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> Message-ID: <1333478601.2626.5.camel@priserak> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: > Rob Crittenden wrote: > > Martin Kosek wrote: > >> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: > >>> Rob Crittenden wrote: > >>>> Martin Kosek wrote: > >>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > >>>>>> Certmonger will currently automatically renew server certificates but > >>>>>> doesn't restart the services so you can still end up with expired > >>>>>> certificates if you services never restart. > >>>>>> > >>>>>> This patch registers are restart command with certmonger so the IPA > >>>>>> services will automatically be restarted to get the updated cert. > >>>>>> > >>>>>> Easy to test. Install IPA then resubmit the current server certs and > >>>>>> watch the services restart: > >>>>>> > >>>>>> # ipa-getcert list > >>>>>> > >>>>>> Find the ID for either your dirsrv or httpd instance > >>>>>> > >>>>>> # ipa-getcert resubmit -i > >>>>>> > >>>>>> Watch /var/log/httpd/error_log or > >>>>>> /var/log/dirsrv/slapd-INSTANCE/errors > >>>>>> to see the service restart. > >>>>>> > >>>>>> rob > >>>>> > >>>>> What about current instances - can we/do we want to update certmonger > >>>>> tracking so that their instances are restarted as well? > >>>>> > >>>>> Anyway, I found few issues SELinux issues with the patch: > >>>>> > >>>>> 1) # rpm -Uvh freeipa-* > >>>>> Preparing... 
########################################### [100%] > >>>>> 1:freeipa-python ########################################### [ 20%] > >>>>> 2:freeipa-client ########################################### [ 40%] > >>>>> 3:freeipa-admintools ########################################### [ > >>>>> 60%] > >>>>> 4:freeipa-server ########################################### [ 80%] > >>>>> /usr/bin/chcon: failed to change context of > >>>>> `/usr/lib64/ipa/certmonger' to > >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>> argument > >>>>> /usr/bin/chcon: failed to change context of > >>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to > >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>> argument > >>>>> /usr/bin/chcon: failed to change context of > >>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to > >>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>> argument > >>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) > >>>>> scriptlet failed, exit status 1 > >>>>> 5:freeipa-server-selinux ########################################### > >>>>> [100%] > >>>>> > >>>>> certmonger_unconfined_exec_t type was unknown with my selinux policy: > >>>>> > >>>>> selinux-policy-3.10.0-80.fc16.noarch > >>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch > >>>>> > >>>>> If we need a higher SELinux version, we should bump the required > >>>>> package > >>>>> version spec file. > >>>> > >>>> Yeah, waiting on it to be backported. > >>>> > >>>>> > >>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until > >>>>> restorecon or system relabel occurs. I think we should make it > >>>>> persistent and enforce this type in our SELinux policy and rather call > >>>>> restorecon instead of chcon > >>>> > >>>> That's a good idea, why didn't I think of that :-( > >>> > >>> Ah, now I remember, it will be handled by selinux-policy. I would have > >>> used restorecon here but since the policy isn't there yet this seemed > >>> like a good idea. > >>> > >>> I'm trying to find out the status of this new policy, it may only make > >>> it into F-17. > >>> > >>> rob > >> > >> Ok. But if this policy does not go in F-16 and if we want this fix in > >> F16 release too, I guess we would have to implement both approaches in > >> our spec file: > >> > >> 1) When on F16, include SELinux policy for restart scripts + run > >> restorecon > >> 2) When on F17, do not include the SELinux policy (+ run restorecon) > >> > >> Martin > >> > > > > Won't work without updated selinux-policy. Without the permission for > > certmonger to execute the commands things will still fail (just in > > really non-obvious and far in the future ways). > > > > It looks like this is fixed in F-17 selinux-policy-3.10.0-107. > > > > rob > > Updated patch which works on F-17. > > rob What about F-16? The restart scripts won't work with enabled enforcing and will raise AVCs. Maybe we really need to deliver our own SELinux policy allowing it on F-16. I also found an issue with the restart scripts: 1) restart_dirsrv: this won't work with systemd: # /sbin/service dirsrv restart Redirecting to /bin/systemctl restart dirsrv.service Failed to issue method call: Unit dirsrv.service failed to load: No such file or directory. See system logs and 'systemctl status dirsrv.service' for details. We would need to pass an instance of IPA dirsrv for this to work. 2) restart_httpd: Is reload enough for httpd to pull a new certificate? Don't we need a full restart? 
If reload is enough, I think the command should be named reload_httpd Martin From rcritten at redhat.com Tue Apr 3 19:28:36 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 03 Apr 2012 15:28:36 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1333478601.2626.5.camel@priserak> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> Message-ID: <4F7B4F64.7070703@redhat.com> Martin Kosek wrote: > On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: >> Rob Crittenden wrote: >>> Martin Kosek wrote: >>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >>>>> Rob Crittenden wrote: >>>>>> Martin Kosek wrote: >>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>>>>> Certmonger will currently automatically renew server certificates but >>>>>>>> doesn't restart the services so you can still end up with expired >>>>>>>> certificates if you services never restart. >>>>>>>> >>>>>>>> This patch registers are restart command with certmonger so the IPA >>>>>>>> services will automatically be restarted to get the updated cert. >>>>>>>> >>>>>>>> Easy to test. Install IPA then resubmit the current server certs and >>>>>>>> watch the services restart: >>>>>>>> >>>>>>>> # ipa-getcert list >>>>>>>> >>>>>>>> Find the ID for either your dirsrv or httpd instance >>>>>>>> >>>>>>>> # ipa-getcert resubmit -i >>>>>>>> >>>>>>>> Watch /var/log/httpd/error_log or >>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors >>>>>>>> to see the service restart. >>>>>>>> >>>>>>>> rob >>>>>>> >>>>>>> What about current instances - can we/do we want to update certmonger >>>>>>> tracking so that their instances are restarted as well? >>>>>>> >>>>>>> Anyway, I found few issues SELinux issues with the patch: >>>>>>> >>>>>>> 1) # rpm -Uvh freeipa-* >>>>>>> Preparing... ########################################### [100%] >>>>>>> 1:freeipa-python ########################################### [ 20%] >>>>>>> 2:freeipa-client ########################################### [ 40%] >>>>>>> 3:freeipa-admintools ########################################### [ >>>>>>> 60%] >>>>>>> 4:freeipa-server ########################################### [ 80%] >>>>>>> /usr/bin/chcon: failed to change context of >>>>>>> `/usr/lib64/ipa/certmonger' to >>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>> argument >>>>>>> /usr/bin/chcon: failed to change context of >>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>> argument >>>>>>> /usr/bin/chcon: failed to change context of >>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>> argument >>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>>>>> scriptlet failed, exit status 1 >>>>>>> 5:freeipa-server-selinux ########################################### >>>>>>> [100%] >>>>>>> >>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux policy: >>>>>>> >>>>>>> selinux-policy-3.10.0-80.fc16.noarch >>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>>>>> >>>>>>> If we need a higher SELinux version, we should bump the required >>>>>>> package >>>>>>> version spec file. 
>>>>>> >>>>>> Yeah, waiting on it to be backported. >>>>>> >>>>>>> >>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>>>>> restorecon or system relabel occurs. I think we should make it >>>>>>> persistent and enforce this type in our SELinux policy and rather call >>>>>>> restorecon instead of chcon >>>>>> >>>>>> That's a good idea, why didn't I think of that :-( >>>>> >>>>> Ah, now I remember, it will be handled by selinux-policy. I would have >>>>> used restorecon here but since the policy isn't there yet this seemed >>>>> like a good idea. >>>>> >>>>> I'm trying to find out the status of this new policy, it may only make >>>>> it into F-17. >>>>> >>>>> rob >>>> >>>> Ok. But if this policy does not go in F-16 and if we want this fix in >>>> F16 release too, I guess we would have to implement both approaches in >>>> our spec file: >>>> >>>> 1) When on F16, include SELinux policy for restart scripts + run >>>> restorecon >>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) >>>> >>>> Martin >>>> >>> >>> Won't work without updated selinux-policy. Without the permission for >>> certmonger to execute the commands things will still fail (just in >>> really non-obvious and far in the future ways). >>> >>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. >>> >>> rob >> >> Updated patch which works on F-17. >> >> rob > > What about F-16? The restart scripts won't work with enabled enforcing > and will raise AVCs. Maybe we really need to deliver our own SELinux > policy allowing it on F-16. Right, I don't see this working on F-16. I don't really want to carry this type of policy. It goes beyond marking a few files as certmonger_t, it is going to let certmonger execute arbitrary scripts. This is best left to the SELinux team who understand the consequences better. > > I also found an issue with the restart scripts: > > 1) restart_dirsrv: this won't work with systemd: > > # /sbin/service dirsrv restart > Redirecting to /bin/systemctl restart dirsrv.service > Failed to issue method call: Unit dirsrv.service failed to load: No such > file or directory. See system logs and 'systemctl status dirsrv.service' > for details. Wouldn't work so hot for sysV either as we'd be restarting all instances. I'll take a look. > We would need to pass an instance of IPA dirsrv for this to work. > > 2) restart_httpd: > Is reload enough for httpd to pull a new certificate? Don't we need a > full restart? If reload is enough, I think the command should be named > reload_httpd Yes, it causes the modules to be reloaded which will reload the NSS database, that's all we need. I named it this way for consistency. I can rename it, though I doubt it would cause any confusion either way. rob From pvoborni at redhat.com Wed Apr 4 07:18:23 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Wed, 04 Apr 2012 09:18:23 +0200 Subject: [Freeipa-devel] [PATCH] 116 Fixed: permission attrs table didn't update its available options on load Message-ID: <4F7BF5BF.8000601@redhat.com> It could lead to state where attributes from other object type were displayed instead of the correct ones. https://fedorahosted.org/freeipa/ticket/2590 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pvoborni-0116-Fixed-permission-attrs-table-didn-t-update-its-avail.patch Type: text/x-patch Size: 1425 bytes Desc: not available URL: From pvoborni at redhat.com Wed Apr 4 07:19:35 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Wed, 04 Apr 2012 09:19:35 +0200 Subject: [Freeipa-devel] [PATCH] 117 Added attrs field to permission for target=subtree Message-ID: <4F7BF607.8080105@redhat.com> Permission form was missing attrs field for target=subtree. All other target types have it. It uses multivalued text widget, same as filter, because we can't predict the target type. https://fedorahosted.org/freeipa/ticket/2592 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0117-Added-attrs-field-to-permission-for-target-subtree.patch Type: text/x-patch Size: 1678 bytes Desc: not available URL: From atkac at redhat.com Wed Apr 4 08:26:08 2012 From: atkac at redhat.com (Adam Tkac) Date: Wed, 4 Apr 2012 10:26:08 +0200 Subject: [Freeipa-devel] [PATCH] 0015 Don't try to remove auxiliary nodes from internal RBT In-Reply-To: <4F7AF5D7.6010100@redhat.com> References: <4F7AF5D7.6010100@redhat.com> Message-ID: <20120404082608.GC1586@redhat.com> On Tue, Apr 03, 2012 at 03:06:31PM +0200, Petr Spacek wrote: > Hello, > > this patch optimizes code for removing deleted zones from BIND > instance little bit. > > In some cases there are auxiliary zones (= not really served zones) > in internal Red-Black tree. Current code tries to remove these > auxiliary zones on each zone_refresh attempt. > > Everything works fine, because auxiliary zones are detected deeper > in zone deletion code. > Now plugin prints very confusing message "Zone '%s' has been removed > from database." each 'zone_refresh' seconds, again and again. This > patch prevents this. > > I think it's very very confusing. I spent a lot of time while > debugging before I realized where is the problem. The patch is OK, please push it. Regards, Adam > Petr^2 Spacek > From ce620e1e4bb888d784b8cdfac5ba75182d45b6c3 Mon Sep 17 00:00:00 2001 > From: Petr Spacek > Date: Tue, 3 Apr 2012 14:50:12 +0200 > Subject: [PATCH] Don't try to remove auxiliary nodes from internal RBT > Signed-off-by: Petr Spacek > > --- > src/ldap_helper.c | 10 ++++++++-- > 1 files changed, 8 insertions(+), 2 deletions(-) > > diff --git a/src/ldap_helper.c b/src/ldap_helper.c > index d0cde9d..df8b01e 100644 > --- a/src/ldap_helper.c > +++ b/src/ldap_helper.c > @@ -1192,6 +1192,14 @@ refresh_zones_from_ldap(ldap_instance_t *ldap_inst) > goto next; > } > > + /* Do not remove auxilitary (= non-zone) nodes. */ > + char buf[DNS_NAME_FORMATSIZE]; > + dns_name_format(&aname, buf, DNS_NAME_FORMATSIZE); > + if (!node->data) { > + log_debug(5,"auxilitary zone/node '%s' will not be removed", buf); > + goto next; > + } > + > DECLARE_BUFFERED_NAME(foundname); > INIT_BUFFERED_NAME(foundname); > > @@ -1201,8 +1209,6 @@ refresh_zones_from_ldap(ldap_instance_t *ldap_inst) > goto next; > } > /* Log zone removing. */ > - char buf[255]; > - dns_name_format(&aname, buf, 255); > log_debug(1, "Zone '%s' has been removed from database.", buf); > > delete = ISC_TRUE; > -- > 1.7.7.6 > -- Adam Tkac, Red Hat, Inc. 
From pspacek at redhat.com Wed Apr 4 08:44:06 2012 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 04 Apr 2012 10:44:06 +0200 Subject: [Freeipa-devel] [PATCH] 0015 Don't try to remove auxiliary nodes from internal RBT In-Reply-To: <20120404082608.GC1586@redhat.com> References: <4F7AF5D7.6010100@redhat.com> <20120404082608.GC1586@redhat.com> Message-ID: <4F7C09D6.5090007@redhat.com> On 04/04/2012 10:26 AM, Adam Tkac wrote: > On Tue, Apr 03, 2012 at 03:06:31PM +0200, Petr Spacek wrote: >> Hello, >> >> this patch optimizes code for removing deleted zones from BIND >> instance little bit. >> >> In some cases there are auxiliary zones (= not really served zones) >> in internal Red-Black tree. Current code tries to remove these >> auxiliary zones on each zone_refresh attempt. >> >> Everything works fine, because auxiliary zones are detected deeper >> in zone deletion code. >> Now plugin prints very confusing message "Zone '%s' has been removed >> from database." each 'zone_refresh' seconds, again and again. This >> patch prevents this. >> >> I think it's very very confusing. I spent a lot of time while >> debugging before I realized where is the problem. > > The patch is OK, please push it. I fixed typo in comment and log message. After discussion on IRC I adjusted log severity to lower level. Revised patch is attached. Pushed to master: https://fedorahosted.org/bind-dyndb-ldap/changeset/fcaefa5a712d4ffa607d2e99000181e67fe7179a Petr^2 Spacek > > Regards, Adam > >> Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: bind-dyndb-ldap-pspacek-0015-Don-t-try-to-remove-auxilitary-nodes.patch Type: text/x-patch Size: 1205 bytes Desc: not available URL: From pviktori at redhat.com Wed Apr 4 09:55:51 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 04 Apr 2012 11:55:51 +0200 Subject: [Freeipa-devel] [PATCHES] 0025-26 Test improvements In-Reply-To: <4F79C03E.8090301@redhat.com> References: <4F5DE9A6.90704@redhat.com> <4F70D52E.5070101@redhat.com> <4F717902.1060304@redhat.com> <4F79C03E.8090301@redhat.com> Message-ID: <4F7C1AA7.9040806@redhat.com> On 04/02/2012 05:05 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 03/26/2012 10:44 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> Patch 25 fixes errors I found by running pylint on the testsuite. They >>>> were in code that was unused, either by error or because it only >>>> runs on >>>> errors. >>>> >>>> Patch 26 adds a test for the batch plugin. >>> >>> In patch 25 the second test_internal_error should really be >>> test_unauthorized_error. I think that is a clearer name. Otherwise looks >>> good. >>> >>> Patch 26 needs a very minor rebase to fix an error caused by improved >>> error code handling: >>> >>> expected = Fuzzy(u"invalid 'gidnumber'.*", , None) >>> got = u"invalid 'gid': Gettext('must be an integer', domain='ipa', >>> localedir=None)" >>> >>> I tested this: >>> >>> diff --git a/tests/test_xmlrpc/test_batch_plugin.py >>> b/tests/test_xmlrpc/test_bat >>> ch_plugin.py >>> index e4280ed..d69bfd9 100644 >>> --- a/tests/test_xmlrpc/test_batch_plugin.py >>> +++ b/tests/test_xmlrpc/test_batch_plugin.py >>> @@ -186,7 +186,7 @@ class test_batch(Declarative): >>> dict(error=u"'params' is required"), >>> dict(error=u"'givenname' is required"), >>> dict(error=u"'description' is required"), >>> - dict(error=Fuzzy(u"invalid 'gidnumber'.*")), >>> + dict(error=Fuzzy(u"invalid 'gid'.*")), >>> ), >>> ), >>> ), >>> >>> rob >> >> Thank you! Fixed, attaching updated patches. 
>> > > These look ok but it is baffling to me why tuple needs to be added to > the Output format in batch. Do you know when it is being converted into > a tuple? In XML-RPC unmarshalling, specifically ipalib/rpc.py: 109 if type(value) in (list, tuple): 110 return tuple(xml_unwrap(v, encoding) for v in value) Maybe we should relax the validation? That's out of scope for this patch though. The hbactest plugin has similar list/tuple Outputs. -- Petr? From pvoborni at redhat.com Wed Apr 4 10:36:21 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Wed, 04 Apr 2012 12:36:21 +0200 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons Message-ID: <4F7C2425.6000606@redhat.com> DNS forward policy fields were using mutually exclusive checkboxes. Such behavior is unusual for users. Checkboxes were changed to radios with new option 'none/default' to set empty value ''. https://fedorahosted.org/freeipa/ticket/2599 Second patch removes mutex option from checkboxes. Not sure if needed. -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0118-DNS-forward-policy-checkboxes-changed-to-radio-butto.patch Type: text/x-patch Size: 4640 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0119-Removed-mutex-option-from-checkboxes.patch Type: text/x-patch Size: 1380 bytes Desc: not available URL: From pviktori at redhat.com Wed Apr 4 10:44:04 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 04 Apr 2012 12:44:04 +0200 Subject: [Freeipa-devel] [PATCH] 0016 Fixes for{add, set, del}attr with managed attributes In-Reply-To: <4F75B379.7030106@redhat.com> References: <4F4B6648.1070209@redhat.com> <4F4BFDB7.7030501@redhat.com> <4F4E2655.2010207@redhat.com> <4F4E3B2F.3040609@redhat.com> <4F4E4586.2070805@redhat.com> <4F4F46C9.6030804@redhat.com> <4F624FE1.2080000@redhat.com> <4F634B9C.6030903@redhat.com> <4F63784B.4020009@redhat.com> <4F637961.4050805@redhat.com> <4F6379D8.1050602@redhat.com> <4F638DF4.4060602@redhat.com> <4F6853F0.7030007@redhat.com> <4F75B379.7030106@redhat.com> Message-ID: <4F7C25F4.7000702@redhat.com> On 03/30/2012 03:22 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 03/16/2012 08:01 PM, Petr Viktorin wrote: >>> On 03/16/2012 06:35 PM, Petr Viktorin wrote: >>>> On 03/16/2012 06:33 PM, Rob Crittenden wrote: >>>>> Rob Crittenden wrote: >>>>>> Petr Viktorin wrote: >>>>>>> On 03/15/2012 09:24 PM, Rob Crittenden wrote: >>>>>>>> Petr Viktorin wrote: >>>>>>>>> On 02/29/2012 04:34 PM, Petr Viktorin wrote: >>>>>>>>>> On 02/29/2012 03:50 PM, Rob Crittenden wrote: >>>>>>>>>>> Petr Viktorin wrote: >>>>>>>>>>>> On 02/27/2012 11:03 PM, Rob Crittenden wrote: >>>>>>>>>>>>> Petr Viktorin wrote: >>>>>>>>>>>>>> Patch 16 defers validation & conversion until after >>>>>>>>>>>>>> {add,del,set}attr is >>>>>>>>>>>>>> processed, so that we don't search for an integer in a >>>>>>>>>>>>>> list of >>>>>>>>>>>>>> strings >>>>>>>>>>>>>> (this caused ticket #2405), and so that the end result of >>>>>>>>>>>>>> these >>>>>>>>>>>>>> operations is validated (#2407). >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Patch 17 makes these options honor params marked no_create >>>>>>>>>>>>>> and >>>>>>>>>>>>>> no_update. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2405 >>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2407 >>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2408 >>>>>>>>>>>>> >>>>>>>>>>>>> NACK on Patch 17 which breaks patch 16. >>>>>>>>>>>> >>>>>>>>>>>> How is patch 16 broken? It works for me. >>>>>>>>>>> >>>>>>>>>>> My point is they rely on one another, IMHO, so without 17 the >>>>>>>>>>> reported >>>>>>>>>>> problem still exists. >>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> *attr is specifically made to be powerful. We don't want to >>>>>>>>>>>>> arbitrarily >>>>>>>>>>>>> block updating certain values. >>>>>>>>>>>> >>>>>>>>>>>> Noted >>>>>>>>>>>> >>>>>>>>>>>>> Not having patch 17 means that the problem reported in 2408 >>>>>>>>>>>>> still >>>>>>>>>>>>> occurs. It should probably check both the schema and the >>>>>>>>>>>>> param to >>>>>>>>>>>>> determine if something can have multiple values and reject >>>>>>>>>>>>> that >>>>>>>>>>>>> way. >>>>>>>>>>>> >>>>>>>>>>>> I see the problem now: the certificate subject base is defined >>>>>>>>>>>> as a >>>>>>>>>>>> multi-value attribute in the LDAP schema. If it's changed to >>>>>>>>>>>> single-value the existing validation should catch it. >>>>>>>>>>>> >>>>>>>>>>>> Also, most of the config attributes should probably be >>>>>>>>>>>> required in >>>>>>>>>>>> the >>>>>>>>>>>> schema. Am I right? >>>>>>>>>>>> >>>>>>>>>>>> I'm a newbie here; what do I need to do when changing the >>>>>>>>>>>> schema? Is >>>>>>>>>>>> there a patch that does something similar I could use as an >>>>>>>>>>>> example? >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The framework should be able to impose its own single-value >>>>>>>>>>> will as >>>>>>>>>>> well. If a Param is designated as single-value the *attr should >>>>>>>>>>> honor >>>>>>>>>>> it. >>>>>>>>>> >>>>>>>>>> Here is the updated patch. >>>>>>>>>> Since *attr is powerful enough to modify 'no_update' Params, >>>>>>>>>> which >>>>>>>>>> CRUDUpdate forgets about, I need to check the params of the >>>>>>>>>> LDAPObject >>>>>>>>>> itself. >>>>>>>>>> >>>>>>>>>>> Updating schema is a bit of a nasty business right now. See >>>>>>>>>>> 10-selinuxusermap.update for an example. >>>>>>>>>> >>>>>>>>>> I'll leave schema changes for after the release, then. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>> >>>>>>>>> Attached patch includes tests. Note that the test depends on my >>>>>>>>> patches >>>>>>>>> 12-13, which make ipasearchrecordslimit required. >>>>>>>> >>>>>>>> I gather that this eliminates the need for patch 17? It seems to >>>>>>>> work >>>>>>>> as-is. >>>>>>> >>>>>>> Yes. Patch 17 made *attr honor no_create and no_update, which you >>>>>>> said >>>>>>> is not desired behavior. >>>>>>> >>>>>>>> The patch doesn't apply because of an encoding change Martin made >>>>>>>> recently. >>>>>>>> >>>>>>>> It does seem to do the right thing though. >>>>>>>> >>>>>>>> rob >>>>>>> >>>>>>> Attaching rebased patch. >>>>>>> This deletes Martin's change, but unless I tested wrong, his bug >>>>>>> (https://fedorahosted.org/freeipa/ticket/2418) stays fixed. The >>>>>>> tests in >>>>>>> my patch should apply to that ticket as well. >>>>>>> >>>>>>> In another fork of this thread, there's discussion if this >>>>>>> approach is >>>>>>> good at all. Maybe we're overengineering a corner case here. 
>>>>>>> >>>>>> >>>>>> Found another issue, a very subtle one. >>>>>> >>>>>> When using *attr and an exception occurs where the param name would >>>>>> appear we want the name passed in to be used. >>>>>> >>>>>> For example: >>>>>> >>>>>> $ ipa pwpolicy-mod --setattr=krbpwdmaxfailure=xyz >>>>>> >>>>>> With this patch it will return: >>>>>> ipa: ERROR: invalid 'maxfail': must be an integer >>>>>> >>>>>> It should return: >>>>>> ipa: ERROR: invalid 'krbpwdmaxfailure': must be an integer >>>>> >>>>> On second thought, this may not be related... >>>> >>>> This is https://fedorahosted.org/freeipa/ticket/1418, I'll see if it >>>> makes sense to fix it here. >>> >>> Okay, this is a different problem. A quick grep of ConversionError in >>> parameters.py reveals that all of the "wrong datatype" errors are raised >>> with the raw parameter name. >>> Other errors are raised with either that or the cli_name or they >>> auto-detect. I don't think they follow some rule in this. >>> >>> Furthermore, our test suite doesn't check exception arguments. Sounds >>> like a major cleanup coming up here... >>> >>>>> >>>>>> >>>>>> The encoding problem does still exist too: >>>>>> >>>>>> $ ipa config-mod --setattr ipamigrationenabled=false >>>>>> ipa: ERROR: ipaMigrationEnabled: value #0 invalid per syntax: Invalid >>>>>> syntax. >>>>>> >>>> >>>> Will fix. >>>> >>> >>> Fixed in the attached update, which encodes the value. >>> >>> I was surprised to find that >>> config_mod.params['ipamigrationenabled'].attribute is True, while >>> config_mod.obj.params['ipamigrationenabled'].attribute is False (and so >>> its encode() method does nothing). That's because 'attribute' is only >>> set when the params are cloned from the LDAPObject to the CRUD method. >>> Is there a reason behind this, or is it just that it was easier to do? >>> >>> For this case it means that params marked no_update will not be encoded >>> properly -- getting to a working encode() would require either setting >>> 'attribute' on the parent object or some ugly hackery. >>> >>> >>> >>> --- >>> Petr? >>> >> >> Rebased to current master. >> > > There is a test failure. I ran out of time last night to investigate > whether the test is valid or it is an issue with the patch, forgot to > send this out before I left for the day yesterday. > > ====================================================================== > FAIL: test_attr[23]: pwpolicy_mod: Try setting non-numeric krbpwdmaxfailure > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in runTest > self.test(*self.arg) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 247, in > func = lambda: self.check(nice, **test) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 260, in check > self.check_exception(nice, cmd, args, options, expected) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 277, in check_exception > UNEXPECTED % (cmd, name, args, options, e.__class__.__name__, e) > AssertionError: Expected 'pwpolicy_mod' to raise ConversionError, but > caught different. > args = [] > options = {'setattr': u'krbpwdmaxfailure=abc'} > ValidationError: invalid 'krbpwdmaxfailure': must be an integer Fixed. I should have caught this on Friday. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0016-07-Defer-conversion-and-validation-until-after-add-del-.patch Type: text/x-patch Size: 9635 bytes Desc: not available URL: From pviktori at redhat.com Wed Apr 4 12:02:44 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 04 Apr 2012 14:02:44 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F761EF5.4050704@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> Message-ID: <4F7C3864.3070505@redhat.com> On 03/30/2012 11:00 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 03/26/2012 05:35 PM, Petr Viktorin wrote: >>> On 03/26/2012 04:54 PM, Rob Crittenden wrote: >>>> >>>> Some minor compliants. >>> >>> >>> Ideally, there would be a routine that sets up the logging and handles >>> command-line arguments in some uniform way (which is also needed before >>> logging setup to detect ipa-server-install --uninstall). >>> The original patch did the common logging setup, and I hacked around the >>> install/uninstall problem too. >>> I guess I overdid it when I simplified the patch. >>> I'm somewhat confused about the scope, so bear with me as I clarify what >>> you mean. >>> >>> >>>> If you abort the installation you get this somewhat unnerving error: >>>> >>>> Continue to configure the system with these values? [no]: >>>> ipa : ERROR ipa-server-install failed, SystemExit: Installation aborted >>>> Installation aborted >>>> >>>> ipa-ldap-updater is the same: >>>> >>>> # ipa-ldap-updater >>>> [2012-03-26T14:53:41Z ipa] : ipa-ldap-updater failed, >>>> SystemExit: >>>> IPA is not configured on this system. >>>> IPA is not configured on this system. >>>> >>>> and ipa-upgradeconfig >>>> >>>> $ ipa-upgradeconfig >>>> [2012-03-26T14:54:05Z ipa] : ipa-upgradeconfig failed, >>>> SystemExit: >>>> You must be root to run this script. >>>> >>>> >>>> You must be root to run this script. >>>> >>>> I'm guessing that the issue is that the log file isn't opened yet. >>> > >>>> It would be nice if the logging would be confined to just the log. >>> >>> >>> If I understand you correctly, the code should check if logging has been >>> configured already, and if not, skip displaying the message? >>> >>> >>>> When uninstalling you get the message 'ipa-server-install successful'. >>>> This is a little odd as well. >>> >>> ipa-server-install is the name of the command. Wontfix for now, unless >>> you disagree strongly. >>> >>> >> >> Updated patch: only log if logging has been configured (detected by >> looking at the root logger's handlers), and changed the message to ?The >> ipa-server-install command has succeeded/failed?. > > Works much better thanks. Just one request. When you created final_log() > you show less information than you did in earlier patches. It is nice > seeing the SystemExit failure. Can you do something like this (basically > cut-n-pasted from v05)? > > diff --git a/ipaserver/install/installutils.py > b/ipaserver/install/installutils. > py > index 851b58d..ca82a1b 100644 > --- a/ipaserver/install/installutils.py > +++ b/ipaserver/install/installutils.py > @@ -721,15 +721,15 @@ def script_context(operation_name): > # Only log if logging was already configured > # TODO: Do this properly (e.g. 
configure logging before the try/except) > if log_mgr.handlers.keys() != ['console']: > - root_logger.info(template, operation_name) > + root_logger.info(template) > try: > yield > except BaseException, e: > if isinstance(e, SystemExit) and (e.code is None or e.code == 0): > # Not an error after all > - final_log('The %s command was successful') > + final_log('The %s command was successful' % operation_name) > else: > - final_log('The %s command failed') > + final_log('%s failed, %s: %s' % (operation_name, type(e).__name__, > e)) > raise > else: > final_log('The %s command was successful') > > This looks like: > > 2012-03-30T20:56:53Z INFO ipa-dns-install failed, SystemExit: > DNS is already configured in this IPA server. > > rob Fixed. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0014-07-Add-final-debug-message-in-installers.patch Type: text/x-patch Size: 30375 bytes Desc: not available URL: From dpal at redhat.com Wed Apr 4 12:46:48 2012 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 04 Apr 2012 08:46:48 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F7B2937.4060300@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> Message-ID: <4F7C42B8.9070100@redhat.com> On 04/03/2012 12:45 PM, Ondrej Hamada wrote: > On 03/13/2012 01:13 AM, Dmitri Pal wrote: >> On 03/12/2012 06:10 PM, Simo Sorce wrote: >>> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: >>>> On 03/12/2012 04:16 PM, Simo Sorce wrote: >>>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: >>>>>> USER'S operations when connection is OK: >>>>>> ------------------------------------------------------- >>>>>> read data -> local >>>>>> write data -> forwarding to master >>>>>> authentication: >>>>>> -credentials cached -- authenticate against credentials in local >>>>>> cache >>>>>> -on failure: log failure locally, update >>>>>> data >>>>>> about failures only on lock-down of account >>>>>> -credentials not cached -- forward request to master, on success >>>>>> cache >>>>>> the credentials >>>>>> >>>>> This scheme doesn't work with Kerberos. >>>>> Either you have a copy of the user's keys locally or you don't, >>>>> there is >>>>> nothing you can really cache if you don't. >>>>> >>>>> Simo. >>>>> >>>> Yes this is what we are talking about here - the cache would have to >>>> contain user Kerberos key but there should be some expiration on the >>>> cache so that fetched and stored keys periodically cleaned >>>> following the >>>> policy an admin has defined. >>> We would need a mechanism to transfer Kerberos keys, but that would not >>> be sufficient, you'd have to give read-only servers also the realm >>> krbtgt in order to be able to do anything with those keys. >>> >>> The way MS solves hits (I think) is by giving a special RODC krbtgt to >>> each RODC, and then replicating all RODC krbtgt's with full domain >>> controllers. 
Full domain controllers have logic to use RODC's krbtgt >>> keys instead of the normal krbtgt to perform operations when user's >>> krbtgt are presented to a different server. This is a lot of work and >>> changes in the KDC, not something we can implement easily. >>> >>> As a first implementation I would restrict read-only replicas to not do >>> Kerberos at all, only LDAP for all the lookup stuff necessary. to add a >>> RO KDC we will need to plan a lot of changes in the KDC. >>> >>> We will also need intelligent partial replication where the rules about >>> which object (and which attributes in the object) need/can be >>> replicated >>> are established based on some grouping+filter mechanism. This also is a >>> pretty important change to 389ds. >>> >>> Simo. >>> >> I agree. I am just trying to structure the discussion a bit so that all >> what you are saying can be captured in the design document and then we >> can pick a subset of what Ondrej will actually implement. So let us >> capture all the complexity and then do a POC for just LDAP part. >> > Sorry for inactivity, I was struggling with a lot of school stuff. > > I've summed up the main goals, do you agree on them or should I > add/remove any? > > > GOALS > =========================================== > Create Hub and Consumer types of replica with following features: > > * Hub is read-only > > * Hub interconnects Masters with Consumers or Masters with Hubs > or Hubs with other Hubs > > * Hub is hidden in the network topology > > * Consumer is read-only > > * Consumer interconnects Masters/Hubs with clients > > * Write operations should be forwarded to Master > > * Consumer should be able to log users into system without > communication with master See below. > > * Consumer should cache user's credentials > > * Caching of credentials should be configurable If caching of the creds is not configured consumer should forward authentication to master. You also need to think about password changes and kerberos key rotation. > > * CA server should not be allowed on Hubs and Consumers > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pviktori at redhat.com Wed Apr 4 13:01:50 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 04 Apr 2012 15:01:50 +0200 Subject: [Freeipa-devel] [PATCH 69] Use indexed format specifiers in i18n strings In-Reply-To: <4F79A674.2060104@redhat.com> References: <201203300136.q2U1a0E5030640@int-mx12.intmail.prod.int.phx2.redhat.com> <4F79A674.2060104@redhat.com> Message-ID: <4F7C463E.4020205@redhat.com> On 04/02/2012 03:15 PM, Rob Crittenden wrote: > John Dennis wrote: >> Translators need to reorder messages to suit the needs of the target >> language. The conventional positional format specifiers (e.g. %s %d) >> do not permit reordering because their order is tied to the ordering >> of the arguments to the printf function. The fix is to use indexed >> format specifiers. > > I guess this looks ok but all of these errors are of the format: string > error, error number (and inconsistently, sometimes the reverse). Not all of them, e.g. 
- fprintf(stderr, _("Search for %s on rootdse failed with error %d"), + fprintf(stderr, _("Search for %1$s on rootdse failed with error %2$d\n"), root_attrs[0], ret); - fprintf(stderr, _("Failed to open keytab '%s': %s\n"), keytab, + fprintf(stderr, _("Failed to open keytab '%1$s': %2$s\n"), keytab, error_message(krberr)); > Do those really need to be re-orderable? > You can never make too few assumptions about foreign languages. Most likely at least some will need reordering. Enforcing indexed specifiers everywhere means we don't have to worry about individual cases, or change our strings when a new language is added. -- Petr? From simo at redhat.com Wed Apr 4 13:02:25 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 04 Apr 2012 09:02:25 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F7B2937.4060300@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> Message-ID: <1333544545.22628.287.camel@willson.li.ssimo.org> On Tue, 2012-04-03 at 18:45 +0200, Ondrej Hamada wrote: > On 03/13/2012 01:13 AM, Dmitri Pal wrote: > > On 03/12/2012 06:10 PM, Simo Sorce wrote: > >> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: > >>> On 03/12/2012 04:16 PM, Simo Sorce wrote: > >>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: > >>>>> USER'S operations when connection is OK: > >>>>> ------------------------------------------------------- > >>>>> read data -> local > >>>>> write data -> forwarding to master > >>>>> authentication: > >>>>> -credentials cached -- authenticate against credentials in local cache > >>>>> -on failure: log failure locally, update > >>>>> data > >>>>> about failures only on lock-down of account > >>>>> -credentials not cached -- forward request to master, on success > >>>>> cache > >>>>> the credentials > >>>>> > >>>> This scheme doesn't work with Kerberos. > >>>> Either you have a copy of the user's keys locally or you don't, there is > >>>> nothing you can really cache if you don't. > >>>> > >>>> Simo. > >>>> > >>> Yes this is what we are talking about here - the cache would have to > >>> contain user Kerberos key but there should be some expiration on the > >>> cache so that fetched and stored keys periodically cleaned following the > >>> policy an admin has defined. > >> We would need a mechanism to transfer Kerberos keys, but that would not > >> be sufficient, you'd have to give read-only servers also the realm > >> krbtgt in order to be able to do anything with those keys. > >> > >> The way MS solves hits (I think) is by giving a special RODC krbtgt to > >> each RODC, and then replicating all RODC krbtgt's with full domain > >> controllers. Full domain controllers have logic to use RODC's krbtgt > >> keys instead of the normal krbtgt to perform operations when user's > >> krbtgt are presented to a different server. This is a lot of work and > >> changes in the KDC, not something we can implement easily. 
> >> > >> As a first implementation I would restrict read-only replicas to not do > >> Kerberos at all, only LDAP for all the lookup stuff necessary. to add a > >> RO KDC we will need to plan a lot of changes in the KDC. > >> > >> We will also need intelligent partial replication where the rules about > >> which object (and which attributes in the object) need/can be replicated > >> are established based on some grouping+filter mechanism. This also is a > >> pretty important change to 389ds. > >> > >> Simo. > >> > > I agree. I am just trying to structure the discussion a bit so that all > > what you are saying can be captured in the design document and then we > > can pick a subset of what Ondrej will actually implement. So let us > > capture all the complexity and then do a POC for just LDAP part. > > > Sorry for inactivity, I was struggling with a lot of school stuff. > > I've summed up the main goals, do you agree on them or should I > add/remove any? > > > GOALS > =========================================== > Create Hub and Consumer types of replica with following features: > > * Hub is read-only > > * Hub interconnects Masters with Consumers or Masters with Hubs > or Hubs with other Hubs > > * Hub is hidden in the network topology > > * Consumer is read-only > > * Consumer interconnects Masters/Hubs with clients > > * Write operations should be forwarded to Master > > * Consumer should be able to log users into system without > communication with master We need to define how this can be done, it will almost certainly mean part of the consumer is writable, plus it also means you need additional access control and policies, on what the Consumer should be allowed to see. > * Consumer should cache user's credentials Ok what credentials ? As I explained earlier Kerberos creds cannot really be cached. Either they are transferred with replication or the KDC needs to be change to do chaining. Neither I consider as 'caching'. A password obtained through an LDAP bind could be cached, but I am not sure it is worth it. > * Caching of credentials should be configurable See above. > * CA server should not be allowed on Hubs and Consumers Missing points: - Masters should not transfer KRB keys to HUBs/Consumers by default. - We need selective replication if you want to allow distributing a partial set of Kerberos credentials to consumers. With Hubs it becomes complicated to decide what to replicate about credentials. Simo. -- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Wed Apr 4 13:28:17 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 04 Apr 2012 09:28:17 -0400 Subject: [Freeipa-devel] [PATCHES] 0025-26 Test improvements In-Reply-To: <4F7C1AA7.9040806@redhat.com> References: <4F5DE9A6.90704@redhat.com> <4F70D52E.5070101@redhat.com> <4F717902.1060304@redhat.com> <4F79C03E.8090301@redhat.com> <4F7C1AA7.9040806@redhat.com> Message-ID: <4F7C4C71.4040601@redhat.com> Petr Viktorin wrote: > On 04/02/2012 05:05 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 03/26/2012 10:44 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> Patch 25 fixes errors I found by running pylint on the testsuite. They >>>>> were in code that was unused, either by error or because it only >>>>> runs on >>>>> errors. >>>>> >>>>> Patch 26 adds a test for the batch plugin. >>>> >>>> In patch 25 the second test_internal_error should really be >>>> test_unauthorized_error. I think that is a clearer name. Otherwise >>>> looks >>>> good. 
>>>> >>>> Patch 26 needs a very minor rebase to fix an error caused by improved >>>> error code handling: >>>> >>>> expected = Fuzzy(u"invalid 'gidnumber'.*", , None) >>>> got = u"invalid 'gid': Gettext('must be an integer', domain='ipa', >>>> localedir=None)" >>>> >>>> I tested this: >>>> >>>> diff --git a/tests/test_xmlrpc/test_batch_plugin.py >>>> b/tests/test_xmlrpc/test_bat >>>> ch_plugin.py >>>> index e4280ed..d69bfd9 100644 >>>> --- a/tests/test_xmlrpc/test_batch_plugin.py >>>> +++ b/tests/test_xmlrpc/test_batch_plugin.py >>>> @@ -186,7 +186,7 @@ class test_batch(Declarative): >>>> dict(error=u"'params' is required"), >>>> dict(error=u"'givenname' is required"), >>>> dict(error=u"'description' is required"), >>>> - dict(error=Fuzzy(u"invalid 'gidnumber'.*")), >>>> + dict(error=Fuzzy(u"invalid 'gid'.*")), >>>> ), >>>> ), >>>> ), >>>> >>>> rob >>> >>> Thank you! Fixed, attaching updated patches. >>> >> >> These look ok but it is baffling to me why tuple needs to be added to >> the Output format in batch. Do you know when it is being converted into >> a tuple? > > In XML-RPC unmarshalling, specifically ipalib/rpc.py: > > 109 if type(value) in (list, tuple): > 110 return tuple(xml_unwrap(v, encoding) for v in value) > > Maybe we should relax the validation? That's out of scope for this patch > though. > > The hbactest plugin has similar list/tuple Outputs. > Ok, ACK, both pushed to master and ipa-2-2 rob From yzhang at redhat.com Wed Apr 4 14:16:39 2012 From: yzhang at redhat.com (yi zhang) Date: Wed, 04 Apr 2012 07:16:39 -0700 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons In-Reply-To: <4F7C2425.6000606@redhat.com> References: <4F7C2425.6000606@redhat.com> Message-ID: <4F7C57C7.90500@redhat.com> On 04/04/2012 03:36 AM, Petr Vobornik wrote: > DNS forward policy fields were using mutually exclusive checkboxes. > Such behavior is unusual for users. > > Checkboxes were changed to radios with new option 'none/default' to > set empty value ''. > > https://fedorahosted.org/freeipa/ticket/2599 thanks for fixing this :) Yi > > Second patch removes mutex option from checkboxes. Not sure if needed. > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | Yi Zhang | | QA @ Mountain View, California | | Cell: 408-509-6375 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: yzhang.vcf Type: text/x-vcard Size: 134 bytes Desc: not available URL: From mkosek at redhat.com Wed Apr 4 14:43:40 2012 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 04 Apr 2012 16:43:40 +0200 Subject: [Freeipa-devel] [PATCH] 247 Fix installation when server hostname is not in a default domain Message-ID: <1333550620.23241.2.camel@balmora.brq.redhat.com> When IPA server is configured with DNS and its hostname is not located in a default domain, SRV records are not valid. Additionally, httpd does not serve XMLRPC interface because it IPA server domain-realm mapping is missing in krb5.conf. All CLI commands were then failing. This patch amends this configuration. It fixes SRV records in served domain to include full FQDN instead of relative hostname when the IPA server hostname is not located in served domain. IPA server forward record is also placed to correct zone. 
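The record-name decision just described (a relative name when the host lives inside the served zone, an absolute FQDN with a trailing dot otherwise) can be sketched roughly as below. This is an illustration only, with a made-up helper name; it is not code taken from the attached patch:

    def record_name_for(fqdn, zone):
        # Return the name to use for a record in "zone": relative if the
        # host is inside the zone, otherwise an absolute FQDN with a
        # trailing dot so it is not expanded relative to the zone.
        fqdn = fqdn.rstrip('.').lower()
        zone = zone.rstrip('.').lower()
        if fqdn == zone:
            return '@'                          # zone apex
        if fqdn.endswith('.' + zone):
            return fqdn[:-(len(zone) + 1)]      # e.g. 'ipa' in example.com
        return fqdn + '.'                       # host outside the served zone

    record_name_for('ipa.example.com', 'example.com')      # -> 'ipa'
    record_name_for('ipa.lab.example.org', 'example.com')  # -> 'ipa.lab.example.org.'
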
When IPA server is not in a served domain a proper domain-realm mapping is configured to krb5.conf. The template was improved in order to be able to hold this information. https://fedorahosted.org/freeipa/ticket/2602 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-247-hostname-not-in-domain.patch Type: text/x-patch Size: 7057 bytes Desc: not available URL: From rcritten at redhat.com Wed Apr 4 14:51:49 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 04 Apr 2012 10:51:49 -0400 Subject: [Freeipa-devel] [PATCH] 0016 Fixes for{add, set, del}attr with managed attributes In-Reply-To: <4F7C25F4.7000702@redhat.com> References: <4F4B6648.1070209@redhat.com> <4F4BFDB7.7030501@redhat.com> <4F4E2655.2010207@redhat.com> <4F4E3B2F.3040609@redhat.com> <4F4E4586.2070805@redhat.com> <4F4F46C9.6030804@redhat.com> <4F624FE1.2080000@redhat.com> <4F634B9C.6030903@redhat.com> <4F63784B.4020009@redhat.com> <4F637961.4050805@redhat.com> <4F6379D8.1050602@redhat.com> <4F638DF4.4060602@redhat.com> <4F6853F0.7030007@redhat.com> <4F75B379.7030106@redhat.com> <4F7C25F4.7000702@redhat.com> Message-ID: <4F7C6005.90100@redhat.com> Petr Viktorin wrote: > On 03/30/2012 03:22 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 03/16/2012 08:01 PM, Petr Viktorin wrote: >>>> On 03/16/2012 06:35 PM, Petr Viktorin wrote: >>>>> On 03/16/2012 06:33 PM, Rob Crittenden wrote: >>>>>> Rob Crittenden wrote: >>>>>>> Petr Viktorin wrote: >>>>>>>> On 03/15/2012 09:24 PM, Rob Crittenden wrote: >>>>>>>>> Petr Viktorin wrote: >>>>>>>>>> On 02/29/2012 04:34 PM, Petr Viktorin wrote: >>>>>>>>>>> On 02/29/2012 03:50 PM, Rob Crittenden wrote: >>>>>>>>>>>> Petr Viktorin wrote: >>>>>>>>>>>>> On 02/27/2012 11:03 PM, Rob Crittenden wrote: >>>>>>>>>>>>>> Petr Viktorin wrote: >>>>>>>>>>>>>>> Patch 16 defers validation & conversion until after >>>>>>>>>>>>>>> {add,del,set}attr is >>>>>>>>>>>>>>> processed, so that we don't search for an integer in a >>>>>>>>>>>>>>> list of >>>>>>>>>>>>>>> strings >>>>>>>>>>>>>>> (this caused ticket #2405), and so that the end result of >>>>>>>>>>>>>>> these >>>>>>>>>>>>>>> operations is validated (#2407). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Patch 17 makes these options honor params marked no_create >>>>>>>>>>>>>>> and >>>>>>>>>>>>>>> no_update. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2405 >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2407 >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/2408 >>>>>>>>>>>>>> >>>>>>>>>>>>>> NACK on Patch 17 which breaks patch 16. >>>>>>>>>>>>> >>>>>>>>>>>>> How is patch 16 broken? It works for me. >>>>>>>>>>>> >>>>>>>>>>>> My point is they rely on one another, IMHO, so without 17 the >>>>>>>>>>>> reported >>>>>>>>>>>> problem still exists. >>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>>> *attr is specifically made to be powerful. We don't want to >>>>>>>>>>>>>> arbitrarily >>>>>>>>>>>>>> block updating certain values. >>>>>>>>>>>>> >>>>>>>>>>>>> Noted >>>>>>>>>>>>> >>>>>>>>>>>>>> Not having patch 17 means that the problem reported in 2408 >>>>>>>>>>>>>> still >>>>>>>>>>>>>> occurs. It should probably check both the schema and the >>>>>>>>>>>>>> param to >>>>>>>>>>>>>> determine if something can have multiple values and reject >>>>>>>>>>>>>> that >>>>>>>>>>>>>> way. >>>>>>>>>>>>> >>>>>>>>>>>>> I see the problem now: the certificate subject base is defined >>>>>>>>>>>>> as a >>>>>>>>>>>>> multi-value attribute in the LDAP schema. 
If it's changed to >>>>>>>>>>>>> single-value the existing validation should catch it. >>>>>>>>>>>>> >>>>>>>>>>>>> Also, most of the config attributes should probably be >>>>>>>>>>>>> required in >>>>>>>>>>>>> the >>>>>>>>>>>>> schema. Am I right? >>>>>>>>>>>>> >>>>>>>>>>>>> I'm a newbie here; what do I need to do when changing the >>>>>>>>>>>>> schema? Is >>>>>>>>>>>>> there a patch that does something similar I could use as an >>>>>>>>>>>>> example? >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> The framework should be able to impose its own single-value >>>>>>>>>>>> will as >>>>>>>>>>>> well. If a Param is designated as single-value the *attr should >>>>>>>>>>>> honor >>>>>>>>>>>> it. >>>>>>>>>>> >>>>>>>>>>> Here is the updated patch. >>>>>>>>>>> Since *attr is powerful enough to modify 'no_update' Params, >>>>>>>>>>> which >>>>>>>>>>> CRUDUpdate forgets about, I need to check the params of the >>>>>>>>>>> LDAPObject >>>>>>>>>>> itself. >>>>>>>>>>> >>>>>>>>>>>> Updating schema is a bit of a nasty business right now. See >>>>>>>>>>>> 10-selinuxusermap.update for an example. >>>>>>>>>>> >>>>>>>>>>> I'll leave schema changes for after the release, then. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>> >>>>>>>>>> Attached patch includes tests. Note that the test depends on my >>>>>>>>>> patches >>>>>>>>>> 12-13, which make ipasearchrecordslimit required. >>>>>>>>> >>>>>>>>> I gather that this eliminates the need for patch 17? It seems to >>>>>>>>> work >>>>>>>>> as-is. >>>>>>>> >>>>>>>> Yes. Patch 17 made *attr honor no_create and no_update, which you >>>>>>>> said >>>>>>>> is not desired behavior. >>>>>>>> >>>>>>>>> The patch doesn't apply because of an encoding change Martin made >>>>>>>>> recently. >>>>>>>>> >>>>>>>>> It does seem to do the right thing though. >>>>>>>>> >>>>>>>>> rob >>>>>>>> >>>>>>>> Attaching rebased patch. >>>>>>>> This deletes Martin's change, but unless I tested wrong, his bug >>>>>>>> (https://fedorahosted.org/freeipa/ticket/2418) stays fixed. The >>>>>>>> tests in >>>>>>>> my patch should apply to that ticket as well. >>>>>>>> >>>>>>>> In another fork of this thread, there's discussion if this >>>>>>>> approach is >>>>>>>> good at all. Maybe we're overengineering a corner case here. >>>>>>>> >>>>>>> >>>>>>> Found another issue, a very subtle one. >>>>>>> >>>>>>> When using *attr and an exception occurs where the param name would >>>>>>> appear we want the name passed in to be used. >>>>>>> >>>>>>> For example: >>>>>>> >>>>>>> $ ipa pwpolicy-mod --setattr=krbpwdmaxfailure=xyz >>>>>>> >>>>>>> With this patch it will return: >>>>>>> ipa: ERROR: invalid 'maxfail': must be an integer >>>>>>> >>>>>>> It should return: >>>>>>> ipa: ERROR: invalid 'krbpwdmaxfailure': must be an integer >>>>>> >>>>>> On second thought, this may not be related... >>>>> >>>>> This is https://fedorahosted.org/freeipa/ticket/1418, I'll see if it >>>>> makes sense to fix it here. >>>> >>>> Okay, this is a different problem. A quick grep of ConversionError in >>>> parameters.py reveals that all of the "wrong datatype" errors are >>>> raised >>>> with the raw parameter name. >>>> Other errors are raised with either that or the cli_name or they >>>> auto-detect. I don't think they follow some rule in this. >>>> >>>> Furthermore, our test suite doesn't check exception arguments. 
Sounds >>>> like a major cleanup coming up here... >>>> >>>>>> >>>>>>> >>>>>>> The encoding problem does still exist too: >>>>>>> >>>>>>> $ ipa config-mod --setattr ipamigrationenabled=false >>>>>>> ipa: ERROR: ipaMigrationEnabled: value #0 invalid per syntax: >>>>>>> Invalid >>>>>>> syntax. >>>>>>> >>>>> >>>>> Will fix. >>>>> >>>> >>>> Fixed in the attached update, which encodes the value. >>>> >>>> I was surprised to find that >>>> config_mod.params['ipamigrationenabled'].attribute is True, while >>>> config_mod.obj.params['ipamigrationenabled'].attribute is False (and so >>>> its encode() method does nothing). That's because 'attribute' is only >>>> set when the params are cloned from the LDAPObject to the CRUD method. >>>> Is there a reason behind this, or is it just that it was easier to do? >>>> >>>> For this case it means that params marked no_update will not be encoded >>>> properly -- getting to a working encode() would require either setting >>>> 'attribute' on the parent object or some ugly hackery. >>>> >>>> >>>> >>>> --- >>>> Petr? >>>> >>> >>> Rebased to current master. >>> >> >> There is a test failure. I ran out of time last night to investigate >> whether the test is valid or it is an issue with the patch, forgot to >> send this out before I left for the day yesterday. >> >> ====================================================================== >> FAIL: test_attr[23]: pwpolicy_mod: Try setting non-numeric >> krbpwdmaxfailure >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in >> runTest >> self.test(*self.arg) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 247, in >> func = lambda: self.check(nice, **test) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 260, in check >> self.check_exception(nice, cmd, args, options, expected) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 277, in check_exception >> UNEXPECTED % (cmd, name, args, options, e.__class__.__name__, e) >> AssertionError: Expected 'pwpolicy_mod' to raise ConversionError, but >> caught different. >> args = [] >> options = {'setattr': u'krbpwdmaxfailure=abc'} >> ValidationError: invalid 'krbpwdmaxfailure': must be an integer > > Fixed. I should have caught this on Friday. > > ACK, pushed to master and ipa-2-2 rob From pviktori at redhat.com Wed Apr 4 15:28:16 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 04 Apr 2012 17:28:16 +0200 Subject: [Freeipa-devel] [PATCH] 0033 Pass make-test arguments through to Nose + Test coverage Message-ID: <4F7C6890.1090200@redhat.com> Currently, our test script forwards a select few command line arguments to nosetests. This patch removes the filtering, passing all arguments through. This allows things like disabling output redirection (--nocapture), dropping into a debugger (--pdb, --pdb-failures), coverage reporting (--with-cover, once installed), etc. https://fedorahosted.org/freeipa/ticket/2135 I believe this is a better solution than adding individual options as they're needed. --- A coverage report can be generated by combining data from both the tests and the server. I run this: Setup: yum install python-coverage echo /.coverage* >> .git/info/exclude echo /htmlcov/ >> .git/info/exclude Terminal 1: coverage erase coverage run -p --source . 
lite-server.py Terminal 2: kinit ./make-test --with-coverage --cover-inclusive Terminal 1 again: ^C coverage combine coverage html --omit=/usr/lib/* Then view ./htmlcov/index.html in a browser. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0033-Pass-make-test-arguments-through-to-Nose.patch Type: text/x-patch Size: 2056 bytes Desc: not available URL: From rcritten at redhat.com Wed Apr 4 15:29:43 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 04 Apr 2012 11:29:43 -0400 Subject: [Freeipa-devel] [PATCH] 246 Configure SELinux for httpd during upgrades In-Reply-To: <1333443214.23102.14.camel@balmora.brq.redhat.com> References: <1333443214.23102.14.camel@balmora.brq.redhat.com> Message-ID: <4F7C68E7.4090203@redhat.com> Martin Kosek wrote: > SELinux configuration for httpd instance was set for new > installations only. Upgraded IPA servers (namely 2.1.x -> 2.2.x > upgrade) missed the configuration. This lead to AVCs when httpd > tries to contact ipa_memcached and user not being able to log in. > > This patch updates ipa-upgradeconfig to configure SELinux > in the same way as ipa-server-install does. > > https://fedorahosted.org/freeipa/ticket/2603 ACK, pushed to master and ipa-2-2 rob From ohamada at redhat.com Wed Apr 4 16:16:58 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Wed, 04 Apr 2012 18:16:58 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1333544545.22628.287.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> Message-ID: <4F7C73FA.3090809@redhat.com> On 04/04/2012 03:02 PM, Simo Sorce wrote: > On Tue, 2012-04-03 at 18:45 +0200, Ondrej Hamada wrote: >> On 03/13/2012 01:13 AM, Dmitri Pal wrote: >>> On 03/12/2012 06:10 PM, Simo Sorce wrote: >>>> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: >>>>> On 03/12/2012 04:16 PM, Simo Sorce wrote: >>>>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: >>>>>>> USER'S operations when connection is OK: >>>>>>> ------------------------------------------------------- >>>>>>> read data -> local >>>>>>> write data -> forwarding to master >>>>>>> authentication: >>>>>>> -credentials cached -- authenticate against credentials in local cache >>>>>>> -on failure: log failure locally, update >>>>>>> data >>>>>>> about failures only on lock-down of account >>>>>>> -credentials not cached -- forward request to master, on success >>>>>>> cache >>>>>>> the credentials >>>>>>> >>>>>> This scheme doesn't work with Kerberos. >>>>>> Either you have a copy of the user's keys locally or you don't, there is >>>>>> nothing you can really cache if you don't. >>>>>> >>>>>> Simo. >>>>>> >>>>> Yes this is what we are talking about here - the cache would have to >>>>> contain user Kerberos key but there should be some expiration on the >>>>> cache so that fetched and stored keys periodically cleaned following the >>>>> policy an admin has defined. 
>>>> We would need a mechanism to transfer Kerberos keys, but that would not >>>> be sufficient, you'd have to give read-only servers also the realm >>>> krbtgt in order to be able to do anything with those keys. >>>> >>>> The way MS solves hits (I think) is by giving a special RODC krbtgt to >>>> each RODC, and then replicating all RODC krbtgt's with full domain >>>> controllers. Full domain controllers have logic to use RODC's krbtgt >>>> keys instead of the normal krbtgt to perform operations when user's >>>> krbtgt are presented to a different server. This is a lot of work and >>>> changes in the KDC, not something we can implement easily. >>>> >>>> As a first implementation I would restrict read-only replicas to not do >>>> Kerberos at all, only LDAP for all the lookup stuff necessary. to add a >>>> RO KDC we will need to plan a lot of changes in the KDC. >>>> >>>> We will also need intelligent partial replication where the rules about >>>> which object (and which attributes in the object) need/can be replicated >>>> are established based on some grouping+filter mechanism. This also is a >>>> pretty important change to 389ds. >>>> >>>> Simo. >>>> >>> I agree. I am just trying to structure the discussion a bit so that all >>> what you are saying can be captured in the design document and then we >>> can pick a subset of what Ondrej will actually implement. So let us >>> capture all the complexity and then do a POC for just LDAP part. >>> >> Sorry for inactivity, I was struggling with a lot of school stuff. >> >> I've summed up the main goals, do you agree on them or should I >> add/remove any? >> >> >> GOALS >> =========================================== >> Create Hub and Consumer types of replica with following features: >> >> * Hub is read-only >> >> * Hub interconnects Masters with Consumers or Masters with Hubs >> or Hubs with other Hubs >> >> * Hub is hidden in the network topology >> >> * Consumer is read-only >> >> * Consumer interconnects Masters/Hubs with clients >> >> * Write operations should be forwarded to Master >> >> * Consumer should be able to log users into system without >> communication with master > We need to define how this can be done, it will almost certainly mean > part of the consumer is writable, plus it also means you need additional > access control and policies, on what the Consumer should be allowed to > see. Right, in such case the Consumers and Hubs will have to be masters (from 389-DS's point of view). >> * Consumer should cache user's credentials > Ok what credentials ? As I explained earlier Kerberos creds cannot > really be cached. Either they are transferred with replication or the > KDC needs to be change to do chaining. Neither I consider as 'caching'. > A password obtained through an LDAP bind could be cached, but I am not > sure it is worth it. > >> * Caching of credentials should be configurable > See above. > >> * CA server should not be allowed on Hubs and Consumers > Missing points: > - Masters should not transfer KRB keys to HUBs/Consumers by default. Add point: - storing of the Krb creds must be configurable and disabled by default > - We need selective replication if you want to allow distributing a > partial set of Kerberos credentials to consumers. With Hubs it becomes > complicated to decide what to replicate about credentials. > > Simo. > Rich mentioned that they are planning support for LDAP filters in fractional replication in the future, but currently it is not supported. 
-- Regards, Ondrej Hamada FreeIPA team jabber:ohama at jabbim.cz IRC: ohamada From jcholast at redhat.com Wed Apr 4 16:50:39 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 04 Apr 2012 18:50:39 +0200 Subject: [Freeipa-devel] [PATCH] 73 Check whether the default user group is POSIX when adding new user with --noprivate In-Reply-To: <1333451079.23102.17.camel@balmora.brq.redhat.com> References: <4F7AC9B8.3090604@redhat.com> <1333450949.23102.16.camel@balmora.brq.redhat.com> <1333451079.23102.17.camel@balmora.brq.redhat.com> Message-ID: <4F7C7BDF.40009@redhat.com> On 3.4.2012 13:04, Martin Kosek wrote: > On Tue, 2012-04-03 at 13:02 +0200, Martin Kosek wrote: >> On Tue, 2012-04-03 at 11:58 +0200, Jan Cholasta wrote: >>> https://fedorahosted.org/freeipa/ticket/2572 >>> >>> Honza >>> >> >> NACK. >> >> This creates a regression: >> >> # ipa group-show foogroup >> Group name: foogroup >> Description: foo >> GID: 358800017 >> >> # ipa user-add --first=Foo --last=Bar fbar5 --gidnumber=358800017 --noprivate >> ------------------ >> Added user "fbar5" >> ------------------ >> User login: fbar5 >> First name: Foo >> Last name: Bar >> Full name: Foo Bar >> Display name: Foo Bar >> Initials: FB >> Home directory: /home/fbar5 >> GECOS field: Foo Bar >> Login shell: /bin/sh >> Kerberos principal: fbar5 at IDM.LAB.BOS.REDHAT.COM >> UID: 358800019 >> GID: 358800012 >> Password: False >> Member of groups: ipausers >> Kerberos keys available: False >> >> # id fbar5 >> uid=358800019(fbar5) gid=358800012(ipausers) groups=358800012(ipausers) >> >> Custom user group (GID) was overwritten. >> >> I think we also want a test case for this situation. >> >> Martin >> > > ... and we also want to have the new error message(s) i18n-able. > > Martin > Updated patch attached. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-73.1-default-group-posix-check.patch Type: text/x-patch Size: 14933 bytes Desc: not available URL: From rcritten at redhat.com Wed Apr 4 19:05:34 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 04 Apr 2012 15:05:34 -0400 Subject: [Freeipa-devel] [PATCH] 1002 make revocation_reason required Message-ID: <4F7C9B7E.5080304@redhat.com> When revoking a certificate passing in an empty revocation reason caused an Internal Error. It already sets a default so making it required prevents empty values and it still operates the same way. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1002-revoke.patch Type: text/x-diff Size: 2053 bytes Desc: not available URL: From jdennis at redhat.com Wed Apr 4 19:45:30 2012 From: jdennis at redhat.com (John Dennis) Date: Wed, 04 Apr 2012 15:45:30 -0400 Subject: [Freeipa-devel] [PATCH 71] improve handling of ds instances during uninstall Message-ID: <4F7CA4DA.6060703@redhat.com> -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jdennis-0071-improve-handling-of-ds-instances-during-uninstall.patch Type: text/x-patch Size: 6886 bytes Desc: not available URL: From abokovoy at redhat.com Thu Apr 5 05:26:33 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Apr 2012 08:26:33 +0300 Subject: [Freeipa-devel] [PATCH 71] improve handling of ds instances during uninstall In-Reply-To: <4F7CA4DA.6060703@redhat.com> References: <4F7CA4DA.6060703@redhat.com> Message-ID: <20120405052633.GA2642@redhat.com> On Wed, 04 Apr 2012, John Dennis wrote: >Ticket #2502 > >* remove the "running" flag from backup_state in cainstance.py and > dsinstance.py because it does not provide the correct > information. In cainstance the running flag was never referenced > because restarting dirsrv instances occurs later in dsinstance. In > dsinstance when the running flag is set it incorrectly identifed the > PKI ds instance configured earlier by cainstance. The intent was to > determine if there were any ds instances other than those owned by > IPA which will need to be restarted upon uninstall. Clearly the PKI > ds instance does not qualify. We were generating a traceback when at > the conclusion of dsinstance.uninstall we tried to start the > remaining ds instances as indicated by the running flag, but there > were none to restart (because the running flag had been set as a > consequence of the PKI ds instance). > >* We only want to restart ds instances if there are other ds instances > besides those owned by IPA. We shouldn't be stopping all ds > instances either, but that's going to be covered by another > ticket. The fix for restarting other ds instances at the end of > uninstall is to check and see if there are other ds instances > remaining after we've removed ours, if so we restart them. Also it's > irrelevant if those ds instances were not present when we installed, > it only matters if they exist after we restore things during > uninstall. If they are present we have to start them back up because > we shut them down during uninstall. > >* Add new function get_ds_instances() which returns a list of existing > ds instances. > >* fixed error messages that incorrectly stated it "failed to restart" > a ds instance when it should be "failed to create". >--- >+def get_ds_instances(): >+ ''' >+ Return a sorted list of all 389ds instances. >+ >+ If the instance name ends with '.removed' it is ignored. This >+ matches 389ds behavior. >+ ''' >+ >+ dirsrv_instance_dir='/etc/dirsrv' >+ instance_prefix = 'slapd-' >+ >+ instances = [] >+ >+ for basename in os.listdir(dirsrv_instance_dir): >+ pathname = os.path.join(dirsrv_instance_dir, basename) >+ # Must be a directory >+ if os.path.isdir(pathname): >+ # Must start with prefix and not end with .removed >+ if basename.startswith(instance_prefix) and not basename.endswith('.removed'): >+ # Strip off prefix >+ instance = basename[len(instance_prefix):] >+ # Must be non-empty >+ if instance: >+ instances.append(basename[len(instance_prefix):]) You have already generated basename[len(instance_prefix):], may be it could be as simple as instances.append(instance) here? 
-- / Alexander Bokovoy From mkosek at redhat.com Thu Apr 5 06:51:43 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 05 Apr 2012 08:51:43 +0200 Subject: [Freeipa-devel] [PATCH] 1002 make revocation_reason required In-Reply-To: <4F7C9B7E.5080304@redhat.com> References: <4F7C9B7E.5080304@redhat.com> Message-ID: <1333608703.25058.0.camel@balmora.brq.redhat.com> On Wed, 2012-04-04 at 15:05 -0400, Rob Crittenden wrote: > When revoking a certificate passing in an empty revocation reason caused > an Internal Error. It already sets a default so making it required > prevents empty values and it still operates the same way. > > rob Pushed to master, ipa-2-2. Martin From jdennis at redhat.com Thu Apr 5 12:42:35 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 05 Apr 2012 08:42:35 -0400 Subject: [Freeipa-devel] [PATCH 71] improve handling of ds instances during uninstall In-Reply-To: <20120405052633.GA2642@redhat.com> References: <4F7CA4DA.6060703@redhat.com> <20120405052633.GA2642@redhat.com> Message-ID: <4F7D933B.606@redhat.com> On 04/05/2012 01:26 AM, Alexander Bokovoy wrote: > On Wed, 04 Apr 2012, John Dennis wrote: >> + # Strip off prefix >> + instance = basename[len(instance_prefix):] >> + # Must be non-empty >> + if instance: >> + instances.append(basename[len(instance_prefix):]) > You have already generated basename[len(instance_prefix):], may be it > could be as simple as > instances.append(instance) > here? > Good catch, yup that's what I meant to write, must have been a cut-n-paste mistake. Revised patch attached. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0071-1-improve-handling-of-ds-instances-during-uninstall.patch Type: text/x-patch Size: 6850 bytes Desc: not available URL: From jdennis at redhat.com Thu Apr 5 12:56:32 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 05 Apr 2012 08:56:32 -0400 Subject: [Freeipa-devel] [PATCH 69] Use indexed format specifiers in i18n strings In-Reply-To: <4F7C463E.4020205@redhat.com> References: <201203300136.q2U1a0E5030640@int-mx12.intmail.prod.int.phx2.redhat.com> <4F79A674.2060104@redhat.com> <4F7C463E.4020205@redhat.com> Message-ID: <4F7D9680.50709@redhat.com> On 04/04/2012 09:01 AM, Petr Viktorin wrote: > On 04/02/2012 03:15 PM, Rob Crittenden wrote: >> John Dennis wrote: >>> Translators need to reorder messages to suit the needs of the target >>> language. The conventional positional format specifiers (e.g. %s %d) >>> do not permit reordering because their order is tied to the ordering >>> of the arguments to the printf function. The fix is to use indexed >>> format specifiers. >> >> I guess this looks ok but all of these errors are of the format: string >> error, error number (and inconsistently, sometimes the reverse). > > Not all of them, e.g. > > - fprintf(stderr, _("Search for %s on rootdse failed with error %d"), > + fprintf(stderr, _("Search for %1$s on rootdse failed with error %2$d\n"), > root_attrs[0], ret); > > - fprintf(stderr, _("Failed to open keytab '%s': %s\n"), keytab, > + fprintf(stderr, _("Failed to open keytab '%1$s': %2$s\n"), keytab, > error_message(krberr)); > >> Do those really need to be re-orderable? >> > > You can never make too few assumptions about foreign languages. Most > likely at least some will need reordering. > Enforcing indexed specifiers everywhere means we don't have to worry > about individual cases, or change our strings when a new language is added. 
+1 But there is also another practical reason. The validation logic does not have artificial intelligence and cannot parse the semantic intent of the string. It only knows if there are multiple non-indexed specifiers. If we want to automatically validate strings (make lint) from another patch, we have to live with rigid application of the rules (adding exception logic to the validator would be pretty complex because unlike lint there is no way to tag the string that would get carried all the way thought the xgettext process and be visible to the validator). -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From mkosek at redhat.com Thu Apr 5 13:07:13 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 05 Apr 2012 15:07:13 +0200 Subject: [Freeipa-devel] [PATCH] 73 Check whether the default user group is POSIX when adding new user with --noprivate In-Reply-To: <4F7C7BDF.40009@redhat.com> References: <4F7AC9B8.3090604@redhat.com> <1333450949.23102.16.camel@balmora.brq.redhat.com> <1333451079.23102.17.camel@balmora.brq.redhat.com> <4F7C7BDF.40009@redhat.com> Message-ID: <1333631233.25058.10.camel@balmora.brq.redhat.com> On Wed, 2012-04-04 at 18:50 +0200, Jan Cholasta wrote: > On 3.4.2012 13:04, Martin Kosek wrote: > > On Tue, 2012-04-03 at 13:02 +0200, Martin Kosek wrote: > >> On Tue, 2012-04-03 at 11:58 +0200, Jan Cholasta wrote: > >>> https://fedorahosted.org/freeipa/ticket/2572 > >>> > >>> Honza > >>> > >> > >> NACK. > >> > >> This creates a regression: > >> > >> # ipa group-show foogroup > >> Group name: foogroup > >> Description: foo > >> GID: 358800017 > >> > >> # ipa user-add --first=Foo --last=Bar fbar5 --gidnumber=358800017 --noprivate > >> ------------------ > >> Added user "fbar5" > >> ------------------ > >> User login: fbar5 > >> First name: Foo > >> Last name: Bar > >> Full name: Foo Bar > >> Display name: Foo Bar > >> Initials: FB > >> Home directory: /home/fbar5 > >> GECOS field: Foo Bar > >> Login shell: /bin/sh > >> Kerberos principal: fbar5 at IDM.LAB.BOS.REDHAT.COM > >> UID: 358800019 > >> GID: 358800012 > >> Password: False > >> Member of groups: ipausers > >> Kerberos keys available: False > >> > >> # id fbar5 > >> uid=358800019(fbar5) gid=358800012(ipausers) groups=358800012(ipausers) > >> > >> Custom user group (GID) was overwritten. > >> > >> I think we also want a test case for this situation. > >> > >> Martin > >> > > > > ... and we also want to have the new error message(s) i18n-able. > > > > Martin > > > > Updated patch attached. > > Honza > ACK. Thanks for all the tests. Pushed to master, ipa-2-2. Martin From edewata at redhat.com Thu Apr 5 14:54:41 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Thu, 05 Apr 2012 09:54:41 -0500 Subject: [Freeipa-devel] [PATCH] 115 Reworked netgroup Web UI to allow setting user/host category In-Reply-To: <4F745995.2060704@redhat.com> References: <4F745995.2060704@redhat.com> Message-ID: <4F7DB231.70506@redhat.com> On 3/29/2012 7:46 AM, Petr Vobornik wrote: > This patch is changing netgroup web ui to look more like hbac or sudo > rule UI. This change allows to define and display user category, host > category and external host. > > The core of the change is changing member attributes (user, group, host, > hostgroup) to use rule_details_widget instead of separate association > facets. In host case it also allows to display and add external hosts. 
> > https://fedorahosted.org/freeipa/ticket/2578 > > Note: compare to other plugins (HBAC, Sudo) netgroup plugins doesn't > have member attrs in takes_param therefore labels for columns have to be > explicitly set. ACK. Just one thing, the label for user/host category says "User/host category the rule applies to". Netgroup is not a rule, so it might be better to say something like "User/host category of netgroup members" or "Member user/host category". This is an existing server issue. -- Endi S. Dewata From edewata at redhat.com Thu Apr 5 14:55:01 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Thu, 05 Apr 2012 09:55:01 -0500 Subject: [Freeipa-devel] [PATCH] 116 Fixed: permission attrs table didn't update its available options on load In-Reply-To: <4F7BF5BF.8000601@redhat.com> References: <4F7BF5BF.8000601@redhat.com> Message-ID: <4F7DB245.8060401@redhat.com> On 4/4/2012 2:18 AM, Petr Vobornik wrote: > It could lead to state where attributes from other object type were > displayed instead of the correct ones. > > https://fedorahosted.org/freeipa/ticket/2590 ACK. -- Endi S. Dewata From edewata at redhat.com Thu Apr 5 14:55:13 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Thu, 05 Apr 2012 09:55:13 -0500 Subject: [Freeipa-devel] [PATCH] 117 Added attrs field to permission for target=subtree In-Reply-To: <4F7BF607.8080105@redhat.com> References: <4F7BF607.8080105@redhat.com> Message-ID: <4F7DB251.5030509@redhat.com> On 4/4/2012 2:19 AM, Petr Vobornik wrote: > Permission form was missing attrs field for target=subtree. All other > target types have it. > > It uses multivalued text widget, same as filter, because we can't > predict the target type. > > https://fedorahosted.org/freeipa/ticket/2592 ACK. -- Endi S. Dewata From edewata at redhat.com Thu Apr 5 14:56:49 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Thu, 05 Apr 2012 09:56:49 -0500 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons In-Reply-To: <4F7C2425.6000606@redhat.com> References: <4F7C2425.6000606@redhat.com> Message-ID: <4F7DB2B1.9030206@redhat.com> On 4/4/2012 5:36 AM, Petr Vobornik wrote: > DNS forward policy fields were using mutually exclusive checkboxes. Such > behavior is unusual for users. > > Checkboxes were changed to radios with new option 'none/default' to set > empty value ''. > > https://fedorahosted.org/freeipa/ticket/2599 > > Second patch removes mutex option from checkboxes. Not sure if needed. Patch #118 might need some revision. According to DNS docs the default forward policy is "first": forward is only relevant in conjunction with a valid forwarders statement. If set to 'only' the server will only forward queries, if set to 'first' (default) it will send the queries to the forwarder and if not answered will attempt to answer the query. http://www.zytrax.com/books/dns/ch7/queries.html So there are actually only 2 possible policies: first and only. Ideally we should present 2 radio buttons: * Forward first * Forward only However, in IPA the attribute is optional, so we have 3 ways to define the policy: * Default * Forward first * Forward only This is OK and will match the CLI (I can ACK this), but people might be asking 'What is the default policy?' and I'm not sure we document it clearly. If we're sure IPA's default forwarding policy is equivalent to DNS default forwarding policy (i.e. 
first), the other alternative is to just present 2 options like this: * Forward first (default) * Forward only When loading the data, if there's no policy defined the UI will select 'first' by default. When saving the data, the UI can either store the 'first' value or normalize it into empty value. Another thing, let's not use 'none' because it could be interpreted as 'do not forward'. Also, it would be better to use a long description like "Forward first" instead of just "first". In the DNS configuration it's meant to be read like the longer description: forward first; forward only; In the current UI we have "Forward policy" label which doesn't go well with "first" or "only". Patch #119 is ACKed, but feel free to keep the functionality if you think it could be useful someday. -- Endi S. Dewata From rcritten at redhat.com Thu Apr 5 15:04:04 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 11:04:04 -0400 Subject: [Freeipa-devel] [PATCH] 1003 return consistent value in netgroup triple Message-ID: <4F7DB464.4090809@redhat.com> When constructing netgroup triples with hostcat or usercat set to all we weren't setting the user/host part of the triple correctly. The first entry would have '' as the host/user value as appropriate but all subsequent entries would have -. They should all be empty. This patch uses a new feature of slapi-nis-0.40 so we can use an expression in a pad. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1003-netgroup.patch Type: text/x-diff Size: 8369 bytes Desc: not available URL: From abokovoy at redhat.com Thu Apr 5 15:13:35 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Apr 2012 18:13:35 +0300 Subject: [Freeipa-devel] [PATCH 71] improve handling of ds instances during uninstall In-Reply-To: <4F7D933B.606@redhat.com> References: <4F7CA4DA.6060703@redhat.com> <20120405052633.GA2642@redhat.com> <4F7D933B.606@redhat.com> Message-ID: <20120405151335.GB2642@redhat.com> On Thu, 05 Apr 2012, John Dennis wrote: > On 04/05/2012 01:26 AM, Alexander Bokovoy wrote: >> On Wed, 04 Apr 2012, John Dennis wrote: >>> + # Strip off prefix >>> + instance = basename[len(instance_prefix):] >>> + # Must be non-empty >>> + if instance: >>> + instances.append(basename[len(instance_prefix):]) >> You have already generated basename[len(instance_prefix):], may be it >> could be as simple as >> instances.append(instance) >> here? >> > > Good catch, yup that's what I meant to write, must have been a > cut-n-paste mistake. > > Revised patch attached. ACK now. -- / Alexander Bokovoy From rcritten at redhat.com Thu Apr 5 15:30:34 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 11:30:34 -0400 Subject: [Freeipa-devel] [PATCH] 1004 add missing comma to list of services Message-ID: <4F7DBA9A.10108@redhat.com> The list of services that cannot be disabled was missing a comma so some services could still be disabled. This is a one-liner, just want a sanity check. rob -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-rcrit-1004-disable.patch Type: text/x-diff Size: 1055 bytes Desc: not available URL: From pvoborni at redhat.com Thu Apr 5 15:58:42 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 05 Apr 2012 17:58:42 +0200 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons In-Reply-To: <4F7DB2B1.9030206@redhat.com> References: <4F7C2425.6000606@redhat.com> <4F7DB2B1.9030206@redhat.com> Message-ID: <4F7DC132.70008@redhat.com> Revised patch 118 attached. I used: * Forward first * Forward only and set 'default_value' to 'first'. So there would be always some value checked, which indicates what is actually used. There is a little issue with undo button if policy is not set '' because default_value !== ''. I did this kinda in the hurry, so I hope I didn't missed anything crucial. Will be back on Tuesday. On 04/05/2012 04:56 PM, Endi Sukma Dewata wrote: > On 4/4/2012 5:36 AM, Petr Vobornik wrote: >> DNS forward policy fields were using mutually exclusive checkboxes. Such >> behavior is unusual for users. >> >> Checkboxes were changed to radios with new option 'none/default' to set >> empty value ''. >> >> https://fedorahosted.org/freeipa/ticket/2599 >> >> Second patch removes mutex option from checkboxes. Not sure if needed. > > Patch #118 might need some revision. > > According to DNS docs the default forward policy is "first": > > forward is only relevant in conjunction with a valid forwarders > statement. If set to 'only' the server will only forward queries, > if set to 'first' (default) it will send the queries to the forwarder > and if not answered will attempt to answer the query. > > http://www.zytrax.com/books/dns/ch7/queries.html > > So there are actually only 2 possible policies: first and only. Ideally > we should present 2 radio buttons: > * Forward first > * Forward only > > However, in IPA the attribute is optional, so we have 3 ways to define > the policy: > * Default > * Forward first > * Forward only > > This is OK and will match the CLI (I can ACK this), but people might be > asking 'What is the default policy?' and I'm not sure we document it > clearly. > > If we're sure IPA's default forwarding policy is equivalent to DNS > default forwarding policy (i.e. first), the other alternative is to just > present 2 options like this: > * Forward first (default) > * Forward only > > When loading the data, if there's no policy defined the UI will select > 'first' by default. When saving the data, the UI can either store the > 'first' value or normalize it into empty value. > > Another thing, let's not use 'none' because it could be interpreted as > 'do not forward'. > > Also, it would be better to use a long description like "Forward first" > instead of just "first". In the DNS configuration it's meant to be read > like the longer description: > > forward first; > forward only; > > In the current UI we have "Forward policy" label which doesn't go well > with "first" or "only". > > Patch #119 is ACKed, but feel free to keep the functionality if you > think it could be useful someday. > -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pvoborni-0118-1-DNS-forward-policy-checkboxes-changed-to-radio-butto.patch Type: text/x-patch Size: 5105 bytes Desc: not available URL: From ohamada at redhat.com Thu Apr 5 16:10:56 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Thu, 05 Apr 2012 18:10:56 +0200 Subject: [Freeipa-devel] [PATCH] 15 Confusing default user groups In-Reply-To: <4F7198D3.9090700@redhat.com> References: <4F565247.2050105@redhat.com> <1332174357.24427.47.camel@balmora.brq.redhat.com> <4F6B5622.2090409@redhat.com> <4F70D131.8090909@redhat.com> <4F7198D3.9090700@redhat.com> Message-ID: <4F7DC410.1010206@redhat.com> On 03/27/2012 12:39 PM, Petr Vobornik wrote: > On 03/26/2012 10:27 PM, Rob Crittenden wrote: >> Ondrej Hamada wrote: >>> On 03/19/2012 05:25 PM, Martin Kosek wrote: >>>> On Tue, 2012-03-06 at 19:07 +0100, Ondrej Hamada wrote: >>>>> https://fedorahosted.org/freeipa/ticket/2354 >>>>> >>>>> There was added '(fallback)' string in the automember plugin labels >>>>> referring to automember default groups to point out, that the >>>>> users are >>>>> already members of default group specified in IPA config, thus the >>>>> default group specified in automember will be additional one - a >>>>> fallback group. >>>> Hm, looks ok. Though I would also like some second opinion for this >>>> change. I think naming it simply "Fallback Group" would be better, but >>>> we cannot change the API at this stage and rename the parameter. So >>>> this >>>> change is a good compromise so far, IMO. >>>> >>>> I found few issues though: >>>> >>>> 1) The label of default group parameter in automember has not been >>>> updated, i.e. the following command still shows the old name: >>>> >>>> # ipa automember-default-group-show --type=group >>>> Default Group: >>>> cn=editors,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >>>> >>>> 2) I think we could fix few issues in docstrings since we touch these >>>> strings anyway: >>>> >>>> a) Typo in doc >>>> >>>> - label=_('Default Group'), >>>> - doc=_('Default group for entires to land'), >>>> + label=_('Default (fallback) Group'), >>>> + doc=_('Default (fallback) group for entires to land'), >>>> >>>> b) Non-translatable strings: >>>> >>>> - entry_attrs['automemberdefaultgroup'] = u'No default group >>>> set' >>>> + entry_attrs['automemberdefaultgroup'] = u'No default >>>> (fallback) group set' >>>> >>>> >>>> - entry_attrs['automemberdefaultgroup'] = u'No default group >>>> set' >>>> + entry_attrs['automemberdefaultgroup'] = u'No default >>>> (fallback) group set' >>>> >>>> Martin >>>> >>> fixed >>> >>> Ondra >> >> Petr, related to handling in the UI, do you look for the string "No >> default group set' or just look for a string that isn't a dn? >> >> rob > > We are checking if the string looks like dn - if it contains 'cn='. If > not, we consider it as an error message. > Fixed issues with json serialization -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-ohamada-15-3-Confusing-default-user-groups.patch Type: text/x-patch Size: 7122 bytes Desc: not available URL: From jcholast at redhat.com Thu Apr 5 17:35:21 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 05 Apr 2012 19:35:21 +0200 Subject: [Freeipa-devel] [PATCH] 1003 return consistent value in netgroup triple In-Reply-To: <4F7DB464.4090809@redhat.com> References: <4F7DB464.4090809@redhat.com> Message-ID: <4F7DD7D9.7070001@redhat.com> On 5.4.2012 17:04, Rob Crittenden wrote: > When constructing netgroup triples with hostcat or usercat set to all we > weren't setting the user/host part of the triple correctly. The first > entry would have '' as the host/user value as appropriate but all > subsequent entries would have -. They should all be empty. > > This patch uses a new feature of slapi-nis-0.40 so we can use an > expression in a pad. > > rob > NACK, this does not work for new installs. Did you forget to include install/share/*.ldif files in the commit? Honza -- Jan Cholasta From rcritten at redhat.com Thu Apr 5 18:55:32 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 14:55:32 -0400 Subject: [Freeipa-devel] [PATCH] 1003 return consistent value in netgroup triple In-Reply-To: <4F7DD7D9.7070001@redhat.com> References: <4F7DB464.4090809@redhat.com> <4F7DD7D9.7070001@redhat.com> Message-ID: <4F7DEAA4.4090706@redhat.com> Jan Cholasta wrote: > On 5.4.2012 17:04, Rob Crittenden wrote: >> When constructing netgroup triples with hostcat or usercat set to all we >> weren't setting the user/host part of the triple correctly. The first >> entry would have '' as the host/user value as appropriate but all >> subsequent entries would have -. They should all be empty. >> >> This patch uses a new feature of slapi-nis-0.40 so we can use an >> expression in a pad. >> >> rob >> > > NACK, this does not work for new installs. Did you forget to include > install/share/*.ldif files in the commit? > > Honza > Yes, forgot to include nis.uldif. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1003-2-netgroup.patch Type: text/x-diff Size: 10988 bytes Desc: not available URL: From rcritten at redhat.com Thu Apr 5 19:58:54 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 15:58:54 -0400 Subject: [Freeipa-devel] [PATCH 71] improve handling of ds instances during uninstall In-Reply-To: <20120405151335.GB2642@redhat.com> References: <4F7CA4DA.6060703@redhat.com> <20120405052633.GA2642@redhat.com> <4F7D933B.606@redhat.com> <20120405151335.GB2642@redhat.com> Message-ID: <4F7DF97E.4030708@redhat.com> Alexander Bokovoy wrote: > On Thu, 05 Apr 2012, John Dennis wrote: >> On 04/05/2012 01:26 AM, Alexander Bokovoy wrote: >>> On Wed, 04 Apr 2012, John Dennis wrote: >>>> + # Strip off prefix >>>> + instance = basename[len(instance_prefix):] >>>> + # Must be non-empty >>>> + if instance: >>>> + instances.append(basename[len(instance_prefix):]) >>> You have already generated basename[len(instance_prefix):], may be it >>> could be as simple as >>> instances.append(instance) >>> here? >>> >> >> Good catch, yup that's what I meant to write, must have been a >> cut-n-paste mistake. >> >> Revised patch attached. > ACK now. 
> Rebased and pushed to master and ipa-2-2 rob From mkosek at redhat.com Thu Apr 5 20:20:13 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 05 Apr 2012 22:20:13 +0200 Subject: [Freeipa-devel] [PATCH] 1004 add missing comma to list of services In-Reply-To: <4F7DBA9A.10108@redhat.com> References: <4F7DBA9A.10108@redhat.com> Message-ID: <1333657213.2559.1.camel@priserak> On Thu, 2012-04-05 at 11:30 -0400, Rob Crittenden wrote: > The list of services that cannot be disabled was missing a comma so some > services could still be disabled. > > This is a one-liner, just want a sanity check. > > rob ACK. Unfortunately I did not spot this during review. Pushed to master, ipa-2-2. Martin From rcritten at redhat.com Thu Apr 5 20:47:55 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 16:47:55 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F7B4F64.7070703@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> <4F7B4F64.7070703@redhat.com> Message-ID: <4F7E04FB.4070200@redhat.com> Rob Crittenden wrote: > Martin Kosek wrote: >> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: >>> Rob Crittenden wrote: >>>> Martin Kosek wrote: >>>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >>>>>> Rob Crittenden wrote: >>>>>>> Martin Kosek wrote: >>>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>>>>>> Certmonger will currently automatically renew server >>>>>>>>> certificates but >>>>>>>>> doesn't restart the services so you can still end up with expired >>>>>>>>> certificates if you services never restart. >>>>>>>>> >>>>>>>>> This patch registers are restart command with certmonger so the >>>>>>>>> IPA >>>>>>>>> services will automatically be restarted to get the updated cert. >>>>>>>>> >>>>>>>>> Easy to test. Install IPA then resubmit the current server >>>>>>>>> certs and >>>>>>>>> watch the services restart: >>>>>>>>> >>>>>>>>> # ipa-getcert list >>>>>>>>> >>>>>>>>> Find the ID for either your dirsrv or httpd instance >>>>>>>>> >>>>>>>>> # ipa-getcert resubmit -i >>>>>>>>> >>>>>>>>> Watch /var/log/httpd/error_log or >>>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors >>>>>>>>> to see the service restart. >>>>>>>>> >>>>>>>>> rob >>>>>>>> >>>>>>>> What about current instances - can we/do we want to update >>>>>>>> certmonger >>>>>>>> tracking so that their instances are restarted as well? >>>>>>>> >>>>>>>> Anyway, I found few issues SELinux issues with the patch: >>>>>>>> >>>>>>>> 1) # rpm -Uvh freeipa-* >>>>>>>> Preparing... 
########################################### [100%] >>>>>>>> 1:freeipa-python ########################################### [ 20%] >>>>>>>> 2:freeipa-client ########################################### [ 40%] >>>>>>>> 3:freeipa-admintools ########################################### [ >>>>>>>> 60%] >>>>>>>> 4:freeipa-server ########################################### [ 80%] >>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>> `/usr/lib64/ipa/certmonger' to >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>> argument >>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>> argument >>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>> argument >>>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>>>>>> scriptlet failed, exit status 1 >>>>>>>> 5:freeipa-server-selinux >>>>>>>> ########################################### >>>>>>>> [100%] >>>>>>>> >>>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux >>>>>>>> policy: >>>>>>>> >>>>>>>> selinux-policy-3.10.0-80.fc16.noarch >>>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>>>>>> >>>>>>>> If we need a higher SELinux version, we should bump the required >>>>>>>> package >>>>>>>> version spec file. >>>>>>> >>>>>>> Yeah, waiting on it to be backported. >>>>>>> >>>>>>>> >>>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>>>>>> restorecon or system relabel occurs. I think we should make it >>>>>>>> persistent and enforce this type in our SELinux policy and >>>>>>>> rather call >>>>>>>> restorecon instead of chcon >>>>>>> >>>>>>> That's a good idea, why didn't I think of that :-( >>>>>> >>>>>> Ah, now I remember, it will be handled by selinux-policy. I would >>>>>> have >>>>>> used restorecon here but since the policy isn't there yet this seemed >>>>>> like a good idea. >>>>>> >>>>>> I'm trying to find out the status of this new policy, it may only >>>>>> make >>>>>> it into F-17. >>>>>> >>>>>> rob >>>>> >>>>> Ok. But if this policy does not go in F-16 and if we want this fix in >>>>> F16 release too, I guess we would have to implement both approaches in >>>>> our spec file: >>>>> >>>>> 1) When on F16, include SELinux policy for restart scripts + run >>>>> restorecon >>>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) >>>>> >>>>> Martin >>>>> >>>> >>>> Won't work without updated selinux-policy. Without the permission for >>>> certmonger to execute the commands things will still fail (just in >>>> really non-obvious and far in the future ways). >>>> >>>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. >>>> >>>> rob >>> >>> Updated patch which works on F-17. >>> >>> rob >> >> What about F-16? The restart scripts won't work with enabled enforcing >> and will raise AVCs. Maybe we really need to deliver our own SELinux >> policy allowing it on F-16. > > Right, I don't see this working on F-16. I don't really want to carry > this type of policy. It goes beyond marking a few files as certmonger_t, > it is going to let certmonger execute arbitrary scripts. This is best > left to the SELinux team who understand the consequences better. 
> >> >> I also found an issue with the restart scripts: >> >> 1) restart_dirsrv: this won't work with systemd: >> >> # /sbin/service dirsrv restart >> Redirecting to /bin/systemctl restart dirsrv.service >> Failed to issue method call: Unit dirsrv.service failed to load: No such >> file or directory. See system logs and 'systemctl status dirsrv.service' >> for details. > > Wouldn't work so hot for sysV either as we'd be restarting all > instances. I'll take a look. > >> We would need to pass an instance of IPA dirsrv for this to work. >> >> 2) restart_httpd: >> Is reload enough for httpd to pull a new certificate? Don't we need a >> full restart? If reload is enough, I think the command should be named >> reload_httpd > > Yes, it causes the modules to be reloaded which will reload the NSS > database, that's all we need. I named it this way for consistency. I can > rename it, though I doubt it would cause any confusion either way. > > rob revised patch. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-998-3-certmonger.patch Type: text/x-diff Size: 11075 bytes Desc: not available URL: From rcritten at redhat.com Thu Apr 5 21:12:59 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 17:12:59 -0400 Subject: [Freeipa-devel] [PATCH] 15 Confusing default user groups In-Reply-To: <4F7DC410.1010206@redhat.com> References: <4F565247.2050105@redhat.com> <1332174357.24427.47.camel@balmora.brq.redhat.com> <4F6B5622.2090409@redhat.com> <4F70D131.8090909@redhat.com> <4F7198D3.9090700@redhat.com> <4F7DC410.1010206@redhat.com> Message-ID: <4F7E0ADB.6000403@redhat.com> Ondrej Hamada wrote: > On 03/27/2012 12:39 PM, Petr Vobornik wrote: >> On 03/26/2012 10:27 PM, Rob Crittenden wrote: >>> Ondrej Hamada wrote: >>>> On 03/19/2012 05:25 PM, Martin Kosek wrote: >>>>> On Tue, 2012-03-06 at 19:07 +0100, Ondrej Hamada wrote: >>>>>> https://fedorahosted.org/freeipa/ticket/2354 >>>>>> >>>>>> There was added '(fallback)' string in the automember plugin labels >>>>>> referring to automember default groups to point out, that the >>>>>> users are >>>>>> already members of default group specified in IPA config, thus the >>>>>> default group specified in automember will be additional one - a >>>>>> fallback group. >>>>> Hm, looks ok. Though I would also like some second opinion for this >>>>> change. I think naming it simply "Fallback Group" would be better, but >>>>> we cannot change the API at this stage and rename the parameter. So >>>>> this >>>>> change is a good compromise so far, IMO. >>>>> >>>>> I found few issues though: >>>>> >>>>> 1) The label of default group parameter in automember has not been >>>>> updated, i.e. 
the following command still shows the old name: >>>>> >>>>> # ipa automember-default-group-show --type=group >>>>> Default Group: >>>>> cn=editors,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >>>>> >>>>> 2) I think we could fix few issues in docstrings since we touch these >>>>> strings anyway: >>>>> >>>>> a) Typo in doc >>>>> >>>>> - label=_('Default Group'), >>>>> - doc=_('Default group for entires to land'), >>>>> + label=_('Default (fallback) Group'), >>>>> + doc=_('Default (fallback) group for entires to land'), >>>>> >>>>> b) Non-translatable strings: >>>>> >>>>> - entry_attrs['automemberdefaultgroup'] = u'No default group >>>>> set' >>>>> + entry_attrs['automemberdefaultgroup'] = u'No default >>>>> (fallback) group set' >>>>> >>>>> >>>>> - entry_attrs['automemberdefaultgroup'] = u'No default group >>>>> set' >>>>> + entry_attrs['automemberdefaultgroup'] = u'No default >>>>> (fallback) group set' >>>>> >>>>> Martin >>>>> >>>> fixed >>>> >>>> Ondra >>> >>> Petr, related to handling in the UI, do you look for the string "No >>> default group set' or just look for a string that isn't a dn? >>> >>> rob >> >> We are checking if the string looks like dn - if it contains 'cn='. If >> not, we consider it as an error message. >> > Fixed issues with json serialization > ACK, pushed to master and ipa-2-2 rob From rcritten at redhat.com Thu Apr 5 21:38:52 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 05 Apr 2012 17:38:52 -0400 Subject: [Freeipa-devel] [PATCH] 74 Check configured maximum user login length on user rename In-Reply-To: <4F7B0C43.3000201@redhat.com> References: <4F7B0C43.3000201@redhat.com> Message-ID: <4F7E10EC.1020808@redhat.com> Jan Cholasta wrote: > https://fedorahosted.org/freeipa/ticket/2587 > > Honza This looks ok, it would be nice to have a unit test. rob From jdennis at redhat.com Thu Apr 5 21:55:58 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 05 Apr 2012 17:55:58 -0400 Subject: [Freeipa-devel] parameter validation Message-ID: <4F7E14EE.6060902@redhat.com> Jason was exceptional at documenting everything he wrote, especially in the framework. However the correct use of validation in the Param class is not discussed at all. I tried to figure it out by looking at what we currently do in our code base. Given the inconsistencies I presume I'm not the only one who wasn't quite sure how to do this right (or rather what the intent was). Below are some issues/questions, if you can shed light on any them it would be helpful and I'll try to summarize and get best practice documented. Every param object is passed a name and then zero or more validation functions as positional arguments. Each validation function has this signature: validate(ugettext, value) It's supposed to return None if there was no validation error. If there was a validation error it's supposed to return an error string. 1) Most of our validation functions do not follow the calling convention in parameter.py for a validator. Instead of returning either None or an error string they raise a ValidationError. Here is an example of how the code in parameter.py calls a validation function: for rule in self.all_rules: error = rule(ugettext, value) if error is not None: raise ValidationError( name=self.get_param_name(), value=value, index=index, error=error, rule=rule, ) But there are a number of places in the code where we raise a ValidationError and substitute a hardcoded name. 
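As a minimal side-by-side sketch of the two patterns (illustrative only; the validator names, the no-spaces rule and the import lines are made up for the example, not taken from the tree):

from ipalib import _
from ipalib.errors import ValidationError

def _validate_no_spaces(ugettext, value):
    # follows the documented convention: return an error string (or None)
    # and let parameter.py raise ValidationError with name/value/index/rule
    # filled in for us
    if ' ' in value:
        return _('must not contain spaces')
    return None

def _validate_no_spaces_wrong(ugettext, value):
    # by-passes the convention: raises directly with a hardcoded name, so
    # the value/index/rule context the framework would supply is lost
    if ' ' in value:
        raise ValidationError(name='somename',
                              error=_('must not contain spaces'))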
Directly raising a ValidationError by-passes the above and prevents correctly filling in all the information. Speaking of all the information, see the next point. The direct use of ValidationError occurs in other places besides validation functions. There are places where that makes sense, perhaps the error only occurs in the presence of another option, or perhaps it's buried deep in some other code, but for validation functions that can validate without external context we shouldn't be raising ValidationError directly, but rather returning an error string, right? 2) Currently ValidationError only returns this string: format = _('invalid %(name)r: %(error)s') Note only the name and error keywords are used, omitted are value, index and rule. Shouldn't the ValidationError test for presence of the other keywords and return a different format string including those keywords if they are present? Seems to me we're dropping useful information which would be helpful to the user on the floor. 3) I'm confused as to why the validator signature has ugettext as a parameter. None of our code uses it. I suspect the intent was to use it to obtain the l10n translation for the error string. But as far as I can tell the i18n strings used in the validators are obtained via _(). # current usage, believed to be correct validate(ugettext, value) return _("Must not contain spaces") But if the validator used the ugettext parameter, it would have to be done in one of two fashions: # This does not work because the string is not marked for translation via _(), ugettext will never find the string. validate(ugettext, value) return ugettext("Must not contain spaces") # This is just silly, the _() returns the l10n translation (by calling # ugettext), which is then passed to ugettext a second time which does # nothing because it can't find the translation for a string already # translated, it's a no-op. validate(ugettext, value) return ugettext(_("Must not contain spaces")) Also, by passing only a pointer to the ugettext function the use of plural forms is prohibited because that requires a different function. But perhaps that's O.K. because the validator is only called on scalar values. Anyway, we don't use the ugettext parameter and I don't see the purpose. Do you? Detail: In parameter.py ugettext is called on class variables initialized with _(). This is correct. Why? Class variables are evaluated at compile time, the _() function will return the msgid unmodified. The string must be marked for translation with _(). Thus at runtime ugettext needs to be invoked again on the string. But this is a consequence of it being evaluated at compile time. None of our validation functions use i18n strings which are evaluated at compile time, thus there is no need to pass ugettext to them. If for some reason they did, they should call the proper version of ugettext from text.py, thus I still don't see the need or value of always passing ugettext as a parameter to the validator, do you? -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jdennis at redhat.com Fri Apr 6 01:26:55 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 05 Apr 2012 21:26:55 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command Message-ID: <4F7E465F.7050206@redhat.com> -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jdennis-0072-Validate-DN-RDN-parameters-for-migrate-command.patch Type: text/x-patch Size: 3078 bytes Desc: not available URL: From mkosek at redhat.com Fri Apr 6 08:06:27 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Apr 2012 10:06:27 +0200 Subject: [Freeipa-devel] parameter validation In-Reply-To: <4F7E14EE.6060902@redhat.com> References: <4F7E14EE.6060902@redhat.com> Message-ID: <1333699587.14740.13.camel@balmora.brq.redhat.com> On Thu, 2012-04-05 at 17:55 -0400, John Dennis wrote: > Jason was exceptional at documenting everything he wrote, especially in > the framework. However the correct use of validation in the Param class > is not discussed at all. I tried to figure it out by looking at what we > currently do in our code base. Given the inconsistencies I presume I'm > not the only one who wasn't quite sure how to do this right (or rather > what the intent was). Below are some issues/questions, if you can shed > light on any them it would be helpful and I'll try to summarize and get > best practice documented. > > Every param object is passed a name and then zero or more validation > functions as positional arguments. Each validation function has this > signature: > > validate(ugettext, value) Correct. > > It's supposed to return None if there was no validation error. If there > was a validation error it's supposed to return an error string. > > 1) Most of our validation functions do not follow the calling convention > in parameter.py for a validator. Instead of returning either None or an > error string they raise a ValidationError. > > Here is an example of how the code in parameter.py calls a validation > function: > > for rule in self.all_rules: > error = rule(ugettext, value) > if error is not None: > raise ValidationError( > name=self.get_param_name(), > value=value, > index=index, > error=error, > rule=rule, > ) > > But there are a number of places in the code where we raise a > ValidationError and substitute a hardcoded name. Directly raising a > ValidationError by-passes the above and prevents correctly filling in > all the information. Speaking of all the information see the next point. > > The direct use of ValidationError occurs in other places besides > validation functions. There are places where that makes sense, perhaps > the error only occurs in the presence of another option, or perhaps it's > buried deep in some other code, but for validation functions that can > validate without external context we shouldn't be raising > ValidationError directly, but rather returning an error string, right? I agree. If this is not happening in some validation functions, we should fix it. I think we should create a Ticket for that. > > 2) Currently ValidationError only returns this string: > > format = _('invalid %(name)r: %(error)s') > > Note only the name and error keywords are used, omitted are value, index > and rule. Shouldn't the ValidationError test for presence of the other > keywords and return a different format string including those keywords > if they are present? Seems to me we're dropping useful information which > would be helpful to the user on the floor. Probably. 
It is true that this way, one cannot know which value is when passing more values: # ipa dnsrecord-add idm.lab.bos.redhat.com foo --a-rec=10.0.0.1,10.0.0.2,3,10.0.0.4 ipa: ERROR: invalid 'ip_address': invalid IP address format # ipa dnsrecord-add idm.lab.bos.redhat.com foo --a-rec=10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4 Record name: foo A record: 10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4 > > 3) I'm confused as to why the validator signature has ugettext as a > parameter. None of our code uses it. I suspect the intent was to use it > to obtain the l10n translation for the error string. But as far as I can > tell the i18n strings used in the validators via _(). > > # current usage, believed to be correct > validate(ugettext, value) > return _("Must not contain spaces") > > But if the validator used the ugettext parameter, it would have to be > done in one of two fashions > > # This does not work because the string is not marked for translation > via _(), ugettext will never find the string. > validate(ugettext, value) > return ugettext("Must not contain spaces") > > # This is just silly, the _() returns the l10n translation (by calling > # ugettext), which is then passed to ugettext a second time which does > # nothing because it can't find the translation for a string already > # translated, it's a no-op. > validate(ugettext, value) > return ugettext(_("Must not contain spaces")) > > Also, by passing only a pointer to the ugettext function the use of > plural forms is prohibited because that requires a different function. > But perhaps that's O.K. because the validator is only called on scalar > values. > > Anyway, we don't use the ugettext parameter and I don't see the purpose. > Do you? Personally, I don't see the purpose either. I think we should revise the validator function parameters and check what should be removed/added. ugettext is a good candidate for a removal. Martin From mkosek at redhat.com Fri Apr 6 08:22:27 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Apr 2012 10:22:27 +0200 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <4F7E04FB.4070200@redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> <4F7B4F64.7070703@redhat.com> <4F7E04FB.4070200@redhat.com> Message-ID: <1333700547.14740.18.camel@balmora.brq.redhat.com> On Thu, 2012-04-05 at 16:47 -0400, Rob Crittenden wrote: > Rob Crittenden wrote: > > Martin Kosek wrote: > >> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: > >>> Rob Crittenden wrote: > >>>> Martin Kosek wrote: > >>>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: > >>>>>> Rob Crittenden wrote: > >>>>>>> Martin Kosek wrote: > >>>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > >>>>>>>>> Certmonger will currently automatically renew server > >>>>>>>>> certificates but > >>>>>>>>> doesn't restart the services so you can still end up with expired > >>>>>>>>> certificates if you services never restart. > >>>>>>>>> > >>>>>>>>> This patch registers are restart command with certmonger so the > >>>>>>>>> IPA > >>>>>>>>> services will automatically be restarted to get the updated cert. > >>>>>>>>> > >>>>>>>>> Easy to test. 
Install IPA then resubmit the current server > >>>>>>>>> certs and > >>>>>>>>> watch the services restart: > >>>>>>>>> > >>>>>>>>> # ipa-getcert list > >>>>>>>>> > >>>>>>>>> Find the ID for either your dirsrv or httpd instance > >>>>>>>>> > >>>>>>>>> # ipa-getcert resubmit -i > >>>>>>>>> > >>>>>>>>> Watch /var/log/httpd/error_log or > >>>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors > >>>>>>>>> to see the service restart. > >>>>>>>>> > >>>>>>>>> rob > >>>>>>>> > >>>>>>>> What about current instances - can we/do we want to update > >>>>>>>> certmonger > >>>>>>>> tracking so that their instances are restarted as well? > >>>>>>>> > >>>>>>>> Anyway, I found few issues SELinux issues with the patch: > >>>>>>>> > >>>>>>>> 1) # rpm -Uvh freeipa-* > >>>>>>>> Preparing... ########################################### [100%] > >>>>>>>> 1:freeipa-python ########################################### [ 20%] > >>>>>>>> 2:freeipa-client ########################################### [ 40%] > >>>>>>>> 3:freeipa-admintools ########################################### [ > >>>>>>>> 60%] > >>>>>>>> 4:freeipa-server ########################################### [ 80%] > >>>>>>>> /usr/bin/chcon: failed to change context of > >>>>>>>> `/usr/lib64/ipa/certmonger' to > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>>>>> argument > >>>>>>>> /usr/bin/chcon: failed to change context of > >>>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>>>>> argument > >>>>>>>> /usr/bin/chcon: failed to change context of > >>>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > >>>>>>>> argument > >>>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) > >>>>>>>> scriptlet failed, exit status 1 > >>>>>>>> 5:freeipa-server-selinux > >>>>>>>> ########################################### > >>>>>>>> [100%] > >>>>>>>> > >>>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux > >>>>>>>> policy: > >>>>>>>> > >>>>>>>> selinux-policy-3.10.0-80.fc16.noarch > >>>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch > >>>>>>>> > >>>>>>>> If we need a higher SELinux version, we should bump the required > >>>>>>>> package > >>>>>>>> version spec file. > >>>>>>> > >>>>>>> Yeah, waiting on it to be backported. > >>>>>>> > >>>>>>>> > >>>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until > >>>>>>>> restorecon or system relabel occurs. I think we should make it > >>>>>>>> persistent and enforce this type in our SELinux policy and > >>>>>>>> rather call > >>>>>>>> restorecon instead of chcon > >>>>>>> > >>>>>>> That's a good idea, why didn't I think of that :-( > >>>>>> > >>>>>> Ah, now I remember, it will be handled by selinux-policy. I would > >>>>>> have > >>>>>> used restorecon here but since the policy isn't there yet this seemed > >>>>>> like a good idea. > >>>>>> > >>>>>> I'm trying to find out the status of this new policy, it may only > >>>>>> make > >>>>>> it into F-17. > >>>>>> > >>>>>> rob > >>>>> > >>>>> Ok. 
But if this policy does not go in F-16 and if we want this fix in > >>>>> F16 release too, I guess we would have to implement both approaches in > >>>>> our spec file: > >>>>> > >>>>> 1) When on F16, include SELinux policy for restart scripts + run > >>>>> restorecon > >>>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) > >>>>> > >>>>> Martin > >>>>> > >>>> > >>>> Won't work without updated selinux-policy. Without the permission for > >>>> certmonger to execute the commands things will still fail (just in > >>>> really non-obvious and far in the future ways). > >>>> > >>>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. > >>>> > >>>> rob > >>> > >>> Updated patch which works on F-17. > >>> > >>> rob > >> > >> What about F-16? The restart scripts won't work with enabled enforcing > >> and will raise AVCs. Maybe we really need to deliver our own SELinux > >> policy allowing it on F-16. > > > > Right, I don't see this working on F-16. I don't really want to carry > > this type of policy. It goes beyond marking a few files as certmonger_t, > > it is going to let certmonger execute arbitrary scripts. This is best > > left to the SELinux team who understand the consequences better. > > > >> > >> I also found an issue with the restart scripts: > >> > >> 1) restart_dirsrv: this won't work with systemd: > >> > >> # /sbin/service dirsrv restart > >> Redirecting to /bin/systemctl restart dirsrv.service > >> Failed to issue method call: Unit dirsrv.service failed to load: No such > >> file or directory. See system logs and 'systemctl status dirsrv.service' > >> for details. > > > > Wouldn't work so hot for sysV either as we'd be restarting all > > instances. I'll take a look. > > > >> We would need to pass an instance of IPA dirsrv for this to work. > >> > >> 2) restart_httpd: > >> Is reload enough for httpd to pull a new certificate? Don't we need a > >> full restart? If reload is enough, I think the command should be named > >> reload_httpd > > > > Yes, it causes the modules to be reloaded which will reload the NSS > > database, that's all we need. I named it this way for consistency. I can > > rename it, though I doubt it would cause any confusion either way. > > > > rob > > revised patch. > > rob Thanks, this is better, dirsrv restart is now fixed. I still have few issues: 1) Wrong command for httpd certs (should be reload_httpd): - db.track_server_cert(nickname, self.principal, db.passwd_fname) + db.track_server_cert(nickname, self.principal, db.passwd_fname, 'restart_httpd') - db.track_server_cert("Server-Cert", self.principal, db.passwd_fname) + db.track_server_cert("Server-Cert", self.principal, db.passwd_fname, 'restart_httpd') 2) What about current certmonger monitored certs? We can use the command Nalin suggested to modify existing monitored certs to set up a restart command: # ipa-getcert list ... 
Request ID '20120404083924': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM expires: 2014-04-05 08:39:24 UTC eku: id-kp-serverAuth,id-kp-clientAuth command: track: yes auto-renew: yes # ipa-getcert start-tracking -i 20120404083924 -C /usr/lib64/ipa/certmonger/reload_httpd Request "20120404083924" modified. # ipa-getcert list Request ID '20120404083924': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM expires: 2014-04-05 08:39:24 UTC eku: id-kp-serverAuth,id-kp-clientAuth command: /usr/lib64/ipa/certmonger/reload_httpd track: yes auto-renew: yes 3) The new dirsrv service restart command does not work either with systemd: # service dirsrv restart IDM-LAB-BOS-REDHAT-COM I think the approach here would be to take advantage of our system service management abstraction that ipactl uses. It will then work well with SysV, systemd or any other future configurations or ports to other Linux distributions. Example of the script which restarts the required instance: # cat /tmp/restart_dirsrv #!/usr/bin/python import sys from ipapython import services as ipaservices try: instance = sys.argv[1] except IndexError: instance = "" dirsrv = ipaservices.knownservices.dirsrv dirsrv.restart(instance) # /tmp/restart_dirsrv IDM-LAB-BOS-REDHAT-COM Martin From mkosek at redhat.com Fri Apr 6 08:40:42 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Apr 2012 10:40:42 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F7E465F.7050206@redhat.com> References: <4F7E465F.7050206@redhat.com> Message-ID: <1333701642.14740.27.camel@balmora.brq.redhat.com> On Thu, 2012-04-05 at 21:26 -0400, John Dennis wrote: > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel 1) We still crash when the parameter is empty. We may want to make it required (the same fix Rob did for cert rejection reason): # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com --with-compat --base-dn="dc=greyoak,dc=com" --user-container= ipa: ERROR: cannot connect to u'http://vm-022.idm.lab.bos.redhat.com/ipa/xml': Internal Server Error 2) Do you think it would make sense to create a special Param for DN? Its quite general type and I bet there are other Params that could use DN instead of Str. It could look like that: DN('binddn?', cli_name='bind_dn', label=_('Bind DN'), default=u'cn=directory manager', autofill=True, ), DN('usercontainer?', rdn=True, <<<< can be RDN, not DN cli_name='user_container', label=_('User container'), doc=_('RDN of container for users in DS relative to base DN'), default=u'ou=people', autofill=True, ), Then, we wouldn't need to import special validators from ipalib.util whenever DN parameter is used. 
3) We should not restrict users from passing a user/group container with more than one RDN: # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com --with-compat --base-dn="dc=greyoak,dc=com" --user-container ou=Admins,ou=People ipa: ERROR: invalid 'user_container': multiple RDN's specified by "ou=Admins,ou=People" Martin From pviktori at redhat.com Fri Apr 6 11:29:09 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 06 Apr 2012 13:29:09 +0200 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names Message-ID: <4F7ED385.1010108@redhat.com> https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add throws internal server error when name contains '<', '>' or other special characters. The problem is, of course, proper escaping; not only in DNs but also in ACIs. Right now we don't really do either. This patch is just a simple workaround: disallow anything except known-good characters. It's just names, so no functionality is lost. All tickets for April are now taken, so unless a new one comes my way, I'll take a dive into the code and fix it properly. This could take some time and would mean somewhat larger changes. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0034-Limit-permission-and-selfservice-names-to-alphanumer.patch Type: text/x-patch Size: 4933 bytes Desc: not available URL: From jcholast at redhat.com Fri Apr 6 12:12:05 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 06 Apr 2012 14:12:05 +0200 Subject: [Freeipa-devel] [PATCH] 1003 return consistent value in netgroup triple In-Reply-To: <4F7DEAA4.4090706@redhat.com> References: <4F7DB464.4090809@redhat.com> <4F7DD7D9.7070001@redhat.com> <4F7DEAA4.4090706@redhat.com> Message-ID: <4F7EDD95.2020105@redhat.com> On 5.4.2012 20:55, Rob Crittenden wrote: > Jan Cholasta wrote: >> On 5.4.2012 17:04, Rob Crittenden wrote: >>> When constructing netgroup triples with hostcat or usercat set to all we >>> weren't setting the user/host part of the triple correctly. The first >>> entry would have '' as the host/user value as appropriate but all >>> subsequent entries would have -. They should all be empty. >>> >>> This patch uses a new feature of slapi-nis-0.40 so we can use an >>> expression in a pad. >>> >>> rob >>> >> >> NACK, this does not work for new installs. Did you forget to include >> install/share/*.ldif files in the commit? >> >> Honza >> > > Yes, forgot to include nis.uldif. > > rob ACK. Honza -- Jan Cholasta From jcholast at redhat.com Fri Apr 6 12:54:18 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 06 Apr 2012 14:54:18 +0200 Subject: [Freeipa-devel] [PATCH] 74 Check configured maximum user login length on user rename In-Reply-To: <4F7E10EC.1020808@redhat.com> References: <4F7B0C43.3000201@redhat.com> <4F7E10EC.1020808@redhat.com> Message-ID: <4F7EE77A.10703@redhat.com> On 5.4.2012 23:38, Rob Crittenden wrote: > Jan Cholasta wrote: >> https://fedorahosted.org/freeipa/ticket/2587 >> >> Honza > > This looks ok, it would be nice to have a unit test. > > rob Test added. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-74.1-user-rename-length-check.patch Type: text/x-patch Size: 2156 bytes Desc: not available URL: From jdennis at redhat.com Fri Apr 6 14:11:22 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 06 Apr 2012 10:11:22 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <1333701642.14740.27.camel@balmora.brq.redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> Message-ID: <4F7EF98A.5000804@redhat.com> On 04/06/2012 04:40 AM, Martin Kosek wrote: > 1) We still crash when the parameter is empty. We may want to make it > required (the same fix Rob did for cert rejection reason): > > # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com > --with-compat --base-dn="dc=greyoak,dc=com" --user-container= > ipa: ERROR: cannot connect to > u'http://vm-022.idm.lab.bos.redhat.com/ipa/xml': Internal Server Error Good point, will fix. > 2) Do you think it would make sense to create a special Param for DN? > Its quite general type and I bet there are other Params that could use > DN instead of Str. It could look like that: > DN('usercontainer?', > rdn=True,<<<< can be RDN, not DN Yes, I considered introducing a new DN parameter type as well. I think this is a good approach and will have payoff down the road. I will make that change. However I'm inclined to introduce both a DN parameter and an RDN parameter; they really are different entities. If you use a flag to indicate you require an RDN then you lose the type of the parameter: it could be either, and it really should be one or the other, and knowing which it is has value. > 3) We should not restrict users from passing a user/group container with > more than one RDN: Yeah, I wasn't too sure about that. The parameter distinctly called for an RDN, but it seemed to me it should support any container below the base, which would make it a DN, not an RDN. FWIW, a DN is an ordered sequence of 1 or more RDN's. DN's do not have to be "absolute"; they can be any subset of a "path sequence". A DN with exactly one RDN is equivalent to an RDN (a special case). I will change the parameter to be a DN since that is what was the original intent. All good suggestions, a revised patch will follow shortly. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From ohamada at redhat.com Fri Apr 6 15:15:12 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Fri, 06 Apr 2012 17:15:12 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F7C73FA.3090809@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F7C73FA.3090809@redhat.com> Message-ID: <4F7F0880.8020601@redhat.com> On 04/04/2012 06:16 PM, Ondrej Hamada wrote: > On 04/04/2012 03:02 PM, Simo Sorce wrote: >> On Tue, 2012-04-03 at 18:45 +0200, Ondrej Hamada wrote: >>> On 03/13/2012 01:13 AM, Dmitri Pal wrote: >>>> On 03/12/2012 06:10 PM, Simo Sorce wrote: >>>>> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: >>>>>> On 03/12/2012 04:16 PM, Simo Sorce wrote: >>>>>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: >>>>>>>> USER'S operations when connection is OK: >>>>>>>> ------------------------------------------------------- >>>>>>>> read data -> local >>>>>>>> write data -> forwarding to master >>>>>>>> authentication: >>>>>>>> -credentials cached -- authenticate against credentials in >>>>>>>> local cache >>>>>>>> -on failure: log failure locally, >>>>>>>> update >>>>>>>> data >>>>>>>> about failures only on lock-down of account >>>>>>>> -credentials not cached -- forward request to master, on success >>>>>>>> cache >>>>>>>> the credentials >>>>>>>> >>>>>>> This scheme doesn't work with Kerberos. >>>>>>> Either you have a copy of the user's keys locally or you don't, >>>>>>> there is >>>>>>> nothing you can really cache if you don't. >>>>>>> >>>>>>> Simo. >>>>>>> >>>>>> Yes this is what we are talking about here - the cache would have to >>>>>> contain user Kerberos key but there should be some expiration on the >>>>>> cache so that fetched and stored keys periodically cleaned >>>>>> following the >>>>>> policy an admin has defined. >>>>> We would need a mechanism to transfer Kerberos keys, but that >>>>> would not >>>>> be sufficient, you'd have to give read-only servers also the realm >>>>> krbtgt in order to be able to do anything with those keys. >>>>> >>>>> The way MS solves hits (I think) is by giving a special RODC >>>>> krbtgt to >>>>> each RODC, and then replicating all RODC krbtgt's with full domain >>>>> controllers. Full domain controllers have logic to use RODC's krbtgt >>>>> keys instead of the normal krbtgt to perform operations when user's >>>>> krbtgt are presented to a different server. This is a lot of work and >>>>> changes in the KDC, not something we can implement easily. >>>>> >>>>> As a first implementation I would restrict read-only replicas to >>>>> not do >>>>> Kerberos at all, only LDAP for all the lookup stuff necessary. to >>>>> add a >>>>> RO KDC we will need to plan a lot of changes in the KDC. 
>>>>> >>>>> We will also need intelligent partial replication where the rules >>>>> about >>>>> which object (and which attributes in the object) need/can be >>>>> replicated >>>>> are established based on some grouping+filter mechanism. This also >>>>> is a >>>>> pretty important change to 389ds. >>>>> >>>>> Simo. >>>>> >>>> I agree. I am just trying to structure the discussion a bit so that >>>> all >>>> what you are saying can be captured in the design document and then we >>>> can pick a subset of what Ondrej will actually implement. So let us >>>> capture all the complexity and then do a POC for just LDAP part. >>>> >>> Sorry for inactivity, I was struggling with a lot of school stuff. >>> >>> I've summed up the main goals, do you agree on them or should I >>> add/remove any? >>> >>> >>> GOALS >>> =========================================== >>> Create Hub and Consumer types of replica with following features: >>> >>> * Hub is read-only >>> >>> * Hub interconnects Masters with Consumers or Masters with Hubs >>> or Hubs with other Hubs >>> >>> * Hub is hidden in the network topology >>> >>> * Consumer is read-only >>> >>> * Consumer interconnects Masters/Hubs with clients >>> >>> * Write operations should be forwarded to Master >>> >>> * Consumer should be able to log users into system without >>> communication with master >> We need to define how this can be done, it will almost certainly mean >> part of the consumer is writable, plus it also means you need additional >> access control and policies, on what the Consumer should be allowed to >> see. > Right, in such case the Consumers and Hubs will have to be masters > (from 389-DS's point of view). > >>> * Consumer should cache user's credentials >> Ok what credentials ? As I explained earlier Kerberos creds cannot >> really be cached. Either they are transferred with replication or the >> KDC needs to be change to do chaining. Neither I consider as 'caching'. >> A password obtained through an LDAP bind could be cached, but I am not >> sure it is worth it. >> >>> * Caching of credentials should be configurable >> See above. >> >>> * CA server should not be allowed on Hubs and Consumers >> Missing points: >> - Masters should not transfer KRB keys to HUBs/Consumers by default. > Add point: > - storing of the Krb creds must be configurable and disabled by > default >> - We need selective replication if you want to allow distributing a >> partial set of Kerberos credentials to consumers. With Hubs it becomes >> complicated to decide what to replicate about credentials. >> >> Simo. >> > Rich mentioned that they are planning support for LDAP filters in > fractional replication in the future, but currently it is not supported. > Ad distribution of user's Krb creds: When the user logs on any Consumer for a first time, he has to authenticate against master. If succeeds, he will be added to a specific user group. Each consumer will have one of these groups. These groups will be used by LDAP filters in fractional replication to distribute the Krb creds to the chosen Consumers only. This will be more complicated because of the HUBs (as Simo already said). The easiest way will be to let the Hubs contain all the creds - but this solution might result in massive network traffic when the users start logging in. Other solution might be creating similar user group for each Hub. 
Because every replica will be Master(from 389-DS point of view) - it will hold record about everyone of its suppliers and consumers and thus it will be possible to decide who belongs to which user group. -- Regards, Ondrej Hamada FreeIPA team jabber:ohama at jabbim.cz IRC: ohamada From rmeggins at redhat.com Mon Apr 9 13:05:01 2012 From: rmeggins at redhat.com (Rich Megginson) Date: Mon, 09 Apr 2012 07:05:01 -0600 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F7F0880.8020601@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F7C73FA.3090809@redhat.com> <4F7F0880.8020601@redhat.com> Message-ID: <4F82DE7D.9060608@redhat.com> On 04/06/2012 09:15 AM, Ondrej Hamada wrote: > On 04/04/2012 06:16 PM, Ondrej Hamada wrote: >> On 04/04/2012 03:02 PM, Simo Sorce wrote: >>> On Tue, 2012-04-03 at 18:45 +0200, Ondrej Hamada wrote: >>>> On 03/13/2012 01:13 AM, Dmitri Pal wrote: >>>>> On 03/12/2012 06:10 PM, Simo Sorce wrote: >>>>>> On Mon, 2012-03-12 at 17:40 -0400, Dmitri Pal wrote: >>>>>>> On 03/12/2012 04:16 PM, Simo Sorce wrote: >>>>>>>> On Mon, 2012-03-12 at 20:38 +0100, Ondrej Hamada wrote: >>>>>>>>> USER'S operations when connection is OK: >>>>>>>>> ------------------------------------------------------- >>>>>>>>> read data -> local >>>>>>>>> write data -> forwarding to master >>>>>>>>> authentication: >>>>>>>>> -credentials cached -- authenticate against credentials in >>>>>>>>> local cache >>>>>>>>> -on failure: log failure locally, >>>>>>>>> update >>>>>>>>> data >>>>>>>>> about failures only on lock-down of account >>>>>>>>> -credentials not cached -- forward request to master, on success >>>>>>>>> cache >>>>>>>>> the credentials >>>>>>>>> >>>>>>>> This scheme doesn't work with Kerberos. >>>>>>>> Either you have a copy of the user's keys locally or you don't, >>>>>>>> there is >>>>>>>> nothing you can really cache if you don't. >>>>>>>> >>>>>>>> Simo. >>>>>>>> >>>>>>> Yes this is what we are talking about here - the cache would >>>>>>> have to >>>>>>> contain user Kerberos key but there should be some expiration on >>>>>>> the >>>>>>> cache so that fetched and stored keys periodically cleaned >>>>>>> following the >>>>>>> policy an admin has defined. >>>>>> We would need a mechanism to transfer Kerberos keys, but that >>>>>> would not >>>>>> be sufficient, you'd have to give read-only servers also the realm >>>>>> krbtgt in order to be able to do anything with those keys. >>>>>> >>>>>> The way MS solves hits (I think) is by giving a special RODC >>>>>> krbtgt to >>>>>> each RODC, and then replicating all RODC krbtgt's with full domain >>>>>> controllers. Full domain controllers have logic to use RODC's krbtgt >>>>>> keys instead of the normal krbtgt to perform operations when user's >>>>>> krbtgt are presented to a different server. This is a lot of work >>>>>> and >>>>>> changes in the KDC, not something we can implement easily. 
>>>>>> >>>>>> As a first implementation I would restrict read-only replicas to >>>>>> not do >>>>>> Kerberos at all, only LDAP for all the lookup stuff necessary. to >>>>>> add a >>>>>> RO KDC we will need to plan a lot of changes in the KDC. >>>>>> >>>>>> We will also need intelligent partial replication where the rules >>>>>> about >>>>>> which object (and which attributes in the object) need/can be >>>>>> replicated >>>>>> are established based on some grouping+filter mechanism. This >>>>>> also is a >>>>>> pretty important change to 389ds. >>>>>> >>>>>> Simo. >>>>>> >>>>> I agree. I am just trying to structure the discussion a bit so >>>>> that all >>>>> what you are saying can be captured in the design document and >>>>> then we >>>>> can pick a subset of what Ondrej will actually implement. So let us >>>>> capture all the complexity and then do a POC for just LDAP part. >>>>> >>>> Sorry for inactivity, I was struggling with a lot of school stuff. >>>> >>>> I've summed up the main goals, do you agree on them or should I >>>> add/remove any? >>>> >>>> >>>> GOALS >>>> =========================================== >>>> Create Hub and Consumer types of replica with following features: >>>> >>>> * Hub is read-only >>>> >>>> * Hub interconnects Masters with Consumers or Masters with Hubs >>>> or Hubs with other Hubs >>>> >>>> * Hub is hidden in the network topology >>>> >>>> * Consumer is read-only >>>> >>>> * Consumer interconnects Masters/Hubs with clients >>>> >>>> * Write operations should be forwarded to Master >>>> >>>> * Consumer should be able to log users into system without >>>> communication with master >>> We need to define how this can be done, it will almost certainly mean >>> part of the consumer is writable, plus it also means you need >>> additional >>> access control and policies, on what the Consumer should be allowed to >>> see. >> Right, in such case the Consumers and Hubs will have to be masters >> (from 389-DS's point of view). >> >>>> * Consumer should cache user's credentials >>> Ok what credentials ? As I explained earlier Kerberos creds cannot >>> really be cached. Either they are transferred with replication or the >>> KDC needs to be change to do chaining. Neither I consider as 'caching'. >>> A password obtained through an LDAP bind could be cached, but I am not >>> sure it is worth it. >>> >>>> * Caching of credentials should be configurable >>> See above. >>> >>>> * CA server should not be allowed on Hubs and Consumers >>> Missing points: >>> - Masters should not transfer KRB keys to HUBs/Consumers by default. >> Add point: >> - storing of the Krb creds must be configurable and disabled by >> default >>> - We need selective replication if you want to allow distributing a >>> partial set of Kerberos credentials to consumers. With Hubs it becomes >>> complicated to decide what to replicate about credentials. >>> >>> Simo. >>> >> Rich mentioned that they are planning support for LDAP filters in >> fractional replication in the future, but currently it is not supported. >> > Ad distribution of user's Krb creds: > When the user logs on any Consumer for a first time, he has to > authenticate against master. If succeeds, he will be added to a > specific user group. Each consumer will have one of these groups. > These groups will be used by LDAP filters in fractional replication to > distribute the Krb creds to the chosen Consumers only. > > This will be more complicated because of the HUBs (as Simo already > said). 
The easiest way will be to let the Hubs contain all the creds - > but this solution might result in massive network traffic when the > users start logging in. Other solution might be creating similar user > group for each Hub. Because every replica will be Master(from 389-DS > point of view) - it will hold record about everyone of its suppliers > and consumers and thus it will be possible to decide who belongs to > which user group. > Note that the groups created on the masters will be replicated to the hubs, so the hubs will know about those groups. From rcritten at redhat.com Mon Apr 9 13:55:54 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 09 Apr 2012 09:55:54 -0400 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F7ED385.1010108@redhat.com> References: <4F7ED385.1010108@redhat.com> Message-ID: <4F82EA6A.8000000@redhat.com> Petr Viktorin wrote: > https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add throws > internal server error when name contains '<', '>' or other special > characters. > > The problem is, of course, proper escaping; not only in DNs but also in > ACIs. Right now we don't really do either. > > This patch is just a simple workaround: disallow anything except > known-good characters. It's just names, so no functionality is lost. > > All tickets for April are now taken, so unless a new one comes my way, > I'll take a dive into the code and fix it properly. This could take some > time and would mean somewhat larger changes. Is there a reason you didn't use pattern/pattern_errmsg instead? You'd need to change the regex as patterns use re.match rather than re.search. rob From rcritten at redhat.com Mon Apr 9 14:03:21 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 09 Apr 2012 10:03:21 -0400 Subject: [Freeipa-devel] [PATCH] 1003 return consistent value in netgroup triple In-Reply-To: <4F7EDD95.2020105@redhat.com> References: <4F7DB464.4090809@redhat.com> <4F7DD7D9.7070001@redhat.com> <4F7DEAA4.4090706@redhat.com> <4F7EDD95.2020105@redhat.com> Message-ID: <4F82EC29.7070606@redhat.com> Jan Cholasta wrote: > On 5.4.2012 20:55, Rob Crittenden wrote: >> Jan Cholasta wrote: >>> On 5.4.2012 17:04, Rob Crittenden wrote: >>>> When constructing netgroup triples with hostcat or usercat set to >>>> all we >>>> weren't setting the user/host part of the triple correctly. The first >>>> entry would have '' as the host/user value as appropriate but all >>>> subsequent entries would have -. They should all be empty. >>>> >>>> This patch uses a new feature of slapi-nis-0.40 so we can use an >>>> expression in a pad. >>>> >>>> rob >>>> >>> >>> NACK, this does not work for new installs. Did you forget to include >>> install/share/*.ldif files in the commit? >>> >>> Honza >>> >> >> Yes, forgot to include nis.uldif. >> >> rob > > ACK. > > Honza > pushed to master and ipa-2-2 From rcritten at redhat.com Mon Apr 9 14:19:53 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 09 Apr 2012 10:19:53 -0400 Subject: [Freeipa-devel] [PATCH] 74 Check configured maximum user login length on user rename In-Reply-To: <4F7EE77A.10703@redhat.com> References: <4F7B0C43.3000201@redhat.com> <4F7E10EC.1020808@redhat.com> <4F7EE77A.10703@redhat.com> Message-ID: <4F82F009.7080909@redhat.com> Jan Cholasta wrote: > On 5.4.2012 23:38, Rob Crittenden wrote: >> Jan Cholasta wrote: >>> https://fedorahosted.org/freeipa/ticket/2587 >>> >>> Honza >> >> This looks ok, it would be nice to have a unit test. >> >> rob > > Test added. 
> > Honza > ACK, pushed to master and ipa-2-2 From jdennis at redhat.com Mon Apr 9 15:24:23 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 09 Apr 2012 11:24:23 -0400 Subject: [Freeipa-devel] [PATCH 68] text unit test should validate using installed mo file In-Reply-To: <4F75ADD7.4020404@redhat.com> References: <4F71D00E.2070907@redhat.com> <4F71D58E.5060507@redhat.com> <4F71FF85.7090102@redhat.com> <4F7223AF.7030003@redhat.com> <4F72CE84.6010407@redhat.com> <4F750137.9010600@redhat.com> <4F75ADD7.4020404@redhat.com> Message-ID: <4F82FF27.8010808@redhat.com> On 03/30/2012 08:57 AM, Petr Viktorin wrote: > On 03/30/2012 02:41 AM, John Dennis wrote: >> On 03/28/2012 04:40 AM, Petr Viktorin wrote: >>> Can install/po/Makefile just call test_i18n.py from the tests/ tree? It >>> doesn't import any IPA code so there's no need to set sys.path in this >>> case (though there'd have to be a comment saying we depend on this). >>> In the other case, unit tests, the path is already set by Nose. >>> Also the file would have to be renamed so nose doesn't pick it up as a >>> test module. >> >> Good idea. I moved test_i18n.py to tests/i18n.py. I was reluctant about >> moving the file, but that was without merit, it works better this way. > > The downside is that the file now looks like a test utility module. It > could use a comment near the imports saying that it's also called as a > script, and that it shouldn't import IPA code (this could silently use > the system-installed version of IPA, or crash if it's not there). > Alternatively, set PYTHONPATH in the Makefile. > >> I also removed the superfluous comment in Makefile.in you pointed out. >> >> When I was exercising the code I noticed the validation code was not >> treating msgid's from C code correctly (we do have some C code in the >> client area). That required a much more nuanced parsing the format >> conversion specifiers to correctly identify what was a positional format >> specifier vs. an indexed format specifier. The new version of the >> i18n.py includes the function parse_printf_fmt() and get_prog_langs() to >> identify the source programming language. > > More non-trivial code without tests. This makes me worry. But, tests for > this can be added later I guess. > >> Two more patches will follow shortly, one which adds validation when >> "make lint" is run and a patch to correct the problems it found in the C >> code strings which did not used indexed format specifiers. revised patch with comment about imports attached. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0068-3-text-unit-test-should-validate-using-installed-mo-fi.patch Type: text/x-patch Size: 65725 bytes Desc: not available URL: From rcritten at redhat.com Mon Apr 9 17:44:01 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 09 Apr 2012 13:44:01 -0400 Subject: [Freeipa-devel] [PATCH] 247 Fix installation when server hostname is not in a default domain In-Reply-To: <1333550620.23241.2.camel@balmora.brq.redhat.com> References: <1333550620.23241.2.camel@balmora.brq.redhat.com> Message-ID: <4F831FE1.1050404@redhat.com> Martin Kosek wrote: > When IPA server is configured with DNS and its hostname is not > located in a default domain, SRV records are not valid. > Additionally, httpd does not serve XMLRPC interface because it > IPA server domain-realm mapping is missing in krb5.conf. All CLI > commands were then failing. 
> > This patch amends this configuration. It fixes SRV records in > served domain to include full FQDN instead of relative hostname > when the IPA server hostname is not located in served domain. > IPA server forward record is also placed to correct zone. > > When IPA server is not in a served domain a proper domain-realm > mapping is configured to krb5.conf. The template was improved > in order to be able to hold this information. > > https://fedorahosted.org/freeipa/ticket/2602 ACK, pushed to master and ipa-2-2 From rcritten at redhat.com Tue Apr 10 03:54:06 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 09 Apr 2012 23:54:06 -0400 Subject: [Freeipa-devel] [PATCH] 1005 fix password history Message-ID: <4F83AEDE.9050203@redhat.com> Password history wasn't working because the qsort comparison function was comparing pointers, not data. This resulted in a random element being removed from the history on overflow rather than the oldest. We sort in reverse so we don't have to move elements inside the list when removing to make more room. We just pop off the top then shove on the new password. The history includes a time to make comparisons straightforward (and LDAP doesn't guarantee order). I've attached a test script to exercise things. I don't see a way to easily include this into our current framework at the moment. We'd need a way to switch users in the middle of a test. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1005-history.patch Type: text/x-diff Size: 1241 bytes Desc: not available URL: -------------- next part -------------- #!/bin/sh SLEEP=1 echo password | kinit admin ipa user-del tuser1 echo "Set password policy" ipa pwpolicy-mod --history=3 --minlife=0 echo "Create user" echo password | ipa user-add --first=tim --last=user tuser1 --password sleep $SLEEP echo "Password 1" echo -e 'password\nredhat001\nredhat001\n' | kinit tuser1 sleep $SLEEP echo "Password 2" echo -e 'redhat001\nredhat002\nredhat002' | ipa passwd sleep $SLEEP echo "Password 3" echo -e 'redhat002\nredhat003\nredhat003' | ipa passwd sleep $SLEEP echo "Try resetting to password 1: it should fail" echo -e 'redhat003\nredhat001\nredhat001' | ipa passwd sleep $SLEEP echo "Password 4" echo -e 'redhat003\nredhat004\nredhat004' | ipa passwd sleep $SLEEP echo "Try resetting to password 1: it should succeed" echo -e 'redhat004\nredhat001\nredhat001' | ipa passwd sleep $SLEEP echo "Try resetting to password 3: it should fail" echo -e 'redhat001\nredhat003\nredhat003' | ipa passwd sleep $SLEEP echo "Try resetting to password 2: it should succeed" echo -e 'redhat001\nredhat002\nredhat002' | ipa passwd sleep $SLEEP echo "Try resetting to password 4: it should fail" echo -e 'redhat002\nredhat004\nredhat004' | ipa passwd sleep $SLEEP echo "Try resetting to password 3: it should succeed" echo -e 'redhat002\nredhat003\nredhat003' | ipa passwd sleep $SLEEP From yzhang at redhat.com Tue Apr 10 04:45:00 2012 From: yzhang at redhat.com (yi zhang) Date: Mon, 09 Apr 2012 21:45:00 -0700 Subject: [Freeipa-devel] [PATCH] 1005 fix password history In-Reply-To: <4F83AEDE.9050203@redhat.com> References: <4F83AEDE.9050203@redhat.com> Message-ID: <4F83BACC.1080007@redhat.com> On 04/09/2012 08:54 PM, Rob Crittenden wrote: > Password history wasn't working because the qsort comparison function > was comparing pointers, not data. This resulted in a random element > being removed from the history on overflow rather than the oldest. 
> > We sort in reverse so we don't have to move elements inside the list > when removing to make more room. We just pop off the top then shove on > the new password. The history includes a time to make comparisons > straightforward (and LDAP doesn't guarantee order). > > I've attached a test script to exercise things. I don't see a way to > easily include this into our current framework at the moment. We'd > need a way to switch users in the middle of a test. the current QE CLI test already has test case for it. No worried. Thanks for the fix. Yi > rob > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- yi zhang qa @ mountain view office, 8th floor cell: 408-509-6375 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkosek at redhat.com Tue Apr 10 08:57:42 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 10:57:42 +0200 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded Message-ID: <1334048262.7045.19.camel@balmora.brq.redhat.com> Few test hints are attached to the ticket. --- ldap2 plugin returns NotFound error for find_entries/get_entry queries when the server did not manage to return an entry due to time limits. This may be confusing for user when the entry he searches actually exists. This patch fixes the behavior in ldap2 plugin to return LimitsExceeded exception instead. This way, user would know that his time/size limits are set too low and can amend them to get correct results. https://fedorahosted.org/freeipa/ticket/2606 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-248-raise-proper-exception-when-ldap-limits-are-exceeded.patch Type: text/x-patch Size: 1529 bytes Desc: not available URL: From pviktori at redhat.com Tue Apr 10 10:23:41 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 12:23:41 +0200 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F82EA6A.8000000@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> Message-ID: <4F840A2D.5070309@redhat.com> On 04/09/2012 03:55 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add throws >> internal server error when name contains '<', '>' or other special >> characters. >> >> The problem is, of course, proper escaping; not only in DNs but also in >> ACIs. Right now we don't really do either. >> >> This patch is just a simple workaround: disallow anything except >> known-good characters. It's just names, so no functionality is lost. >> >> All tickets for April are now taken, so unless a new one comes my way, >> I'll take a dive into the code and fix it properly. This could take some >> time and would mean somewhat larger changes. > > Is there a reason you didn't use pattern/pattern_errmsg instead? > > You'd need to change the regex as patterns use re.match rather than > re.search. > > rob Right, that makes more sense. It changes API.txt though. Do I need to bump VERSION in this case? Also, is there a reason pattern_errmsg is included in API.txt? -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0034-02-Limit-permission-and-selfservice-names-to-alphanumer.patch Type: text/x-patch Size: 15521 bytes Desc: not available URL: From pviktori at redhat.com Tue Apr 10 11:16:39 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 13:16:39 +0200 Subject: [Freeipa-devel] [PATCH] 0029 Check expected error messages in tests In-Reply-To: <4F74D17C.2010504@redhat.com> References: <4F687A8D.6020508@redhat.com> <4F709C11.1070800@redhat.com> <4F70C864.5050107@redhat.com> <4F71C48D.3010502@redhat.com> <4F74D17C.2010504@redhat.com> Message-ID: <4F841697.5060903@redhat.com> On 03/29/2012 11:17 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 03/26/2012 09:49 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> On 03/20/2012 01:39 PM, Petr Viktorin wrote: >>>>> This patch adds checking error messages, not just types, to the >>>>> XML-RPC >>>>> tests. >>>>> The checking is still somewhat hackish, since XML-RPC doesn't give us >>>>> structured error info, but it should protect against regressions on >>>>> issues like whether we put name or cli_name in a ValidationError. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/2549 >>>>> >>>> >>>> Updated and rebased to current master. >>> >>> NACK >>> >>> automember wrongly was testing for non-existent users rather than >>> automember rules but those should still be tested IMHO, perhaps with >>> both types. >>> >>> There is also some inconsistency. In host you use substitution to set >>> the hostname in the error: '%s: host not found' % fqdn1 but in others >>> (group, hostgroup for example) the name is hardcoded. I also noticed >>> that some reasons are unicode and others are not. >>> >>> rob >> >> Added tests for automember, made all the reasons unicode, using >> substitutions when variables are involved. >> >> The patch still only updates tests that didn't pass the error message >> check. >> > > I'm seeing three failures that I think are due to recently pushed > patches. Otherwise it looks good. > > rob > > > ====================================================================== > FAIL: test_netgroup[1]: netgroup_mod: Try to update non-existent > u'netgroup1' > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in runTest > self.test(*self.arg) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 247, in > func = lambda: self.check(nice, **test) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 260, in check > self.check_exception(nice, cmd, args, options, expected) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 284, in check_exception > assert_deepequal(expected.strerror, e.strerror) > File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in > assert_deepequal > VALUE % (doc, expected, got, stack) > AssertionError: assert_deepequal: expected != got. 
> > expected = u'netgroup1: netgroup not found' > got = u'no such entry' > path = () > > ====================================================================== > FAIL: test_netgroup[4]: netgroup_add: Test an invalid nisdomain1 name > u'domain1,domain2' > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in runTest > self.test(*self.arg) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 247, in > func = lambda: self.check(nice, **test) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 260, in check > self.check_exception(nice, cmd, args, options, expected) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 284, in check_exception > assert_deepequal(expected.strerror, e.strerror) > File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in > assert_deepequal > VALUE % (doc, expected, got, stack) > AssertionError: assert_deepequal: expected != got. > > expected = u"invalid 'nisdomainname': may only include letters, numbers, > _, - and ." > got = u"invalid 'nisdomain': may only include letters, numbers, _, -, > and ." > path = () > > ====================================================================== > FAIL: test_netgroup[5]: netgroup_add: Test an invalid nisdomain2 name > u'+invalidnisdomain' > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in runTest > self.test(*self.arg) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 247, in > func = lambda: self.check(nice, **test) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 260, in check > self.check_exception(nice, cmd, args, options, expected) > File > "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", > line 284, in check_exception > assert_deepequal(expected.strerror, e.strerror) > File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in > assert_deepequal > VALUE % (doc, expected, got, stack) > AssertionError: assert_deepequal: expected != got. > > expected = u"invalid 'nisdomainname': may only include letters, numbers, > _, - and ." > got = u"invalid 'nisdomain': may only include letters, numbers, _, -, > and ." > path = () > Updated patch attached. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0029-03-Fix-expected-error-messages-in-tests.patch Type: text/x-patch Size: 72239 bytes Desc: not available URL: From pvoborni at redhat.com Tue Apr 10 11:43:49 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Apr 2012 13:43:49 +0200 Subject: [Freeipa-devel] [PATCH] 115 Reworked netgroup Web UI to allow setting user/host category In-Reply-To: <4F7DB231.70506@redhat.com> References: <4F745995.2060704@redhat.com> <4F7DB231.70506@redhat.com> Message-ID: <4F841CF5.7090701@redhat.com> On 04/05/2012 04:54 PM, Endi Sukma Dewata wrote: > On 3/29/2012 7:46 AM, Petr Vobornik wrote: >> This patch is changing netgroup web ui to look more like hbac or sudo >> rule UI. This change allows to define and display user category, host >> category and external host. >> >> The core of the change is changing member attributes (user, group, host, >> hostgroup) to use rule_details_widget instead of separate association >> facets. 
In host case it also allows to display and add external hosts. >> >> https://fedorahosted.org/freeipa/ticket/2578 >> >> Note: compare to other plugins (HBAC, Sudo) netgroup plugins doesn't >> have member attrs in takes_param therefore labels for columns have to be >> explicitly set. > > ACK. Pushed to master, ipa-2-2. > > Just one thing, the label for user/host category says "User/host > category the rule applies to". Netgroup is not a rule, so it might be > better to say something like "User/host category of netgroup members" or > "Member user/host category". This is an existing server issue. > -- Petr Vobornik From pvoborni at redhat.com Tue Apr 10 11:44:34 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Apr 2012 13:44:34 +0200 Subject: [Freeipa-devel] [PATCH] 116 Fixed: permission attrs table didn't update its available options on load In-Reply-To: <4F7DB245.8060401@redhat.com> References: <4F7BF5BF.8000601@redhat.com> <4F7DB245.8060401@redhat.com> Message-ID: <4F841D22.4070900@redhat.com> On 04/05/2012 04:55 PM, Endi Sukma Dewata wrote: > On 4/4/2012 2:18 AM, Petr Vobornik wrote: >> It could lead to state where attributes from other object type were >> displayed instead of the correct ones. >> >> https://fedorahosted.org/freeipa/ticket/2590 > > ACK. > Pushed to master, ipa-2-2. -- Petr Vobornik From pvoborni at redhat.com Tue Apr 10 11:44:52 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Apr 2012 13:44:52 +0200 Subject: [Freeipa-devel] [PATCH] 117 Added attrs field to permission for target=subtree In-Reply-To: <4F7DB251.5030509@redhat.com> References: <4F7BF607.8080105@redhat.com> <4F7DB251.5030509@redhat.com> Message-ID: <4F841D34.602@redhat.com> On 04/05/2012 04:55 PM, Endi Sukma Dewata wrote: > On 4/4/2012 2:19 AM, Petr Vobornik wrote: >> Permission form was missing attrs field for target=subtree. All other >> target types have it. >> >> It uses multivalued text widget, same as filter, because we can't >> predict the target type. >> >> https://fedorahosted.org/freeipa/ticket/2592 > > ACK. > Pushed to master, ipa-2-2. -- Petr Vobornik From edewata at redhat.com Tue Apr 10 13:39:53 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Tue, 10 Apr 2012 08:39:53 -0500 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons In-Reply-To: <4F7DC132.70008@redhat.com> References: <4F7C2425.6000606@redhat.com> <4F7DB2B1.9030206@redhat.com> <4F7DC132.70008@redhat.com> Message-ID: <4F843829.6080203@redhat.com> On 4/5/2012 10:58 AM, Petr Vobornik wrote: > Revised patch 118 attached. > I used: > * Forward first > * Forward only > > and set 'default_value' to 'first'. So there would be always some value > checked, which indicates what is actually used. There is a little issue > with undo button if policy is not set '' because default_value !== ''. > > I did this kinda in the hurry, so I hope I didn't missed anything > crucial. Will be back on Tuesday. ACK on both patches. -- Endi S. 
Dewata From rcritten at redhat.com Tue Apr 10 13:46:35 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 09:46:35 -0400 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F840A2D.5070309@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> <4F840A2D.5070309@redhat.com> Message-ID: <4F8439BB.7090800@redhat.com> Petr Viktorin wrote: > On 04/09/2012 03:55 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add throws >>> internal server error when name contains '<', '>' or other special >>> characters. >>> >>> The problem is, of course, proper escaping; not only in DNs but also in >>> ACIs. Right now we don't really do either. >>> >>> This patch is just a simple workaround: disallow anything except >>> known-good characters. It's just names, so no functionality is lost. >>> >>> All tickets for April are now taken, so unless a new one comes my way, >>> I'll take a dive into the code and fix it properly. This could take some >>> time and would mean somewhat larger changes. >> >> Is there a reason you didn't use pattern/pattern_errmsg instead? >> >> You'd need to change the regex as patterns use re.match rather than >> re.search. >> >> rob > > Right, that makes more sense. > It changes API.txt though. Do I need to bump VERSION in this case? > Also, is there a reason pattern_errmsg is included in API.txt? Yes, please bump VERSION. pattern_errmsg should probably be removed from API.txt. We've been paring back the amount of data to validate slowly as we've run into these questionable items. Please open a ticket for this. rob From pvoborni at redhat.com Tue Apr 10 13:48:27 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Apr 2012 15:48:27 +0200 Subject: [Freeipa-devel] [PATCH] 118-119 DNS forward policy: checkboxes changed to radio buttons In-Reply-To: <4F843829.6080203@redhat.com> References: <4F7C2425.6000606@redhat.com> <4F7DB2B1.9030206@redhat.com> <4F7DC132.70008@redhat.com> <4F843829.6080203@redhat.com> Message-ID: <4F843A2B.60102@redhat.com> On 04/10/2012 03:39 PM, Endi Sukma Dewata wrote: > On 4/5/2012 10:58 AM, Petr Vobornik wrote: >> Revised patch 118 attached. >> I used: >> * Forward first >> * Forward only >> >> and set 'default_value' to 'first'. So there would be always some value >> checked, which indicates what is actually used. There is a little issue >> with undo button if policy is not set '' because default_value !== ''. >> >> I did this kinda in the hurry, so I hope I didn't missed anything >> crucial. Will be back on Tuesday. > > ACK on both patches. > Pushed to master, ipa-2-2. -- Petr Vobornik From pviktori at redhat.com Tue Apr 10 14:00:47 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 16:00:47 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. Message-ID: <4F843D0F.5060906@redhat.com> I'm aware that we have backwards compatibility requirements so we have to stick with unfortunate decisions, but I wanted you to know what I think. Please tell me I'm wrong! It is not clear what --{set,add,del}attr and friends should do. On the one hand they should be powerful -- presumably as powerful as ldapmodify. On the other hand they should be checked to ensure they can't be used to break the system. These requirements are contradictory. And in either case we're doing it wrong: - If they should be all-powerful, we shouldn't validate them. 
- If they shouldn't break the system we can just disable them for IPA-managed attributes. My understanding is that they were originally only added for attributes IPA doesn't know about. People can still use ldapmodify to bypass validation if they want. - If somewhere in between, we need to clearly define what they should do, instead of drawing the line ad-hoc based on individual details we forgot about, as tickets come from QE. I would hope people won't use --setattr for IPA-managed attributes. Which would however mean we won't get much community testing for all this extra code. Then, there's an unfortunate detail in IPA implementation: attribute Params need to be cloned to method objects (Create, Update, etc.) to work properly (e.g. get the `attribute` flag set). If they are marked no_update, they don't get cloned, so they don't work properly. Yet --setattr apparently needs to be able to update and validate attributes marked no_update (this ties to the confusing requirements on --setattr I already mentioned). This leads to doing the same work again, slightly differently. tl;dr: --setattr work on IPA-managed attributes (with validation) is a mistake. It adds no functionality, only complexity. We don't want people to use it. It will cost us a lot of maintenance work to support. Thank you for listening. A patch for the newest regression is coming up. -- Petr? From pviktori at redhat.com Tue Apr 10 14:11:19 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 16:11:19 +0200 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F8439BB.7090800@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> <4F840A2D.5070309@redhat.com> <4F8439BB.7090800@redhat.com> Message-ID: <4F843F87.4000708@redhat.com> On 04/10/2012 03:46 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 04/09/2012 03:55 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add throws >>>> internal server error when name contains '<', '>' or other special >>>> characters. >>>> >>>> The problem is, of course, proper escaping; not only in DNs but also in >>>> ACIs. Right now we don't really do either. >>>> >>>> This patch is just a simple workaround: disallow anything except >>>> known-good characters. It's just names, so no functionality is lost. >>>> >>>> All tickets for April are now taken, so unless a new one comes my way, >>>> I'll take a dive into the code and fix it properly. This could take >>>> some >>>> time and would mean somewhat larger changes. >>> >>> Is there a reason you didn't use pattern/pattern_errmsg instead? >>> >>> You'd need to change the regex as patterns use re.match rather than >>> re.search. >>> >>> rob >> >> Right, that makes more sense. >> It changes API.txt though. Do I need to bump VERSION in this case? >> Also, is there a reason pattern_errmsg is included in API.txt? > > Yes, please bump VERSION. Attaching updated patch. > pattern_errmsg should probably be removed from API.txt. We've been > paring back the amount of data to validate slowly as we've run into > these questionable items. Please open a ticket for this. Done: https://fedorahosted.org/freeipa/ticket/2619 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0034-03-Limit-permission-and-selfservice-names-to-alphanumer.patch Type: text/x-patch Size: 16059 bytes Desc: not available URL: From jcholast at redhat.com Tue Apr 10 15:03:32 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 10 Apr 2012 17:03:32 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F843D0F.5060906@redhat.com> References: <4F843D0F.5060906@redhat.com> Message-ID: <4F844BC4.2020109@redhat.com> On 10.4.2012 16:00, Petr Viktorin wrote: > I'm aware that we have backwards compatibility requirements so we have > to stick with unfortunate decisions, but I wanted you to know what I > think. Please tell me I'm wrong! > > > > It is not clear what --{set,add,del}attr and friends should do. On the > one hand they should be powerful -- presumably as powerful as > ldapmodify. On the other hand they should be checked to ensure they > can't be used to break the system. These requirements are contradictory. > And in either case we're doing it wrong: > - If they should be all-powerful, we shouldn't validate them. > - If they shouldn't break the system we can just disable them for > IPA-managed attributes. My understanding is that they were originally > only added for attributes IPA doesn't know about. People can still use > ldapmodify to bypass validation if they want. > - If somewhere in between, we need to clearly define what they should > do, instead of drawing the line ad-hoc based on individual details we > forgot about, as tickets come from QE. > > > I would hope people won't use --setattr for IPA-managed attributes. > Which would however mean we won't get much community testing for all > this extra code. > > > Then, there's an unfortunate detail in IPA implementation: attribute > Params need to be cloned to method objects (Create, Update, etc.) to > work properly (e.g. get the `attribute` flag set). If they are marked > no_update, they don't get cloned, so they don't work properly. > Yet --setattr apparently needs to be able to update and validate > attributes marked no_update (this ties to the confusing requirements on > --setattr I already mentioned). This leads to doing the same work again, > slightly differently. > > > > tl;dr: --setattr work on IPA-managed attributes (with validation) is a > mistake. It adds no functionality, only complexity. We don't want people > to use it. It will cost us a lot of maintenance work to support. > > > Thank you for listening. A patch for the newest regression is coming up. > I wholeheartedly agree. Like you said above, we should either not validate --{set,add,del}attr or don't allow them on known attributes. To be functionally complete, we should also add validated equivalents of --{add,del}attr to *-mod commands for all multivalue params (think --add- and --del- for each --). Honza -- Jan Cholasta From jdennis at redhat.com Tue Apr 10 15:03:56 2012 From: jdennis at redhat.com (John Dennis) Date: Tue, 10 Apr 2012 11:03:56 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F7EF98A.5000804@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> Message-ID: <4F844BDC.108@redhat.com> On 04/06/2012 10:11 AM, John Dennis wrote: > On 04/06/2012 04:40 AM, Martin Kosek wrote: >> 1) We still crash when the parameter is empty. 
We may want to make it >> required (the same fix Rob did for cert rejection reason): >> >> # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com >> --with-compat --base-dn="dc=greyoak,dc=com" --user-container= >> ipa: ERROR: cannot connect to >> u'http://vm-022.idm.lab.bos.redhat.com/ipa/xml': Internal Server Error > > Good point, will fix. > >> 2) Do you think it would make sense to create a special Param for DN? >> Its quite general type and I bet there are other Params that could use >> DN instead of Str. It could look like that: >> DN('usercontainer?', >> rdn=True,<<<< can be RDN, not DN > > Yes, I considered introducing a new DN parameter type as well. I think > this is a good approach and will have payoff down the road. I will make > that change. However I'm inclined to introduce both a DN parameter and a > RDN parameter, they really are different entities, if you use a flag to > indicate you require a RDN then you lose the type of the parameter, it > could be either, and it really should be one or the other and knowing > which it is has value. > >> 3) We should not restrict users from passing a user/group container with >> more than one RDN: > > Yeah, I wasn't too sure about that. The parameter distinctly called for > an RDN, but it seemed to me it should support any container below the > base, which would make it a DN not a RDN. > > FWIW, a DN is an ordered sequence of 1 or more RDN's. DN's do not have > to be "absolute", they can be any subset of a "path sequence". A DN > with exactly one RDN is equivalent to an RDN, a special case). > > I will change the parameter to be a DN since that is what was the > original intent. > > All good suggestions, a revised patch will follow shortly. Hmm... I'm rethinking this. The suggestion of adding a new DN parameter is the right thing to do and I tried to implement it, but as I was working I realized I needed to touch a fair amount of code to support the new parameter type and to modify multiple places in the migration code to work with the new type. That's more code changes at the very last minute for this release than I'm comfortable with, we're too close the freeze date for invasive modifications. We have a work item open to introduce DN types throughout the code including as a new Param type. I think it's best to make just a small tweak to fix the traceback today and wait till the 3.0 work to "do it right". Therefore I'm no longer in favor of the new Param approach. I will fix the problem you reported when the parameter is empty, but merge that into the original patch. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jcholast at redhat.com Tue Apr 10 15:14:04 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 10 Apr 2012 17:14:04 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F844BDC.108@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> Message-ID: <4F844E3C.8070706@redhat.com> On 10.4.2012 17:03, John Dennis wrote: > On 04/06/2012 10:11 AM, John Dennis wrote: >> On 04/06/2012 04:40 AM, Martin Kosek wrote: >>> 1) We still crash when the parameter is empty. 
We may want to make it >>> required (the same fix Rob did for cert rejection reason): >>> >>> # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com >>> --with-compat --base-dn="dc=greyoak,dc=com" --user-container= >>> ipa: ERROR: cannot connect to >>> u'http://vm-022.idm.lab.bos.redhat.com/ipa/xml': Internal Server Error >> >> Good point, will fix. >> >>> 2) Do you think it would make sense to create a special Param for DN? >>> Its quite general type and I bet there are other Params that could use >>> DN instead of Str. It could look like that: >>> DN('usercontainer?', >>> rdn=True,<<<< can be RDN, not DN >> >> Yes, I considered introducing a new DN parameter type as well. I think >> this is a good approach and will have payoff down the road. I will make >> that change. However I'm inclined to introduce both a DN parameter and a >> RDN parameter, they really are different entities, if you use a flag to >> indicate you require a RDN then you lose the type of the parameter, it >> could be either, and it really should be one or the other and knowing >> which it is has value. >> >>> 3) We should not restrict users from passing a user/group container with >>> more than one RDN: >> >> Yeah, I wasn't too sure about that. The parameter distinctly called for >> an RDN, but it seemed to me it should support any container below the >> base, which would make it a DN not a RDN. >> >> FWIW, a DN is an ordered sequence of 1 or more RDN's. DN's do not have >> to be "absolute", they can be any subset of a "path sequence". A DN >> with exactly one RDN is equivalent to an RDN, a special case). >> >> I will change the parameter to be a DN since that is what was the >> original intent. >> >> All good suggestions, a revised patch will follow shortly. > > Hmm... I'm rethinking this. The suggestion of adding a new DN parameter > is the right thing to do and I tried to implement it, but as I was > working I realized I needed to touch a fair amount of code to support > the new parameter type and to modify multiple places in the migration > code to work with the new type. That's more code changes at the very > last minute for this release than I'm comfortable with, we're too close > the freeze date for invasive modifications. > > We have a work item open to introduce DN types throughout the code > including as a new Param type. I think it's best to make just a small > tweak to fix the traceback today and wait till the 3.0 work to "do it > right". Therefore I'm no longer in favor of the new Param approach. I > will fix the problem you reported when the parameter is empty, but merge > that into the original patch. > Just for the record, there are some related tickets: , . Honza -- Jan Cholasta From pviktori at redhat.com Tue Apr 10 15:31:10 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 17:31:10 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F844BC4.2020109@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> Message-ID: <4F84523E.60304@redhat.com> On 04/10/2012 05:03 PM, Jan Cholasta wrote: > > To be functionally complete, we should also add validated equivalents of > --{add,del}attr to *-mod commands for all multivalue params (think > --add- and --del- for each --). > We need something like that anyway. Requiring users to learn raw LDAP attribute names and value encodings, which they'd need for --setattr, is suboptimal to say the least. -- Petr? 
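[Editorial note] To make the contrast in the thread above concrete, here is a small sketch using the ipalib client API. The user name 'tuser1' is only an example entry, and the per-parameter option shown in the commented-out call is hypothetical: it does not exist in the current API and only illustrates the --add-<param>/--del-<param> proposal.

```python
from ipalib import api

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()   # assumes a valid admin Kerberos ticket

# Today: the caller has to know the raw LDAP attribute name and value
# encoding, and the value goes through the --{set,add,del}attr code path
# whose validation behaviour is the subject of this thread.
api.Command.user_mod(u'tuser1', addattr=u'mail=tuser1@example.com')

# Proposed (hypothetical keyword, not in the current API): a validated,
# param-aware way to append one value to a multivalued attribute.
# api.Command.user_mod(u'tuser1', add_mail=u'tuser1@example.com')
```

The point of the proposal is that the second form could reuse the existing mail parameter for normalization and validation instead of re-implementing those checks inside the --{set,add,del}attr handling.
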
From mkosek at redhat.com Tue Apr 10 16:34:45 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 18:34:45 +0200 Subject: [Freeipa-devel] [PATCH] 1005 fix password history In-Reply-To: <4F83AEDE.9050203@redhat.com> References: <4F83AEDE.9050203@redhat.com> Message-ID: <1334075685.16034.1.camel@balmora.brq.redhat.com> On Mon, 2012-04-09 at 23:54 -0400, Rob Crittenden wrote: > Password history wasn't working because the qsort comparison function > was comparing pointers, not data. This resulted in a random element > being removed from the history on overflow rather than the oldest. > > We sort in reverse so we don't have to move elements inside the list > when removing to make more room. We just pop off the top then shove on > the new password. The history includes a time to make comparisons > straightforward (and LDAP doesn't guarantee order). > > I've attached a test script to exercise things. I don't see a way to > easily include this into our current framework at the moment. We'd need > a way to switch users in the middle of a test. > > rob Thanks. The new line looks quite scary, but it is OK and works fine (explanation in "man qsort"). ACK. Pushed to master, ipa-2-2. Martin From pviktori at redhat.com Tue Apr 10 16:39:48 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 18:39:48 +0200 Subject: [Freeipa-devel] [PATCH] 0035 Convert --setattr values for attributes marked no_update Message-ID: <4F846254.3070200@redhat.com> Fix --setattr to work on no_update params. https://fedorahosted.org/freeipa/ticket/2616 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0035-Convert-setattr-values-for-attributes-marked-no_upda.patch Type: text/x-patch Size: 5264 bytes Desc: not available URL: From pspacek at redhat.com Tue Apr 10 16:43:05 2012 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Apr 2012 18:43:05 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F84523E.60304@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <4F84523E.60304@redhat.com> Message-ID: <4F846319.9080506@redhat.com> On 04/10/2012 05:31 PM, Petr Viktorin wrote: > On 04/10/2012 05:03 PM, Jan Cholasta wrote: >> On 04/10/2012 05:31 PM, Petr Viktorin wrote: >> >> tl;dr: --setattr work on IPA-managed attributes (with validation) is a >> mistake. +1 >> It adds no functionality, only complexity. We don't want people >> to use it. It will cost us a lot of maintenance work to support. >> > > I wholeheartedly agree. I absolutely agree. Why we should write validation code twice? But things are worse than only code duplicity: Small differences between two versions of code lead to problems with code maintenance and potentially add a lot of bugs. Petr^2 Spacek >> To be functionally complete, we should also add validated equivalents of >> --{add,del}attr to *-mod commands for all multivalue params (think >> --add- and --del- for each --). >> > > We need something like that anyway. Requiring users to learn raw LDAP > attribute names and value encodings, which they'd need for --setattr, is > suboptimal to say the least. From pvoborni at redhat.com Tue Apr 10 16:47:54 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Apr 2012 18:47:54 +0200 Subject: [Freeipa-devel] [PATCH] 120 Removal of memberofindirect_permissons from privileges Message-ID: <4F84643A.7060609@redhat.com> Problem: In the Privilege page, can list Permissions. 
This "Shows Results" for "Direct Membership". But there is an option to list this for "Indirect Membership" also. There isn't a way to nest permissions, so this option is not needed. Solution: This patch removes the memberofindirect_persmission definition from server plugin. It fixes the problem in Web UI. https://fedorahosted.org/freeipa/ticket/2611 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0120-1-Removal-of-memberofindirect_permissons-from-privileg.patch Type: text/x-patch Size: 1554 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 10 16:54:21 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 18:54:21 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F844BDC.108@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> Message-ID: <1334076861.16034.9.camel@balmora.brq.redhat.com> On Tue, 2012-04-10 at 11:03 -0400, John Dennis wrote: > On 04/06/2012 10:11 AM, John Dennis wrote: > > On 04/06/2012 04:40 AM, Martin Kosek wrote: > >> 1) We still crash when the parameter is empty. We may want to make it > >> required (the same fix Rob did for cert rejection reason): > >> > >> # echo "secret123" | ipa migrate-ds ldap://vm-054.idm.lab.bos.redhat.com > >> --with-compat --base-dn="dc=greyoak,dc=com" --user-container= > >> ipa: ERROR: cannot connect to > >> u'http://vm-022.idm.lab.bos.redhat.com/ipa/xml': Internal Server Error > > > > Good point, will fix. > > > >> 2) Do you think it would make sense to create a special Param for DN? > >> Its quite general type and I bet there are other Params that could use > >> DN instead of Str. It could look like that: > >> DN('usercontainer?', > >> rdn=True,<<<< can be RDN, not DN > > > > Yes, I considered introducing a new DN parameter type as well. I think > > this is a good approach and will have payoff down the road. I will make > > that change. However I'm inclined to introduce both a DN parameter and a > > RDN parameter, they really are different entities, if you use a flag to > > indicate you require a RDN then you lose the type of the parameter, it > > could be either, and it really should be one or the other and knowing > > which it is has value. > > > >> 3) We should not restrict users from passing a user/group container with > >> more than one RDN: > > > > Yeah, I wasn't too sure about that. The parameter distinctly called for > > an RDN, but it seemed to me it should support any container below the > > base, which would make it a DN not a RDN. > > > > FWIW, a DN is an ordered sequence of 1 or more RDN's. DN's do not have > > to be "absolute", they can be any subset of a "path sequence". A DN > > with exactly one RDN is equivalent to an RDN, a special case). > > > > I will change the parameter to be a DN since that is what was the > > original intent. > > > > All good suggestions, a revised patch will follow shortly. > > Hmm... I'm rethinking this. The suggestion of adding a new DN parameter > is the right thing to do and I tried to implement it, but as I was > working I realized I needed to touch a fair amount of code to support > the new parameter type and to modify multiple places in the migration > code to work with the new type. 
That's more code changes at the very > last minute for this release than I'm comfortable with, we're too close > the freeze date for invasive modifications. Ok, I agree with this approach. Lets fix just points 1) and 3) in my intial review. Martin From ohamada at redhat.com Tue Apr 10 16:56:51 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 10 Apr 2012 18:56:51 +0200 Subject: [Freeipa-devel] [PATCH] 21 Unable to rename permission object Message-ID: <4F846653.8070509@redhat.com> https://fedorahosted.org/freeipa/ticket/2571 The update was failing because of the case insensitivity of permission object DN. -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-ohamada-21-Unable-to-rename-permission-object.patch Type: text/x-patch Size: 1960 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 10 17:07:15 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 19:07:15 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F844BC4.2020109@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> Message-ID: <1334077635.16034.20.camel@balmora.brq.redhat.com> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: > On 10.4.2012 16:00, Petr Viktorin wrote: > > I'm aware that we have backwards compatibility requirements so we have > > to stick with unfortunate decisions, but I wanted you to know what I > > think. Please tell me I'm wrong! > > > > > > > > It is not clear what --{set,add,del}attr and friends should do. On the > > one hand they should be powerful -- presumably as powerful as > > ldapmodify. On the other hand they should be checked to ensure they > > can't be used to break the system. These requirements are contradictory. > > And in either case we're doing it wrong: > > - If they should be all-powerful, we shouldn't validate them. > > - If they shouldn't break the system we can just disable them for > > IPA-managed attributes. My understanding is that they were originally > > only added for attributes IPA doesn't know about. People can still use > > ldapmodify to bypass validation if they want. > > - If somewhere in between, we need to clearly define what they should > > do, instead of drawing the line ad-hoc based on individual details we > > forgot about, as tickets come from QE. > > > > > > I would hope people won't use --setattr for IPA-managed attributes. > > Which would however mean we won't get much community testing for all > > this extra code. > > > > > > Then, there's an unfortunate detail in IPA implementation: attribute > > Params need to be cloned to method objects (Create, Update, etc.) to > > work properly (e.g. get the `attribute` flag set). If they are marked > > no_update, they don't get cloned, so they don't work properly. > > Yet --setattr apparently needs to be able to update and validate > > attributes marked no_update (this ties to the confusing requirements on > > --setattr I already mentioned). This leads to doing the same work again, > > slightly differently. > > > > > > > > tl;dr: --setattr work on IPA-managed attributes (with validation) is a > > mistake. It adds no functionality, only complexity. We don't want people > > to use it. It will cost us a lot of maintenance work to support. > > > > > > Thank you for listening. A patch for the newest regression is coming up. > > > > I wholeheartedly agree. 
This is indeed a mine field and we need to make a look from at the issue from all sides before accepting a decision. > > Like you said above, we should either not validate --{set,add,del}attr > or don't allow them on known attributes. IMHO, validating attributes we manage in the same way for both --setattr and standard attrs is not that wrong. It is a good precaution, because if we let an unvalidated value in, it can make even a bigger mess later in our pre_callbacks or post_callbacks where we assume that at this point everything is valid. If somebody wants to modify attributes in an uncontrolled, unvalidated way, he is free to use ldapmodify or other tool to play with raw LDAP values. Without our guarantee of course. But if he chooses to use our --{set,add,del}attr we should at least help him to not shoot himself to the leg and validate/normalize/encode the value. I don't know how many users use this API, but removing a support for all managed attributes seems as a big compatibility break to me. Martin From pviktori at redhat.com Tue Apr 10 17:07:35 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 19:07:35 +0200 Subject: [Freeipa-devel] [PATCH 68] text unit test should validate using installed mo file In-Reply-To: <4F82FF27.8010808@redhat.com> References: <4F71D00E.2070907@redhat.com> <4F71D58E.5060507@redhat.com> <4F71FF85.7090102@redhat.com> <4F7223AF.7030003@redhat.com> <4F72CE84.6010407@redhat.com> <4F750137.9010600@redhat.com> <4F75ADD7.4020404@redhat.com> <4F82FF27.8010808@redhat.com> Message-ID: <4F8468D7.4000608@redhat.com> On 04/09/2012 05:24 PM, John Dennis wrote: > On 03/30/2012 08:57 AM, Petr Viktorin wrote: >> On 03/30/2012 02:41 AM, John Dennis wrote: >>> On 03/28/2012 04:40 AM, Petr Viktorin wrote: >>>> Can install/po/Makefile just call test_i18n.py from the tests/ tree? It >>>> doesn't import any IPA code so there's no need to set sys.path in this >>>> case (though there'd have to be a comment saying we depend on this). >>>> In the other case, unit tests, the path is already set by Nose. >>>> Also the file would have to be renamed so nose doesn't pick it up as a >>>> test module. >>> >>> Good idea. I moved test_i18n.py to tests/i18n.py. I was reluctant about >>> moving the file, but that was without merit, it works better this way. >> >> The downside is that the file now looks like a test utility module. It >> could use a comment near the imports saying that it's also called as a >> script, and that it shouldn't import IPA code (this could silently use >> the system-installed version of IPA, or crash if it's not there). >> Alternatively, set PYTHONPATH in the Makefile. >> >>> I also removed the superfluous comment in Makefile.in you pointed out. >>> >>> When I was exercising the code I noticed the validation code was not >>> treating msgid's from C code correctly (we do have some C code in the >>> client area). That required a much more nuanced parsing the format >>> conversion specifiers to correctly identify what was a positional format >>> specifier vs. an indexed format specifier. The new version of the >>> i18n.py includes the function parse_printf_fmt() and get_prog_langs() to >>> identify the source programming language. >> >> More non-trivial code without tests. This makes me worry. But, tests for >> this can be added later I guess. 
>> >>> Two more patches will follow shortly, one which adds validation when >>> "make lint" is run and a patch to correct the problems it found in the C >>> code strings which did not used indexed format specifiers. > > revised patch with comment about imports attached. > > ACK -- Petr? From pviktori at redhat.com Tue Apr 10 17:25:12 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 10 Apr 2012 19:25:12 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <1334077635.16034.20.camel@balmora.brq.redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> Message-ID: <4F846CF8.3020708@redhat.com> On 04/10/2012 07:07 PM, Martin Kosek wrote: > On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >> On 10.4.2012 16:00, Petr Viktorin wrote: >>> I'm aware that we have backwards compatibility requirements so we have >>> to stick with unfortunate decisions, but I wanted you to know what I >>> think. Please tell me I'm wrong! >>> >>> >>> >>> It is not clear what --{set,add,del}attr and friends should do. On the >>> one hand they should be powerful -- presumably as powerful as >>> ldapmodify. On the other hand they should be checked to ensure they >>> can't be used to break the system. These requirements are contradictory. >>> And in either case we're doing it wrong: >>> - If they should be all-powerful, we shouldn't validate them. >>> - If they shouldn't break the system we can just disable them for >>> IPA-managed attributes. My understanding is that they were originally >>> only added for attributes IPA doesn't know about. People can still use >>> ldapmodify to bypass validation if they want. >>> - If somewhere in between, we need to clearly define what they should >>> do, instead of drawing the line ad-hoc based on individual details we >>> forgot about, as tickets come from QE. >>> >>> >>> I would hope people won't use --setattr for IPA-managed attributes. >>> Which would however mean we won't get much community testing for all >>> this extra code. >>> >>> >>> Then, there's an unfortunate detail in IPA implementation: attribute >>> Params need to be cloned to method objects (Create, Update, etc.) to >>> work properly (e.g. get the `attribute` flag set). If they are marked >>> no_update, they don't get cloned, so they don't work properly. >>> Yet --setattr apparently needs to be able to update and validate >>> attributes marked no_update (this ties to the confusing requirements on >>> --setattr I already mentioned). This leads to doing the same work again, >>> slightly differently. >>> >>> >>> >>> tl;dr: --setattr work on IPA-managed attributes (with validation) is a >>> mistake. It adds no functionality, only complexity. We don't want people >>> to use it. It will cost us a lot of maintenance work to support. >>> >>> >>> Thank you for listening. A patch for the newest regression is coming up. >>> >> >> I wholeheartedly agree. > > This is indeed a mine field and we need to make a look from at the issue > from all sides before accepting a decision. Yes. >> >> Like you said above, we should either not validate --{set,add,del}attr >> or don't allow them on known attributes. > > IMHO, validating attributes we manage in the same way for both --setattr > and standard attrs is not that wrong. 
It is a good precaution, because > if we let an unvalidated value in, it can make even a bigger mess later > in our pre_callbacks or post_callbacks where we assume that at this > point everything is valid. Then we should validate *exactly* the same way, including not allowing no_update attributes to be updated. > If somebody wants to modify attributes in an uncontrolled, unvalidated > way, he is free to use ldapmodify or other tool to play with raw LDAP > values. Without our guarantee of course. That's clear. > But if he chooses to use our --{set,add,del}attr we should at least help > him to not shoot himself to the leg and validate/normalize/encode the > value. I don't know how many users use this API, but removing a support > for all managed attributes seems as a big compatibility break to me. Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, 2407, 2408), so I don't think many people used it. Anyway, what's the use case? Why would the user want to use --setattr for validated attributes? Is our regular API lacking something? -- Petr? From rcritten at redhat.com Tue Apr 10 17:48:30 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 13:48:30 -0400 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F846CF8.3020708@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> Message-ID: <4F84726E.90504@redhat.com> Petr Viktorin wrote: > On 04/10/2012 07:07 PM, Martin Kosek wrote: >> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>> I'm aware that we have backwards compatibility requirements so we have >>>> to stick with unfortunate decisions, but I wanted you to know what I >>>> think. Please tell me I'm wrong! >>>> >>>> >>>> >>>> It is not clear what --{set,add,del}attr and friends should do. On the >>>> one hand they should be powerful -- presumably as powerful as >>>> ldapmodify. On the other hand they should be checked to ensure they >>>> can't be used to break the system. These requirements are >>>> contradictory. >>>> And in either case we're doing it wrong: >>>> - If they should be all-powerful, we shouldn't validate them. >>>> - If they shouldn't break the system we can just disable them for >>>> IPA-managed attributes. My understanding is that they were originally >>>> only added for attributes IPA doesn't know about. People can still use >>>> ldapmodify to bypass validation if they want. >>>> - If somewhere in between, we need to clearly define what they should >>>> do, instead of drawing the line ad-hoc based on individual details we >>>> forgot about, as tickets come from QE. >>>> >>>> >>>> I would hope people won't use --setattr for IPA-managed attributes. >>>> Which would however mean we won't get much community testing for all >>>> this extra code. >>>> >>>> >>>> Then, there's an unfortunate detail in IPA implementation: attribute >>>> Params need to be cloned to method objects (Create, Update, etc.) to >>>> work properly (e.g. get the `attribute` flag set). If they are marked >>>> no_update, they don't get cloned, so they don't work properly. >>>> Yet --setattr apparently needs to be able to update and validate >>>> attributes marked no_update (this ties to the confusing requirements on >>>> --setattr I already mentioned). This leads to doing the same work >>>> again, >>>> slightly differently. 
>>>> >>>> >>>> >>>> tl;dr: --setattr work on IPA-managed attributes (with validation) is a >>>> mistake. It adds no functionality, only complexity. We don't want >>>> people >>>> to use it. It will cost us a lot of maintenance work to support. >>>> >>>> >>>> Thank you for listening. A patch for the newest regression is coming >>>> up. >>>> >>> >>> I wholeheartedly agree. >> >> This is indeed a mine field and we need to make a look from at the issue >> from all sides before accepting a decision. > > Yes. > >>> >>> Like you said above, we should either not validate --{set,add,del}attr >>> or don't allow them on known attributes. >> >> IMHO, validating attributes we manage in the same way for both --setattr >> and standard attrs is not that wrong. It is a good precaution, because >> if we let an unvalidated value in, it can make even a bigger mess later >> in our pre_callbacks or post_callbacks where we assume that at this >> point everything is valid. > > Then we should validate *exactly* the same way, including not allowing > no_update attributes to be updated. > >> If somebody wants to modify attributes in an uncontrolled, unvalidated >> way, he is free to use ldapmodify or other tool to play with raw LDAP >> values. Without our guarantee of course. > > That's clear. > >> But if he chooses to use our --{set,add,del}attr we should at least help >> him to not shoot himself to the leg and validate/normalize/encode the >> value. I don't know how many users use this API, but removing a support >> for all managed attributes seems as a big compatibility break to me. > > Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, > 2407, 2408), so I don't think many people used it. > > Anyway, what's the use case? Why would the user want to use --setattr > for validated attributes? Is our regular API lacking something? > I think Honza had the best suggestion, enhance the standard API to handle multi-valued attributes better and reduce the need for set/add/delattr. At some future point we can consider dropping them when updating something already covered by a Param. We're stuck with them for now. rob From mkosek at redhat.com Tue Apr 10 17:53:03 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 19:53:03 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F846CF8.3020708@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> Message-ID: <1334080383.2282.9.camel@priserak> On Tue, 2012-04-10 at 19:25 +0200, Petr Viktorin wrote: > On 04/10/2012 07:07 PM, Martin Kosek wrote: > > On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: > >> On 10.4.2012 16:00, Petr Viktorin wrote: [snip] > >> Like you said above, we should either not validate --{set,add,del}attr > >> or don't allow them on known attributes. > > > > IMHO, validating attributes we manage in the same way for both --setattr > > and standard attrs is not that wrong. It is a good precaution, because > > if we let an unvalidated value in, it can make even a bigger mess later > > in our pre_callbacks or post_callbacks where we assume that at this > > point everything is valid. > > Then we should validate *exactly* the same way, including not allowing > no_update attributes to be updated. That makes some sense, I could agree with that. 
> > > If somebody wants to modify attributes in an uncontrolled, unvalidated > > way, he is free to use ldapmodify or other tool to play with raw LDAP > > values. Without our guarantee of course. > > That's clear. > > > But if he chooses to use our --{set,add,del}attr we should at least help > > him to not shoot himself to the leg and validate/normalize/encode the > > value. I don't know how many users use this API, but removing a support > > for all managed attributes seems as a big compatibility break to me. > > Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, > 2407, 2408), so I don't think many people used it. > > Anyway, what's the use case? Why would the user want to use --setattr > for validated attributes? Is our regular API lacking something? > 1) Currently, --{set,add,del}attr is the only way to add/remove values to/from multivalue LDAP attributes without having to re-state all other values. This point may not be that critical if we introduce other means to handle multivalued attribute as you proposed earlier. 2) I don't know if this is a real example, but --{set,add,del}attr can be used to create an interface with other tool or system working with LDAP. If such tool knows attribute names, it could add or modify objects in IPA managed LDAP without knowing our CLI names or parsing them from our metadata. This way, such tool could take advantage of all the power that IPA has to sanitize/validate/check LDAP values it adds or modifies. Martin From sbingram at gmail.com Tue Apr 10 17:53:37 2012 From: sbingram at gmail.com (Stephen Ingram) Date: Tue, 10 Apr 2012 10:53:37 -0700 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F846CF8.3020708@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> Message-ID: On Tue, Apr 10, 2012 at 10:25 AM, Petr Viktorin wrote: > On 04/10/2012 07:07 PM, Martin Kosek wrote: >> >> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>> >>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>> >>>> I'm aware that we have backwards compatibility requirements so we have >>>> to stick with unfortunate decisions, but I wanted you to know what I >>>> think. Please tell me I'm wrong! >>>> >>>> >>>> >>>> It is not clear what --{set,add,del}attr and friends should do. On the >>>> one hand they should be powerful -- presumably as powerful as >>>> ldapmodify. On the other hand they should be checked to ensure they >>>> can't be used to break the system. These requirements are contradictory. >>>> And in either case we're doing it wrong: >>>> - If they should be all-powerful, we shouldn't validate them. >>>> - If they shouldn't break the system we can just disable them for >>>> IPA-managed attributes. My understanding is that they were originally >>>> only added for attributes IPA doesn't know about. People can still use >>>> ldapmodify to bypass validation if they want. >>>> - If somewhere in between, we need to clearly define what they should >>>> do, instead of drawing the line ad-hoc based on individual details we >>>> forgot about, as tickets come from QE. >>>> >>>> >>>> I would hope people won't use --setattr for IPA-managed attributes. >>>> Which would however mean we won't get much community testing for all >>>> this extra code. >>>> >>>> >>>> Then, there's an unfortunate detail in IPA implementation: attribute >>>> Params need to be cloned to method objects (Create, Update, etc.) 
to >>>> work properly (e.g. get the `attribute` flag set). If they are marked >>>> no_update, they don't get cloned, so they don't work properly. >>>> Yet --setattr apparently needs to be able to update and validate >>>> attributes marked no_update (this ties to the confusing requirements on >>>> --setattr I already mentioned). This leads to doing the same work again, >>>> slightly differently. >>>> >>>> >>>> >>>> tl;dr: --setattr work on IPA-managed attributes (with validation) is a >>>> mistake. It adds no functionality, only complexity. We don't want people >>>> to use it. It will cost us a lot of maintenance work to support. >>>> >>>> >>>> Thank you for listening. A patch for the newest regression is coming up. >>>> >>> >>> I wholeheartedly agree. >> >> >> This is indeed a mine field and we need to make a look from at the issue >> from all sides before accepting a decision. > > > Yes. > > >>> >>> Like you said above, we should either not validate --{set,add,del}attr >>> or don't allow them on known attributes. >> >> >> IMHO, validating attributes we manage in the same way for both --setattr >> and standard attrs is not that wrong. It is a good precaution, because >> if we let an unvalidated value in, it can make even a bigger mess later >> in our pre_callbacks or post_callbacks where we assume that at this >> point everything is valid. > > > Then we should validate *exactly* the same way, including not allowing > no_update attributes to be updated. > > >> If somebody wants to modify attributes in an uncontrolled, unvalidated >> way, he is free to use ldapmodify or other tool to play with raw LDAP >> values. Without our guarantee of course. > > > That's clear. > > >> But if he chooses to use our --{set,add,del}attr we should at least help >> him to not shoot himself to the leg and validate/normalize/encode the >> value. I don't know how many users use this API, but removing a support >> for all managed attributes seems as a big compatibility break to me. > > > Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, 2407, > 2408), so I don't think many people used it. > > Anyway, what's the use case? Why would the user want to use --setattr for > validated attributes? Is our regular API lacking something? As a user of the IPA I thought I would throw in our use scenario. Let me first say that I come to IPA having used Kerberos/LDAP solution previously so I *really* appreciate what you've been able to do here. That said, one of the things we are using IPA for is email configuration. Although the mail attribute is available, we need to separate by attribute the primary address from the aliases (for searching the directory) so we need mailAlternateAddress. Luckily, it is already in the schema. All we have to do is add the mailRecipient objectclass. It's really nice to be able to add (and hopefully soon remove) one objectclass without having to go back to ldap commands. I certainly wouldn't anticipate changing or removing any IPA attributes, but it sure is nice to have this feature for extra attributes. Steve From dpal at redhat.com Tue Apr 10 17:56:03 2012 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 10 Apr 2012 13:56:03 -0400 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. 
In-Reply-To: <4F84726E.90504@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> Message-ID: <4F847433.3060703@redhat.com> On 04/10/2012 01:48 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 04/10/2012 07:07 PM, Martin Kosek wrote: >>> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>>> I'm aware that we have backwards compatibility requirements so we >>>>> have >>>>> to stick with unfortunate decisions, but I wanted you to know what I >>>>> think. Please tell me I'm wrong! >>>>> >>>>> >>>>> >>>>> It is not clear what --{set,add,del}attr and friends should do. On >>>>> the >>>>> one hand they should be powerful -- presumably as powerful as >>>>> ldapmodify. On the other hand they should be checked to ensure they >>>>> can't be used to break the system. These requirements are >>>>> contradictory. >>>>> And in either case we're doing it wrong: >>>>> - If they should be all-powerful, we shouldn't validate them. >>>>> - If they shouldn't break the system we can just disable them for >>>>> IPA-managed attributes. My understanding is that they were originally >>>>> only added for attributes IPA doesn't know about. People can still >>>>> use >>>>> ldapmodify to bypass validation if they want. >>>>> - If somewhere in between, we need to clearly define what they should >>>>> do, instead of drawing the line ad-hoc based on individual details we >>>>> forgot about, as tickets come from QE. >>>>> >>>>> >>>>> I would hope people won't use --setattr for IPA-managed attributes. >>>>> Which would however mean we won't get much community testing for all >>>>> this extra code. >>>>> >>>>> >>>>> Then, there's an unfortunate detail in IPA implementation: attribute >>>>> Params need to be cloned to method objects (Create, Update, etc.) to >>>>> work properly (e.g. get the `attribute` flag set). If they are marked >>>>> no_update, they don't get cloned, so they don't work properly. >>>>> Yet --setattr apparently needs to be able to update and validate >>>>> attributes marked no_update (this ties to the confusing >>>>> requirements on >>>>> --setattr I already mentioned). This leads to doing the same work >>>>> again, >>>>> slightly differently. >>>>> >>>>> >>>>> >>>>> tl;dr: --setattr work on IPA-managed attributes (with validation) >>>>> is a >>>>> mistake. It adds no functionality, only complexity. We don't want >>>>> people >>>>> to use it. It will cost us a lot of maintenance work to support. >>>>> >>>>> >>>>> Thank you for listening. A patch for the newest regression is coming >>>>> up. >>>>> >>>> >>>> I wholeheartedly agree. >>> >>> This is indeed a mine field and we need to make a look from at the >>> issue >>> from all sides before accepting a decision. >> >> Yes. >> >>>> >>>> Like you said above, we should either not validate --{set,add,del}attr >>>> or don't allow them on known attributes. >>> >>> IMHO, validating attributes we manage in the same way for both >>> --setattr >>> and standard attrs is not that wrong. It is a good precaution, because >>> if we let an unvalidated value in, it can make even a bigger mess later >>> in our pre_callbacks or post_callbacks where we assume that at this >>> point everything is valid. >> >> Then we should validate *exactly* the same way, including not allowing >> no_update attributes to be updated. 
>> >>> If somebody wants to modify attributes in an uncontrolled, unvalidated >>> way, he is free to use ldapmodify or other tool to play with raw LDAP >>> values. Without our guarantee of course. >> >> That's clear. >> >>> But if he chooses to use our --{set,add,del}attr we should at least >>> help >>> him to not shoot himself to the leg and validate/normalize/encode the >>> value. I don't know how many users use this API, but removing a support >>> for all managed attributes seems as a big compatibility break to me. >> >> Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, >> 2407, 2408), so I don't think many people used it. >> >> Anyway, what's the use case? Why would the user want to use --setattr >> for validated attributes? Is our regular API lacking something? >> > > I think Honza had the best suggestion, enhance the standard API to > handle multi-valued attributes better and reduce the need for > set/add/delattr. At some future point we can consider dropping them > when updating something already covered by a Param. We're stuck with > them for now. > The use case I would see is the extensibility. Say a customer wants to extend a schema and add an attribute X to the user object. He would still be able to manage users using CLI without writing a plugin for the new attribute. Yes plugin is preferred but not everybody would go for it. So in absence of the plugin we can't do validation but we still should function and be able to deal with this attribute via CLI (and UI if this attribute is enabled for UI via UI configuration). I am generally against dropping this interface. But expectations IMO should be: 1) If the attribute is managed by us with setattr and friends it should behave in the same way as via the direct add/mod/del command 2) If attribute is not managed it should not provide any guarantees and act in the same way as via LDAP Hope this helps. > rob > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Tue Apr 10 18:06:39 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 14:06:39 -0400 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F843F87.4000708@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> <4F840A2D.5070309@redhat.com> <4F8439BB.7090800@redhat.com> <4F843F87.4000708@redhat.com> Message-ID: <4F8476AF.1070105@redhat.com> Petr Viktorin wrote: > On 04/10/2012 03:46 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 04/09/2012 03:55 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> https://fedorahosted.org/freeipa/ticket/2585: ipa permission-add >>>>> throws >>>>> internal server error when name contains '<', '>' or other special >>>>> characters. >>>>> >>>>> The problem is, of course, proper escaping; not only in DNs but >>>>> also in >>>>> ACIs. Right now we don't really do either. >>>>> >>>>> This patch is just a simple workaround: disallow anything except >>>>> known-good characters. It's just names, so no functionality is lost. >>>>> >>>>> All tickets for April are now taken, so unless a new one comes my way, >>>>> I'll take a dive into the code and fix it properly. 
This could take >>>>> some >>>>> time and would mean somewhat larger changes. >>>> >>>> Is there a reason you didn't use pattern/pattern_errmsg instead? >>>> >>>> You'd need to change the regex as patterns use re.match rather than >>>> re.search. >>>> >>>> rob >>> >>> Right, that makes more sense. >>> It changes API.txt though. Do I need to bump VERSION in this case? >>> Also, is there a reason pattern_errmsg is included in API.txt? >> >> Yes, please bump VERSION. > > Attaching updated patch. > >> pattern_errmsg should probably be removed from API.txt. We've been >> paring back the amount of data to validate slowly as we've run into >> these questionable items. Please open a ticket for this. > > Done: https://fedorahosted.org/freeipa/ticket/2619 > I made a minor change. VERSION shoudl just update the minor version number. I changed this, ACK, pushed to master and ipa-2-2 rob From rcritten at redhat.com Tue Apr 10 19:02:25 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 15:02:25 -0400 Subject: [Freeipa-devel] [PATCH] 0035 Convert --setattr values for attributes marked no_update In-Reply-To: <4F846254.3070200@redhat.com> References: <4F846254.3070200@redhat.com> Message-ID: <4F8483C1.8020805@redhat.com> Petr Viktorin wrote: > Fix --setattr to work on no_update params. > > https://fedorahosted.org/freeipa/ticket/2616 > ACK, pushed to master and ipa-2-2 From rcritten at redhat.com Tue Apr 10 19:21:22 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 15:21:22 -0400 Subject: [Freeipa-devel] [PATCH] 120 Removal of memberofindirect_permissons from privileges In-Reply-To: <4F84643A.7060609@redhat.com> References: <4F84643A.7060609@redhat.com> Message-ID: <4F848832.7090604@redhat.com> Petr Vobornik wrote: > Problem: > In the Privilege page, can list Permissions. This "Shows Results" for > "Direct > Membership". But there is an option to list this for "Indirect Membership" > also. > There isn't a way to nest permissions, so this option is not needed. > > Solution: > This patch removes the memberofindirect_persmission definition from > server plugin. It fixes the problem in Web UI. > > https://fedorahosted.org/freeipa/ticket/2611 ACK, pushed to master and ipa-2-2 From rcritten at redhat.com Tue Apr 10 19:35:30 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 15:35:30 -0400 Subject: [Freeipa-devel] [PATCH] 21 Unable to rename permission object In-Reply-To: <4F846653.8070509@redhat.com> References: <4F846653.8070509@redhat.com> Message-ID: <4F848B82.4040606@redhat.com> Ondrej Hamada wrote: > https://fedorahosted.org/freeipa/ticket/2571 > > The update was failing because of the case insensitivity of permission > object DN. Can you wrap the error in _() and add a couple of test cases for this, say one for the case insensitivity and one for empty rename attempt? 
rob From rcritten at redhat.com Tue Apr 10 20:41:09 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 16:41:09 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F7C3864.3070505@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> Message-ID: <4F849AE5.5030409@redhat.com> Petr Viktorin wrote: > On 03/30/2012 11:00 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 03/26/2012 05:35 PM, Petr Viktorin wrote: >>>> On 03/26/2012 04:54 PM, Rob Crittenden wrote: >>>>> >>>>> Some minor compliants. >>>> >>>> >>>> Ideally, there would be a routine that sets up the logging and handles >>>> command-line arguments in some uniform way (which is also needed before >>>> logging setup to detect ipa-server-install --uninstall). >>>> The original patch did the common logging setup, and I hacked around >>>> the >>>> install/uninstall problem too. >>>> I guess I overdid it when I simplified the patch. >>>> I'm somewhat confused about the scope, so bear with me as I clarify >>>> what >>>> you mean. >>>> >>>> >>>>> If you abort the installation you get this somewhat unnerving error: >>>>> >>>>> Continue to configure the system with these values? [no]: >>>>> ipa : ERROR ipa-server-install failed, SystemExit: Installation >>>>> aborted >>>>> Installation aborted >>>>> >>>>> ipa-ldap-updater is the same: >>>>> >>>>> # ipa-ldap-updater >>>>> [2012-03-26T14:53:41Z ipa] : ipa-ldap-updater failed, >>>>> SystemExit: >>>>> IPA is not configured on this system. >>>>> IPA is not configured on this system. >>>>> >>>>> and ipa-upgradeconfig >>>>> >>>>> $ ipa-upgradeconfig >>>>> [2012-03-26T14:54:05Z ipa] : ipa-upgradeconfig failed, >>>>> SystemExit: >>>>> You must be root to run this script. >>>>> >>>>> >>>>> You must be root to run this script. >>>>> >>>>> I'm guessing that the issue is that the log file isn't opened yet. >>>> > >>>>> It would be nice if the logging would be confined to just the log. >>>> >>>> >>>> If I understand you correctly, the code should check if logging has >>>> been >>>> configured already, and if not, skip displaying the message? >>>> >>>> >>>>> When uninstalling you get the message 'ipa-server-install successful'. >>>>> This is a little odd as well. >>>> >>>> ipa-server-install is the name of the command. Wontfix for now, unless >>>> you disagree strongly. >>>> >>>> >>> >>> Updated patch: only log if logging has been configured (detected by >>> looking at the root logger's handlers), and changed the message to ?The >>> ipa-server-install command has succeeded/failed?. >> >> Works much better thanks. Just one request. When you created final_log() >> you show less information than you did in earlier patches. It is nice >> seeing the SystemExit failure. Can you do something like this (basically >> cut-n-pasted from v05)? >> >> diff --git a/ipaserver/install/installutils.py >> b/ipaserver/install/installutils. >> py >> index 851b58d..ca82a1b 100644 >> --- a/ipaserver/install/installutils.py >> +++ b/ipaserver/install/installutils.py >> @@ -721,15 +721,15 @@ def script_context(operation_name): >> # Only log if logging was already configured >> # TODO: Do this properly (e.g. 
configure logging before the try/except) >> if log_mgr.handlers.keys() != ['console']: >> - root_logger.info(template, operation_name) >> + root_logger.info(template) >> try: >> yield >> except BaseException, e: >> if isinstance(e, SystemExit) and (e.code is None or e.code == 0): >> # Not an error after all >> - final_log('The %s command was successful') >> + final_log('The %s command was successful' % operation_name) >> else: >> - final_log('The %s command failed') >> + final_log('%s failed, %s: %s' % (operation_name, type(e).__name__, >> e)) >> raise >> else: >> final_log('The %s command was successful') >> >> This looks like: >> >> 2012-03-30T20:56:53Z INFO ipa-dns-install failed, SystemExit: >> DNS is already configured in this IPA server. >> >> rob > > Fixed. > Hate to do this to you but I've found a few more issues. I basically went down the list and ran all the commands in various conditions. Some don't open any logs at all so the output gets written twice, like ipa-replica-prepare and ipa-replica-manage: # ipa-replica-manage del foo Directory Manager password: ipa: INFO: The ipa-replica-manage command failed, SystemExit: 'pony.greyoak.com' has no replication agreement for 'foo' 'pony.greyoak.com' has no replication agreement for 'foo' Same with ipa-csreplica-manage. # ipa-replica-prepare foo Directory Manager (existing master) password: ipa: INFO: The ipa-replica-prepare command failed, SystemExit: The password provided is incorrect for LDAP server pony.greyoak.com The password provided is incorrect for LDAP server pony.greyoak.com rob From mkosek at redhat.com Tue Apr 10 20:48:25 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Apr 2012 22:48:25 +0200 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1333700547.14740.18.camel@balmora.brq.redhat.com> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> <4F7B4F64.7070703@redhat.com> <4F7E04FB.4070200@redhat.com> <1333700547.14740.18.camel@balmora.brq.redhat.com> Message-ID: <1334090905.2282.20.camel@priserak> On Fri, 2012-04-06 at 10:22 +0200, Martin Kosek wrote: > On Thu, 2012-04-05 at 16:47 -0400, Rob Crittenden wrote: > > Rob Crittenden wrote: > > > Martin Kosek wrote: > > >> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: > > >>> Rob Crittenden wrote: > > >>>> Martin Kosek wrote: > > >>>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: > > >>>>>> Rob Crittenden wrote: > > >>>>>>> Martin Kosek wrote: > > >>>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: > > >>>>>>>>> Certmonger will currently automatically renew server > > >>>>>>>>> certificates but > > >>>>>>>>> doesn't restart the services so you can still end up with expired > > >>>>>>>>> certificates if you services never restart. > > >>>>>>>>> > > >>>>>>>>> This patch registers are restart command with certmonger so the > > >>>>>>>>> IPA > > >>>>>>>>> services will automatically be restarted to get the updated cert. > > >>>>>>>>> > > >>>>>>>>> Easy to test. 
Install IPA then resubmit the current server > > >>>>>>>>> certs and > > >>>>>>>>> watch the services restart: > > >>>>>>>>> > > >>>>>>>>> # ipa-getcert list > > >>>>>>>>> > > >>>>>>>>> Find the ID for either your dirsrv or httpd instance > > >>>>>>>>> > > >>>>>>>>> # ipa-getcert resubmit -i > > >>>>>>>>> > > >>>>>>>>> Watch /var/log/httpd/error_log or > > >>>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors > > >>>>>>>>> to see the service restart. > > >>>>>>>>> > > >>>>>>>>> rob > > >>>>>>>> > > >>>>>>>> What about current instances - can we/do we want to update > > >>>>>>>> certmonger > > >>>>>>>> tracking so that their instances are restarted as well? > > >>>>>>>> > > >>>>>>>> Anyway, I found few issues SELinux issues with the patch: > > >>>>>>>> > > >>>>>>>> 1) # rpm -Uvh freeipa-* > > >>>>>>>> Preparing... ########################################### [100%] > > >>>>>>>> 1:freeipa-python ########################################### [ 20%] > > >>>>>>>> 2:freeipa-client ########################################### [ 40%] > > >>>>>>>> 3:freeipa-admintools ########################################### [ > > >>>>>>>> 60%] > > >>>>>>>> 4:freeipa-server ########################################### [ 80%] > > >>>>>>>> /usr/bin/chcon: failed to change context of > > >>>>>>>> `/usr/lib64/ipa/certmonger' to > > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > > >>>>>>>> argument > > >>>>>>>> /usr/bin/chcon: failed to change context of > > >>>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to > > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > > >>>>>>>> argument > > >>>>>>>> /usr/bin/chcon: failed to change context of > > >>>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to > > >>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid > > >>>>>>>> argument > > >>>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) > > >>>>>>>> scriptlet failed, exit status 1 > > >>>>>>>> 5:freeipa-server-selinux > > >>>>>>>> ########################################### > > >>>>>>>> [100%] > > >>>>>>>> > > >>>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux > > >>>>>>>> policy: > > >>>>>>>> > > >>>>>>>> selinux-policy-3.10.0-80.fc16.noarch > > >>>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch > > >>>>>>>> > > >>>>>>>> If we need a higher SELinux version, we should bump the required > > >>>>>>>> package > > >>>>>>>> version spec file. > > >>>>>>> > > >>>>>>> Yeah, waiting on it to be backported. > > >>>>>>> > > >>>>>>>> > > >>>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until > > >>>>>>>> restorecon or system relabel occurs. I think we should make it > > >>>>>>>> persistent and enforce this type in our SELinux policy and > > >>>>>>>> rather call > > >>>>>>>> restorecon instead of chcon > > >>>>>>> > > >>>>>>> That's a good idea, why didn't I think of that :-( > > >>>>>> > > >>>>>> Ah, now I remember, it will be handled by selinux-policy. I would > > >>>>>> have > > >>>>>> used restorecon here but since the policy isn't there yet this seemed > > >>>>>> like a good idea. > > >>>>>> > > >>>>>> I'm trying to find out the status of this new policy, it may only > > >>>>>> make > > >>>>>> it into F-17. > > >>>>>> > > >>>>>> rob > > >>>>> > > >>>>> Ok. 
But if this policy does not go in F-16 and if we want this fix in > > >>>>> F16 release too, I guess we would have to implement both approaches in > > >>>>> our spec file: > > >>>>> > > >>>>> 1) When on F16, include SELinux policy for restart scripts + run > > >>>>> restorecon > > >>>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) > > >>>>> > > >>>>> Martin > > >>>>> > > >>>> > > >>>> Won't work without updated selinux-policy. Without the permission for > > >>>> certmonger to execute the commands things will still fail (just in > > >>>> really non-obvious and far in the future ways). > > >>>> > > >>>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. > > >>>> > > >>>> rob > > >>> > > >>> Updated patch which works on F-17. > > >>> > > >>> rob > > >> > > >> What about F-16? The restart scripts won't work with enabled enforcing > > >> and will raise AVCs. Maybe we really need to deliver our own SELinux > > >> policy allowing it on F-16. > > > > > > Right, I don't see this working on F-16. I don't really want to carry > > > this type of policy. It goes beyond marking a few files as certmonger_t, > > > it is going to let certmonger execute arbitrary scripts. This is best > > > left to the SELinux team who understand the consequences better. > > > > > >> > > >> I also found an issue with the restart scripts: > > >> > > >> 1) restart_dirsrv: this won't work with systemd: > > >> > > >> # /sbin/service dirsrv restart > > >> Redirecting to /bin/systemctl restart dirsrv.service > > >> Failed to issue method call: Unit dirsrv.service failed to load: No such > > >> file or directory. See system logs and 'systemctl status dirsrv.service' > > >> for details. > > > > > > Wouldn't work so hot for sysV either as we'd be restarting all > > > instances. I'll take a look. > > > > > >> We would need to pass an instance of IPA dirsrv for this to work. > > >> > > >> 2) restart_httpd: > > >> Is reload enough for httpd to pull a new certificate? Don't we need a > > >> full restart? If reload is enough, I think the command should be named > > >> reload_httpd > > > > > > Yes, it causes the modules to be reloaded which will reload the NSS > > > database, that's all we need. I named it this way for consistency. I can > > > rename it, though I doubt it would cause any confusion either way. > > > > > > rob > > > > revised patch. > > > > rob > > Thanks, this is better, dirsrv restart is now fixed. I still have few > issues: > > 1) Wrong command for httpd certs (should be reload_httpd): > > - db.track_server_cert(nickname, self.principal, > db.passwd_fname) > + db.track_server_cert(nickname, self.principal, > db.passwd_fname, 'restart_httpd') > > - db.track_server_cert("Server-Cert", self.principal, > db.passwd_fname) > + db.track_server_cert("Server-Cert", self.principal, > db.passwd_fname, 'restart_httpd') > > > 2) What about current certmonger monitored certs? We can use the command > Nalin suggested to modify existing monitored certs to set up a restart > command: > > # ipa-getcert list > ... 
> Request ID '20120404083924': > status: MONITORING > stuck: no > key pair storage: > type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS > Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' > certificate: > type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS > Certificate DB' > CA: IPA > issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM > subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM > expires: 2014-04-05 08:39:24 UTC > eku: id-kp-serverAuth,id-kp-clientAuth > command: > track: yes > auto-renew: yes > > # ipa-getcert start-tracking -i 20120404083924 -C /usr/lib64/ipa/certmonger/reload_httpd > Request "20120404083924" modified. > > # ipa-getcert list > Request ID '20120404083924': > status: MONITORING > stuck: no > key pair storage: > type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS > Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' > certificate: > type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS > Certificate DB' > CA: IPA > issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM > subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM > expires: 2014-04-05 08:39:24 UTC > eku: id-kp-serverAuth,id-kp-clientAuth > command: /usr/lib64/ipa/certmonger/reload_httpd > track: yes > auto-renew: yes > > 3) The new dirsrv service restart command does not work either with > systemd: > # service dirsrv restart IDM-LAB-BOS-REDHAT-COM > > I think the approach here would be to take advantage of our system > service management abstraction that ipactl uses. It will then work well > with SysV, systemd or any other future configurations or ports to other > Linux distributions. Example of the script which restarts the required > instance: > > # cat /tmp/restart_dirsrv > #!/usr/bin/python > > import sys > from ipapython import services as ipaservices > > try: > instance = sys.argv[1] > except IndexError: > instance = "" > > dirsrv = ipaservices.knownservices.dirsrv > dirsrv.restart(instance) > > # /tmp/restart_dirsrv IDM-LAB-BOS-REDHAT-COM > > Martin > I prepared a rebased and fixed patch which works on both SystemV and systemd systems, please check attached file. The patch does not include updating existing certs, it would need more time to implement a safe update that it is not available this late in the game. We may want to create a ticket to add a documentation about how to update existing certs + RFE for add a support of updating existing certs later. Martin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-certmonger-restarts.patch Type: text/x-patch Size: 11527 bytes Desc: not available URL: From dpal at redhat.com Tue Apr 10 21:20:46 2012 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 10 Apr 2012 17:20:46 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1334090905.2282.20.camel@priserak> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> <4F7B4F64.7070703@redhat.com> <4F7E04FB.4070200@redhat.com> <1333700547.14740.18.camel@balmora.brq.redhat.com> <1334090905.2282.20.camel@priserak> Message-ID: <4F84A42E.8040800@redhat.com> On 04/10/2012 04:48 PM, Martin Kosek wrote: > On Fri, 2012-04-06 at 10:22 +0200, Martin Kosek wrote: >> On Thu, 2012-04-05 at 16:47 -0400, Rob Crittenden wrote: >>> Rob Crittenden wrote: >>>> Martin Kosek wrote: >>>>> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: >>>>>> Rob Crittenden wrote: >>>>>>> Martin Kosek wrote: >>>>>>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >>>>>>>>> Rob Crittenden wrote: >>>>>>>>>> Martin Kosek wrote: >>>>>>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>>>>>>>>> Certmonger will currently automatically renew server >>>>>>>>>>>> certificates but >>>>>>>>>>>> doesn't restart the services so you can still end up with expired >>>>>>>>>>>> certificates if you services never restart. >>>>>>>>>>>> >>>>>>>>>>>> This patch registers are restart command with certmonger so the >>>>>>>>>>>> IPA >>>>>>>>>>>> services will automatically be restarted to get the updated cert. >>>>>>>>>>>> >>>>>>>>>>>> Easy to test. Install IPA then resubmit the current server >>>>>>>>>>>> certs and >>>>>>>>>>>> watch the services restart: >>>>>>>>>>>> >>>>>>>>>>>> # ipa-getcert list >>>>>>>>>>>> >>>>>>>>>>>> Find the ID for either your dirsrv or httpd instance >>>>>>>>>>>> >>>>>>>>>>>> # ipa-getcert resubmit -i >>>>>>>>>>>> >>>>>>>>>>>> Watch /var/log/httpd/error_log or >>>>>>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors >>>>>>>>>>>> to see the service restart. >>>>>>>>>>>> >>>>>>>>>>>> rob >>>>>>>>>>> What about current instances - can we/do we want to update >>>>>>>>>>> certmonger >>>>>>>>>>> tracking so that their instances are restarted as well? >>>>>>>>>>> >>>>>>>>>>> Anyway, I found few issues SELinux issues with the patch: >>>>>>>>>>> >>>>>>>>>>> 1) # rpm -Uvh freeipa-* >>>>>>>>>>> Preparing... 
########################################### [100%] >>>>>>>>>>> 1:freeipa-python ########################################### [ 20%] >>>>>>>>>>> 2:freeipa-client ########################################### [ 40%] >>>>>>>>>>> 3:freeipa-admintools ########################################### [ >>>>>>>>>>> 60%] >>>>>>>>>>> 4:freeipa-server ########################################### [ 80%] >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>>>>>>>>> scriptlet failed, exit status 1 >>>>>>>>>>> 5:freeipa-server-selinux >>>>>>>>>>> ########################################### >>>>>>>>>>> [100%] >>>>>>>>>>> >>>>>>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux >>>>>>>>>>> policy: >>>>>>>>>>> >>>>>>>>>>> selinux-policy-3.10.0-80.fc16.noarch >>>>>>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>>>>>>>>> >>>>>>>>>>> If we need a higher SELinux version, we should bump the required >>>>>>>>>>> package >>>>>>>>>>> version spec file. >>>>>>>>>> Yeah, waiting on it to be backported. >>>>>>>>>> >>>>>>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>>>>>>>>> restorecon or system relabel occurs. I think we should make it >>>>>>>>>>> persistent and enforce this type in our SELinux policy and >>>>>>>>>>> rather call >>>>>>>>>>> restorecon instead of chcon >>>>>>>>>> That's a good idea, why didn't I think of that :-( >>>>>>>>> Ah, now I remember, it will be handled by selinux-policy. I would >>>>>>>>> have >>>>>>>>> used restorecon here but since the policy isn't there yet this seemed >>>>>>>>> like a good idea. >>>>>>>>> >>>>>>>>> I'm trying to find out the status of this new policy, it may only >>>>>>>>> make >>>>>>>>> it into F-17. >>>>>>>>> >>>>>>>>> rob >>>>>>>> Ok. But if this policy does not go in F-16 and if we want this fix in >>>>>>>> F16 release too, I guess we would have to implement both approaches in >>>>>>>> our spec file: >>>>>>>> >>>>>>>> 1) When on F16, include SELinux policy for restart scripts + run >>>>>>>> restorecon >>>>>>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>> Won't work without updated selinux-policy. Without the permission for >>>>>>> certmonger to execute the commands things will still fail (just in >>>>>>> really non-obvious and far in the future ways). >>>>>>> >>>>>>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. >>>>>>> >>>>>>> rob >>>>>> Updated patch which works on F-17. >>>>>> >>>>>> rob >>>>> What about F-16? The restart scripts won't work with enabled enforcing >>>>> and will raise AVCs. Maybe we really need to deliver our own SELinux >>>>> policy allowing it on F-16. >>>> Right, I don't see this working on F-16. I don't really want to carry >>>> this type of policy. It goes beyond marking a few files as certmonger_t, >>>> it is going to let certmonger execute arbitrary scripts. 
This is best >>>> left to the SELinux team who understand the consequences better. >>>> >>>>> I also found an issue with the restart scripts: >>>>> >>>>> 1) restart_dirsrv: this won't work with systemd: >>>>> >>>>> # /sbin/service dirsrv restart >>>>> Redirecting to /bin/systemctl restart dirsrv.service >>>>> Failed to issue method call: Unit dirsrv.service failed to load: No such >>>>> file or directory. See system logs and 'systemctl status dirsrv.service' >>>>> for details. >>>> Wouldn't work so hot for sysV either as we'd be restarting all >>>> instances. I'll take a look. >>>> >>>>> We would need to pass an instance of IPA dirsrv for this to work. >>>>> >>>>> 2) restart_httpd: >>>>> Is reload enough for httpd to pull a new certificate? Don't we need a >>>>> full restart? If reload is enough, I think the command should be named >>>>> reload_httpd >>>> Yes, it causes the modules to be reloaded which will reload the NSS >>>> database, that's all we need. I named it this way for consistency. I can >>>> rename it, though I doubt it would cause any confusion either way. >>>> >>>> rob >>> revised patch. >>> >>> rob >> Thanks, this is better, dirsrv restart is now fixed. I still have few >> issues: >> >> 1) Wrong command for httpd certs (should be reload_httpd): >> >> - db.track_server_cert(nickname, self.principal, >> db.passwd_fname) >> + db.track_server_cert(nickname, self.principal, >> db.passwd_fname, 'restart_httpd') >> >> - db.track_server_cert("Server-Cert", self.principal, >> db.passwd_fname) >> + db.track_server_cert("Server-Cert", self.principal, >> db.passwd_fname, 'restart_httpd') >> >> >> 2) What about current certmonger monitored certs? We can use the command >> Nalin suggested to modify existing monitored certs to set up a restart >> command: >> >> # ipa-getcert list >> ... >> Request ID '20120404083924': >> status: MONITORING >> stuck: no >> key pair storage: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' >> certificate: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB' >> CA: IPA >> issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM >> subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM >> expires: 2014-04-05 08:39:24 UTC >> eku: id-kp-serverAuth,id-kp-clientAuth >> command: >> track: yes >> auto-renew: yes >> >> # ipa-getcert start-tracking -i 20120404083924 -C /usr/lib64/ipa/certmonger/reload_httpd >> Request "20120404083924" modified. >> >> # ipa-getcert list >> Request ID '20120404083924': >> status: MONITORING >> stuck: no >> key pair storage: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' >> certificate: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB' >> CA: IPA >> issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM >> subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM >> expires: 2014-04-05 08:39:24 UTC >> eku: id-kp-serverAuth,id-kp-clientAuth >> command: /usr/lib64/ipa/certmonger/reload_httpd >> track: yes >> auto-renew: yes >> >> 3) The new dirsrv service restart command does not work either with >> systemd: >> # service dirsrv restart IDM-LAB-BOS-REDHAT-COM >> >> I think the approach here would be to take advantage of our system >> service management abstraction that ipactl uses. 
It will then work well >> with SysV, systemd or any other future configurations or ports to other >> Linux distributions. Example of the script which restarts the required >> instance: >> >> # cat /tmp/restart_dirsrv >> #!/usr/bin/python >> >> import sys >> from ipapython import services as ipaservices >> >> try: >> instance = sys.argv[1] >> except IndexError: >> instance = "" >> >> dirsrv = ipaservices.knownservices.dirsrv >> dirsrv.restart(instance) >> >> # /tmp/restart_dirsrv IDM-LAB-BOS-REDHAT-COM >> >> Martin >> > I prepared a rebased and fixed patch which works on both SystemV and > systemd systems, please check attached file. > > The patch does not include updating existing certs, it would need more > time to implement a safe update that it is not available this late in > the game. We may want to create a ticket to add a documentation about > how to update existing certs + RFE for add a support of updating > existing certs later. > > Martin This should come out of the bigger cert renewal strategy effort. > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rcritten at redhat.com Tue Apr 10 22:17:39 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Apr 2012 18:17:39 -0400 Subject: [Freeipa-devel] [PATCH] 998 certmonger restarts services on renewal In-Reply-To: <1334090905.2282.20.camel@priserak> References: <4F7233D9.2080405@redhat.com> <1333374440.10403.18.camel@balmora.brq.redhat.com> <4F79B30E.1060804@redhat.com> <4F79FFBD.1030408@redhat.com> <1333437994.23102.13.camel@balmora.brq.redhat.com> <4F7AF8FC.5050000@redhat.com> <4F7B0D04.6060704@redhat.com> <1333478601.2626.5.camel@priserak> <4F7B4F64.7070703@redhat.com> <4F7E04FB.4070200@redhat.com> <1333700547.14740.18.camel@balmora.brq.redhat.com> <1334090905.2282.20.camel@priserak> Message-ID: <4F84B183.904@redhat.com> Martin Kosek wrote: > On Fri, 2012-04-06 at 10:22 +0200, Martin Kosek wrote: >> On Thu, 2012-04-05 at 16:47 -0400, Rob Crittenden wrote: >>> Rob Crittenden wrote: >>>> Martin Kosek wrote: >>>>> On Tue, 2012-04-03 at 10:45 -0400, Rob Crittenden wrote: >>>>>> Rob Crittenden wrote: >>>>>>> Martin Kosek wrote: >>>>>>>> On Mon, 2012-04-02 at 15:36 -0400, Rob Crittenden wrote: >>>>>>>>> Rob Crittenden wrote: >>>>>>>>>> Martin Kosek wrote: >>>>>>>>>>> On Tue, 2012-03-27 at 17:40 -0400, Rob Crittenden wrote: >>>>>>>>>>>> Certmonger will currently automatically renew server >>>>>>>>>>>> certificates but >>>>>>>>>>>> doesn't restart the services so you can still end up with expired >>>>>>>>>>>> certificates if you services never restart. >>>>>>>>>>>> >>>>>>>>>>>> This patch registers are restart command with certmonger so the >>>>>>>>>>>> IPA >>>>>>>>>>>> services will automatically be restarted to get the updated cert. >>>>>>>>>>>> >>>>>>>>>>>> Easy to test. 
Install IPA then resubmit the current server >>>>>>>>>>>> certs and >>>>>>>>>>>> watch the services restart: >>>>>>>>>>>> >>>>>>>>>>>> # ipa-getcert list >>>>>>>>>>>> >>>>>>>>>>>> Find the ID for either your dirsrv or httpd instance >>>>>>>>>>>> >>>>>>>>>>>> # ipa-getcert resubmit -i >>>>>>>>>>>> >>>>>>>>>>>> Watch /var/log/httpd/error_log or >>>>>>>>>>>> /var/log/dirsrv/slapd-INSTANCE/errors >>>>>>>>>>>> to see the service restart. >>>>>>>>>>>> >>>>>>>>>>>> rob >>>>>>>>>>> >>>>>>>>>>> What about current instances - can we/do we want to update >>>>>>>>>>> certmonger >>>>>>>>>>> tracking so that their instances are restarted as well? >>>>>>>>>>> >>>>>>>>>>> Anyway, I found few issues SELinux issues with the patch: >>>>>>>>>>> >>>>>>>>>>> 1) # rpm -Uvh freeipa-* >>>>>>>>>>> Preparing... ########################################### [100%] >>>>>>>>>>> 1:freeipa-python ########################################### [ 20%] >>>>>>>>>>> 2:freeipa-client ########################################### [ 40%] >>>>>>>>>>> 3:freeipa-admintools ########################################### [ >>>>>>>>>>> 60%] >>>>>>>>>>> 4:freeipa-server ########################################### [ 80%] >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger/restart_dirsrv' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> /usr/bin/chcon: failed to change context of >>>>>>>>>>> `/usr/lib64/ipa/certmonger/restart_httpd' to >>>>>>>>>>> `unconfined_u:object_r:certmonger_unconfined_exec_t:s0': Invalid >>>>>>>>>>> argument >>>>>>>>>>> warning: %post(freeipa-server-2.1.90GIT5b895af-0.fc16.x86_64) >>>>>>>>>>> scriptlet failed, exit status 1 >>>>>>>>>>> 5:freeipa-server-selinux >>>>>>>>>>> ########################################### >>>>>>>>>>> [100%] >>>>>>>>>>> >>>>>>>>>>> certmonger_unconfined_exec_t type was unknown with my selinux >>>>>>>>>>> policy: >>>>>>>>>>> >>>>>>>>>>> selinux-policy-3.10.0-80.fc16.noarch >>>>>>>>>>> selinux-policy-targeted-3.10.0-80.fc16.noarch >>>>>>>>>>> >>>>>>>>>>> If we need a higher SELinux version, we should bump the required >>>>>>>>>>> package >>>>>>>>>>> version spec file. >>>>>>>>>> >>>>>>>>>> Yeah, waiting on it to be backported. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 2) Change of SELinux context with /usr/bin/chcon is temporary until >>>>>>>>>>> restorecon or system relabel occurs. I think we should make it >>>>>>>>>>> persistent and enforce this type in our SELinux policy and >>>>>>>>>>> rather call >>>>>>>>>>> restorecon instead of chcon >>>>>>>>>> >>>>>>>>>> That's a good idea, why didn't I think of that :-( >>>>>>>>> >>>>>>>>> Ah, now I remember, it will be handled by selinux-policy. I would >>>>>>>>> have >>>>>>>>> used restorecon here but since the policy isn't there yet this seemed >>>>>>>>> like a good idea. >>>>>>>>> >>>>>>>>> I'm trying to find out the status of this new policy, it may only >>>>>>>>> make >>>>>>>>> it into F-17. >>>>>>>>> >>>>>>>>> rob >>>>>>>> >>>>>>>> Ok. 
But if this policy does not go in F-16 and if we want this fix in >>>>>>>> F16 release too, I guess we would have to implement both approaches in >>>>>>>> our spec file: >>>>>>>> >>>>>>>> 1) When on F16, include SELinux policy for restart scripts + run >>>>>>>> restorecon >>>>>>>> 2) When on F17, do not include the SELinux policy (+ run restorecon) >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>> >>>>>>> Won't work without updated selinux-policy. Without the permission for >>>>>>> certmonger to execute the commands things will still fail (just in >>>>>>> really non-obvious and far in the future ways). >>>>>>> >>>>>>> It looks like this is fixed in F-17 selinux-policy-3.10.0-107. >>>>>>> >>>>>>> rob >>>>>> >>>>>> Updated patch which works on F-17. >>>>>> >>>>>> rob >>>>> >>>>> What about F-16? The restart scripts won't work with enabled enforcing >>>>> and will raise AVCs. Maybe we really need to deliver our own SELinux >>>>> policy allowing it on F-16. >>>> >>>> Right, I don't see this working on F-16. I don't really want to carry >>>> this type of policy. It goes beyond marking a few files as certmonger_t, >>>> it is going to let certmonger execute arbitrary scripts. This is best >>>> left to the SELinux team who understand the consequences better. >>>> >>>>> >>>>> I also found an issue with the restart scripts: >>>>> >>>>> 1) restart_dirsrv: this won't work with systemd: >>>>> >>>>> # /sbin/service dirsrv restart >>>>> Redirecting to /bin/systemctl restart dirsrv.service >>>>> Failed to issue method call: Unit dirsrv.service failed to load: No such >>>>> file or directory. See system logs and 'systemctl status dirsrv.service' >>>>> for details. >>>> >>>> Wouldn't work so hot for sysV either as we'd be restarting all >>>> instances. I'll take a look. >>>> >>>>> We would need to pass an instance of IPA dirsrv for this to work. >>>>> >>>>> 2) restart_httpd: >>>>> Is reload enough for httpd to pull a new certificate? Don't we need a >>>>> full restart? If reload is enough, I think the command should be named >>>>> reload_httpd >>>> >>>> Yes, it causes the modules to be reloaded which will reload the NSS >>>> database, that's all we need. I named it this way for consistency. I can >>>> rename it, though I doubt it would cause any confusion either way. >>>> >>>> rob >>> >>> revised patch. >>> >>> rob >> >> Thanks, this is better, dirsrv restart is now fixed. I still have few >> issues: >> >> 1) Wrong command for httpd certs (should be reload_httpd): >> >> - db.track_server_cert(nickname, self.principal, >> db.passwd_fname) >> + db.track_server_cert(nickname, self.principal, >> db.passwd_fname, 'restart_httpd') >> >> - db.track_server_cert("Server-Cert", self.principal, >> db.passwd_fname) >> + db.track_server_cert("Server-Cert", self.principal, >> db.passwd_fname, 'restart_httpd') >> >> >> 2) What about current certmonger monitored certs? We can use the command >> Nalin suggested to modify existing monitored certs to set up a restart >> command: >> >> # ipa-getcert list >> ... 
>> Request ID '20120404083924': >> status: MONITORING >> stuck: no >> key pair storage: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' >> certificate: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB' >> CA: IPA >> issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM >> subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM >> expires: 2014-04-05 08:39:24 UTC >> eku: id-kp-serverAuth,id-kp-clientAuth >> command: >> track: yes >> auto-renew: yes >> >> # ipa-getcert start-tracking -i 20120404083924 -C /usr/lib64/ipa/certmonger/reload_httpd >> Request "20120404083924" modified. >> >> # ipa-getcert list >> Request ID '20120404083924': >> status: MONITORING >> stuck: no >> key pair storage: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' >> certificate: >> type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS >> Certificate DB' >> CA: IPA >> issuer: CN=Certificate Authority,O=IDM.LAB.BOS.REDHAT.COM >> subject: CN=vm-079.idm.lab.bos.redhat.com,O=IDM.LAB.BOS.REDHAT.COM >> expires: 2014-04-05 08:39:24 UTC >> eku: id-kp-serverAuth,id-kp-clientAuth >> command: /usr/lib64/ipa/certmonger/reload_httpd >> track: yes >> auto-renew: yes >> >> 3) The new dirsrv service restart command does not work either with >> systemd: >> # service dirsrv restart IDM-LAB-BOS-REDHAT-COM >> >> I think the approach here would be to take advantage of our system >> service management abstraction that ipactl uses. It will then work well >> with SysV, systemd or any other future configurations or ports to other >> Linux distributions. Example of the script which restarts the required >> instance: >> >> # cat /tmp/restart_dirsrv >> #!/usr/bin/python >> >> import sys >> from ipapython import services as ipaservices >> >> try: >> instance = sys.argv[1] >> except IndexError: >> instance = "" >> >> dirsrv = ipaservices.knownservices.dirsrv >> dirsrv.restart(instance) >> >> # /tmp/restart_dirsrv IDM-LAB-BOS-REDHAT-COM >> >> Martin >> > > I prepared a rebased and fixed patch which works on both SystemV and > systemd systems, please check attached file. > > The patch does not include updating existing certs, it would need more > time to implement a safe update that it is not available this late in > the game. We may want to create a ticket to add a documentation about > how to update existing certs + RFE for add a support of updating > existing certs later. > > Martin This works fine. We'll address existing certs in the future. ACK, pushed to master and ipa-2-2 rob From mkosek at redhat.com Wed Apr 11 06:22:06 2012 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Apr 2012 08:22:06 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F847433.3060703@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> Message-ID: <1334125326.18938.3.camel@balmora.brq.redhat.com> On Tue, 2012-04-10 at 13:56 -0400, Dmitri Pal wrote: > On 04/10/2012 01:48 PM, Rob Crittenden wrote: [snip] > The use case I would see is the extensibility. Say a customer wants to > extend a schema and add an attribute X to the user object. 
He would > still be able to manage users using CLI without writing a plugin for > the new attribute. Yes plugin is preferred but not everybody would go > for it. So in absence of the plugin we can't do validation but we still > should function and be able to deal with this attribute via CLI (and UI > if this attribute is enabled for UI via UI configuration). > > I am generally against dropping this interface. But expectations IMO > should be: > 1) If the attribute is managed by us with setattr and friends it should > behave in the same way as via the direct add/mod/del command > 2) If attribute is not managed it should not provide any guarantees and > act in the same way as via LDAP > > Hope this helps. I agree with your points, that's what I was trying to say in my previous mail. I think that all the grief is caused by expectation 1) which is broken with current setattr options. If we fix that (preferably in 3.0), I would keep this API. Martin From jcholast at redhat.com Wed Apr 11 07:18:40 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 11 Apr 2012 09:18:40 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F847433.3060703@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> Message-ID: <4F853050.2060107@redhat.com> On 10.4.2012 19:56, Dmitri Pal wrote: > On 04/10/2012 01:48 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 04/10/2012 07:07 PM, Martin Kosek wrote: >>>> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>>>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>>>> I'm aware that we have backwards compatibility requirements so we >>>>>> have >>>>>> to stick with unfortunate decisions, but I wanted you to know what I >>>>>> think. Please tell me I'm wrong! >>>>>> >>>>>> >>>>>> >>>>>> It is not clear what --{set,add,del}attr and friends should do. On >>>>>> the >>>>>> one hand they should be powerful -- presumably as powerful as >>>>>> ldapmodify. On the other hand they should be checked to ensure they >>>>>> can't be used to break the system. These requirements are >>>>>> contradictory. >>>>>> And in either case we're doing it wrong: >>>>>> - If they should be all-powerful, we shouldn't validate them. >>>>>> - If they shouldn't break the system we can just disable them for >>>>>> IPA-managed attributes. My understanding is that they were originally >>>>>> only added for attributes IPA doesn't know about. People can still >>>>>> use >>>>>> ldapmodify to bypass validation if they want. >>>>>> - If somewhere in between, we need to clearly define what they should >>>>>> do, instead of drawing the line ad-hoc based on individual details we >>>>>> forgot about, as tickets come from QE. >>>>>> >>>>>> >>>>>> I would hope people won't use --setattr for IPA-managed attributes. >>>>>> Which would however mean we won't get much community testing for all >>>>>> this extra code. >>>>>> >>>>>> >>>>>> Then, there's an unfortunate detail in IPA implementation: attribute >>>>>> Params need to be cloned to method objects (Create, Update, etc.) to >>>>>> work properly (e.g. get the `attribute` flag set). If they are marked >>>>>> no_update, they don't get cloned, so they don't work properly. 
>>>>>> Yet --setattr apparently needs to be able to update and validate >>>>>> attributes marked no_update (this ties to the confusing >>>>>> requirements on >>>>>> --setattr I already mentioned). This leads to doing the same work >>>>>> again, >>>>>> slightly differently. >>>>>> >>>>>> >>>>>> >>>>>> tl;dr: --setattr work on IPA-managed attributes (with validation) >>>>>> is a >>>>>> mistake. It adds no functionality, only complexity. We don't want >>>>>> people >>>>>> to use it. It will cost us a lot of maintenance work to support. >>>>>> >>>>>> >>>>>> Thank you for listening. A patch for the newest regression is coming >>>>>> up. >>>>>> >>>>> >>>>> I wholeheartedly agree. >>>> >>>> This is indeed a mine field and we need to make a look from at the >>>> issue >>>> from all sides before accepting a decision. >>> >>> Yes. >>> >>>>> >>>>> Like you said above, we should either not validate --{set,add,del}attr >>>>> or don't allow them on known attributes. >>>> >>>> IMHO, validating attributes we manage in the same way for both >>>> --setattr >>>> and standard attrs is not that wrong. It is a good precaution, because >>>> if we let an unvalidated value in, it can make even a bigger mess later >>>> in our pre_callbacks or post_callbacks where we assume that at this >>>> point everything is valid. >>> >>> Then we should validate *exactly* the same way, including not allowing >>> no_update attributes to be updated. >>> >>>> If somebody wants to modify attributes in an uncontrolled, unvalidated >>>> way, he is free to use ldapmodify or other tool to play with raw LDAP >>>> values. Without our guarantee of course. >>> >>> That's clear. >>> >>>> But if he chooses to use our --{set,add,del}attr we should at least >>>> help >>>> him to not shoot himself to the leg and validate/normalize/encode the >>>> value. I don't know how many users use this API, but removing a support >>>> for all managed attributes seems as a big compatibility break to me. >>> >>> Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, >>> 2407, 2408), so I don't think many people used it. >>> >>> Anyway, what's the use case? Why would the user want to use --setattr >>> for validated attributes? Is our regular API lacking something? >>> >> >> I think Honza had the best suggestion, enhance the standard API to >> handle multi-valued attributes better and reduce the need for >> set/add/delattr. At some future point we can consider dropping them >> when updating something already covered by a Param. We're stuck with >> them for now. >> > > The use case I would see is the extensibility. Say a customer wants to > extend a schema and add an attribute X to the user object. He would > still be able to manage users using CLI without writing a plugin for > the new attribute. Yes plugin is preferred but not everybody would go > for it. So in absence of the plugin we can't do validation but we still > should function and be able to deal with this attribute via CLI (and UI > if this attribute is enabled for UI via UI configuration). > > I am generally against dropping this interface. But expectations IMO > should be: > 1) If the attribute is managed by us with setattr and friends it should > behave in the same way as via the direct add/mod/del command > 2) If attribute is not managed it should not provide any guarantees and > act in the same way as via LDAP > > Hope this helps. 
> This might be the best thing to do, but IMO it is still no good, because the behavior of --{set,add,del}attr for a particular attribute might change between API versions, when that attribute changes from unmanaged to managed. Honza -- Jan Cholasta From mkosek at redhat.com Wed Apr 11 07:27:29 2012 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Apr 2012 09:27:29 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F853050.2060107@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> <4F853050.2060107@redhat.com> Message-ID: <1334129249.18938.12.camel@balmora.brq.redhat.com> On Wed, 2012-04-11 at 09:18 +0200, Jan Cholasta wrote: > On 10.4.2012 19:56, Dmitri Pal wrote: > > On 04/10/2012 01:48 PM, Rob Crittenden wrote: > >> Petr Viktorin wrote: > >>> On 04/10/2012 07:07 PM, Martin Kosek wrote: > >>>> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: > >>>>> On 10.4.2012 16:00, Petr Viktorin wrote: > >>>>>> I'm aware that we have backwards compatibility requirements so we > >>>>>> have > >>>>>> to stick with unfortunate decisions, but I wanted you to know what I > >>>>>> think. Please tell me I'm wrong! > >>>>>> > >>>>>> > >>>>>> > >>>>>> It is not clear what --{set,add,del}attr and friends should do. On > >>>>>> the > >>>>>> one hand they should be powerful -- presumably as powerful as > >>>>>> ldapmodify. On the other hand they should be checked to ensure they > >>>>>> can't be used to break the system. These requirements are > >>>>>> contradictory. > >>>>>> And in either case we're doing it wrong: > >>>>>> - If they should be all-powerful, we shouldn't validate them. > >>>>>> - If they shouldn't break the system we can just disable them for > >>>>>> IPA-managed attributes. My understanding is that they were originally > >>>>>> only added for attributes IPA doesn't know about. People can still > >>>>>> use > >>>>>> ldapmodify to bypass validation if they want. > >>>>>> - If somewhere in between, we need to clearly define what they should > >>>>>> do, instead of drawing the line ad-hoc based on individual details we > >>>>>> forgot about, as tickets come from QE. > >>>>>> > >>>>>> > >>>>>> I would hope people won't use --setattr for IPA-managed attributes. > >>>>>> Which would however mean we won't get much community testing for all > >>>>>> this extra code. > >>>>>> > >>>>>> > >>>>>> Then, there's an unfortunate detail in IPA implementation: attribute > >>>>>> Params need to be cloned to method objects (Create, Update, etc.) to > >>>>>> work properly (e.g. get the `attribute` flag set). If they are marked > >>>>>> no_update, they don't get cloned, so they don't work properly. > >>>>>> Yet --setattr apparently needs to be able to update and validate > >>>>>> attributes marked no_update (this ties to the confusing > >>>>>> requirements on > >>>>>> --setattr I already mentioned). This leads to doing the same work > >>>>>> again, > >>>>>> slightly differently. > >>>>>> > >>>>>> > >>>>>> > >>>>>> tl;dr: --setattr work on IPA-managed attributes (with validation) > >>>>>> is a > >>>>>> mistake. It adds no functionality, only complexity. We don't want > >>>>>> people > >>>>>> to use it. It will cost us a lot of maintenance work to support. > >>>>>> > >>>>>> > >>>>>> Thank you for listening. A patch for the newest regression is coming > >>>>>> up. 
> >>>>>> > >>>>> > >>>>> I wholeheartedly agree. > >>>> > >>>> This is indeed a mine field and we need to make a look from at the > >>>> issue > >>>> from all sides before accepting a decision. > >>> > >>> Yes. > >>> > >>>>> > >>>>> Like you said above, we should either not validate --{set,add,del}attr > >>>>> or don't allow them on known attributes. > >>>> > >>>> IMHO, validating attributes we manage in the same way for both > >>>> --setattr > >>>> and standard attrs is not that wrong. It is a good precaution, because > >>>> if we let an unvalidated value in, it can make even a bigger mess later > >>>> in our pre_callbacks or post_callbacks where we assume that at this > >>>> point everything is valid. > >>> > >>> Then we should validate *exactly* the same way, including not allowing > >>> no_update attributes to be updated. > >>> > >>>> If somebody wants to modify attributes in an uncontrolled, unvalidated > >>>> way, he is free to use ldapmodify or other tool to play with raw LDAP > >>>> values. Without our guarantee of course. > >>> > >>> That's clear. > >>> > >>>> But if he chooses to use our --{set,add,del}attr we should at least > >>>> help > >>>> him to not shoot himself to the leg and validate/normalize/encode the > >>>> value. I don't know how many users use this API, but removing a support > >>>> for all managed attributes seems as a big compatibility break to me. > >>> > >>> Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, > >>> 2407, 2408), so I don't think many people used it. > >>> > >>> Anyway, what's the use case? Why would the user want to use --setattr > >>> for validated attributes? Is our regular API lacking something? > >>> > >> > >> I think Honza had the best suggestion, enhance the standard API to > >> handle multi-valued attributes better and reduce the need for > >> set/add/delattr. At some future point we can consider dropping them > >> when updating something already covered by a Param. We're stuck with > >> them for now. > >> > > > > The use case I would see is the extensibility. Say a customer wants to > > extend a schema and add an attribute X to the user object. He would > > still be able to manage users using CLI without writing a plugin for > > the new attribute. Yes plugin is preferred but not everybody would go > > for it. So in absence of the plugin we can't do validation but we still > > should function and be able to deal with this attribute via CLI (and UI > > if this attribute is enabled for UI via UI configuration). > > > > I am generally against dropping this interface. But expectations IMO > > should be: > > 1) If the attribute is managed by us with setattr and friends it should > > behave in the same way as via the direct add/mod/del command > > 2) If attribute is not managed it should not provide any guarantees and > > act in the same way as via LDAP > > > > Hope this helps. > > > > This might be the best thing to do, but IMO it is still no good, because > the behavior of --{set,add,del}attr for a particular attribute might > change between API versions, when that attribute changes from unmanaged > to managed. > > Honza > I think this is OK and expectable - user may use setattr option to set an attribute before it is officially supported by IPA and still have it working when he upgrades. Though we should make sure that we describe this API well in our documentation to make sure this expectation is shared with users. 
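For illustration, the expectation above could be sketched roughly like this (a Python-only sketch; apply_setattr and the params_by_attr mapping are invented names for this example and are not the actual ipalib/baseldap code): a --setattr value for an attribute IPA manages runs through the owning Param, i.e. the same normalize/convert/validate path a named option takes, while an unmanaged attribute is passed through to LDAP with no guarantees.

    def apply_setattr(params_by_attr, entry_attrs, setattrs):
        """Sketch only: fold --setattr assignments into entry_attrs.

        params_by_attr -- lowercased attribute name -> cloned Param
        setattrs       -- iterable of 'attr=value' strings
        """
        for assignment in setattrs:
            attr, sep, value = assignment.partition('=')
            if not sep:
                raise ValueError("expected 'attr=value', got %r" % assignment)
            attr = attr.strip().lower()
            param = params_by_attr.get(attr)
            if param is not None:
                # Managed attribute (expectation 1): identical treatment
                # to a regular CLI option -- normalize, convert, validate.
                value = param(value)
            # Unmanaged attribute (expectation 2): no guarantees, the raw
            # value goes to LDAP just as it would via ldapmodify.
            entry_attrs[attr] = value

Under that model the only behavioural change when an attribute later becomes managed is that invalid values start being rejected, which is the compatibility trade-off raised elsewhere in this thread.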
Martin From jcholast at redhat.com Wed Apr 11 07:42:42 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 11 Apr 2012 09:42:42 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <1334129249.18938.12.camel@balmora.brq.redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> <4F853050.2060107@redhat.com> <1334129249.18938.12.camel@balmora.brq.redhat.com> Message-ID: <4F8535F2.6030505@redhat.com> On 11.4.2012 09:27, Martin Kosek wrote: > On Wed, 2012-04-11 at 09:18 +0200, Jan Cholasta wrote: >> On 10.4.2012 19:56, Dmitri Pal wrote: >>> On 04/10/2012 01:48 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 04/10/2012 07:07 PM, Martin Kosek wrote: >>>>>> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>>>>>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>>>>>> I'm aware that we have backwards compatibility requirements so we >>>>>>>> have >>>>>>>> to stick with unfortunate decisions, but I wanted you to know what I >>>>>>>> think. Please tell me I'm wrong! >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> It is not clear what --{set,add,del}attr and friends should do. On >>>>>>>> the >>>>>>>> one hand they should be powerful -- presumably as powerful as >>>>>>>> ldapmodify. On the other hand they should be checked to ensure they >>>>>>>> can't be used to break the system. These requirements are >>>>>>>> contradictory. >>>>>>>> And in either case we're doing it wrong: >>>>>>>> - If they should be all-powerful, we shouldn't validate them. >>>>>>>> - If they shouldn't break the system we can just disable them for >>>>>>>> IPA-managed attributes. My understanding is that they were originally >>>>>>>> only added for attributes IPA doesn't know about. People can still >>>>>>>> use >>>>>>>> ldapmodify to bypass validation if they want. >>>>>>>> - If somewhere in between, we need to clearly define what they should >>>>>>>> do, instead of drawing the line ad-hoc based on individual details we >>>>>>>> forgot about, as tickets come from QE. >>>>>>>> >>>>>>>> >>>>>>>> I would hope people won't use --setattr for IPA-managed attributes. >>>>>>>> Which would however mean we won't get much community testing for all >>>>>>>> this extra code. >>>>>>>> >>>>>>>> >>>>>>>> Then, there's an unfortunate detail in IPA implementation: attribute >>>>>>>> Params need to be cloned to method objects (Create, Update, etc.) to >>>>>>>> work properly (e.g. get the `attribute` flag set). If they are marked >>>>>>>> no_update, they don't get cloned, so they don't work properly. >>>>>>>> Yet --setattr apparently needs to be able to update and validate >>>>>>>> attributes marked no_update (this ties to the confusing >>>>>>>> requirements on >>>>>>>> --setattr I already mentioned). This leads to doing the same work >>>>>>>> again, >>>>>>>> slightly differently. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> tl;dr: --setattr work on IPA-managed attributes (with validation) >>>>>>>> is a >>>>>>>> mistake. It adds no functionality, only complexity. We don't want >>>>>>>> people >>>>>>>> to use it. It will cost us a lot of maintenance work to support. >>>>>>>> >>>>>>>> >>>>>>>> Thank you for listening. A patch for the newest regression is coming >>>>>>>> up. >>>>>>>> >>>>>>> >>>>>>> I wholeheartedly agree. 
>>>>>> >>>>>> This is indeed a mine field and we need to make a look from at the >>>>>> issue >>>>>> from all sides before accepting a decision. >>>>> >>>>> Yes. >>>>> >>>>>>> >>>>>>> Like you said above, we should either not validate --{set,add,del}attr >>>>>>> or don't allow them on known attributes. >>>>>> >>>>>> IMHO, validating attributes we manage in the same way for both >>>>>> --setattr >>>>>> and standard attrs is not that wrong. It is a good precaution, because >>>>>> if we let an unvalidated value in, it can make even a bigger mess later >>>>>> in our pre_callbacks or post_callbacks where we assume that at this >>>>>> point everything is valid. >>>>> >>>>> Then we should validate *exactly* the same way, including not allowing >>>>> no_update attributes to be updated. >>>>> >>>>>> If somebody wants to modify attributes in an uncontrolled, unvalidated >>>>>> way, he is free to use ldapmodify or other tool to play with raw LDAP >>>>>> values. Without our guarantee of course. >>>>> >>>>> That's clear. >>>>> >>>>>> But if he chooses to use our --{set,add,del}attr we should at least >>>>>> help >>>>>> him to not shoot himself to the leg and validate/normalize/encode the >>>>>> value. I don't know how many users use this API, but removing a support >>>>>> for all managed attributes seems as a big compatibility break to me. >>>>> >>>>> Well, it was broken (see https://fedorahosted.org/freeipa/ticket/2405, >>>>> 2407, 2408), so I don't think many people used it. >>>>> >>>>> Anyway, what's the use case? Why would the user want to use --setattr >>>>> for validated attributes? Is our regular API lacking something? >>>>> >>>> >>>> I think Honza had the best suggestion, enhance the standard API to >>>> handle multi-valued attributes better and reduce the need for >>>> set/add/delattr. At some future point we can consider dropping them >>>> when updating something already covered by a Param. We're stuck with >>>> them for now. >>>> >>> >>> The use case I would see is the extensibility. Say a customer wants to >>> extend a schema and add an attribute X to the user object. He would >>> still be able to manage users using CLI without writing a plugin for >>> the new attribute. Yes plugin is preferred but not everybody would go >>> for it. So in absence of the plugin we can't do validation but we still >>> should function and be able to deal with this attribute via CLI (and UI >>> if this attribute is enabled for UI via UI configuration). >>> >>> I am generally against dropping this interface. But expectations IMO >>> should be: >>> 1) If the attribute is managed by us with setattr and friends it should >>> behave in the same way as via the direct add/mod/del command >>> 2) If attribute is not managed it should not provide any guarantees and >>> act in the same way as via LDAP >>> >>> Hope this helps. >>> >> >> This might be the best thing to do, but IMO it is still no good, because >> the behavior of --{set,add,del}attr for a particular attribute might >> change between API versions, when that attribute changes from unmanaged >> to managed. >> >> Honza >> > > I think this is OK and expectable - user may use setattr option to set > an attribute before it is officially supported by IPA and still have it > working when he upgrades. There's no guarantee the user will still have it working. User application might break if it depends on the unmanaged behavior and we are too strict in validation of the managed attribute, for example. 
I don't known how much of an issue is this actually, but this kind of unpredictability is not a good thing IMHO. > Though we should make sure that we describe > this API well in our documentation to make sure this expectation is > shared with users. > > Martin > Honza -- Jan Cholasta From pviktori at redhat.com Wed Apr 11 07:56:40 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 11 Apr 2012 09:56:40 +0200 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F8476AF.1070105@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> <4F840A2D.5070309@redhat.com> <4F8439BB.7090800@redhat.com> <4F843F87.4000708@redhat.com> <4F8476AF.1070105@redhat.com> Message-ID: <4F853938.4030208@redhat.com> On 04/10/2012 08:06 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 04/10/2012 03:46 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: ... >>> pattern_errmsg should probably be removed from API.txt. We've been >>> paring back the amount of data to validate slowly as we've run into >>> these questionable items. Please open a ticket for this. >> >> Done: https://fedorahosted.org/freeipa/ticket/2619 >> > > I made a minor change. VERSION shoudl just update the minor version > number. I changed this, ACK, pushed to master and ipa-2-2 > > rob I followed the comment in the VERSION file, which says: # The version of the IPA API. This controls which # # client versions can use the XML-RPC and json APIs # # # # A change to existing API requires a MAJOR version # # update. The addition of new API bumps the MINOR # # version. # I see the actual policy is different. How does this really work? -- Petr? From ohamada at redhat.com Wed Apr 11 09:15:28 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Wed, 11 Apr 2012 11:15:28 +0200 Subject: [Freeipa-devel] [PATCH] 21 Unable to rename permission object In-Reply-To: <4F848B82.4040606@redhat.com> References: <4F846653.8070509@redhat.com> <4F848B82.4040606@redhat.com> Message-ID: <4F854BB0.1090604@redhat.com> On 04/10/2012 09:35 PM, Rob Crittenden wrote: > Ondrej Hamada wrote: >> https://fedorahosted.org/freeipa/ticket/2571 >> >> The update was failing because of the case insensitivity of permission >> object DN. > > Can you wrap the error in _() and add a couple of test cases for this, > say one for the case insensitivity and one for empty rename attempt? > > rob fixed patch attached -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-ohamada-21-2-Unable-to-rename-permission-object.patch Type: text/x-patch Size: 5093 bytes Desc: not available URL: From rcritten at redhat.com Wed Apr 11 12:41:52 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 08:41:52 -0400 Subject: [Freeipa-devel] [PATCH] 0034 Limit permission and selfservice names In-Reply-To: <4F853938.4030208@redhat.com> References: <4F7ED385.1010108@redhat.com> <4F82EA6A.8000000@redhat.com> <4F840A2D.5070309@redhat.com> <4F8439BB.7090800@redhat.com> <4F843F87.4000708@redhat.com> <4F8476AF.1070105@redhat.com> <4F853938.4030208@redhat.com> Message-ID: <4F857C10.7050302@redhat.com> Petr Viktorin wrote: > On 04/10/2012 08:06 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 04/10/2012 03:46 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: > ... >>>> pattern_errmsg should probably be removed from API.txt. 
We've been >>>> paring back the amount of data to validate slowly as we've run into >>>> these questionable items. Please open a ticket for this. >>> >>> Done: https://fedorahosted.org/freeipa/ticket/2619 >>> >> >> I made a minor change. VERSION shoudl just update the minor version >> number. I changed this, ACK, pushed to master and ipa-2-2 >> >> rob > > I followed the comment in the VERSION file, which says: > > # The version of the IPA API. This controls which # > # client versions can use the XML-RPC and json APIs # > # # > # A change to existing API requires a MAJOR version # > # update. The addition of new API bumps the MINOR # > # version. # > > I see the actual policy is different. How does this really work? Yes, I guess that isn't too clear. You can ADD new API without causing backwards compatibility issues. You can modify the validation without causing backwards compat issues (other than the data itself may no longer be allowed, but that is unrelated to the API). In these cases just bump the minor version. rob From rcritten at redhat.com Wed Apr 11 15:17:05 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 11:17:05 -0400 Subject: [Freeipa-devel] [PATCH 69] Use indexed format specifiers in i18n strings In-Reply-To: <4F7D9680.50709@redhat.com> References: <201203300136.q2U1a0E5030640@int-mx12.intmail.prod.int.phx2.redhat.com> <4F79A674.2060104@redhat.com> <4F7C463E.4020205@redhat.com> <4F7D9680.50709@redhat.com> Message-ID: <4F85A071.8050302@redhat.com> John Dennis wrote: > On 04/04/2012 09:01 AM, Petr Viktorin wrote: >> On 04/02/2012 03:15 PM, Rob Crittenden wrote: >>> John Dennis wrote: >>>> Translators need to reorder messages to suit the needs of the target >>>> language. The conventional positional format specifiers (e.g. %s %d) >>>> do not permit reordering because their order is tied to the ordering >>>> of the arguments to the printf function. The fix is to use indexed >>>> format specifiers. >>> >>> I guess this looks ok but all of these errors are of the format: string >>> error, error number (and inconsistently, sometimes the reverse). >> >> Not all of them, e.g. >> >> - fprintf(stderr, _("Search for %s on rootdse failed with error %d"), >> + fprintf(stderr, _("Search for %1$s on rootdse failed with error >> %2$d\n"), >> root_attrs[0], ret); >> >> - fprintf(stderr, _("Failed to open keytab '%s': %s\n"), keytab, >> + fprintf(stderr, _("Failed to open keytab '%1$s': %2$s\n"), keytab, >> error_message(krberr)); >> >>> Do those really need to be re-orderable? >>> >> >> You can never make too few assumptions about foreign languages. Most >> likely at least some will need reordering. >> Enforcing indexed specifiers everywhere means we don't have to worry >> about individual cases, or change our strings when a new language is >> added. > > +1 > > But there is also another practical reason. The validation logic does > not have artificial intelligence and cannot parse the semantic intent of > the string. It only knows if there are multiple non-indexed specifiers. > > If we want to automatically validate strings (make lint) from another > patch, we have to live with rigid application of the rules (adding > exception logic to the validator would be pretty complex because unlike > lint there is no way to tag the string that would get carried all the > way thought the xgettext process and be visible to the validator). I'm still not convinced that another language would want to reorder these but it does no harm so ACK. 
pushed to master and ipa-2-2 rob From rcritten at redhat.com Wed Apr 11 15:20:39 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 11:20:39 -0400 Subject: [Freeipa-devel] [PATCH 68] text unit test should validate using installed mo file In-Reply-To: <4F8468D7.4000608@redhat.com> References: <4F71D00E.2070907@redhat.com> <4F71D58E.5060507@redhat.com> <4F71FF85.7090102@redhat.com> <4F7223AF.7030003@redhat.com> <4F72CE84.6010407@redhat.com> <4F750137.9010600@redhat.com> <4F75ADD7.4020404@redhat.com> <4F82FF27.8010808@redhat.com> <4F8468D7.4000608@redhat.com> Message-ID: <4F85A147.9070804@redhat.com> Petr Viktorin wrote: > On 04/09/2012 05:24 PM, John Dennis wrote: >> On 03/30/2012 08:57 AM, Petr Viktorin wrote: >>> On 03/30/2012 02:41 AM, John Dennis wrote: >>>> On 03/28/2012 04:40 AM, Petr Viktorin wrote: >>>>> Can install/po/Makefile just call test_i18n.py from the tests/ >>>>> tree? It >>>>> doesn't import any IPA code so there's no need to set sys.path in this >>>>> case (though there'd have to be a comment saying we depend on this). >>>>> In the other case, unit tests, the path is already set by Nose. >>>>> Also the file would have to be renamed so nose doesn't pick it up as a >>>>> test module. >>>> >>>> Good idea. I moved test_i18n.py to tests/i18n.py. I was reluctant about >>>> moving the file, but that was without merit, it works better this way. >>> >>> The downside is that the file now looks like a test utility module. It >>> could use a comment near the imports saying that it's also called as a >>> script, and that it shouldn't import IPA code (this could silently use >>> the system-installed version of IPA, or crash if it's not there). >>> Alternatively, set PYTHONPATH in the Makefile. >>> >>>> I also removed the superfluous comment in Makefile.in you pointed out. >>>> >>>> When I was exercising the code I noticed the validation code was not >>>> treating msgid's from C code correctly (we do have some C code in the >>>> client area). That required a much more nuanced parsing the format >>>> conversion specifiers to correctly identify what was a positional >>>> format >>>> specifier vs. an indexed format specifier. The new version of the >>>> i18n.py includes the function parse_printf_fmt() and >>>> get_prog_langs() to >>>> identify the source programming language. >>> >>> More non-trivial code without tests. This makes me worry. But, tests for >>> this can be added later I guess. >>> >>>> Two more patches will follow shortly, one which adds validation when >>>> "make lint" is run and a patch to correct the problems it found in >>>> the C >>>> code strings which did not used indexed format specifiers. >> >> revised patch with comment about imports attached. >> >> > > ACK > pushed to master and ipa-2-2 From dpal at redhat.com Wed Apr 11 16:05:17 2012 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 11 Apr 2012 12:05:17 -0400 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. 
In-Reply-To: <4F8535F2.6030505@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> <4F853050.2060107@redhat.com> <1334129249.18938.12.camel@balmora.brq.redhat.com> <4F8535F2.6030505@redhat.com> Message-ID: <4F85ABBD.7030107@redhat.com> On 04/11/2012 03:42 AM, Jan Cholasta wrote: > On 11.4.2012 09:27, Martin Kosek wrote: >> On Wed, 2012-04-11 at 09:18 +0200, Jan Cholasta wrote: >>> On 10.4.2012 19:56, Dmitri Pal wrote: >>>> On 04/10/2012 01:48 PM, Rob Crittenden wrote: >>>>> Petr Viktorin wrote: >>>>>> On 04/10/2012 07:07 PM, Martin Kosek wrote: >>>>>>> On Tue, 2012-04-10 at 17:03 +0200, Jan Cholasta wrote: >>>>>>>> On 10.4.2012 16:00, Petr Viktorin wrote: >>>>>>>>> I'm aware that we have backwards compatibility requirements so we >>>>>>>>> have >>>>>>>>> to stick with unfortunate decisions, but I wanted you to know >>>>>>>>> what I >>>>>>>>> think. Please tell me I'm wrong! >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> It is not clear what --{set,add,del}attr and friends should >>>>>>>>> do. On >>>>>>>>> the >>>>>>>>> one hand they should be powerful -- presumably as powerful as >>>>>>>>> ldapmodify. On the other hand they should be checked to ensure >>>>>>>>> they >>>>>>>>> can't be used to break the system. These requirements are >>>>>>>>> contradictory. >>>>>>>>> And in either case we're doing it wrong: >>>>>>>>> - If they should be all-powerful, we shouldn't validate them. >>>>>>>>> - If they shouldn't break the system we can just disable them for >>>>>>>>> IPA-managed attributes. My understanding is that they were >>>>>>>>> originally >>>>>>>>> only added for attributes IPA doesn't know about. People can >>>>>>>>> still >>>>>>>>> use >>>>>>>>> ldapmodify to bypass validation if they want. >>>>>>>>> - If somewhere in between, we need to clearly define what they >>>>>>>>> should >>>>>>>>> do, instead of drawing the line ad-hoc based on individual >>>>>>>>> details we >>>>>>>>> forgot about, as tickets come from QE. >>>>>>>>> >>>>>>>>> >>>>>>>>> I would hope people won't use --setattr for IPA-managed >>>>>>>>> attributes. >>>>>>>>> Which would however mean we won't get much community testing >>>>>>>>> for all >>>>>>>>> this extra code. >>>>>>>>> >>>>>>>>> >>>>>>>>> Then, there's an unfortunate detail in IPA implementation: >>>>>>>>> attribute >>>>>>>>> Params need to be cloned to method objects (Create, Update, >>>>>>>>> etc.) to >>>>>>>>> work properly (e.g. get the `attribute` flag set). If they are >>>>>>>>> marked >>>>>>>>> no_update, they don't get cloned, so they don't work properly. >>>>>>>>> Yet --setattr apparently needs to be able to update and validate >>>>>>>>> attributes marked no_update (this ties to the confusing >>>>>>>>> requirements on >>>>>>>>> --setattr I already mentioned). This leads to doing the same work >>>>>>>>> again, >>>>>>>>> slightly differently. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> tl;dr: --setattr work on IPA-managed attributes (with validation) >>>>>>>>> is a >>>>>>>>> mistake. It adds no functionality, only complexity. We don't want >>>>>>>>> people >>>>>>>>> to use it. It will cost us a lot of maintenance work to support. >>>>>>>>> >>>>>>>>> >>>>>>>>> Thank you for listening. A patch for the newest regression is >>>>>>>>> coming >>>>>>>>> up. >>>>>>>>> >>>>>>>> >>>>>>>> I wholeheartedly agree. 
>>>>>>> >>>>>>> This is indeed a mine field and we need to make a look from at the >>>>>>> issue >>>>>>> from all sides before accepting a decision. >>>>>> >>>>>> Yes. >>>>>> >>>>>>>> >>>>>>>> Like you said above, we should either not validate >>>>>>>> --{set,add,del}attr >>>>>>>> or don't allow them on known attributes. >>>>>>> >>>>>>> IMHO, validating attributes we manage in the same way for both >>>>>>> --setattr >>>>>>> and standard attrs is not that wrong. It is a good precaution, >>>>>>> because >>>>>>> if we let an unvalidated value in, it can make even a bigger >>>>>>> mess later >>>>>>> in our pre_callbacks or post_callbacks where we assume that at this >>>>>>> point everything is valid. >>>>>> >>>>>> Then we should validate *exactly* the same way, including not >>>>>> allowing >>>>>> no_update attributes to be updated. >>>>>> >>>>>>> If somebody wants to modify attributes in an uncontrolled, >>>>>>> unvalidated >>>>>>> way, he is free to use ldapmodify or other tool to play with raw >>>>>>> LDAP >>>>>>> values. Without our guarantee of course. >>>>>> >>>>>> That's clear. >>>>>> >>>>>>> But if he chooses to use our --{set,add,del}attr we should at least >>>>>>> help >>>>>>> him to not shoot himself to the leg and >>>>>>> validate/normalize/encode the >>>>>>> value. I don't know how many users use this API, but removing a >>>>>>> support >>>>>>> for all managed attributes seems as a big compatibility break to >>>>>>> me. >>>>>> >>>>>> Well, it was broken (see >>>>>> https://fedorahosted.org/freeipa/ticket/2405, >>>>>> 2407, 2408), so I don't think many people used it. >>>>>> >>>>>> Anyway, what's the use case? Why would the user want to use >>>>>> --setattr >>>>>> for validated attributes? Is our regular API lacking something? >>>>>> >>>>> >>>>> I think Honza had the best suggestion, enhance the standard API to >>>>> handle multi-valued attributes better and reduce the need for >>>>> set/add/delattr. At some future point we can consider dropping them >>>>> when updating something already covered by a Param. We're stuck with >>>>> them for now. >>>>> >>>> >>>> The use case I would see is the extensibility. Say a customer wants to >>>> extend a schema and add an attribute X to the user object. He would >>>> still be able to manage users using CLI without writing a plugin for >>>> the new attribute. Yes plugin is preferred but not everybody would go >>>> for it. So in absence of the plugin we can't do validation but we >>>> still >>>> should function and be able to deal with this attribute via CLI >>>> (and UI >>>> if this attribute is enabled for UI via UI configuration). >>>> >>>> I am generally against dropping this interface. But expectations IMO >>>> should be: >>>> 1) If the attribute is managed by us with setattr and friends it >>>> should >>>> behave in the same way as via the direct add/mod/del command >>>> 2) If attribute is not managed it should not provide any guarantees >>>> and >>>> act in the same way as via LDAP >>>> >>>> Hope this helps. >>>> >>> >>> This might be the best thing to do, but IMO it is still no good, >>> because >>> the behavior of --{set,add,del}attr for a particular attribute might >>> change between API versions, when that attribute changes from unmanaged >>> to managed. >>> >>> Honza >>> >> >> I think this is OK and expectable - user may use setattr option to set >> an attribute before it is officially supported by IPA and still have it >> working when he upgrades. > > There's no guarantee the user will still have it working. 
User > application might break if it depends on the unmanaged behavior and we > are too strict in validation of the managed attribute, for example. I > don't known how much of an issue is this actually, but this kind of > unpredictability is not a good thing IMHO. > >> Though we should make sure that we describe >> this API well in our documentation to make sure this expectation is >> shared with users. >> >> Martin >> > > Honza > Honza, I do not agree. The difference in behavior will be only in validation. So instead of letting an invalid data in we would start to prevent it. IMO it is a bug in the application in the first place so breakage is justified especially if we document this expectation. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jdennis at redhat.com Wed Apr 11 17:17:57 2012 From: jdennis at redhat.com (John Dennis) Date: Wed, 11 Apr 2012 13:17:57 -0400 Subject: [Freeipa-devel] param type issues Message-ID: <4F85BCC5.5040901@redhat.com> I recently tried to add a new parameter class (i.e. derived from Param), it did not go smoothly. I presume the problems I ran into were coding oversights in our current base, not intentional limitations but I like to confirm that. 1) Default values are not converted to the type of the parameter using the stock default keyword. This seems like an omission. See the below code snippets. get_default() is called and if there is a default_from() function pointer it is invoked and it's return value is passed to normalize() and convert(), this seems correct. However if the default_from() function pointer is not provided the default attribute is returned bypassing the normalization and conversion steps, that seems incorrect. A default should be subject to the same processing rules. One might argue that the default attribute should already be normalized and converted because the programmer who added the default should have done so. But that's not always true, lets say you have a type which is most easily expressed as a string for the purpose of specifying a default value, but is really converted to another type when it's used. Here is a good example, DN's. A DN parameter has the DN class as it's type, but you would like to specify the default as a string, e.g. "ou=users". In the current scheme you're forced to have this in your Param constructor: DN_Param('binddn', default=DN('cn=directory manager')) This is necessary because the type of a DN_Param is a DN object. However if you do this then makeapi gets all confused, see issue 2 below. But if you allow the default value to be normalized and converted everything just works nicely. Here is another example, albeit contrived, but it serves to illustrate. Number('max', default="100") This won't work either in the current code. Now you say "but you would just write default=100". Yes that's true and why it's contrived, but that only works because 100 happens to be a native Python type (e.g. int). When defaults are either not native Python types or it's easier to express the value differently (e.g. a duration specified as the string "1day 2hours") the current scheme fails. It shouldn't be necessary to supply a default_from() function pointer for these simple cases just to get normalization and conversion, the default_from() really makes most sense when you need to postpone generation of the default value or it involves some computation. 
The default value should be treated just as if the user had supplied the value passing through all the same normization and conversion steps. def __call__(self, value, **kw): """ One stop shopping. """ if value in NULLS: value = self.get_default(**kw) else: value = self.convert(self.normalize(value)) if hasattr(self, 'env'): self.validate(value, self.env.context, supplied=self.name in kw) else: self.validate(value, supplied=self.name in kw) return value def get_default(self, **kw): if self.default_from is not None: default = self.default_from(**kw) if default is not None: try: return self.convert(self.normalize(default)) except StandardError: pass return self.default 2) makeapi cannot deal with defaults which are not Python primitives. makeapi generates a string to describe a parameter, that includes the default value. makeapi uses repr() to generate the string. That works fine when defaults (or other items) are Python primitives but fails for user defined types, the reason is because calling repr() on a user defined type generates something like this: ipalib.dn.DN object at 0x8c0128c It's the type of the object followed by it's address in memory. Thus makeapi adds this line to the API.txt file option: DN_Param('binddn?', autofill=True, cli_name='bind_dn', default=) Clearly the memory location of a default value is the wrong thing to capture. (Recall from above you must specify the default as DN('cn=directory manager') because conversions are not performed) There are two solutions right off the top of my head a) Permit conversions from string values for defaults (issue 1), then the default remains a string. b) Use str() instead of repr() in makeapi (however we would lose the str vs. unicode marker, not sure if that's critical nor if that's actually how we shold be validating the type of the value anyway). Perhaps both should be done. FWIW the use of repr() in conjunction with a user defined type for a default caused the makeapi --validate to fail (because it's trying to compare memory locations). Tracking the cause of that failure was not easy. 3) The class name cannot contain an underscore. The find_name() function in makeapi uses a regexp that does not permit an underscore in the name of the class. I presume this was an oversight and not a requirement that class names do not contain underscores. 4) makeapi is makeing assumptions about dict ordering. makeapi when it generates a string calls repr() on the contents of a dict. It uses that to compare to a previous run to see if they are identical by doing a string comparision. That's not robust. There are no guarantees concerning the ordering of keys in a dict, nor the string values produced by repr(). If you want to compare dicts for equality then you should compare dicts for equality. If you want to use strings for comparison purposes you have to be a lot more careful about how you generate that string representation. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Wed Apr 11 17:44:04 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 13:44:04 -0400 Subject: [Freeipa-devel] param type issues In-Reply-To: <4F85BCC5.5040901@redhat.com> References: <4F85BCC5.5040901@redhat.com> Message-ID: <4F85C2E4.2090604@redhat.com> John Dennis wrote: > I recently tried to add a new parameter class (i.e. derived from > Param), it did not go smoothly. 
I presume the problems I ran into were > coding oversights in our current base, not intentional limitations but > I like to confirm that. > > 1) Default values are not converted to the type of the parameter using > the stock default keyword. This seems like an omission. See the below > code snippets. get_default() is called and if there is a > default_from() function pointer it is invoked and it's return value is > passed to normalize() and convert(), this seems correct. > > However if the default_from() function pointer is not provided the > default attribute is returned bypassing the normalization and > conversion steps, that seems incorrect. A default should be subject to > the same processing rules. One might argue that the default attribute > should already be normalized and converted because the programmer who > added the default should have done so. But that's not always true, > lets say you have a type which is most easily expressed as a string > for the purpose of specifying a default value, but is really converted > to another type when it's used. Here is a good example, DN's. A DN > parameter has the DN class as it's type, but you would like to specify > the default as a string, e.g. "ou=users". In the current scheme you're > forced to have this in your Param constructor: > > DN_Param('binddn', default=DN('cn=directory manager')) > > This is necessary because the type of a DN_Param is a DN > object. However if you do this then makeapi gets all confused, see > issue 2 below. > > But if you allow the default value to be normalized and converted > everything just works nicely. > > Here is another example, albeit contrived, but it serves to > illustrate. > > Number('max', default="100") > > This won't work either in the current code. Now you say "but you would > just write default=100". Yes that's true and why it's contrived, but > that only works because 100 happens to be a native Python type > (e.g. int). When defaults are either not native Python types or it's > easier to express the value differently (e.g. a duration specified as > the string "1day 2hours") the current scheme fails. It shouldn't be > necessary to supply a default_from() function pointer for these simple > cases just to get normalization and conversion, the default_from() > really makes most sense when you need to postpone generation of the > default value or it involves some computation. > > The default value should be treated just as if the user had supplied > the value passing through all the same normization and conversion steps. > > def __call__(self, value, **kw): > """ > One stop shopping. > """ > if value in NULLS: > value = self.get_default(**kw) > else: > value = self.convert(self.normalize(value)) > if hasattr(self, 'env'): > self.validate(value, self.env.context, supplied=self.name in kw) > else: > self.validate(value, supplied=self.name in kw) > return value > > def get_default(self, **kw): > if self.default_from is not None: > default = self.default_from(**kw) > if default is not None: > try: > return self.convert(self.normalize(default)) > except StandardError: > pass > return self.default Params have always been rather primitive types so this has never come up. It seems like a lot of extra work to run every default through a validator but I don't think that's a show-stopper. > > 2) makeapi cannot deal with defaults which are not Python primitives. > > makeapi generates a string to describe a parameter, that includes the > default value. makeapi uses repr() to generate the string. 
That works > fine when defaults (or other items) are Python primitives but fails > for user defined types, the reason is because calling repr() on a user > defined type generates something like this: > > ipalib.dn.DN object at 0x8c0128c > > It's the type of the object followed by it's address in memory. > > Thus makeapi adds this line to the API.txt file > > option: DN_Param('binddn?', autofill=True, cli_name='bind_dn', > default=) > > Clearly the memory location of a default value is the wrong thing to > capture. (Recall from above you must specify the default as > DN('cn=directory manager') because conversions are not performed) > > There are two solutions right off the top of my head > > a) Permit conversions from string values for defaults (issue 1), > then the default remains a string. > > b) Use str() instead of repr() in makeapi (however we would lose the > str vs. unicode marker, not sure if that's critical nor if that's > actually how we shold be validating the type of the value > anyway). > > Perhaps both should be done. > > FWIW the use of repr() in conjunction with a user defined type for a > default caused the makeapi --validate to fail (because it's trying to > compare memory locations). Tracking the cause of that failure was not > easy. We can probably just drop default from the things tracked by the API. It doesn't affect the data on the wire. > 3) The class name cannot contain an underscore. > > The find_name() function in makeapi uses a regexp that does not permit > an underscore in the name of the class. I presume this was an > oversight and not a requirement that class names do not contain > underscores. Why would you want to? Param classes are by definition simple things: Int, Str, etc. > 4) makeapi is makeing assumptions about dict ordering. > > makeapi when it generates a string calls repr() on the contents of a > dict. It uses that to compare to a previous run to see if they are > identical by doing a string comparision. That's not robust. There are > no guarantees concerning the ordering of keys in a dict, nor the > string values produced by repr(). If you want to compare dicts for > equality then you should compare dicts for equality. If you want to > use strings for comparison purposes you have to be a lot more careful > about how you generate that string representation. > > makeapi was rather quick and dirty. dict ordering has always been the same (except when it isn't). In the year since we introduced makeapi I don't recall a case where dict ordering changed. I don't want to over-engineer things but since we already have dict comparison code in the test sutie we can probably leverage that. makeapi is just there to keep us honest. It doesn't have the same robustness requirements that other code has. In other words if it takes 15 minutes to properly compare the dicts then fine. If it is going to take a day then don't bother, the bang isn't worth the buck. rob From jdennis at redhat.com Wed Apr 11 18:07:15 2012 From: jdennis at redhat.com (John Dennis) Date: Wed, 11 Apr 2012 14:07:15 -0400 Subject: [Freeipa-devel] param type issues In-Reply-To: <4F85C2E4.2090604@redhat.com> References: <4F85BCC5.5040901@redhat.com> <4F85C2E4.2090604@redhat.com> Message-ID: <4F85C853.6080506@redhat.com> On 04/11/2012 01:44 PM, Rob Crittenden wrote: > John Dennis wrote: >> The default value should be treated just as if the user had supplied >> the value passing through all the same normization and conversion steps. 
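For what it is worth, the change being argued for could look roughly like the following sketch, which just extends the get_default() quoted above (it is not the committed ipalib code): the static default attribute takes the same convert(normalize(...)) path that a default_from() result already does, so a default written as a plain string comes out as the parameter's real type.

    def get_default(self, **kw):
        # Sketch of the proposed behaviour, not a tested patch.
        if self.default_from is not None:
            default = self.default_from(**kw)
            if default is not None:
                try:
                    return self.convert(self.normalize(default))
                except StandardError:
                    pass
        # New part: run the static default through the same pipeline,
        # so DN_Param('binddn', default='cn=directory manager') or
        # Number('max', default='100') yield properly typed values.
        if self.default is not None:
            try:
                return self.convert(self.normalize(self.default))
            except StandardError:
                pass
        return self.default

This would also amount to solution (a) for the makeapi problem, since the default recorded in API.txt stays a plain string rather than a repr() of a user-defined object.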
> Params have always been rather primitive types so this has never come > up. It seems like a lot of extra work to run every default through a > validator but I don't think that's a show-stopper. It's not much extra work, default values aren't evaluated that often, the generality is useful and the framework tries otherwise to be symmetric and general. Plus if you ever look at how much code we execute via the framework to do the simplest of things it would be somewhat shocking :-) >> 3) The class name cannot contain an underscore. >> >> The find_name() function in makeapi uses a regexp that does not permit >> an underscore in the name of the class. I presume this was an >> oversight and not a requirement that class names do not contain >> underscores. > > Why would you want to? Param classes are by definition simple things: > Int, Str, etc. Simple example, I originally named the DN parameter DN following your intuitive suggestion. But we already have a type named DN, thus it's a name conflict. So, the parameter type had to be called something other than DN. Also, the existing parameter types don't really capture the fact they can only ever be used as parameter to a command (e.g. Str is very generic name, but I digress). So I decided to pick a name that both captured the fact it's a DN type *and* it can only be used in the context of a command parameter, DN_Param seemed to make sense, but I discovered makeapi silently stumbled on this and failed to recognize it all. It just seems like not permitting a type name to have an underscore is an arbitrary limitation, but I'm really not hung up over the name, rather it was a time spent to figure out why makeapi was failing that was the real issue. >> 4) makeapi is makeing assumptions about dict ordering. >> >> makeapi when it generates a string calls repr() on the contents of a >> dict. It uses that to compare to a previous run to see if they are >> identical by doing a string comparision. That's not robust. There are >> no guarantees concerning the ordering of keys in a dict, nor the >> string values produced by repr(). If you want to compare dicts for >> equality then you should compare dicts for equality. If you want to >> use strings for comparison purposes you have to be a lot more careful >> about how you generate that string representation. >> >> > > makeapi was rather quick and dirty. dict ordering has always been the > same (except when it isn't). In the year since we introduced makeapi I > don't recall a case where dict ordering changed. I don't want to > over-engineer things but since we already have dict comparison code in > the test sutie we can probably leverage that. makeapi is just there to > keep us honest. It doesn't have the same robustness requirements that > other code has. In other words if it takes 15 minutes to properly > compare the dicts then fine. If it is going to take a day then don't > bother, the bang isn't worth the buck. I agree, I don't think this deserves recoding at this point, it's worked so far. It was more to capture the issue, I had to take a hard look at how makeapi worked and it was just something I noticed while debugging. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From rcritten at redhat.com Wed Apr 11 18:23:30 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 14:23:30 -0400 Subject: [Freeipa-devel] [PATCH] 0029 Check expected error messages in tests In-Reply-To: <4F841697.5060903@redhat.com> References: <4F687A8D.6020508@redhat.com> <4F709C11.1070800@redhat.com> <4F70C864.5050107@redhat.com> <4F71C48D.3010502@redhat.com> <4F74D17C.2010504@redhat.com> <4F841697.5060903@redhat.com> Message-ID: <4F85CC22.4060102@redhat.com> Petr Viktorin wrote: > On 03/29/2012 11:17 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 03/26/2012 09:49 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 03/20/2012 01:39 PM, Petr Viktorin wrote: >>>>>> This patch adds checking error messages, not just types, to the >>>>>> XML-RPC >>>>>> tests. >>>>>> The checking is still somewhat hackish, since XML-RPC doesn't give us >>>>>> structured error info, but it should protect against regressions on >>>>>> issues like whether we put name or cli_name in a ValidationError. >>>>>> >>>>>> https://fedorahosted.org/freeipa/ticket/2549 >>>>>> >>>>> >>>>> Updated and rebased to current master. >>>> >>>> NACK >>>> >>>> automember wrongly was testing for non-existent users rather than >>>> automember rules but those should still be tested IMHO, perhaps with >>>> both types. >>>> >>>> There is also some inconsistency. In host you use substitution to set >>>> the hostname in the error: '%s: host not found' % fqdn1 but in others >>>> (group, hostgroup for example) the name is hardcoded. I also noticed >>>> that some reasons are unicode and others are not. >>>> >>>> rob >>> >>> Added tests for automember, made all the reasons unicode, using >>> substitutions when variables are involved. >>> >>> The patch still only updates tests that didn't pass the error message >>> check. >>> >> >> I'm seeing three failures that I think are due to recently pushed >> patches. Otherwise it looks good. >> >> rob >> >> >> ====================================================================== >> FAIL: test_netgroup[1]: netgroup_mod: Try to update non-existent >> u'netgroup1' >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in >> runTest >> self.test(*self.arg) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 247, in >> func = lambda: self.check(nice, **test) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 260, in check >> self.check_exception(nice, cmd, args, options, expected) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 284, in check_exception >> assert_deepequal(expected.strerror, e.strerror) >> File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in >> assert_deepequal >> VALUE % (doc, expected, got, stack) >> AssertionError: assert_deepequal: expected != got. 
>> >> expected = u'netgroup1: netgroup not found' >> got = u'no such entry' >> path = () >> >> ====================================================================== >> FAIL: test_netgroup[4]: netgroup_add: Test an invalid nisdomain1 name >> u'domain1,domain2' >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in >> runTest >> self.test(*self.arg) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 247, in >> func = lambda: self.check(nice, **test) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 260, in check >> self.check_exception(nice, cmd, args, options, expected) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 284, in check_exception >> assert_deepequal(expected.strerror, e.strerror) >> File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in >> assert_deepequal >> VALUE % (doc, expected, got, stack) >> AssertionError: assert_deepequal: expected != got. >> >> expected = u"invalid 'nisdomainname': may only include letters, numbers, >> _, - and ." >> got = u"invalid 'nisdomain': may only include letters, numbers, _, -, >> and ." >> path = () >> >> ====================================================================== >> FAIL: test_netgroup[5]: netgroup_add: Test an invalid nisdomain2 name >> u'+invalidnisdomain' >> ---------------------------------------------------------------------- >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/nose/case.py", line 187, in >> runTest >> self.test(*self.arg) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 247, in >> func = lambda: self.check(nice, **test) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 260, in check >> self.check_exception(nice, cmd, args, options, expected) >> File >> "/home/rcrit/redhat/freeipa-beta1/tests/test_xmlrpc/xmlrpc_test.py", >> line 284, in check_exception >> assert_deepequal(expected.strerror, e.strerror) >> File "/home/rcrit/redhat/freeipa-beta1/tests/util.py", line 328, in >> assert_deepequal >> VALUE % (doc, expected, got, stack) >> AssertionError: assert_deepequal: expected != got. >> >> expected = u"invalid 'nisdomainname': may only include letters, numbers, >> _, - and ." >> got = u"invalid 'nisdomain': may only include letters, numbers, _, -, >> and ." >> path = () >> > > Updated patch attached. Works great. ACK, pushed to master and ipa-2-2 rob From rcritten at redhat.com Wed Apr 11 21:26:18 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 11 Apr 2012 17:26:18 -0400 Subject: [Freeipa-devel] [PATCH] 0033 Pass make-test arguments through to Nose + Test coverage In-Reply-To: <4F7C6890.1090200@redhat.com> References: <4F7C6890.1090200@redhat.com> Message-ID: <4F85F6FA.7060901@redhat.com> Petr Viktorin wrote: > Currently, our test script forwards a select few command line arguments > to nosetests. > This patch removes the filtering, passing all arguments through. > This allows things like disabling output redirection (--nocapture), > dropping into a debugger (--pdb, --pdb-failures), coverage reporting > (--with-cover, once installed), etc. > > https://fedorahosted.org/freeipa/ticket/2135 > > I believe this is a better solution than adding individual options as > they're needed. 
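For illustration, the pass-through approach described above amounts to something like the sketch below; it is not the real make-test wrapper, just the minimal idea.

    import sys
    import nose

    if __name__ == '__main__':
        # Hand everything after the script name to nose unchanged, so
        # options like --nocapture, --pdb or --with-cover just work.
        success = nose.run(argv=sys.argv)
        sys.exit(0 if success else 1)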
> > > --- > > A coverage report can be generated by combining data from both the tests > and the server. I run this: > > Setup: > yum install python-coverage > echo /.coverage* >> .git/info/exclude > echo /htmlcov/ >> .git/info/exclude > > Terminal 1: > coverage erase > coverage run -p --source . lite-server.py > > Terminal 2: > kinit > ./make-test --with-coverage --cover-inclusive > > Terminal 1 again: > ^C > coverage combine > coverage html --omit=/usr/lib/* > > Then view ./htmlcov/index.html in a browser. This looks very good to me. I'll open a ticket to go through the coverage to find weaknesses. John, this replaces your existing patch, does this still fit the bill? rob From jdennis at redhat.com Thu Apr 12 02:05:45 2012 From: jdennis at redhat.com (John Dennis) Date: Wed, 11 Apr 2012 22:05:45 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F844BDC.108@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> Message-ID: <4F863879.3040405@redhat.com> Revised patch attached. We'll leave the DN parameter changes till later. This is essentially the same as the original patch with the addition of the fixes necessary to support passing an empty container arg, an issue Martin discovered in his review. FWIW the answer was not to make the param required (actually it would have been adding the flag 'nonempty') because you should be able to say you don't want to introduce a container into the search bases (see commit comment) -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0072-1-Validate-DN-RDN-parameters-for-migrate-command.patch Type: text/x-patch Size: 4894 bytes Desc: not available URL: From mkosek at redhat.com Thu Apr 12 07:51:38 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 09:51:38 +0200 Subject: [Freeipa-devel] [PATCH] 249 Return correct record name in DNS plugin Message-ID: <1334217098.23788.0.camel@balmora.brq.redhat.com> When dnsrecord-add or dnsrecord-mod commands are used on a root zone record (it has a special name "@"), a zone name is returned instead of a special name "@". This confuses DNS part of Web UI which is then not able to manipulate records in the root zone when these commands are used. This patch fixes these 2 commands to return correct value when a root zone is modified. https://fedorahosted.org/freeipa/ticket/2627 https://fedorahosted.org/freeipa/ticket/2628 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-249-return-correct-record-name-in-dns-plugin.patch Type: text/x-patch Size: 1765 bytes Desc: not available URL: From mkosek at redhat.com Thu Apr 12 08:17:50 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 10:17:50 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F863879.3040405@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> Message-ID: <1334218670.23788.8.camel@balmora.brq.redhat.com> On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: > Revised patch attached. We'll leave the DN parameter changes till later. 
> This is essentially the same as the original patch with the addition of > the fixes necessary to support passing an empty container arg, an issue > Martin discovered in his review. FWIW the answer was not to make the > param required (actually it would have been adding the flag 'nonempty') > because you should be able to say you don't want to introduce a > container into the search bases (see commit comment) > I don't agree with the removal of default values for the containers and allowing an empty value for them. Please, see my reasoning: 1) I don't think its unlikely to have ou=People and ou=groups as containers for users/groups as they are default containers in fresh LDAP installs. I think most of the small LDAP deployments will use these values. 2) I am also not sure if somebody would want to pass empty user and group container. Users and groups won't be shared in the same container and since we search with _ldap.SCOPE_ONELEVEL the migration would not find users or groups in containers nested under the search base anyway. Martin From pviktori at redhat.com Thu Apr 12 10:16:03 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 12 Apr 2012 12:16:03 +0200 Subject: [Freeipa-devel] [RANT] --setattr validation is a minefield. In-Reply-To: <4F8535F2.6030505@redhat.com> References: <4F843D0F.5060906@redhat.com> <4F844BC4.2020109@redhat.com> <1334077635.16034.20.camel@balmora.brq.redhat.com> <4F846CF8.3020708@redhat.com> <4F84726E.90504@redhat.com> <4F847433.3060703@redhat.com> <4F853050.2060107@redhat.com> <1334129249.18938.12.camel@balmora.brq.redhat.com> <4F8535F2.6030505@redhat.com> Message-ID: <4F86AB63.5060305@redhat.com> On 04/11/2012 09:42 AM, Jan Cholasta wrote: > On 11.4.2012 09:27, Martin Kosek wrote: >> On Wed, 2012-04-11 at 09:18 +0200, Jan Cholasta wrote: >>> On 10.4.2012 19:56, Dmitri Pal wrote: ... >>>> >>>> The use case I would see is the extensibility. Say a customer wants to >>>> extend a schema and add an attribute X to the user object. He would >>>> still be able to manage users using CLI without writing a plugin for >>>> the new attribute. Yes plugin is preferred but not everybody would go >>>> for it. So in absence of the plugin we can't do validation but we still >>>> should function and be able to deal with this attribute via CLI (and UI >>>> if this attribute is enabled for UI via UI configuration). >>>> >>>> I am generally against dropping this interface. But expectations IMO >>>> should be: >>>> 1) If the attribute is managed by us with setattr and friends it should >>>> behave in the same way as via the direct add/mod/del command Does this include refusing to modify no_update attributes? Or do we treat those the same way as non-managed attributes (no validation.conversion)? Or is it okay as it's done now, validating and updating as if no_update wasn't there? >>>> 2) If attribute is not managed it should not provide any guarantees and >>>> act in the same way as via LDAP I never had an issue with the non-managed attributes; addattr is perfect for those. >>>> Hope this helps. >>> This might be the best thing to do, but IMO it is still no good, because >>> the behavior of --{set,add,del}attr for a particular attribute might >>> change between API versions, when that attribute changes from unmanaged >>> to managed. >>> >>> Honza >>> >> >> I think this is OK and expectable - user may use setattr option to set >> an attribute before it is officially supported by IPA and still have it >> working when he upgrades. 
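A rough illustration of the distinction being discussed, assuming an initialized and connected api object; the attribute name is made up.

    from ipalib import api

    # Managed option: the value passes through the framework's Param,
    # so it is converted and validated before being written.
    api.Command.user_mod(u'jdoe', givenname=u'John')

    # Unmanaged attribute via --setattr: stored as-is, with no framework
    # validation or conversion applied.
    api.Command.user_mod(u'jdoe', setattr=u'someunmanagedattr=somevalue')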
> > There's no guarantee the user will still have it working. User > application might break if it depends on the unmanaged behavior and we > are too strict in validation of the managed attribute, for example. I > don't known how much of an issue is this actually, but this kind of > unpredictability is not a good thing IMHO. Since our validation now allows working with bad attributes, just not adding new ones, I think this is not much of a problem. >> Though we should make sure that we describe >> this API well in our documentation to make sure this expectation is >> shared with users. >> >> Martin >> > > Honza > -- Petr? From pviktori at redhat.com Thu Apr 12 11:30:38 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 12 Apr 2012 13:30:38 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F849AE5.5030409@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> Message-ID: <4F86BCDE.1080508@redhat.com> On 04/10/2012 10:41 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 03/30/2012 11:00 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> On 03/26/2012 05:35 PM, Petr Viktorin wrote: >>>>> On 03/26/2012 04:54 PM, Rob Crittenden wrote: >>>>>> >>>>>> Some minor compliants. >>>>> >>>>> >>>>> Ideally, there would be a routine that sets up the logging and handles >>>>> command-line arguments in some uniform way (which is also needed >>>>> before >>>>> logging setup to detect ipa-server-install --uninstall). >>>>> The original patch did the common logging setup, and I hacked around >>>>> the >>>>> install/uninstall problem too. >>>>> I guess I overdid it when I simplified the patch. >>>>> I'm somewhat confused about the scope, so bear with me as I clarify >>>>> what >>>>> you mean. >>>>> >>>>> >>>>>> If you abort the installation you get this somewhat unnerving error: >>>>>> >>>>>> Continue to configure the system with these values? [no]: >>>>>> ipa : ERROR ipa-server-install failed, SystemExit: Installation >>>>>> aborted >>>>>> Installation aborted >>>>>> >>>>>> ipa-ldap-updater is the same: >>>>>> >>>>>> # ipa-ldap-updater >>>>>> [2012-03-26T14:53:41Z ipa] : ipa-ldap-updater failed, >>>>>> SystemExit: >>>>>> IPA is not configured on this system. >>>>>> IPA is not configured on this system. >>>>>> >>>>>> and ipa-upgradeconfig >>>>>> >>>>>> $ ipa-upgradeconfig >>>>>> [2012-03-26T14:54:05Z ipa] : ipa-upgradeconfig failed, >>>>>> SystemExit: >>>>>> You must be root to run this script. >>>>>> >>>>>> >>>>>> You must be root to run this script. >>>>>> >>>>>> I'm guessing that the issue is that the log file isn't opened yet. >>>>> > >>>>>> It would be nice if the logging would be confined to just the log. >>>>> >>>>> >>>>> If I understand you correctly, the code should check if logging has >>>>> been >>>>> configured already, and if not, skip displaying the message? >>>>> >>>>> >>>>>> When uninstalling you get the message 'ipa-server-install >>>>>> successful'. >>>>>> This is a little odd as well. >>>>> >>>>> ipa-server-install is the name of the command. Wontfix for now, unless >>>>> you disagree strongly. 
>>>>> >>>>> >>>> >>>> Updated patch: only log if logging has been configured (detected by >>>> looking at the root logger's handlers), and changed the message to ?The >>>> ipa-server-install command has succeeded/failed?. >>> >>> Works much better thanks. Just one request. When you created final_log() >>> you show less information than you did in earlier patches. It is nice >>> seeing the SystemExit failure. Can you do something like this (basically >>> cut-n-pasted from v05)? >>> >>> diff --git a/ipaserver/install/installutils.py >>> b/ipaserver/install/installutils. >>> py >>> index 851b58d..ca82a1b 100644 >>> --- a/ipaserver/install/installutils.py >>> +++ b/ipaserver/install/installutils.py >>> @@ -721,15 +721,15 @@ def script_context(operation_name): >>> # Only log if logging was already configured >>> # TODO: Do this properly (e.g. configure logging before the try/except) >>> if log_mgr.handlers.keys() != ['console']: >>> - root_logger.info(template, operation_name) >>> + root_logger.info(template) >>> try: >>> yield >>> except BaseException, e: >>> if isinstance(e, SystemExit) and (e.code is None or e.code == 0): >>> # Not an error after all >>> - final_log('The %s command was successful') >>> + final_log('The %s command was successful' % operation_name) >>> else: >>> - final_log('The %s command failed') >>> + final_log('%s failed, %s: %s' % (operation_name, type(e).__name__, >>> e)) >>> raise >>> else: >>> final_log('The %s command was successful') >>> >>> This looks like: >>> >>> 2012-03-30T20:56:53Z INFO ipa-dns-install failed, SystemExit: >>> DNS is already configured in this IPA server. >>> >>> rob >> >> Fixed. >> > > Hate to do this to you but I've found a few more issues. I basically > went down the list and ran all the commands in various conditions. > > Some don't open any logs at all so the output gets written twice, like > ipa-replica-prepare and ipa-replica-manage: > > # ipa-replica-manage del foo > Directory Manager password: > > ipa: INFO: The ipa-replica-manage command failed, SystemExit: > 'pony.greyoak.com' has no replication agreement for 'foo' > 'pony.greyoak.com' has no replication agreement for 'foo' > > Same with ipa-csreplica-manage. > > # ipa-replica-prepare foo > Directory Manager (existing master) password: > > ipa: INFO: The ipa-replica-prepare command failed, SystemExit: > The password provided is incorrect for LDAP server pony.greyoak.com > > The password provided is incorrect for LDAP server pony.greyoak.com > > rob When the utility sets logging to console, the extra log message gets printed out there. I agree this isn't optimal. Attached patch removes the console log handler before logging the result. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0014-08-Add-final-debug-message-in-installers.patch Type: text/x-patch Size: 30351 bytes Desc: not available URL: From jcholast at redhat.com Thu Apr 12 11:34:02 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 12 Apr 2012 13:34:02 +0200 Subject: [Freeipa-devel] [PATCH] 75 Fix internal error when renaming user with an empty string Message-ID: <4F86BDAA.9070102@redhat.com> https://fedorahosted.org/freeipa/ticket/2629 Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-75-user-rename-empty-fix.patch Type: text/x-patch Size: 965 bytes Desc: not available URL: From pviktori at redhat.com Thu Apr 12 11:38:14 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 12 Apr 2012 13:38:14 +0200 Subject: [Freeipa-devel] param type issues In-Reply-To: <4F85C2E4.2090604@redhat.com> References: <4F85BCC5.5040901@redhat.com> <4F85C2E4.2090604@redhat.com> Message-ID: <4F86BEA6.4030402@redhat.com> On 04/11/2012 07:44 PM, Rob Crittenden wrote: > John Dennis wrote: ... > >> 3) The class name cannot contain an underscore. >> >> The find_name() function in makeapi uses a regexp that does not permit >> an underscore in the name of the class. I presume this was an >> oversight and not a requirement that class names do not contain >> underscores. My interpretation of PEP8 is that class names shouldn't use underscores; your class should be named DNParam, not DN_Param. We already have classes like IA5Str, LDAPError, SSLTransport or CLIOptionParser. find_name() should still be fixed though > > Why would you want to? Param classes are by definition simple things: > Int, Str, etc. > >> 4) makeapi is makeing assumptions about dict ordering. >> >> makeapi when it generates a string calls repr() on the contents of a >> dict. It uses that to compare to a previous run to see if they are >> identical by doing a string comparision. That's not robust. There are >> no guarantees concerning the ordering of keys in a dict, nor the >> string values produced by repr(). If you want to compare dicts for >> equality then you should compare dicts for equality. If you want to >> use strings for comparison purposes you have to be a lot more careful >> about how you generate that string representation. >> >> > > makeapi was rather quick and dirty. dict ordering has always been the > same (except when it isn't). In the year since we introduced makeapi I > don't recall a case where dict ordering changed. A change is slowly coming up. There is a security issue with predictable hashing (oCERT-2011-003); the fix for this will make dict ordering randomized between Python runs. Python 2.6.8 and 2.7.3 (released upstream this week) can do this, but to maintain backwards compatibility, it's off by default. Most likely, it won't be turned on until Python 3.3. > I don't want to > over-engineer things but since we already have dict comparison code in > the test sutie we can probably leverage that. makeapi is just there to > keep us honest. It doesn't have the same robustness requirements that > other code has. In other words if it takes 15 minutes to properly > compare the dicts then fine. If it is going to take a day then don't > bother, the bang isn't worth the buck. Once 2.7.3 is in the distro I recommend setting testing with the randomization enabled (export PYTHONHASHSEED=random), as we will want IPA to work when users have it on. makeapi will need fixing then. -- Petr? From mkosek at redhat.com Thu Apr 12 12:12:43 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 14:12:43 +0200 Subject: [Freeipa-devel] [PATCH] 75 Fix internal error when renaming user with an empty string In-Reply-To: <4F86BDAA.9070102@redhat.com> References: <4F86BDAA.9070102@redhat.com> Message-ID: <1334232763.23788.34.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 13:34 +0200, Jan Cholasta wrote: > https://fedorahosted.org/freeipa/ticket/2629 > > Honza ACK. I will wait with push until the ticket is triaged. 
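To illustrate the find_name() limitation raised in the param-type thread above, the patterns below are illustrative only, not the actual makeapi regular expression. Whatever the class ends up being called (DNParam per PEP8, or DN_Param), the scanner failing silently is the real problem.

    import re

    # A class-name pattern without underscores silently skips DN_Param...
    strict = re.compile(r'^class\s+([A-Za-z][A-Za-z0-9]*)\s*[(:]')
    # ...while one that allows the underscore finds it.
    relaxed = re.compile(r'^class\s+([A-Za-z_][A-Za-z0-9_]*)\s*[(:]')

    line = 'class DN_Param(Param):'
    assert strict.match(line) is None
    assert relaxed.match(line).group(1) == 'DN_Param'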
Martin From pviktori at redhat.com Thu Apr 12 12:50:56 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 12 Apr 2012 14:50:56 +0200 Subject: [Freeipa-devel] [PATCH] 0036 Remove pattern_errmsg from API.txt Message-ID: <4F86CFB0.1030609@redhat.com> https://fedorahosted.org/freeipa/ticket/2619 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0036-Remove-pattern_errmsg-from-API.txt.patch Type: text/x-patch Size: 44128 bytes Desc: not available URL: From jcholast at redhat.com Thu Apr 12 12:53:01 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 12 Apr 2012 14:53:01 +0200 Subject: [Freeipa-devel] [PATCH] 249 Return correct record name in DNS plugin In-Reply-To: <1334217098.23788.0.camel@balmora.brq.redhat.com> References: <1334217098.23788.0.camel@balmora.brq.redhat.com> Message-ID: <4F86D02D.30104@redhat.com> On 12.4.2012 09:51, Martin Kosek wrote: > When dnsrecord-add or dnsrecord-mod commands are used on a root > zone record (it has a special name "@"), a zone name is returned > instead of a special name "@". This confuses DNS part of Web UI > which is then not able to manipulate records in the root zone > when these commands are used. > > This patch fixes these 2 commands to return correct value when > a root zone is modified. > > https://fedorahosted.org/freeipa/ticket/2627 > https://fedorahosted.org/freeipa/ticket/2628 > Works as advertised. ACK. Honza -- Jan Cholasta From mkosek at redhat.com Thu Apr 12 13:06:47 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 15:06:47 +0200 Subject: [Freeipa-devel] [PATCH] 0036 Remove pattern_errmsg from API.txt In-Reply-To: <4F86CFB0.1030609@redhat.com> References: <4F86CFB0.1030609@redhat.com> Message-ID: <1334236007.23788.40.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 14:50 +0200, Petr Viktorin wrote: > https://fedorahosted.org/freeipa/ticket/2619 > ACK. Pushed to master, ipa-2-2. Martin From jcholast at redhat.com Thu Apr 12 13:12:25 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 12 Apr 2012 15:12:25 +0200 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded In-Reply-To: <1334048262.7045.19.camel@balmora.brq.redhat.com> References: <1334048262.7045.19.camel@balmora.brq.redhat.com> Message-ID: <4F86D4B9.7090509@redhat.com> On 10.4.2012 10:57, Martin Kosek wrote: > Few test hints are attached to the ticket. > > --- > > ldap2 plugin returns NotFound error for find_entries/get_entry > queries when the server did not manage to return an entry > due to time limits. This may be confusing for user when the > entry he searches actually exists. > > This patch fixes the behavior in ldap2 plugin to return > LimitsExceeded exception instead. This way, user would know > that his time/size limits are set too low and can amend them to > get correct results. > > https://fedorahosted.org/freeipa/ticket/2606 > ACK. Honza -- Jan Cholasta From pviktori at redhat.com Thu Apr 12 13:26:15 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 12 Apr 2012 15:26:15 +0200 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F751025.7090204@redhat.com> References: <4F751025.7090204@redhat.com> Message-ID: <4F86D7F7.9040107@redhat.com> On 03/30/2012 03:45 AM, John Dennis wrote: > Translatable strings have certain requirements for proper translation > and run time behaviour. We should routinely validate those strings. 
A > recent checkin to install/po/test_i18n.py makes it possible to validate > the contents of our current pot file by searching for problematic strings. > > However, we only occasionally generate a new pot file, far less > frequently than translatable strings change in the source base. Just > like other checkin's to the source which are tested for correctness we > should also validate new or modified translation strings when they are > introduced and not accumulate problems to fix at the last minute. This > would also raise the awareness of developers as to the requirements for > proper string translation. > > The top level Makefile should be enhanced to create a temporary pot > files from the current sources and validate it. We need to use a > temporary pot file because we do not want to modify the pot file under > source code control and exported to Transifex. > NACK install/po/Makefile is not created early enough when running `make rpms` from a clean checkout. # git clean -fx ... # make rpms rm -rf /rpmbuild mkdir -p /rpmbuild/BUILD mkdir -p /rpmbuild/RPMS mkdir -p /rpmbuild/SOURCES mkdir -p /rpmbuild/SPECS mkdir -p /rpmbuild/SRPMS mkdir -p dist/rpms mkdir -p dist/srpms if [ ! -e RELEASE ]; then echo 0 > RELEASE; fi sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ freeipa.spec.in > freeipa.spec sed -e s/__VERSION__/2.99.0GITde16a82/ version.m4.in \ > version.m4 sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/setup.py.in \ > ipapython/setup.py sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/version.py.in \ > ipapython/version.py perl -pi -e "s:__NUM_VERSION__:2990:" ipapython/version.py perl -pi -e "s:__API_VERSION__:2.34:" ipapython/version.py sed -e s/__VERSION__/2.99.0GITde16a82/ daemons/ipa-version.h.in \ > daemons/ipa-version.h perl -pi -e "s:__NUM_VERSION__:2990:" daemons/ipa-version.h perl -pi -e "s:__DATA_VERSION__:20100614120000:" daemons/ipa-version.h sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ ipa-client/ipa-client.spec.in > ipa-client/ipa-client.spec sed -e s/__VERSION__/2.99.0GITde16a82/ ipa-client/version.m4.in \ > ipa-client/version.m4 if [ "redhat" != "" ]; then \ sed -e s/SUPPORTED_PLATFORM/redhat/ ipapython/services.py.in \ > ipapython/services.py; \ fi if [ "" != "yes" ]; then \ ./makeapi --validate; \ fi make -C install/po validate-src-strings make[1]: Entering directory `/home/pviktori/freeipa/install/po' make[1]: *** No rule to make target `validate-src-strings'. Stop. make[1]: Leaving directory `/home/pviktori/freeipa/install/po' make: *** [validate-src-strings] Error 2 -- Petr? From rcritten at redhat.com Thu Apr 12 13:27:15 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Apr 2012 09:27:15 -0400 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded In-Reply-To: <4F86D4B9.7090509@redhat.com> References: <1334048262.7045.19.camel@balmora.brq.redhat.com> <4F86D4B9.7090509@redhat.com> Message-ID: <4F86D833.2010501@redhat.com> Jan Cholasta wrote: > On 10.4.2012 10:57, Martin Kosek wrote: >> Few test hints are attached to the ticket. >> >> --- >> >> ldap2 plugin returns NotFound error for find_entries/get_entry >> queries when the server did not manage to return an entry >> due to time limits. This may be confusing for user when the >> entry he searches actually exists. >> >> This patch fixes the behavior in ldap2 plugin to return >> LimitsExceeded exception instead. This way, user would know >> that his time/size limits are set too low and can amend them to >> get correct results. 
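A minimal sketch of the behaviour change just described, with toy exception classes rather than the actual ldap2 code:

    class NotFound(Exception):
        pass

    class LimitsExceeded(Exception):
        pass

    def get_entry(entries, truncated):
        if entries:
            return entries[0]
        if truncated:
            # The entry may well exist; the server simply stopped
            # searching when the time or size limit was hit.
            raise LimitsExceeded('limits exceeded for this query, '
                                 'adjust time/size limits')
        raise NotFound('no such entry')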
>> >> https://fedorahosted.org/freeipa/ticket/2606 >> > > ACK. > > Honza > Before pushing I'd like to look at this more. truncated is supposed to indicate a limits problem. I want to see if the caller should be responsible for returning a limits error instead. rob From ohamada at redhat.com Thu Apr 12 13:51:43 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Thu, 12 Apr 2012 15:51:43 +0200 Subject: [Freeipa-devel] [PATCH] 22 Always set ipa_hostname for sssd.conf Message-ID: <4F86DDEF.10707@redhat.com> https://fedorahosted.org/freeipa/ticket/2527 ipa-client-install will always set ipa_hostname for sssd.conf in order to prevent the client from getting into weird state. -- Regards, Ondrej Hamada FreeIPA team jabber:ohama at jabbim.cz IRC: ohamada -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-ohamada-22-Always-set-ipa_hostname-for-sssd.conf.patch Type: text/x-patch Size: 2313 bytes Desc: not available URL: From jdennis at redhat.com Thu Apr 12 14:03:11 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 12 Apr 2012 10:03:11 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <1334218670.23788.8.camel@balmora.brq.redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> <1334218670.23788.8.camel@balmora.brq.redhat.com> Message-ID: <4F86E09F.6070408@redhat.com> On 04/12/2012 04:17 AM, Martin Kosek wrote: > On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: >> Revised patch attached. We'll leave the DN parameter changes till later. >> This is essentially the same as the original patch with the addition of >> the fixes necessary to support passing an empty container arg, an issue >> Martin discovered in his review. FWIW the answer was not to make the >> param required (actually it would have been adding the flag 'nonempty') >> because you should be able to say you don't want to introduce a >> container into the search bases (see commit comment) >> > > I don't agree with the removal of default values for the containers and > allowing an empty value for them. Please, see my reasoning: > > 1) I don't think its unlikely to have ou=People and ou=groups as > containers for users/groups as they are default containers in fresh LDAP > installs. I think most of the small LDAP deployments will use these > values. > > 2) I am also not sure if somebody would want to pass empty user and > group container. Users and groups won't be shared in the same container > and since we search with _ldap.SCOPE_ONELEVEL the migration would not > find users or groups in containers nested under the search base anyway. OK. Patch is revised, restored the defaults, usercontainer and groupcontainer are now required to be non-empty. Also, basedn had been optional without a default which didn't make much sense, now basedn is a required parameter. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jdennis-0072-2-Validate-DN-RDN-parameters-for-migrate-command.patch Type: text/x-patch Size: 5250 bytes Desc: not available URL: From mkosek at redhat.com Thu Apr 12 14:56:55 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 16:56:55 +0200 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <20120403145749.GD8996@localhost.localdomain> References: <20120403104135.GD23171@redhat.com> <20120403145749.GD8996@localhost.localdomain> Message-ID: <1334242615.777.3.camel@balmora.brq.redhat.com> On Tue, 2012-04-03 at 16:57 +0200, Sumit Bose wrote: > On Tue, Apr 03, 2012 at 01:41:35PM +0300, Alexander Bokovoy wrote: > > Hi! > > > > Attached are the current patches for adding support for Active Directory > > trusts for FreeIPA v3 (master). > > > > These are tested and working with samba4 build available in ipa-devel@ > > repo. You have to use --delegate until we'll get all the parts of the > > Heimdal puzzle untangled and solved, and Simo patch 490 (s4u2proxy fix) > > is committed as well. > > > > Sumit asked me to send patches for review and commit to master so that > > he can proceed with his changes (removal of kadmin.local use, SID > > population task for 389-ds, etc). Without kadmin.local use fix these > > patches are not working with SELinux enabled. > > > > Patches have [../9] mark because they were generated out of my adwork > > tree. I have merged two patches together for obvious change reason and > > have left out Simo's s4u2proxy patch out, thus there are seven patches > > proposed for commit. > > I have tested the patches and they worked fine for me. They currently > only work in F17, because it relies on the version of python-ldap > shipped with F17. So it is an > > ACK > > form my side. It would be nice if someone more can have a look at the > python parts if they are in agreement with the IPA standards (I expect > they are :-). > > bye, > Sumit > > > > > -- > > / Alexander Bokovoy > Hello Alexander, I read the patches with focus on Python parts, please check my comments. freeipa-abbra-0042-ticket-2192.patch: 1) s4u2proxy records that you add for new replicas should also be removed during replica uninstall. Otherwise you will get a warning when the replica is being re-installed. You can find the clean up code in replication.py in ipaserver/install in function replica_cleanup() freeipa-abbra-0044-ticket-1821.patch: 1) Missing i18n: +trust_output_params = ( + Str('ipantflatname', + label='Domain NetBIOS name'), + Str('ipantsecurityidentifier', + label='Domain Security Identifier'), + Str('trustdirection', + label='Trust direction'), + Str('trusttype', + label='Trust type'), +) 2) This does not look nice (and returns False (i.e. 
not str) when level is out of bounds): +def trust_type_string(level): + return int(level) in (1,2,3) and (u'Forest',u'Cross-Forest',u'MIT')[int(level)-1] + +def trust_direction_string(level): + return int(level) in (1,2,3) and (u'Downlevel',u'Uplevel',u'Both directions')[int(level)-1] Maybe something like this would be better (and i18n-ed): _trust_type_dict = {1 : _('Forest'), 2 : _('Cross-Forest'), 3 : _('MIT')} _trust_type_dict_unknown = _('Unknown') def trust_type_string(level): string = _trust_type_dict.get(int(level), _trust_type_dict_unknown) return unicode(string) 3) I would not try to import ipaserver.dcerpc every time the command is executed: + try: + import ipaserver.dcerpc + except Exception, e: + raise errors.NotFound(name=_('AD Trust setup'), + reason=_('Cannot perform join operation without Samba 4 python bindings installed')) I would rather do it once in the beginning and set a flag: try: import ipaserver.dcerpc _bindings_installed = True except Exception: _bindings_installed = False ... + def execute(self, *keys, **options): + # Join domain using full credentials and with random trustdom + # secret (will be generated by the join method) + trustinstance = None + if not _bindings_installed: + raise errors.NotFound(name=_('AD Trust setup'), + reason=_('Cannot perform join operation without Samba 4 python bindings installed')) 4) Another import inside a function: + def arcfour_encrypt(key, data): + from Crypto.Cipher import ARC4 + c = ARC4.new(key) + return c.encrypt(data) HTH, Martin From abokovoy at redhat.com Thu Apr 12 15:08:03 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 12 Apr 2012 18:08:03 +0300 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <1334242615.777.3.camel@balmora.brq.redhat.com> References: <20120403104135.GD23171@redhat.com> <20120403145749.GD8996@localhost.localdomain> <1334242615.777.3.camel@balmora.brq.redhat.com> Message-ID: <20120412150803.GA24623@redhat.com> Hi Martin! On Thu, 12 Apr 2012, Martin Kosek wrote: >Hello Alexander, > >I read the patches with focus on Python parts, please check my >comments. > >freeipa-abbra-0042-ticket-2192.patch: >1) s4u2proxy records that you add for new replicas should also be >removed >during replica uninstall. Otherwise you will get a warning when the >replica is being re-installed. > >You can find the clean up code in replication.py in ipaserver/install in >function replica_cleanup() thanks. >freeipa-abbra-0044-ticket-1821.patch: >1) Missing i18n: > >+trust_output_params = ( >+ Str('ipantflatname', >+ label='Domain NetBIOS name'), >+ Str('ipantsecurityidentifier', >+ label='Domain Security Identifier'), >+ Str('trustdirection', >+ label='Trust direction'), >+ Str('trusttype', >+ label='Trust type'), >+) Ok, will fix. >2) This does not look nice (and returns False (i.e. not str) when level >is out of bounds): >+def trust_type_string(level): >+ return int(level) in (1,2,3) and (u'Forest',u'Cross-Forest',u'MIT')[int(level)-1] >+ >+def trust_direction_string(level): >+ return int(level) in (1,2,3) and (u'Downlevel',u'Uplevel',u'Both directions')[int(level)-1] > >Maybe something like this would be better (and i18n-ed): >_trust_type_dict = {1 : _('Forest'), > 2 : _('Cross-Forest'), > 3 : _('MIT')} >_trust_type_dict_unknown = _('Unknown') >def trust_type_string(level): > string = _trust_type_dict.get(int(level), _trust_type_dict_unknown) > return unicode(string) ok, makes sense. We'll need to do it to both directions later (not now). 
>3) I would not try to import ipaserver.dcerpc every time the command is >executed: >+ try: >+ import ipaserver.dcerpc >+ except Exception, e: >+ raise errors.NotFound(name=_('AD Trust setup'), >+ reason=_('Cannot perform join operation without Samba >4 python bindings installed')) > >I would rather do it once in the beginning and set a flag: > >try: > import ipaserver.dcerpc > _bindings_installed = True >except Exception: > _bindings_installed = False > >... The idea was that this code is only executed on the server. We need to differentiate between: - running on client - running on server, no samba4 python bindings - running on server with samba4 python bindings By making it executed all time you are affecting the client code as well while with current approach it only affects server side. >+ def execute(self, *keys, **options): >+ # Join domain using full credentials and with random trustdom >+ # secret (will be generated by the join method) >+ trustinstance = None >+ if not _bindings_installed: >+ raise errors.NotFound(name=_('AD Trust setup'), >+ reason=_('Cannot perform join operation without Samba >4 python bindings installed')) > > >4) Another import inside a function: >+ def arcfour_encrypt(key, data): >+ from Crypto.Cipher import ARC4 >+ c = ARC4.new(key) >+ return c.encrypt(data) Same here, it is only needed on server side. Let us get consensus over 3) and 4) and I'll fix patches altogether (and push). -- / Alexander Bokovoy From mkosek at redhat.com Thu Apr 12 15:16:47 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Apr 2012 17:16:47 +0200 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <20120412150803.GA24623@redhat.com> References: <20120403104135.GD23171@redhat.com> <20120403145749.GD8996@localhost.localdomain> <1334242615.777.3.camel@balmora.brq.redhat.com> <20120412150803.GA24623@redhat.com> Message-ID: <1334243807.777.6.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 18:08 +0300, Alexander Bokovoy wrote: > Hi Martin! > > On Thu, 12 Apr 2012, Martin Kosek wrote: ... > >3) I would not try to import ipaserver.dcerpc every time the command is > >executed: > >+ try: > >+ import ipaserver.dcerpc > >+ except Exception, e: > >+ raise errors.NotFound(name=_('AD Trust setup'), > >+ reason=_('Cannot perform join operation without Samba > >4 python bindings installed')) > > > >I would rather do it once in the beginning and set a flag: > > > >try: > > import ipaserver.dcerpc > > _bindings_installed = True > >except Exception: > > _bindings_installed = False > > > >... > The idea was that this code is only executed on the server. We need to > differentiate between: > - running on client > - running on server, no samba4 python bindings > - running on server with samba4 python bindings > > By making it executed all time you are affecting the client code as > well while with current approach it only affects server side. 
Across our code base, this situation is currently solved with this condition: if api.env.in_server and api.env.context in ['lite', 'server']: # try-import block > > > >+ def execute(self, *keys, **options): > >+ # Join domain using full credentials and with random trustdom > >+ # secret (will be generated by the join method) > >+ trustinstance = None > >+ if not _bindings_installed: > >+ raise errors.NotFound(name=_('AD Trust setup'), > >+ reason=_('Cannot perform join operation without Samba > >4 python bindings installed')) > > > > > >4) Another import inside a function: > >+ def arcfour_encrypt(key, data): > >+ from Crypto.Cipher import ARC4 > >+ c = ARC4.new(key) > >+ return c.encrypt(data) > Same here, it is only needed on server side. > > Let us get consensus over 3) and 4) and I'll fix patches altogether (and > push). > Yeah, I would fix in the same way as 3). Martin From jdennis at redhat.com Thu Apr 12 15:46:45 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 12 Apr 2012 11:46:45 -0400 Subject: [Freeipa-devel] [PATCH] 0033 Pass make-test arguments through to Nose + Test coverage In-Reply-To: <4F85F6FA.7060901@redhat.com> References: <4F7C6890.1090200@redhat.com> <4F85F6FA.7060901@redhat.com> Message-ID: <4F86F8E5.4070003@redhat.com> On 04/11/2012 05:26 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> Currently, our test script forwards a select few command line arguments >> to nosetests. >> This patch removes the filtering, passing all arguments through. >> This allows things like disabling output redirection (--nocapture), >> dropping into a debugger (--pdb, --pdb-failures), coverage reporting >> (--with-cover, once installed), etc. >> >> https://fedorahosted.org/freeipa/ticket/2135 >> >> I believe this is a better solution than adding individual options as >> they're needed. >> >> >> --- >> >> A coverage report can be generated by combining data from both the tests >> and the server. I run this: >> >> Setup: >> yum install python-coverage >> echo /.coverage*>> .git/info/exclude >> echo /htmlcov/>> .git/info/exclude >> >> Terminal 1: >> coverage erase >> coverage run -p --source . lite-server.py >> >> Terminal 2: >> kinit >> ./make-test --with-coverage --cover-inclusive >> >> Terminal 1 again: >> ^C >> coverage combine >> coverage html --omit=/usr/lib/* >> >> Then view ./htmlcov/index.html in a browser. > > This looks very good to me. I'll open a ticket to go through the > coverage to find weaknesses. > > John, this replaces your existing patch, does this still fit the bill? > > rob Yes, this looks fine, it's a perfectly reasonable approach. The thing I was trying to address in my patch 54 was the fact you couldn't start the tests from within the debugger which prevented you from setting breakpoints. Petr's patch solves the same problem, but in a more general manner. ACK. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From jdennis at redhat.com Thu Apr 12 16:06:09 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 12 Apr 2012 12:06:09 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F758939.4060503@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> Message-ID: <4F86FD71.7000502@redhat.com> On 03/30/2012 06:21 AM, Petr Viktorin wrote: > Updated patch: only log if logging has been configured (detected by > looking at the root logger's handlers), and changed the message to ?The > ipa-server-install command has succeeded/failed?. Actually the log_manager has an attribute called configure_state whose purpose is to tell you if the log manager has been configured (not None) and if so how it was configured. When it's not None it's a string whose value is by programmer convention (there are no restrictions). Currently the 3 values in use are: 'standard' The log_mgr was initialized by standard_logging_setup() 'default' The log_mgr was initialized by virtue of the ipa_log_manager being loaded. 'api' The log_mgr was initialized by the plugin bootstrap process (i.e. the api is initialized and has the log manager bound to it). It better to reference log_mgr.configure_state than to make assumptions about the internals of the class. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Thu Apr 12 19:34:45 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Apr 2012 15:34:45 -0400 Subject: [Freeipa-devel] [PATCH] 0033 Pass make-test arguments through to Nose + Test coverage In-Reply-To: <4F86F8E5.4070003@redhat.com> References: <4F7C6890.1090200@redhat.com> <4F85F6FA.7060901@redhat.com> <4F86F8E5.4070003@redhat.com> Message-ID: <4F872E55.7090500@redhat.com> John Dennis wrote: > On 04/11/2012 05:26 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> Currently, our test script forwards a select few command line arguments >>> to nosetests. >>> This patch removes the filtering, passing all arguments through. >>> This allows things like disabling output redirection (--nocapture), >>> dropping into a debugger (--pdb, --pdb-failures), coverage reporting >>> (--with-cover, once installed), etc. >>> >>> https://fedorahosted.org/freeipa/ticket/2135 >>> >>> I believe this is a better solution than adding individual options as >>> they're needed. >>> >>> >>> --- >>> >>> A coverage report can be generated by combining data from both the tests >>> and the server. I run this: >>> >>> Setup: >>> yum install python-coverage >>> echo /.coverage*>> .git/info/exclude >>> echo /htmlcov/>> .git/info/exclude >>> >>> Terminal 1: >>> coverage erase >>> coverage run -p --source . lite-server.py >>> >>> Terminal 2: >>> kinit >>> ./make-test --with-coverage --cover-inclusive >>> >>> Terminal 1 again: >>> ^C >>> coverage combine >>> coverage html --omit=/usr/lib/* >>> >>> Then view ./htmlcov/index.html in a browser. >> >> This looks very good to me. I'll open a ticket to go through the >> coverage to find weaknesses. >> >> John, this replaces your existing patch, does this still fit the bill? >> >> rob > > Yes, this looks fine, it's a perfectly reasonable approach. 
The thing I > was trying to address in my patch 54 was the fact you couldn't start the > tests from within the debugger which prevented you from setting > breakpoints. Petr's patch solves the same problem, but in a more general > manner. ACK. > > pushed to master and ipa-2-2 From rcritten at redhat.com Thu Apr 12 19:38:06 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Apr 2012 15:38:06 -0400 Subject: [Freeipa-devel] [PATCH] 21 Unable to rename permission object In-Reply-To: <4F854BB0.1090604@redhat.com> References: <4F846653.8070509@redhat.com> <4F848B82.4040606@redhat.com> <4F854BB0.1090604@redhat.com> Message-ID: <4F872F1E.8030706@redhat.com> Ondrej Hamada wrote: > On 04/10/2012 09:35 PM, Rob Crittenden wrote: >> Ondrej Hamada wrote: >>> https://fedorahosted.org/freeipa/ticket/2571 >>> >>> The update was failing because of the case insensitivity of permission >>> object DN. >> >> Can you wrap the error in _() and add a couple of test cases for this, >> say one for the case insensitivity and one for empty rename attempt? >> >> rob > fixed patch attached > ack, pushed to master and ipa-2-2 From pviktori at redhat.com Fri Apr 13 10:25:47 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 13 Apr 2012 12:25:47 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F86BCDE.1080508@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> Message-ID: <4F87FF2B.90108@redhat.com> On 04/12/2012 01:30 PM, Petr Viktorin wrote: > On 04/10/2012 10:41 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 03/30/2012 11:00 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 03/26/2012 05:35 PM, Petr Viktorin wrote: >>>>>> On 03/26/2012 04:54 PM, Rob Crittenden wrote: >>>>>>> >>>>>>> Some minor compliants. >>>>>> >>>>>> >>>>>> Ideally, there would be a routine that sets up the logging and >>>>>> handles >>>>>> command-line arguments in some uniform way (which is also needed >>>>>> before >>>>>> logging setup to detect ipa-server-install --uninstall). >>>>>> The original patch did the common logging setup, and I hacked around >>>>>> the >>>>>> install/uninstall problem too. >>>>>> I guess I overdid it when I simplified the patch. >>>>>> I'm somewhat confused about the scope, so bear with me as I clarify >>>>>> what >>>>>> you mean. >>>>>> >>>>>> >>>>>>> If you abort the installation you get this somewhat unnerving error: >>>>>>> >>>>>>> Continue to configure the system with these values? [no]: >>>>>>> ipa : ERROR ipa-server-install failed, SystemExit: Installation >>>>>>> aborted >>>>>>> Installation aborted >>>>>>> >>>>>>> ipa-ldap-updater is the same: >>>>>>> >>>>>>> # ipa-ldap-updater >>>>>>> [2012-03-26T14:53:41Z ipa] : ipa-ldap-updater failed, >>>>>>> SystemExit: >>>>>>> IPA is not configured on this system. >>>>>>> IPA is not configured on this system. >>>>>>> >>>>>>> and ipa-upgradeconfig >>>>>>> >>>>>>> $ ipa-upgradeconfig >>>>>>> [2012-03-26T14:54:05Z ipa] : ipa-upgradeconfig failed, >>>>>>> SystemExit: >>>>>>> You must be root to run this script. >>>>>>> >>>>>>> >>>>>>> You must be root to run this script. 
>>>>>>> >>>>>>> I'm guessing that the issue is that the log file isn't opened yet. >>>>>> > >>>>>>> It would be nice if the logging would be confined to just the log. >>>>>> >>>>>> >>>>>> If I understand you correctly, the code should check if logging has >>>>>> been >>>>>> configured already, and if not, skip displaying the message? >>>>>> >>>>>> >>>>>>> When uninstalling you get the message 'ipa-server-install >>>>>>> successful'. >>>>>>> This is a little odd as well. >>>>>> >>>>>> ipa-server-install is the name of the command. Wontfix for now, >>>>>> unless >>>>>> you disagree strongly. >>>>>> >>>>>> >>>>> >>>>> Updated patch: only log if logging has been configured (detected by >>>>> looking at the root logger's handlers), and changed the message to >>>>> ?The >>>>> ipa-server-install command has succeeded/failed?. >>>> >>>> Works much better thanks. Just one request. When you created >>>> final_log() >>>> you show less information than you did in earlier patches. It is nice >>>> seeing the SystemExit failure. Can you do something like this >>>> (basically >>>> cut-n-pasted from v05)? >>>> >>>> diff --git a/ipaserver/install/installutils.py >>>> b/ipaserver/install/installutils. >>>> py >>>> index 851b58d..ca82a1b 100644 >>>> --- a/ipaserver/install/installutils.py >>>> +++ b/ipaserver/install/installutils.py >>>> @@ -721,15 +721,15 @@ def script_context(operation_name): >>>> # Only log if logging was already configured >>>> # TODO: Do this properly (e.g. configure logging before the try/except) >>>> if log_mgr.handlers.keys() != ['console']: >>>> - root_logger.info(template, operation_name) >>>> + root_logger.info(template) >>>> try: >>>> yield >>>> except BaseException, e: >>>> if isinstance(e, SystemExit) and (e.code is None or e.code == 0): >>>> # Not an error after all >>>> - final_log('The %s command was successful') >>>> + final_log('The %s command was successful' % operation_name) >>>> else: >>>> - final_log('The %s command failed') >>>> + final_log('%s failed, %s: %s' % (operation_name, type(e).__name__, >>>> e)) >>>> raise >>>> else: >>>> final_log('The %s command was successful') >>>> >>>> This looks like: >>>> >>>> 2012-03-30T20:56:53Z INFO ipa-dns-install failed, SystemExit: >>>> DNS is already configured in this IPA server. >>>> >>>> rob >>> >>> Fixed. >>> >> >> Hate to do this to you but I've found a few more issues. I basically >> went down the list and ran all the commands in various conditions. >> >> Some don't open any logs at all so the output gets written twice, like >> ipa-replica-prepare and ipa-replica-manage: >> >> # ipa-replica-manage del foo >> Directory Manager password: >> >> ipa: INFO: The ipa-replica-manage command failed, SystemExit: >> 'pony.greyoak.com' has no replication agreement for 'foo' >> 'pony.greyoak.com' has no replication agreement for 'foo' >> >> Same with ipa-csreplica-manage. >> >> # ipa-replica-prepare foo >> Directory Manager (existing master) password: >> >> ipa: INFO: The ipa-replica-prepare command failed, SystemExit: >> The password provided is incorrect for LDAP server pony.greyoak.com >> >> The password provided is incorrect for LDAP server pony.greyoak.com >> >> rob > > When the utility sets logging to console, the extra log message gets > printed out there. I agree this isn't optimal. > Attached patch removes the console log handler before logging the result. > I read through log_manager, and found that I can do this more cleanly with remove_handler. John, is this a good use of the API? -- Petr? 
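Roughly, the pattern discussed in this thread looks like the sketch below. It uses the stdlib logging module and a hypothetical console handler for illustration, not the actual ipa_log_manager API.

    import logging
    from contextlib import contextmanager

    log = logging.getLogger('ipa')

    @contextmanager
    def script_context(operation_name, console_handler=None):
        def final_log(message):
            # Drop the console handler first so the closing message only
            # reaches the log file, not the user's terminal.
            if console_handler is not None:
                log.removeHandler(console_handler)
            log.info(message)
        try:
            yield
        except SystemExit, e:
            if e.code is None or e.code == 0:
                final_log('The %s command was successful' % operation_name)
            else:
                final_log('%s failed, SystemExit: %s' % (operation_name, e))
            raise
        except BaseException, e:
            final_log('%s failed, %s: %s'
                      % (operation_name, type(e).__name__, e))
            raise
        else:
            final_log('The %s command was successful' % operation_name)

    # Usage in an install script would be along the lines of:
    #     with script_context('ipa-dns-install'):
    #         main()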
-------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0014-09-Add-final-debug-message-in-installers.patch Type: text/x-patch Size: 30374 bytes Desc: not available URL: From pspacek at redhat.com Fri Apr 13 10:33:35 2012 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 13 Apr 2012 12:33:35 +0200 Subject: [Freeipa-devel] [PATCH] 0016 Remove old work-around for a bug in dns_db_unregister() Message-ID: <4F8800FF.1000401@redhat.com> Hello, this patch removes old work-around for a bug in dns_db_unregister(). This bug was fixed in BIND version 9.7.0a1. Oldest available BIND version for RHEL 6.2 contains required fix already. (version bind-9.7.0-5.P2, build Wed, 26 May 2010 04:55:42 EDT, https://brewweb.devel.redhat.com/buildinfo?buildID=133161) Patch also adds note to README and bumps dependency version in SPEC file. Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: bind-dyndb-ldap-pspacek-0016-Removed-old-work-around-for-a-bug-in-dns_db_unregist.patch Type: text/x-patch Size: 2027 bytes Desc: not available URL: From mkosek at redhat.com Fri Apr 13 11:53:43 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Apr 2012 13:53:43 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F86E09F.6070408@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> <1334218670.23788.8.camel@balmora.brq.redhat.com> <4F86E09F.6070408@redhat.com> Message-ID: <1334318023.3089.7.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 10:03 -0400, John Dennis wrote: > On 04/12/2012 04:17 AM, Martin Kosek wrote: > > On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: > >> Revised patch attached. We'll leave the DN parameter changes till later. > >> This is essentially the same as the original patch with the addition of > >> the fixes necessary to support passing an empty container arg, an issue > >> Martin discovered in his review. FWIW the answer was not to make the > >> param required (actually it would have been adding the flag 'nonempty') > >> because you should be able to say you don't want to introduce a > >> container into the search bases (see commit comment) > >> > > > > I don't agree with the removal of default values for the containers and > > allowing an empty value for them. Please, see my reasoning: > > > > 1) I don't think its unlikely to have ou=People and ou=groups as > > containers for users/groups as they are default containers in fresh LDAP > > installs. I think most of the small LDAP deployments will use these > > values. > > > > 2) I am also not sure if somebody would want to pass empty user and > > group container. Users and groups won't be shared in the same container > > and since we search with _ldap.SCOPE_ONELEVEL the migration would not > > find users or groups in containers nested under the search base anyway. > > OK. Patch is revised, restored the defaults, usercontainer and > groupcontainer are now required to be non-empty. Also, basedn had been > optional without a default which didn't make much sense, now basedn is a > required parameter. > I still have few issues: 1) basedn should not be required by default. 
When it is not supplied, we proactively check remote LDAP DSE and try to either read nsslapd-defaultnamingcontext or pick the first naming context to make the migration process easier for the end user. This is what migration plugin doc says: If a base DN is not provided with --basedn then IPA will use either the value of defaultNamingContext if it is set or the first value in namingContexts set in the root of the remote LDAP server. 2) I don't understand what's the advantage of using an optional param for usercontainer/groupcontainer and flag 'nonempty' compared to using a required param. IMHO, using a required param for this use case is perfectly appropriate as we indeed require these containers. Besides, 'nonempty' flag is quite rare - in fact its not used anywhere but migration plugin. Martin From pviktori at redhat.com Fri Apr 13 13:22:16 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 13 Apr 2012 15:22:16 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <1334318023.3089.7.camel@balmora.brq.redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> <1334218670.23788.8.camel@balmora.brq.redhat.com> <4F86E09F.6070408@redhat.com> <1334318023.3089.7.camel@balmora.brq.redhat.com> Message-ID: <4F882888.40707@redhat.com> On 04/13/2012 01:53 PM, Martin Kosek wrote: > On Thu, 2012-04-12 at 10:03 -0400, John Dennis wrote: >> On 04/12/2012 04:17 AM, Martin Kosek wrote: >>> On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: >>>> Revised patch attached. We'll leave the DN parameter changes till later. >>>> This is essentially the same as the original patch with the addition of >>>> the fixes necessary to support passing an empty container arg, an issue >>>> Martin discovered in his review. FWIW the answer was not to make the >>>> param required (actually it would have been adding the flag 'nonempty') >>>> because you should be able to say you don't want to introduce a >>>> container into the search bases (see commit comment) >>>> >>> >>> I don't agree with the removal of default values for the containers and >>> allowing an empty value for them. Please, see my reasoning: >>> >>> 1) I don't think its unlikely to have ou=People and ou=groups as >>> containers for users/groups as they are default containers in fresh LDAP >>> installs. I think most of the small LDAP deployments will use these >>> values. >>> >>> 2) I am also not sure if somebody would want to pass empty user and >>> group container. Users and groups won't be shared in the same container >>> and since we search with _ldap.SCOPE_ONELEVEL the migration would not >>> find users or groups in containers nested under the search base anyway. >> >> OK. Patch is revised, restored the defaults, usercontainer and >> groupcontainer are now required to be non-empty. Also, basedn had been >> optional without a default which didn't make much sense, now basedn is a >> required parameter. >> > > I still have few issues: > > 1) basedn should not be required by default. When it is not supplied, we > proactively check remote LDAP DSE and try to either read > nsslapd-defaultnamingcontext or pick the first naming context to make > the migration process easier for the end user. 
This is what migration > plugin doc says: > > If a base DN is not provided with --basedn then IPA will use either > the value of defaultNamingContext if it is set or the first value > in namingContexts set in the root of the remote LDAP server. > > > 2) I don't understand what's the advantage of using an optional param > for usercontainer/groupcontainer and flag 'nonempty' compared to using a > required param. > > IMHO, using a required param for this use case is perfectly appropriate > as we indeed require these containers. Besides, 'nonempty' flag is quite > rare - in fact its not used anywhere but migration plugin. Update commands automatically replace required arguments with `nonempty`-flagged ones. The flag should have no other use. I added it in one of my early patches but didn't include documentation. I'll send a docstring patch to make clear what I meant. -- Petr? From pviktori at redhat.com Fri Apr 13 13:28:01 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 13 Apr 2012 15:28:01 +0200 Subject: [Freeipa-devel] [PATCH] 0037 Document the 'nonempty' flag Message-ID: <4F8829E1.2070100@redhat.com> In my patch 0012 (commits 7cfc16c, c6e4372), I introduced the 'nonempty' flag but did not document it. Here's a docstring patch that clarifies what I meant. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0037-Document-the-nonempty-flag.patch Type: text/x-patch Size: 1359 bytes Desc: not available URL: From pviktori at redhat.com Fri Apr 13 13:46:16 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 13 Apr 2012 15:46:16 +0200 Subject: [Freeipa-devel] Utility modules and unused code Message-ID: <4F882E28.9070406@redhat.com> I've started looking at our test coverage and noticed that our utility modules have a lot of stuff that's not only untested, but also unused. Some of these are left over from Radius support or other ancient code. What to do with these? Remove them for 3.0, or be careful and just deprecate them with a warning? Or did I look wrong and some of them still useful? Seeing the util modules, I'd recommend to leave future code that *could* be potentially useful in more places, but currently isn't, near the place that uses it. It is easy to put in a common location if there's need for it later, but removing something once all its dependencies are gone is more problematic. Not to mention that it's hard to write generic code when there's just one use for it, or that such code is hard to change once something else might depend on it. Unused code I found: ipapython/ipautil.py: format_list parse_key_value_pairs read_pairs_file user_input_plain AttributeValueCompleter ItemCompleter ipaserver/ipautil.py: get_gsserror [ipapython.ipautil.get_gsserror (!!) is similar but used] ipalib/util.py: load_plugins_in_dir import_plugins_subpackage make_repr [in imports & its test only] Some functions appear in more places and are the same: realm_to_suffix get_fqdn (I used `git grep` to look for usage. There are ways to refer to an object without its literal name, but I doubt we're doing that) -- Petr? 
From mkosek at redhat.com Fri Apr 13 13:46:51 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Apr 2012 15:46:51 +0200 Subject: [Freeipa-devel] [PATCH] 0037 Document the 'nonempty' flag In-Reply-To: <4F8829E1.2070100@redhat.com> References: <4F8829E1.2070100@redhat.com> Message-ID: <1334324811.8344.0.camel@balmora.brq.redhat.com> On Fri, 2012-04-13 at 15:28 +0200, Petr Viktorin wrote: > In my patch 0012 (commits 7cfc16c, c6e4372), I introduced the 'nonempty' > flag but did not document it. Here's a docstring patch that clarifies > what I meant. > Thanks for the documentation, this should prevent confusions in the future. ACK. Pushed to master, ipa-2-2. Martin From jdennis at redhat.com Fri Apr 13 18:42:10 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 13 Apr 2012 14:42:10 -0400 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <1334318023.3089.7.camel@balmora.brq.redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> <1334218670.23788.8.camel@balmora.brq.redhat.com> <4F86E09F.6070408@redhat.com> <1334318023.3089.7.camel@balmora.brq.redhat.com> Message-ID: <4F887382.5030307@redhat.com> On 04/13/2012 07:53 AM, Martin Kosek wrote: > On Thu, 2012-04-12 at 10:03 -0400, John Dennis wrote: >> On 04/12/2012 04:17 AM, Martin Kosek wrote: >>> On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: >>>> Revised patch attached. We'll leave the DN parameter changes till later. >>>> This is essentially the same as the original patch with the addition of >>>> the fixes necessary to support passing an empty container arg, an issue >>>> Martin discovered in his review. FWIW the answer was not to make the >>>> param required (actually it would have been adding the flag 'nonempty') >>>> because you should be able to say you don't want to introduce a >>>> container into the search bases (see commit comment) >>>> >>> >>> I don't agree with the removal of default values for the containers and >>> allowing an empty value for them. Please, see my reasoning: >>> >>> 1) I don't think its unlikely to have ou=People and ou=groups as >>> containers for users/groups as they are default containers in fresh LDAP >>> installs. I think most of the small LDAP deployments will use these >>> values. >>> >>> 2) I am also not sure if somebody would want to pass empty user and >>> group container. Users and groups won't be shared in the same container >>> and since we search with _ldap.SCOPE_ONELEVEL the migration would not >>> find users or groups in containers nested under the search base anyway. >> >> OK. Patch is revised, restored the defaults, usercontainer and >> groupcontainer are now required to be non-empty. Also, basedn had been >> optional without a default which didn't make much sense, now basedn is a >> required parameter. >> > > I still have few issues: > > 1) basedn should not be required by default. When it is not supplied, we > proactively check remote LDAP DSE and try to either read > nsslapd-defaultnamingcontext or pick the first naming context to make > the migration process easier for the end user. This is what migration > plugin doc says: > > If a base DN is not provided with --basedn then IPA will use either > the value of defaultNamingContext if it is set or the first value > in namingContexts set in the root of the remote LDAP server. 
> > > 2) I don't understand what's the advantage of using an optional param > for usercontainer/groupcontainer and flag 'nonempty' compared to using a > required param. > > IMHO, using a required param for this use case is perfectly appropriate > as we indeed require these containers. Besides, 'nonempty' flag is quite > rare - in fact its not used anywhere but migration plugin. Fixed the above two issues, revised patch attached. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0072-3-Validate-DN-RDN-parameters-for-migrate-command.patch Type: text/x-patch Size: 4637 bytes Desc: not available URL: From rcritten at redhat.com Fri Apr 13 19:28:15 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 13 Apr 2012 15:28:15 -0400 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login Message-ID: <4F887E4F.6060805@redhat.com> When doing a forms-based login there is no notification that a password needs to be reset. We don't currently provide a facility for that but we should at least tell users what is going on. This patch adds an LDAP bind to test the password to see if it is expired and returns the string "Password Expired" along with the 401 if it is. I'm told this is all the UI will need to be able to identify this condition. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1006-expired.patch Type: text/x-diff Size: 3690 bytes Desc: not available URL: From mkosek at redhat.com Mon Apr 16 06:37:47 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 16 Apr 2012 08:37:47 +0200 Subject: [Freeipa-devel] [PATCH 72] Validate DN & RDN parameters for migrate command In-Reply-To: <4F887382.5030307@redhat.com> References: <4F7E465F.7050206@redhat.com> <1333701642.14740.27.camel@balmora.brq.redhat.com> <4F7EF98A.5000804@redhat.com> <4F844BDC.108@redhat.com> <4F863879.3040405@redhat.com> <1334218670.23788.8.camel@balmora.brq.redhat.com> <4F86E09F.6070408@redhat.com> <1334318023.3089.7.camel@balmora.brq.redhat.com> <4F887382.5030307@redhat.com> Message-ID: <1334558267.22853.0.camel@balmora.brq.redhat.com> On Fri, 2012-04-13 at 14:42 -0400, John Dennis wrote: > On 04/13/2012 07:53 AM, Martin Kosek wrote: > > On Thu, 2012-04-12 at 10:03 -0400, John Dennis wrote: > >> On 04/12/2012 04:17 AM, Martin Kosek wrote: > >>> On Wed, 2012-04-11 at 22:05 -0400, John Dennis wrote: > >>>> Revised patch attached. We'll leave the DN parameter changes till later. > >>>> This is essentially the same as the original patch with the addition of > >>>> the fixes necessary to support passing an empty container arg, an issue > >>>> Martin discovered in his review. FWIW the answer was not to make the > >>>> param required (actually it would have been adding the flag 'nonempty') > >>>> because you should be able to say you don't want to introduce a > >>>> container into the search bases (see commit comment) > >>>> > >>> > >>> I don't agree with the removal of default values for the containers and > >>> allowing an empty value for them. Please, see my reasoning: > >>> > >>> 1) I don't think its unlikely to have ou=People and ou=groups as > >>> containers for users/groups as they are default containers in fresh LDAP > >>> installs. I think most of the small LDAP deployments will use these > >>> values. > >>> > >>> 2) I am also not sure if somebody would want to pass empty user and > >>> group container. 
Users and groups won't be shared in the same container > >>> and since we search with _ldap.SCOPE_ONELEVEL the migration would not > >>> find users or groups in containers nested under the search base anyway. > >> > >> OK. Patch is revised, restored the defaults, usercontainer and > >> groupcontainer are now required to be non-empty. Also, basedn had been > >> optional without a default which didn't make much sense, now basedn is a > >> required parameter. > >> > > > > I still have few issues: > > > > 1) basedn should not be required by default. When it is not supplied, we > > proactively check remote LDAP DSE and try to either read > > nsslapd-defaultnamingcontext or pick the first naming context to make > > the migration process easier for the end user. This is what migration > > plugin doc says: > > > > If a base DN is not provided with --basedn then IPA will use either > > the value of defaultNamingContext if it is set or the first value > > in namingContexts set in the root of the remote LDAP server. > > > > > > 2) I don't understand what's the advantage of using an optional param > > for usercontainer/groupcontainer and flag 'nonempty' compared to using a > > required param. > > > > IMHO, using a required param for this use case is perfectly appropriate > > as we indeed require these containers. Besides, 'nonempty' flag is quite > > rare - in fact its not used anywhere but migration plugin. > > Fixed the above two issues, revised patch attached. > > ACK. I just needed to re-generate API.txt before I pushed the patch. Pushed to master, ipa-2-2. Martin From pvoborni at redhat.com Mon Apr 16 09:02:16 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 16 Apr 2012 11:02:16 +0200 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F887E4F.6060805@redhat.com> References: <4F887E4F.6060805@redhat.com> Message-ID: <4F8BE018.4070108@redhat.com> On 04/13/2012 09:28 PM, Rob Crittenden wrote: > When doing a forms-based login there is no notification that a password > needs to be reset. We don't currently provide a facility for that but we > should at least tell users what is going on. > > This patch adds an LDAP bind to test the password to see if it is > expired and returns the string "Password Expired" along with the 401 if > it is. I'm told this is all the UI will need to be able to identify this > condition. > > rob > UI can work with it. I have a patch ready. I'll send it when this will be ACKed. Some notes: 1) The error templates and the 'Password Expired' message are hardcoded to be English. It's fine at the moment. Will we internationalize them sometime in future? If so, we will run into the same problem again. 2) conn.destroy_connection() won't be called if an exception occurs. Not sure if it is a problem, GC and __del__ should take care of it. -- Petr Vobornik From mkosek at redhat.com Mon Apr 16 09:04:36 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 16 Apr 2012 11:04:36 +0200 Subject: [Freeipa-devel] [PATCH] 250 Fix dnsrecord_add interactive mode Message-ID: <1334567076.22853.15.camel@balmora.brq.redhat.com> This issue was found during testing of 2386. --- dnsrecord_add interactive mode did not work correctly when more than one DNS record part was entered as command line option. It asked for remaining options more than once. This patch fixes this situation and also adds tests to cover this use case properly. 
https://fedorahosted.org/freeipa/ticket/2641 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-250-fix-dnsrecord_add-interactive-mode.patch Type: text/x-patch Size: 3900 bytes Desc: not available URL: From atkac at redhat.com Mon Apr 16 09:54:55 2012 From: atkac at redhat.com (Adam Tkac) Date: Mon, 16 Apr 2012 11:54:55 +0200 Subject: [Freeipa-devel] [PATCH] 0016 Remove old work-around for a bug in dns_db_unregister() In-Reply-To: <4F8800FF.1000401@redhat.com> References: <4F8800FF.1000401@redhat.com> Message-ID: <20120416095454.GA1814@redhat.com> On Fri, Apr 13, 2012 at 12:33:35PM +0200, Petr Spacek wrote: > Hello, > > this patch removes old work-around for a bug in dns_db_unregister(). > This bug was fixed in BIND version 9.7.0a1. > > Oldest available BIND version for RHEL 6.2 contains required fix already. > (version bind-9.7.0-5.P2, build Wed, 26 May 2010 04:55:42 EDT, > https://brewweb.devel.redhat.com/buildinfo?buildID=133161) > > Patch also adds note to README and bumps dependency version in SPEC file. Ack, please push it to master. A -- Adam Tkac, Red Hat, Inc. From pspacek at redhat.com Mon Apr 16 10:06:58 2012 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 16 Apr 2012 12:06:58 +0200 Subject: [Freeipa-devel] [PATCH] 0016 Remove old work-around for a bug in dns_db_unregister() In-Reply-To: <20120416095454.GA1814@redhat.com> References: <4F8800FF.1000401@redhat.com> <20120416095454.GA1814@redhat.com> Message-ID: <4F8BEF42.6000409@redhat.com> On 04/16/2012 11:54 AM, Adam Tkac wrote: > On Fri, Apr 13, 2012 at 12:33:35PM +0200, Petr Spacek wrote: >> Hello, >> >> this patch removes old work-around for a bug in dns_db_unregister(). >> This bug was fixed in BIND version 9.7.0a1. >> >> Oldest available BIND version for RHEL 6.2 contains required fix already. >> (version bind-9.7.0-5.P2, build Wed, 26 May 2010 04:55:42 EDT, >> https://brewweb.devel.redhat.com/buildinfo?buildID=133161) >> >> Patch also adds note to README and bumps dependency version in SPEC file. > > Ack, please push it to master. > > A > Pushed to master: https://fedorahosted.org/bind-dyndb-ldap/changeset/d09edb2d88fb730043c7d1f11b979ea8bc260e37 From pspacek at redhat.com Mon Apr 16 12:13:06 2012 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 16 Apr 2012 14:13:06 +0200 Subject: [Freeipa-devel] [PATCH] 0017 Fix various memory leaks in Kerberos helper code Message-ID: <4F8C0CD2.6010402@redhat.com> Hello, this patch fixes several memory leaks in Kerberos integration code. Fix was tested with Valgrind. There is another memory leak in persistent search code, it will be fixed by separate patch. Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: bind-dyndb-ldap-pspacek-0017-Fix-various-memory-leaks-in-Kerberos-helper-code.patch Type: text/x-patch Size: 2547 bytes Desc: not available URL: From atkac at redhat.com Mon Apr 16 12:25:24 2012 From: atkac at redhat.com (Adam Tkac) Date: Mon, 16 Apr 2012 14:25:24 +0200 Subject: [Freeipa-devel] [PATCH] 0017 Fix various memory leaks in Kerberos helper code In-Reply-To: <4F8C0CD2.6010402@redhat.com> References: <4F8C0CD2.6010402@redhat.com> Message-ID: <20120416122523.GA5246@redhat.com> On Mon, Apr 16, 2012 at 02:13:06PM +0200, Petr Spacek wrote: > Hello, > > this patch fixes several memory leaks in Kerberos integration code. > Fix was tested with Valgrind. > > There is another memory leak in persistent search code, it will be > fixed by separate patch. Ack, please push it to master. 
A > From 2b95bc00554f19f8949fb4690d802828ccf17023 Mon Sep 17 00:00:00 2001 > From: Petr Spacek > Date: Mon, 16 Apr 2012 14:07:20 +0200 > Subject: [PATCH] Fix various memory leaks in Kerberos helper code. > Signed-off-by: Petr Spacek > > --- > src/krb5_helper.c | 16 ++++++++++++---- > 1 files changed, 12 insertions(+), 4 deletions(-) > > diff --git a/src/krb5_helper.c b/src/krb5_helper.c > index 571f511..ffa6938 100644 > --- a/src/krb5_helper.c > +++ b/src/krb5_helper.c > @@ -31,8 +31,9 @@ > #define CHECK_KRB5(ctx, err, msg, ...) \ > do { \ > if (err) { \ > - log_error(msg " (%s)", ##__VA_ARGS__, \ > - krb5_get_error_message(ctx, err)); \ > + const char * errmsg = krb5_get_error_message(ctx, err); \ > + log_error(msg " (%s)", ##__VA_ARGS__, errmsg); \ > + krb5_free_error_message(ctx, errmsg); \ > result = ISC_R_FAILURE; \ > goto cleanup; \ > } \ > @@ -66,8 +67,10 @@ check_credentials(krb5_context context, > > krberr = krb5_cc_retrieve_cred(context, ccache, 0, &mcreds, &creds); > if (krberr) { > + const char * errmsg = krb5_get_error_message(context, krberr); > log_debug(2, "Principal not found in cred cache (%s)", > - krb5_get_error_message(context, krberr)); > + errmsg); > + krb5_free_error_message(context, errmsg); > result = ISC_R_FAILURE; > goto cleanup; > } > @@ -97,8 +100,9 @@ get_krb5_tgt(isc_mem_t *mctx, const char *principal, const char *keyfile) > krb5_context context = NULL; > krb5_keytab keytab = NULL; > krb5_ccache ccache = NULL; > - krb5_principal kprincpw; > + krb5_principal kprincpw = NULL; > krb5_creds my_creds; > + krb5_creds * my_creds_ptr = NULL; > krb5_get_init_creds_opt options; > krb5_error_code krberr; > isc_result_t result; > @@ -167,6 +171,7 @@ get_krb5_tgt(isc_mem_t *mctx, const char *principal, const char *keyfile) > krberr = krb5_get_init_creds_keytab(context, &my_creds, kprincpw, > keytab, 0, NULL, &options); > CHECK_KRB5(context, krberr, "Failed to init credentials"); > + my_creds_ptr = &my_creds; > > /* store credentials in cache */ > krberr = krb5_cc_initialize(context, ccache, kprincpw); > @@ -179,7 +184,10 @@ get_krb5_tgt(isc_mem_t *mctx, const char *principal, const char *keyfile) > > cleanup: > if (ccname) str_destroy(&ccname); > + if (ccache) krb5_cc_close(context, ccache); > if (keytab) krb5_kt_close(context, keytab); > + if (kprincpw) krb5_free_principal(context, kprincpw); > + if (my_creds_ptr) krb5_free_cred_contents(context, my_creds_ptr); > if (context) krb5_free_context(context); > return result; > } > -- > 1.7.7.6 > -- Adam Tkac, Red Hat, Inc. From rcritten at redhat.com Mon Apr 16 13:16:34 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 09:16:34 -0400 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F8BE018.4070108@redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> Message-ID: <4F8C1BB2.1040604@redhat.com> Petr Vobornik wrote: > On 04/13/2012 09:28 PM, Rob Crittenden wrote: >> When doing a forms-based login there is no notification that a password >> needs to be reset. We don't currently provide a facility for that but we >> should at least tell users what is going on. >> >> This patch adds an LDAP bind to test the password to see if it is >> expired and returns the string "Password Expired" along with the 401 if >> it is. I'm told this is all the UI will need to be able to identify this >> condition. >> >> rob >> > > UI can work with it. I have a patch ready. I'll send it when this will > be ACKed. 
> > Some notes: > > 1) The error templates and the 'Password Expired' message are hardcoded > to be English. It's fine at the moment. Will we internationalize them > sometime in future? If so, we will run into the same problem again. No plans to. I can update the patch with a comment specifically to not internationalize it if you'd like. > 2) conn.destroy_connection() won't be called if an exception occurs. Not > sure if it is a problem, GC and __del__ should take care of it. Hmm, this is due to a late stage change I made. I originally had this broken out into two blocks where the only thing done in the first try/except block was the connection, so the only exception that could happen was a failed connection. That isn't true any more. I'll update the patch. rob From rcritten at redhat.com Mon Apr 16 13:34:56 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 09:34:56 -0400 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F8C1BB2.1040604@redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> Message-ID: <4F8C2000.9000400@redhat.com> Rob Crittenden wrote: > Petr Vobornik wrote: >> On 04/13/2012 09:28 PM, Rob Crittenden wrote: >>> When doing a forms-based login there is no notification that a password >>> needs to be reset. We don't currently provide a facility for that but we >>> should at least tell users what is going on. >>> >>> This patch adds an LDAP bind to test the password to see if it is >>> expired and returns the string "Password Expired" along with the 401 if >>> it is. I'm told this is all the UI will need to be able to identify this >>> condition. >>> >>> rob >>> >> >> UI can work with it. I have a patch ready. I'll send it when this will >> be ACKed. >> >> Some notes: >> >> 1) The error templates and the 'Password Expired' message are hardcoded >> to be English. It's fine at the moment. Will we internationalize them >> sometime in future? If so, we will run into the same problem again. > > No plans to. I can update the patch with a comment specifically to not > internationalize it if you'd like. > >> 2) conn.destroy_connection() won't be called if an exception occurs. Not >> sure if it is a problem, GC and __del__ should take care of it. > > Hmm, this is due to a late stage change I made. I originally had this > broken out into two blocks where the only thing done in the first > try/except block was the connection, so the only exception that could > happen was a failed connection. > > That isn't true any more. I'll update the patch. And here you go. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1006-2-expired.patch Type: text/x-diff Size: 3827 bytes Desc: not available URL: From mkosek at redhat.com Mon Apr 16 14:14:42 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 16 Apr 2012 16:14:42 +0200 Subject: [Freeipa-devel] [PATCH] 249 Return correct record name in DNS plugin In-Reply-To: <4F86D02D.30104@redhat.com> References: <1334217098.23788.0.camel@balmora.brq.redhat.com> <4F86D02D.30104@redhat.com> Message-ID: <1334585682.22853.21.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 14:53 +0200, Jan Cholasta wrote: > On 12.4.2012 09:51, Martin Kosek wrote: > > When dnsrecord-add or dnsrecord-mod commands are used on a root > > zone record (it has a special name "@"), a zone name is returned > > instead of a special name "@". 
This confuses DNS part of Web UI > > which is then not able to manipulate records in the root zone > > when these commands are used. > > > > This patch fixes these 2 commands to return correct value when > > a root zone is modified. > > > > https://fedorahosted.org/freeipa/ticket/2627 > > https://fedorahosted.org/freeipa/ticket/2628 > > > > Works as advertised. ACK. > > Honza > Pushed to master, ipa-2-2. Martin From rcritten at redhat.com Mon Apr 16 15:08:28 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 11:08:28 -0400 Subject: [Freeipa-devel] Utility modules and unused code In-Reply-To: <4F882E28.9070406@redhat.com> References: <4F882E28.9070406@redhat.com> Message-ID: <4F8C35EC.1070807@redhat.com> Petr Viktorin wrote: > I've started looking at our test coverage and noticed that our utility > modules have a lot of stuff that's not only untested, but also unused. > Some of these are left over from Radius support or other ancient code. > > What to do with these? Remove them for 3.0, or be careful and just > deprecate them with a warning? > Or did I look wrong and some of them still useful? > > > Seeing the util modules, I'd recommend to leave future code that *could* > be potentially useful in more places, but currently isn't, near the > place that uses it. It is easy to put in a common location if there's > need for it later, but removing something once all its dependencies are > gone is more problematic. > Not to mention that it's hard to write generic code when there's just > one use for it, or that such code is hard to change once something else > might depend on it. > > > > > Unused code I found: > > ipapython/ipautil.py: > format_list > parse_key_value_pairs > read_pairs_file Left overs from radius support. It can probably go, we can revive it if needed. I must have missed these when I removed the rest of the radius code. > user_input_plain This is left over from v1, it can go. > AttributeValueCompleter > ItemCompleter Jason intended to add tab support to some parameters. I believe these are related to that effort. Not sure what it'd take to revive it. > > ipaserver/ipautil.py: > get_gsserror [ipapython.ipautil.get_gsserror (!!) is similar but used] The one in ipaserver should be dropped and replaced by the one in ipapython. > ipalib/util.py: > load_plugins_in_dir > import_plugins_subpackage These were intended to make it more flexible to locate plugins outside of ipa. > make_repr [in imports & its test only] Not sure. Maybe Jason used this to make it easier to tell what was being imported. > > Some functions appear in more places and are the same: > realm_to_suffix > get_fqdn Should be centralized in ipapython if practical. What I'd suggest is to open a series of tickets to reduce and drop various bits. rob From pspacek at redhat.com Mon Apr 16 15:15:12 2012 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 16 Apr 2012 17:15:12 +0200 Subject: [Freeipa-devel] [PATCH] 0017 Fix various memory leaks in Kerberos helper code In-Reply-To: <20120416122523.GA5246@redhat.com> References: <4F8C0CD2.6010402@redhat.com> <20120416122523.GA5246@redhat.com> Message-ID: <4F8C3780.10904@redhat.com> On 04/16/2012 02:25 PM, Adam Tkac wrote: > On Mon, Apr 16, 2012 at 02:13:06PM +0200, Petr Spacek wrote: >> Hello, >> >> this patch fixes several memory leaks in Kerberos integration code. >> Fix was tested with Valgrind. >> >> There is another memory leak in persistent search code, it will be >> fixed by separate patch. > > Ack, please push it to master. 
> > A Pushed to master: https://fedorahosted.org/bind-dyndb-ldap/changeset/2b95bc00554f19f8949fb4690d802828ccf17023 > >> From 2b95bc00554f19f8949fb4690d802828ccf17023 Mon Sep 17 00:00:00 2001 >> From: Petr Spacek >> Date: Mon, 16 Apr 2012 14:07:20 +0200 >> Subject: [PATCH] Fix various memory leaks in Kerberos helper code. >> Signed-off-by: Petr Spacek From rcritten at redhat.com Mon Apr 16 15:45:54 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 11:45:54 -0400 Subject: [Freeipa-devel] [PATCH] 250 Fix dnsrecord_add interactive mode In-Reply-To: <1334567076.22853.15.camel@balmora.brq.redhat.com> References: <1334567076.22853.15.camel@balmora.brq.redhat.com> Message-ID: <4F8C3EB2.4030003@redhat.com> Martin Kosek wrote: > This issue was found during testing of 2386. > --- > dnsrecord_add interactive mode did not work correctly when more > than one DNS record part was entered as command line option. It > asked for remaining options more than once. This patch fixes > this situation and also adds tests to cover this use case > properly. > > https://fedorahosted.org/freeipa/ticket/2641 ACK, pushed to master and ipa-2-2 rob From pvoborni at redhat.com Mon Apr 16 16:54:05 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 16 Apr 2012 18:54:05 +0200 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F8C2000.9000400@redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> <4F8C2000.9000400@redhat.com> Message-ID: <4F8C4EAD.1070904@redhat.com> On 04/16/2012 03:34 PM, Rob Crittenden wrote: > Rob Crittenden wrote: >> Petr Vobornik wrote: >>> On 04/13/2012 09:28 PM, Rob Crittenden wrote: >>>> When doing a forms-based login there is no notification that a password >>>> needs to be reset. We don't currently provide a facility for that >>>> but we >>>> should at least tell users what is going on. >>>> >>>> This patch adds an LDAP bind to test the password to see if it is >>>> expired and returns the string "Password Expired" along with the 401 if >>>> it is. I'm told this is all the UI will need to be able to identify >>>> this >>>> condition. >>>> >>>> rob >>>> >>> >>> UI can work with it. I have a patch ready. I'll send it when this will >>> be ACKed. >>> >>> Some notes: >>> >>> 1) The error templates and the 'Password Expired' message are hardcoded >>> to be English. It's fine at the moment. Will we internationalize them >>> sometime in future? If so, we will run into the same problem again. >> >> No plans to. I can update the patch with a comment specifically to not >> internationalize it if you'd like. It isn't necessary. I just wanted to be sure we won't implemented it twice. >> >>> 2) conn.destroy_connection() won't be called if an exception occurs. Not >>> sure if it is a problem, GC and __del__ should take care of it. >> >> Hmm, this is due to a late stage change I made. I originally had this >> broken out into two blocks where the only thing done in the first >> try/except block was the connection, so the only exception that could >> happen was a failed connection. >> >> That isn't true any more. I'll update the patch. > > And here you go. > > rob The patch looks good. I also opened similar ticket regarding locked status. 
https://fedorahosted.org/freeipa/ticket/2643 -- Petr Vobornik From jdennis at redhat.com Mon Apr 16 17:10:08 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 13:10:08 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F87FF2B.90108@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> Message-ID: <4F8C5270.5030605@redhat.com> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >> When the utility sets logging to console, the extra log message gets >> printed out there. I agree this isn't optimal. >> Attached patch removes the console log handler before logging the result. >> > > I read through log_manager, and found that I can do this more cleanly > with remove_handler. > > John, is this a good use of the API? The problem you're trying to correct is that under some circumstances a log message is emitted twice to the console, right? Removing the console handler fees like the wrong solution, it's a pretty big hammer, you have to know when to remove it and once removed any expectation that messages will appear on the console will be untrue. Can you give me a brief explanation as to why you're getting duplicate messages on the console. I wonder if there isn't a better way to handle the problem which isn't so invasive and potentially hidden. Or is the issue you don't want a console handler at all? If that's the case then maybe we should provide a configuration that does not create one, that way it will be explicit and obvious there is no console handler. FWIW, the current configuration of logging is historical dating back to the beginning of the project. When I added the log manager the intent was to fix deficiencies in logging, not to modify the existing behavior. I wasn't clear to me the existing configuration was ideal. If you're hitting problems because of the existing configuration perhaps we should look at the historical configuration and ask if it needs to be modified. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Mon Apr 16 17:31:31 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 13:31:31 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C5270.5030605@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> Message-ID: <4F8C5773.4030200@redhat.com> John Dennis wrote: > On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>> When the utility sets logging to console, the extra log message gets >>> printed out there. I agree this isn't optimal. 
>>> Attached patch removes the console log handler before logging the >>> result. >>> >> >> I read through log_manager, and found that I can do this more cleanly >> with remove_handler. >> >> John, is this a good use of the API? > > The problem you're trying to correct is that under some circumstances a > log message is emitted twice to the console, right? > > Removing the console handler fees like the wrong solution, it's a pretty > big hammer, you have to know when to remove it and once removed any > expectation that messages will appear on the console will be untrue. > > Can you give me a brief explanation as to why you're getting duplicate > messages on the console. I wonder if there isn't a better way to handle > the problem which isn't so invasive and potentially hidden. > > Or is the issue you don't want a console handler at all? If that's the > case then maybe we should provide a configuration that does not create > one, that way it will be explicit and obvious there is no console handler. > > FWIW, the current configuration of logging is historical dating back to > the beginning of the project. When I added the log manager the intent > was to fix deficiencies in logging, not to modify the existing behavior. > I wasn't clear to me the existing configuration was ideal. If you're > hitting problems because of the existing configuration perhaps we should > look at the historical configuration and ask if it needs to be modified. This patch is standardizing logging the final disposition of a number of commands. Currently is must be inferred. The problem is that some commands have had this disposition added but open no log files so some error messages are being displayed twice and in log format. I think the easiest solution would simply be to scale this back to only those commands that open a log file at all. rob From rcritten at redhat.com Mon Apr 16 17:51:41 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 13:51:41 -0400 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded In-Reply-To: <4F86D833.2010501@redhat.com> References: <1334048262.7045.19.camel@balmora.brq.redhat.com> <4F86D4B9.7090509@redhat.com> <4F86D833.2010501@redhat.com> Message-ID: <4F8C5C2D.5070909@redhat.com> Rob Crittenden wrote: > Jan Cholasta wrote: >> On 10.4.2012 10:57, Martin Kosek wrote: >>> Few test hints are attached to the ticket. >>> >>> --- >>> >>> ldap2 plugin returns NotFound error for find_entries/get_entry >>> queries when the server did not manage to return an entry >>> due to time limits. This may be confusing for user when the >>> entry he searches actually exists. >>> >>> This patch fixes the behavior in ldap2 plugin to return >>> LimitsExceeded exception instead. This way, user would know >>> that his time/size limits are set too low and can amend them to >>> get correct results. >>> >>> https://fedorahosted.org/freeipa/ticket/2606 >>> >> >> ACK. >> >> Honza >> > > Before pushing I'd like to look at this more. truncated is supposed to > indicate a limits problem. I want to see if the caller should be > responsible for returning a limits error instead. > > rob This is what I had in mind. 
diff --git a/ipaserver/plugins/ldap2.py b/ipaserver/plugins/ldap2.py index 61341b0..447e738 100644 --- a/ipaserver/plugins/ldap2.py +++ b/ipaserver/plugins/ldap2.py @@ -754,7 +754,7 @@ class ldap2(CrudBackend, Encoder): except _ldap.LDAPError, e: _handle_errors(e) - if not res: + if not res and not truncated: raise errors.NotFound(reason='no such entry') if attrs_list and ('memberindirect' in attrs_list or '*' in attrs_list) : @@ -801,7 +801,10 @@ class ldap2(CrudBackend, Encoder): if len(entries) > 1: raise errors.SingleMatchExpected(found=len(entries)) else: - return entries[0] + if truncated: + raise errors.LimitsExceeded() + else: + return entries[0] def get_entry(self, dn, attrs_list=None, time_limit=None, size_limit=None, normalize=True): @@ -811,10 +814,13 @@ class ldap2(CrudBackend, Encoder): Keyword arguments: attrs_list - list of attributes to return, all if None (default None) """ - return self.find_entries( + (entry, truncated) = self.find_entries( None, attrs_list, dn, self.SCOPE_BASE, time_limit=time_limit, size_limit=size_limit, normalize=normalize - )[0][0] + ) + if truncated: + raise errors.LimitsExceeded() + return entry[0] config_defaults = {'ipasearchtimelimit': [2], 'ipasearchrecordslimit': [0]} def get_ipa_config(self, attrs_list=None): From jdennis at redhat.com Mon Apr 16 20:10:27 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 16:10:27 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C5773.4030200@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> Message-ID: <4F8C7CB3.2080309@redhat.com> On 04/16/2012 01:31 PM, Rob Crittenden wrote: > John Dennis wrote: >> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>>> When the utility sets logging to console, the extra log message gets >>>> printed out there. I agree this isn't optimal. >>>> Attached patch removes the console log handler before logging the >>>> result. >>>> >>> >>> I read through log_manager, and found that I can do this more cleanly >>> with remove_handler. >>> >>> John, is this a good use of the API? >> >> The problem you're trying to correct is that under some circumstances a >> log message is emitted twice to the console, right? >> >> Removing the console handler fees like the wrong solution, it's a pretty >> big hammer, you have to know when to remove it and once removed any >> expectation that messages will appear on the console will be untrue. >> >> Can you give me a brief explanation as to why you're getting duplicate >> messages on the console. I wonder if there isn't a better way to handle >> the problem which isn't so invasive and potentially hidden. >> >> Or is the issue you don't want a console handler at all? If that's the >> case then maybe we should provide a configuration that does not create >> one, that way it will be explicit and obvious there is no console handler. >> >> FWIW, the current configuration of logging is historical dating back to >> the beginning of the project. 
When I added the log manager the intent >> was to fix deficiencies in logging, not to modify the existing behavior. >> I wasn't clear to me the existing configuration was ideal. If you're >> hitting problems because of the existing configuration perhaps we should >> look at the historical configuration and ask if it needs to be modified. > > This patch is standardizing logging the final disposition of a number of > commands. Currently is must be inferred. > > The problem is that some commands have had this disposition added but > open no log files so some error messages are being displayed twice and > in log format. > > I think the easiest solution would simply be to scale this back to only > those commands that open a log file at all. You say "command" but command is a loaded word :-) I presume you mean command line utility and not an IPA Command object. The reason I'm drawing this distinction is because the environment Commands execute in are not supposed to change once api.bootstrap() has completed. Who or what is aggregating the final disposition of a number of commands? The reason I ask is because the log_mgr object is global and deleting a handler from a global resource to satisfy a local need may have unintentional global side effects. How is the aggregation accomplished? -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Mon Apr 16 20:15:35 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 16:15:35 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C7CB3.2080309@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> Message-ID: <4F8C7DE7.8080609@redhat.com> John Dennis wrote: > On 04/16/2012 01:31 PM, Rob Crittenden wrote: >> John Dennis wrote: >>> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>>>> When the utility sets logging to console, the extra log message gets >>>>> printed out there. I agree this isn't optimal. >>>>> Attached patch removes the console log handler before logging the >>>>> result. >>>>> >>>> >>>> I read through log_manager, and found that I can do this more cleanly >>>> with remove_handler. >>>> >>>> John, is this a good use of the API? >>> >>> The problem you're trying to correct is that under some circumstances a >>> log message is emitted twice to the console, right? >>> >>> Removing the console handler fees like the wrong solution, it's a pretty >>> big hammer, you have to know when to remove it and once removed any >>> expectation that messages will appear on the console will be untrue. >>> >>> Can you give me a brief explanation as to why you're getting duplicate >>> messages on the console. I wonder if there isn't a better way to handle >>> the problem which isn't so invasive and potentially hidden. >>> >>> Or is the issue you don't want a console handler at all? 
If that's the >>> case then maybe we should provide a configuration that does not create >>> one, that way it will be explicit and obvious there is no console >>> handler. >>> >>> FWIW, the current configuration of logging is historical dating back to >>> the beginning of the project. When I added the log manager the intent >>> was to fix deficiencies in logging, not to modify the existing behavior. >>> I wasn't clear to me the existing configuration was ideal. If you're >>> hitting problems because of the existing configuration perhaps we should >>> look at the historical configuration and ask if it needs to be modified. >> >> This patch is standardizing logging the final disposition of a number of >> commands. Currently is must be inferred. >> >> The problem is that some commands have had this disposition added but >> open no log files so some error messages are being displayed twice and >> in log format. >> >> I think the easiest solution would simply be to scale this back to only >> those commands that open a log file at all. > > You say "command" but command is a loaded word :-) I presume you mean > command line utility and not an IPA Command object. The reason I'm > drawing this distinction is because the environment Commands execute in > are not supposed to change once api.bootstrap() has completed. > > Who or what is aggregating the final disposition of a number of > commands? The reason I ask is because the log_mgr object is global and > deleting a handler from a global resource to satisfy a local need may > have unintentional global side effects. How is the aggregation > accomplished? > > This patch is in the context of the command-line utilities like ipa-server-install, ipa-client-install, etc. and not the ipa tool. Some of them initialize the api, some do not. As I said, the intention of my original request was to log the disposition of an install command. I think this patch goes too far and updates all admin utilities. rob From nalin at redhat.com Mon Apr 16 20:32:21 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 16 Apr 2012 16:32:21 -0400 Subject: [Freeipa-devel] [PATCH] index fqdn and macAddress attributes Message-ID: <20120416203221.GC8158@redhat.com> When we implement ticket #2259, indexing fqdn and macAddress should help the Schema Compatibility and NIS Server plugins locate relevant computer entries more easily. 
Nalin -------------- next part -------------- >From 44491a90ae258e3932a7a19d61313d28f8936978 Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:26:50 -0400 Subject: [PATCH 1/3] - index the fqdn and macAddress attributes for the sake of the compat plugin --- install/updates/20-indices.update | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/install/updates/20-indices.update b/install/updates/20-indices.update index b0e2f36..ecca027 100644 --- a/install/updates/20-indices.update +++ b/install/updates/20-indices.update @@ -32,3 +32,19 @@ default:ObjectClass: top default:ObjectClass: nsIndex default:nsSystemIndex: false default:nsIndexType: eq + +dn: cn=fqdn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +default:cn: fqdn +default:ObjectClass: top +default:ObjectClass: nsIndex +default:nsSystemIndex: false +default:nsIndexType: eq +default:nsIndexType: pres + +dn: cn=macAddress,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +default:cn: macAddress +default:ObjectClass: top +default:ObjectClass: nsIndex +default:nsSystemIndex: false +default:nsIndexType: eq +default:nsIndexType: pres -- 1.7.10 From jdennis at redhat.com Mon Apr 16 20:32:26 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 16:32:26 -0400 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F86D7F7.9040107@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> Message-ID: <4F8C81DA.5030800@redhat.com> On 04/12/2012 09:26 AM, Petr Viktorin wrote: > On 03/30/2012 03:45 AM, John Dennis wrote: >> Translatable strings have certain requirements for proper translation >> and run time behaviour. We should routinely validate those strings. A >> recent checkin to install/po/test_i18n.py makes it possible to validate >> the contents of our current pot file by searching for problematic strings. >> >> However, we only occasionally generate a new pot file, far less >> frequently than translatable strings change in the source base. Just >> like other checkin's to the source which are tested for correctness we >> should also validate new or modified translation strings when they are >> introduced and not accumulate problems to fix at the last minute. This >> would also raise the awareness of developers as to the requirements for >> proper string translation. >> >> The top level Makefile should be enhanced to create a temporary pot >> files from the current sources and validate it. We need to use a >> temporary pot file because we do not want to modify the pot file under >> source code control and exported to Transifex. >> > > NACK > > install/po/Makefile is not created early enough when running `make rpms` > from a clean checkout. > > # git clean -fx > ... > # make rpms > rm -rf /rpmbuild > mkdir -p /rpmbuild/BUILD > mkdir -p /rpmbuild/RPMS > mkdir -p /rpmbuild/SOURCES > mkdir -p /rpmbuild/SPECS > mkdir -p /rpmbuild/SRPMS > mkdir -p dist/rpms > mkdir -p dist/srpms > if [ ! 
-e RELEASE ]; then echo 0> RELEASE; fi > sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ > freeipa.spec.in> freeipa.spec > sed -e s/__VERSION__/2.99.0GITde16a82/ version.m4.in \ > > version.m4 > sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/setup.py.in \ > > ipapython/setup.py > sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/version.py.in \ > > ipapython/version.py > perl -pi -e "s:__NUM_VERSION__:2990:" ipapython/version.py > perl -pi -e "s:__API_VERSION__:2.34:" ipapython/version.py > sed -e s/__VERSION__/2.99.0GITde16a82/ daemons/ipa-version.h.in \ > > daemons/ipa-version.h > perl -pi -e "s:__NUM_VERSION__:2990:" daemons/ipa-version.h > perl -pi -e "s:__DATA_VERSION__:20100614120000:" daemons/ipa-version.h > sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ > ipa-client/ipa-client.spec.in> ipa-client/ipa-client.spec > sed -e s/__VERSION__/2.99.0GITde16a82/ ipa-client/version.m4.in \ > > ipa-client/version.m4 > if [ "redhat" != "" ]; then \ > sed -e s/SUPPORTED_PLATFORM/redhat/ ipapython/services.py.in \ > > ipapython/services.py; \ > fi > if [ "" != "yes" ]; then \ > ./makeapi --validate; \ > fi > make -C install/po validate-src-strings > make[1]: Entering directory `/home/pviktori/freeipa/install/po' > make[1]: *** No rule to make target `validate-src-strings'. Stop. > make[1]: Leaving directory `/home/pviktori/freeipa/install/po' > make: *** [validate-src-strings] Error 2 Updated patch attached. The fundamental problem is that we were trying to run code before configure had ever been run. We have dependencies on the output and side effects of configure. Therefore the solution is to add the bootstrap-autogen target as a dependency. FWIW, I tried an approach that did not require having bootstrap-autogen run first by having the validation occur as part of the normal "make all". But that had some undesirable properties, first we really only want to run validation for developers, not during a normal build, secondly the file generation requires a git repo (see below). But there is another reason to require running bootstrap-autogen prior to any validation (e.g. makeapi, make-lint, etc.) Those validation utilities need access to generated source files and those generated source files are supposed to be generated by the configure step. Right now we're generating them as part of updating version information. I will post a long email describing the problem on the devel list. So we've created a situation we we must run configure early on, even as part of "making the distribution" because part of "making the distribution" is validating the distribution and that requires the full content of the distribution be available. Also while trying to determine if the i18n validation step executed correctly I realized the i18n validation only emitted a message if an error was detected. That made it impossible to distinguish between empty input (a problem when not running in a development tree) and successful validation. Therefore I also updated i18n.py to output counts of messages checked which also caused me to fix some validations that were missing on plural forms. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jdennis-0070-1-validate-i18n-strings-when-running-make-lint.patch Type: text/x-patch Size: 10116 bytes Desc: not available URL: From jdennis at redhat.com Mon Apr 16 20:38:50 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 16:38:50 -0400 Subject: [Freeipa-devel] Generating files from .in template files Message-ID: <4F8C835A.2070107@redhat.com> Traditionally when using autotools it's job of the configure script to generate files from .in template files. The reason it's done this way is because configure has to probe the system it's running on to discover the correct substitutions to be used in the .in templates. In our top level Makefile we generate several files from .in templates under the "version-update" target. Most of this logic applies to inserting version information into specific files. In the context of release specific data this is probably a valid methodology although other projects have incorporated the version munging into configure. For version strings I don't think there is necessarily a correct approach (e.g. part of configure or external to configure). Technically updating the version strings is part of "make dist" step, i.e. the act of creating the distribution, which is independent of configure. However, any template file that depends on the target system where configure is being run should be created by configure. Why? Because that's the design of autotools and how people expect things will work in an autotools environment. Another reason is you might mistakenly use a generated file on a target it was not configured for. When generated files are not created by configure the likely hood of that mistake increases because the distinction between a "distribution" file and a file designed to be generated on the target is lost (in this case the .in template is the distribution file). Currently we create ipapython/services.py from ipapython/services.py.in in the top level Makefile, not from configure. I believe that's a mistake, ipapython/services.py is a file that depends on the target it's being built on and should be produced by configure. Also, it's generation is part of the 'version-update' target in the top level Makefile, but generating ipapython/services.py is completely divorced from updating version information for the distribution snapshot, it logically doesn't belong there. FWIW, I discovered this issue when trying to fix the build for the i18n validation which is now run with the lint check (because it too has dependencies on files generated by configure). Some of our test/validation utilities (e.g. makeapi, make-lint, etc) locate all our source files. But if our set of source files is not complete until we generate them (or not syntactically complete until processed) then those validation steps will be incomplete or erroneous. We shouldn't run the validation code until AFTER we've generated all our source files from their respective template files. Which means we shouldn't run them until after configure has executed, which means making bootstrap-autogen a dependency of the validation targets. The only reason it's worked so far is because everything is dependent on updating the version information which as a side-effect is generating other source code files. The problem manifests itself in other ways as well. Here is a more general explanation: Many of our tools depend on knowing our set of source files. 
Some of our tools locate them by searching, some have a hardcoded enumerated list, and some derive the list by interrogating git. Anything that depends on git fails after a distribution tarball is created because the git repo does not exist in the distribution. Anything that depends on search is probably O.K. when building in an antiseptic environment known to be clean (i.e. building an RPM) but is likely to fail in a development environment that picks up cruft along the way. Hardcoded lists fail when developers forget to update them. I discovered this problem when I tried to move the i18n validation checks into the normal build process. It fails because it uses git to generate the list of files to validate. Perhaps a better approach would be to generate the file list using git and include the file list in the distribution. Anyway, all of this is to say we need to be careful how and when we generate files as well as our dependencies on those generated files as well as what's in the distribution, where the build is occurring, and how, when and what we choose to validate. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From nalin at redhat.com Mon Apr 16 20:39:18 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 16 Apr 2012 16:39:18 -0400 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries Message-ID: <20120416203918.GD8158@redhat.com> This bit of configuration creates a cn=computers area under cn=compat which we populate with ieee802Device entries corresponding to any ipaHost entries which have both fqdn and macAddress values. Nalin -------------- next part -------------- >From 7cffe5a5d62e54e1dc7c621df131f621e49c14f5 Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:31:12 -0400 Subject: [PATCH 2/3] - create a "cn=computers" compat area populated with ieee802Device entries corresponding to computers with fqdn and macAddress attributes --- install/share/schema_compat.uldif | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/install/share/schema_compat.uldif b/install/share/schema_compat.uldif index f042edf..38bf678 100644 --- a/install/share/schema_compat.uldif +++ b/install/share/schema_compat.uldif @@ -92,6 +92,20 @@ add:schema-compat-entry-attribute: 'sudoRunAsGroup=%{ipaSudoRunAsExtGroup}' add:schema-compat-entry-attribute: 'sudoRunAsGroup=%deref("ipaSudoRunAs","cn")' add:schema-compat-entry-attribute: 'sudoOption=%{ipaSudoOpt}' +dn: cn=computers, cn=Schema Compatibility, cn=plugins, cn=config +default:objectClass: top +default:objectClass: extensibleObject +default:cn: computers +default:schema-compat-container-group: cn=compat, $SUFFIX +default:schema-compat-container-rdn: cn=computers +default:schema-compat-search-base: cn=computers, cn=accounts, $SUFFIX +default:schema-compat-search-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' +default:schema-compat-entry-attribute: objectclass=device +default:schema-compat-entry-attribute: objectclass=ieee802Device +default:schema-compat-entry-attribute: cn=%{fqdn} +default:schema-compat-entry-attribute: macAddress=%{macAddress} + # Enable anonymous VLV browsing for Solaris dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config only:aci: '(targetattr !="aci")(version 3.0; acl "VLV Request Control"; allow (read, search, compare, proxy) userdn = "ldap:///anyone"; )' -- 1.7.10 From nalin at redhat.com Mon Apr 16 20:51:54 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 16 
Apr 2012 16:51:54 -0400 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps Message-ID: <20120416205154.GE8158@redhat.com> The ethers.byname and ethers.byaddr NIS maps pair host names and hardware network addresses. This should close ticket #2259. Nalin -------------- next part -------------- From a69406b83496c053dbe68ab7e019c86242c06565 Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:33:42 -0400 Subject: [PATCH 3/3] - add a pair of ethers maps for computers with hardware addresses on file --- install/share/nis.uldif | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/install/share/nis.uldif b/install/share/nis.uldif index 2255541..96b790f 100644 --- a/install/share/nis.uldif +++ b/install/share/nis.uldif @@ -70,3 +70,26 @@ default:nis-filter: (objectClass=ipanisNetgroup) default:nis-key-format: %{cn} default:nis-value-format:%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\"),%{nisDomainName:-})") default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byaddr +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..:..:..:..:..:..) (.*)","%1") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byname +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..:..:..:..:..:..) (.*)","%2") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + -- 1.7.10 From rcritten at redhat.com Mon Apr 16 21:28:20 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 17:28:20 -0400 Subject: [Freeipa-devel] [PATCH] 1007 remove all state when uninstalling Message-ID: <4F8C8EF4.5030604@redhat.com> We no longer use the running state when uninstalling DS instances but we need to pull it in case it is there in an upgraded instance. rob -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-rcrit-1007-uninstall.patch Type: text/x-patch Size: 1794 bytes Desc: not available URL: From jdennis at redhat.com Mon Apr 16 21:50:48 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 17:50:48 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C7DE7.8080609@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> Message-ID: <4F8C9438.8090805@redhat.com> On 04/16/2012 04:15 PM, Rob Crittenden wrote: > John Dennis wrote: >> On 04/16/2012 01:31 PM, Rob Crittenden wrote: >>> John Dennis wrote: >>>> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>>>>> When the utility sets logging to console, the extra log message gets >>>>>> printed out there. I agree this isn't optimal. >>>>>> Attached patch removes the console log handler before logging the >>>>>> result. >>>>>> >>>>> >>>>> I read through log_manager, and found that I can do this more cleanly >>>>> with remove_handler. >>>>> >>>>> John, is this a good use of the API? >>>> >>>> The problem you're trying to correct is that under some circumstances a >>>> log message is emitted twice to the console, right? >>>> >>>> Removing the console handler fees like the wrong solution, it's a pretty >>>> big hammer, you have to know when to remove it and once removed any >>>> expectation that messages will appear on the console will be untrue. >>>> >>>> Can you give me a brief explanation as to why you're getting duplicate >>>> messages on the console. I wonder if there isn't a better way to handle >>>> the problem which isn't so invasive and potentially hidden. >>>> >>>> Or is the issue you don't want a console handler at all? If that's the >>>> case then maybe we should provide a configuration that does not create >>>> one, that way it will be explicit and obvious there is no console >>>> handler. >>>> >>>> FWIW, the current configuration of logging is historical dating back to >>>> the beginning of the project. When I added the log manager the intent >>>> was to fix deficiencies in logging, not to modify the existing behavior. >>>> I wasn't clear to me the existing configuration was ideal. If you're >>>> hitting problems because of the existing configuration perhaps we should >>>> look at the historical configuration and ask if it needs to be modified. >>> >>> This patch is standardizing logging the final disposition of a number of >>> commands. Currently is must be inferred. >>> >>> The problem is that some commands have had this disposition added but >>> open no log files so some error messages are being displayed twice and >>> in log format. >>> >>> I think the easiest solution would simply be to scale this back to only >>> those commands that open a log file at all. >> >> You say "command" but command is a loaded word :-) I presume you mean >> command line utility and not an IPA Command object. 
The reason I'm >> drawing this distinction is because the environment Commands execute in >> are not supposed to change once api.bootstrap() has completed. >> >> Who or what is aggregating the final disposition of a number of >> commands? The reason I ask is because the log_mgr object is global and >> deleting a handler from a global resource to satisfy a local need may >> have unintentional global side effects. How is the aggregation >> accomplished? >> >> > > This patch is in the context of the command-line utilities like > ipa-server-install, ipa-client-install, etc. and not the ipa tool. Some > of them initialize the api, some do not. This is exactly the type of problem I'm concerned about, a conflict over who owns the logging configuration. These multiple potential "owners" of a global configuration is what lead me to introduce the "configure_state" attribute of the log manager, to disambiguate who is in control and who initialized the configuration. In fact bootstrap() in plugable.py explicitly examines the configure_state attribute of the log_manager and if it's been previously configured (for example because the api was loaded by a command line utility that has configured it's own logging) it then defers to the logging configuration established by the environment it's being run in (in other words if a command line utility calls standard_logging_setup() before loading the api it wins and the logging configuration belongs to the utility, not the api). All of this is to say that isolated code that reaches into a global configuration and takes an axe to it and chops out a significant part of the configuration that someone else may have established without them knowing about it feels wrong. I suspect there is some deeper problem afoot and unilaterally chopping out the console handler is not addressing the fundamental problem and likely introduces a new one. But I really need to look at the specific instances of the problem, maybe if I dig through the history of this patch review more carefully I'll discover it, but if someone wants to chime in an point me at the exact issue that would be great :-) > As I said, the intention of my original request was to log the > disposition of an install command. I think this patch goes too far and > updates all admin utilities. > > rob -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jdennis at redhat.com Mon Apr 16 21:56:21 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 17:56:21 -0400 Subject: [Freeipa-devel] [PATCH] 1007 remove all state when uninstalling In-Reply-To: <4F8C8EF4.5030604@redhat.com> References: <4F8C8EF4.5030604@redhat.com> Message-ID: <4F8C9585.3080807@redhat.com> On 04/16/2012 05:28 PM, Rob Crittenden wrote: > We no longer use the running state when uninstalling DS instances but we > need to pull it in case it is there in an upgraded instance. Ah, so the issue is that StateFile.restore_state() has the side effect of removing the key from the state file and if we don't call restore_state() cruft is left in the state file that somehow confuses things later on? -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From rcritten at redhat.com Mon Apr 16 22:12:30 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 18:12:30 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C9438.8090805@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> Message-ID: <4F8C994E.7070605@redhat.com> John Dennis wrote: > On 04/16/2012 04:15 PM, Rob Crittenden wrote: >> John Dennis wrote: >>> On 04/16/2012 01:31 PM, Rob Crittenden wrote: >>>> John Dennis wrote: >>>>> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>>>>>> When the utility sets logging to console, the extra log message gets >>>>>>> printed out there. I agree this isn't optimal. >>>>>>> Attached patch removes the console log handler before logging the >>>>>>> result. >>>>>>> >>>>>> >>>>>> I read through log_manager, and found that I can do this more cleanly >>>>>> with remove_handler. >>>>>> >>>>>> John, is this a good use of the API? >>>>> >>>>> The problem you're trying to correct is that under some >>>>> circumstances a >>>>> log message is emitted twice to the console, right? >>>>> >>>>> Removing the console handler fees like the wrong solution, it's a >>>>> pretty >>>>> big hammer, you have to know when to remove it and once removed any >>>>> expectation that messages will appear on the console will be untrue. >>>>> >>>>> Can you give me a brief explanation as to why you're getting duplicate >>>>> messages on the console. I wonder if there isn't a better way to >>>>> handle >>>>> the problem which isn't so invasive and potentially hidden. >>>>> >>>>> Or is the issue you don't want a console handler at all? If that's the >>>>> case then maybe we should provide a configuration that does not create >>>>> one, that way it will be explicit and obvious there is no console >>>>> handler. >>>>> >>>>> FWIW, the current configuration of logging is historical dating >>>>> back to >>>>> the beginning of the project. When I added the log manager the intent >>>>> was to fix deficiencies in logging, not to modify the existing >>>>> behavior. >>>>> I wasn't clear to me the existing configuration was ideal. If you're >>>>> hitting problems because of the existing configuration perhaps we >>>>> should >>>>> look at the historical configuration and ask if it needs to be >>>>> modified. >>>> >>>> This patch is standardizing logging the final disposition of a >>>> number of >>>> commands. Currently is must be inferred. >>>> >>>> The problem is that some commands have had this disposition added but >>>> open no log files so some error messages are being displayed twice and >>>> in log format. >>>> >>>> I think the easiest solution would simply be to scale this back to only >>>> those commands that open a log file at all. >>> >>> You say "command" but command is a loaded word :-) I presume you mean >>> command line utility and not an IPA Command object. 
The reason I'm >>> drawing this distinction is because the environment Commands execute in >>> are not supposed to change once api.bootstrap() has completed. >>> >>> Who or what is aggregating the final disposition of a number of >>> commands? The reason I ask is because the log_mgr object is global and >>> deleting a handler from a global resource to satisfy a local need may >>> have unintentional global side effects. How is the aggregation >>> accomplished? >>> >>> >> >> This patch is in the context of the command-line utilities like >> ipa-server-install, ipa-client-install, etc. and not the ipa tool. Some >> of them initialize the api, some do not. > > This is exactly the type of problem I'm concerned about, a conflict over > who owns the logging configuration. > > These multiple potential "owners" of a global configuration is what lead > me to introduce the "configure_state" attribute of the log manager, to > disambiguate who is in control and who initialized the configuration. In > fact bootstrap() in plugable.py explicitly examines the configure_state > attribute of the log_manager and if it's been previously configured (for > example because the api was loaded by a command line utility that has > configured it's own logging) it then defers to the logging configuration > established by the environment it's being run in (in other words if a > command line utility calls standard_logging_setup() before loading the > api it wins and the logging configuration belongs to the utility, not > the api). > > All of this is to say that isolated code that reaches into a global > configuration and takes an axe to it and chops out a significant part of > the configuration that someone else may have established without them > knowing about it feels wrong. > > I suspect there is some deeper problem afoot and unilaterally chopping > out the console handler is not addressing the fundamental problem and > likely introduces a new one. But I really need to look at the specific > instances of the problem, maybe if I dig through the history of this > patch review more carefully I'll discover it, but if someone wants to > chime in an point me at the exact issue that would be great :-) Like I've said, this is a non-issue in this case. Those utilities that do not open a log simply don't need to call this log_on_exit function. Any remaining logging issues should be logged as a ticket. I think you're overstating the logging problem in general though. Basically we just want a way to override the default log that the API chooses. So the API bootstrap() call defers to the user if logging has already been configured. It is the responsibility of the developer to know when to do what but since there is less than a half dozen exceptions I don't think this is worth a significant time investment. At some point we may rewrite all of these utilities to use the API itself at which time this problem will simply go away. rob From rcritten at redhat.com Mon Apr 16 22:13:24 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 16 Apr 2012 18:13:24 -0400 Subject: [Freeipa-devel] [PATCH] 1007 remove all state when uninstalling In-Reply-To: <4F8C9585.3080807@redhat.com> References: <4F8C8EF4.5030604@redhat.com> <4F8C9585.3080807@redhat.com> Message-ID: <4F8C9984.2010303@redhat.com> John Dennis wrote: > On 04/16/2012 05:28 PM, Rob Crittenden wrote: >> We no longer use the running state when uninstalling DS instances but we >> need to pull it in case it is there in an upgraded instance. 
> > Ah, so the issue is that StateFile.restore_state() has the side effect > of removing the key from the state file and if we don't call > restore_state() cruft is left in the state file that somehow confuses > things later on? > Exactly. And there is a check at the end of installation to see if anything got missed. This is sort of a catch-all in case something does terribly wrong and gives the user some avenue to fix things. rob From jdennis at redhat.com Mon Apr 16 22:54:32 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 16 Apr 2012 18:54:32 -0400 Subject: [Freeipa-devel] [PATCH 73] don't append basedn to container if it is included Message-ID: <4F8CA328.4080609@redhat.com> -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0073-don-t-append-basedn-to-container-if-it-is-included.patch Type: text/x-patch Size: 1864 bytes Desc: not available URL: From ohamada at redhat.com Mon Apr 16 23:13:52 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 17 Apr 2012 01:13:52 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1333544545.22628.287.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> Message-ID: <4F8CA7B0.2080000@redhat.com> >> Sorry for inactivity, I was struggling with a lot of school stuff. >> >> I've summed up the main goals, do you agree on them or should I >> add/remove any? >> >> >> GOALS >> =========================================== >> Create Hub and Consumer types of replica with following features: >> >> * Hub is read-only >> >> * Hub interconnects Masters with Consumers or Masters with Hubs >> or Hubs with other Hubs >> >> * Hub is hidden in the network topology >> >> * Consumer is read-only >> >> * Consumer interconnects Masters/Hubs with clients >> >> * Write operations should be forwarded to Master >> >> * Consumer should be able to log users into system without >> communication with master > We need to define how this can be done, it will almost certainly mean > part of the consumer is writable, plus it also means you need additional > access control and policies, on what the Consumer should be allowed to > see. > >> * Consumer should cache user's credentials > Ok what credentials ? As I explained earlier Kerberos creds cannot > really be cached. Either they are transferred with replication or the > KDC needs to be change to do chaining. Neither I consider as 'caching'. > A password obtained through an LDAP bind could be cached, but I am not > sure it is worth it. > >> * Caching of credentials should be configurable > See above. > >> * CA server should not be allowed on Hubs and Consumers > > Missing points: > - Masters should not transfer KRB keys to HUBs/Consumers by default. > > - We need selective replication if you want to allow distributing a > partial set of Kerberos credentials to consumers. 
With Hubs it becomes > complicated to decide what to replicate about credentials. > > Simo. > Can you please have a look at this draft and comment it please? Design document draft: More types of replicas in FreeIPA GOALS ================================================================= Create Hub and Consumer types of replica with following features: * Hub is read-only * Hub interconnects Masters with Consumers or Masters with Hubs or Hubs with other Hubs * Hub is hidden in the network topology * Consumer is read-only * Consumer interconnects Masters/Hubs with clients * Write operations should be forwarded to Master * Consumer should be able to log users into system without communication with master * Consumer should be able to store user's credentials * Storing of credentials should be configurable and disabled by default * Credentials expiration on replica should be configurable * CA server should not be allowed on Hubs and Consumers ISSUES ================================================================= - SSSD is currently supposed to cooperate with one LDAP server only - OpenLDAP client and its support for referrals - 389-DS allows replication of whole suffix only - Storing credentials and allowing authentication against Consumer server POSSIBLE SOLUTIONS ================================================================= 389-DS allows replication of whole suffix only: * Rich said that they are planning to allow the fractional replication in DS to use LDAP filters. It will allow us to do selective replication what is mainly important for replication of user's credentials. ______________________________________ Forwarding of requests in LDAP: * use existing 389-DS plugin "Chain-on-update" - we can try it as a proof of concept solution, but for real deployment it won't be very cool solution as it will increase the demands on Hubs. * better way is to use the referrals. The master server(s) to be referred might be: 1) specified at install time 2) looked up in DNS records 3) find master dynamically - Consumers and Hubs will be in fact master servers (from 389-DS point of view), this means that every consumer or hub knows his direct suppliers a they know their suppliers ... ISSUE: support for referrals in OpenLDAP client * SSSD must be improved to allow cooperation with more than one LDAP server _____________________________________ Authentication and replication of credentials: * authentication policies, every user must authenticate against master server by default - if the authentication is successful and proper policy is set for him, the user will be added into a specific user group. Each consumer will have one of these groups. These groups will be used by LDAP filters in fractional replication to distribute the Krb creds to the chosen Consumers only. - The groups will be created and modified on the master, so they will get replicated to all Hubs and Consumers. Hubs make it more complicated as they must know which groups are relevant for them. Because of that I suppose that every Hub will have to know about all its 'subordinates' - this information will have to be generated dynamically - probably on every change to the replication topology (adding/removing replicas is usually not a very frequent operation) - The policy must also specify the credentials expiration time. If user tries to authenticate with expired credential, he will be refused and redirected to Master server for authentication. ISSUE: How to deal with creds. expiration in replication? 
The replication of credential to the Consumer could be stopped by removing the user from the Consumer specific user group (mentioned above). The easiest way would be to delete him when he tries to auth. with expired credentials or do a regular check (intervals specified in policy) and delete all expired creds. Because of the removal of expired creds. we will have to grant the Consumer the permission to delete users from the Consumer specific user group (but only deleting, adding users will be possible on Masters only). Offline authentication: * Consumer (and Hub) must allow write operations just for a small set of attributes: last login date and time, count of unsuccessful logins and the lockup of account - to be able to do that, both Consumers and Hubs must be Masters(from 389-DS point of view). When the Master<->Consumer connection is broken, the lockup information is saved only locally and will be pushed to Master on connection restoration. I suppose that only the lockup information should be replicated. In case of lockup the user will have to authenticate against Master server only. Transfer of Krb keys: * Consumer server will have to have realm krbtgt. This means that we will have to distribute every Consumer's krbtgt to the Master servers. The Masters will need to have a logic for using those keys instead of the normal krbtgt to perform operations when user's krbtgt are presented to a different server. ____________________________________ -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From mkosek at redhat.com Tue Apr 17 08:04:46 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 10:04:46 +0200 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded In-Reply-To: <4F8C5C2D.5070909@redhat.com> References: <1334048262.7045.19.camel@balmora.brq.redhat.com> <4F86D4B9.7090509@redhat.com> <4F86D833.2010501@redhat.com> <4F8C5C2D.5070909@redhat.com> Message-ID: <1334649886.12753.18.camel@balmora.brq.redhat.com> On Mon, 2012-04-16 at 13:51 -0400, Rob Crittenden wrote: > Rob Crittenden wrote: > > Jan Cholasta wrote: > >> On 10.4.2012 10:57, Martin Kosek wrote: > >>> Few test hints are attached to the ticket. > >>> > >>> --- > >>> > >>> ldap2 plugin returns NotFound error for find_entries/get_entry > >>> queries when the server did not manage to return an entry > >>> due to time limits. This may be confusing for user when the > >>> entry he searches actually exists. > >>> > >>> This patch fixes the behavior in ldap2 plugin to return > >>> LimitsExceeded exception instead. This way, user would know > >>> that his time/size limits are set too low and can amend them to > >>> get correct results. > >>> > >>> https://fedorahosted.org/freeipa/ticket/2606 > >>> > >> > >> ACK. > >> > >> Honza > >> > > > > Before pushing I'd like to look at this more. truncated is supposed to > > indicate a limits problem. I want to see if the caller should be > > responsible for returning a limits error instead. > > > > rob > > This is what I had in mind. 
> > diff --git a/ipaserver/plugins/ldap2.py b/ipaserver/plugins/ldap2.py > index 61341b0..447e738 100644 > --- a/ipaserver/plugins/ldap2.py > +++ b/ipaserver/plugins/ldap2.py > @@ -754,7 +754,7 @@ class ldap2(CrudBackend, Encoder): > except _ldap.LDAPError, e: > _handle_errors(e) > > - if not res: > + if not res and not truncated: > raise errors.NotFound(reason='no such entry') > > if attrs_list and ('memberindirect' in attrs_list or '*' in > attrs_list) > : > @@ -801,7 +801,10 @@ class ldap2(CrudBackend, Encoder): > if len(entries) > 1: > raise errors.SingleMatchExpected(found=len(entries)) > else: > - return entries[0] > + if truncated: > + raise errors.LimitsExceeded() > + else: > + return entries[0] > > def get_entry(self, dn, attrs_list=None, time_limit=None, > size_limit=None, normalize=True): > @@ -811,10 +814,13 @@ class ldap2(CrudBackend, Encoder): > Keyword arguments: > attrs_list - list of attributes to return, all if None > (default None) > """ > - return self.find_entries( > + (entry, truncated) = self.find_entries( > None, attrs_list, dn, self.SCOPE_BASE, time_limit=time_limit, > size_limit=size_limit, normalize=normalize > - )[0][0] > + ) > + if truncated: > + raise errors.LimitsExceeded() > + return entry[0] > > config_defaults = {'ipasearchtimelimit': [2], > 'ipasearchrecordslimit': [0]} > def get_ipa_config(self, attrs_list=None): > Thanks for the patch. I had similar patch planned in my mind. We just need to be more careful, this patch will change an assumption that ldap2.find_entries will always either raise NotFound error or return at least one result. I checked all ldap2.find_entries calls and did few fixes, it should be OK now. No regression found in unit tests. The updated patch is attached. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-248-2-raise-proper-exception-when-ldap-limits-are-exceeded.patch Type: text/x-patch Size: 4434 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 17 08:36:25 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 10:36:25 +0200 Subject: [Freeipa-devel] [PATCH] 251 Fix DNS and permissions unit tests Message-ID: <1334651785.12753.19.camel@balmora.brq.redhat.com> Amend unit tests to match the latest changes in DNS (tickets 2627, 2628) and hardened exception error message checks. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-251-fix-dns-and-permissions-unit-tests.patch Type: text/x-patch Size: 2389 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 17 08:54:10 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 10:54:10 +0200 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F8C2000.9000400@redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> <4F8C2000.9000400@redhat.com> Message-ID: <1334652850.12753.24.camel@balmora.brq.redhat.com> On Mon, 2012-04-16 at 09:34 -0400, Rob Crittenden wrote: > Rob Crittenden wrote: > > Petr Vobornik wrote: > >> On 04/13/2012 09:28 PM, Rob Crittenden wrote: > >>> When doing a forms-based login there is no notification that a password > >>> needs to be reset. We don't currently provide a facility for that but we > >>> should at least tell users what is going on. > >>> > >>> This patch adds an LDAP bind to test the password to see if it is > >>> expired and returns the string "Password Expired" along with the 401 if > >>> it is. 
I'm told this is all the UI will need to be able to identify this > >>> condition. > >>> > >>> rob > >>> > >> > >> UI can work with it. I have a patch ready. I'll send it when this will > >> be ACKed. > >> > >> Some notes: > >> > >> 1) The error templates and the 'Password Expired' message are hardcoded > >> to be English. It's fine at the moment. Will we internationalize them > >> sometime in future? If so, we will run into the same problem again. > > > > No plans to. I can update the patch with a comment specifically to not > > internationalize it if you'd like. > > > >> 2) conn.destroy_connection() won't be called if an exception occurs. Not > >> sure if it is a problem, GC and __del__ should take care of it. > > > > Hmm, this is due to a late stage change I made. I originally had this > > broken out into two blocks where the only thing done in the first > > try/except block was the connection, so the only exception that could > > happen was a failed connection. > > > > That isn't true any more. I'll update the patch. > > And here you go. > > rob I still think that deciding based on error message string is not right, we may want to have it internationalized and thus hit the same error. What about returning a custom HTTP header specifying the reason? Something like that: X-rejection-reason: password-expired OR X-rejection-reason: password-invalid OR X-rejection-reason: account-locked Web UI could customize next actions based on this header instead of parsing the error message. If you decide to rather stick with current solution and file my proposal as an enhancement ticket (or discard it entirely), then ACK. Martin From pvoborni at redhat.com Tue Apr 17 09:17:17 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 17 Apr 2012 11:17:17 +0200 Subject: [Freeipa-devel] [PATCH] 121 User is notified that password needs to be reset in, forms-based login Message-ID: <4F8D351D.4050400@redhat.com> This solution depends on Rob's patch #1006-2 Forms-based login procedure detects if 401 unauthorized message contains 'Expired Password' message. If so it displays an error message that user needs to reset his password. https://fedorahosted.org/freeipa/ticket/2608 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0121-User-is-notified-that-password-needs-to-be-reset-in-.patch Type: text/x-patch Size: 6174 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 17 09:32:26 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 11:32:26 +0200 Subject: [Freeipa-devel] [PATCH] 1007 remove all state when uninstalling In-Reply-To: <4F8C8EF4.5030604@redhat.com> References: <4F8C8EF4.5030604@redhat.com> Message-ID: <1334655146.12753.25.camel@balmora.brq.redhat.com> On Mon, 2012-04-16 at 17:28 -0400, Rob Crittenden wrote: > We no longer use the running state when uninstalling DS instances but we > need to pull it in case it is there in an upgraded instance. > > rob ACK. Pushed to master, ipa-2-2. I added a clear reproduction steps to the linked Bugzilla. 
Martin From pviktori at redhat.com Tue Apr 17 12:02:33 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 17 Apr 2012 14:02:33 +0200 Subject: [Freeipa-devel] [PATCH] 251 Fix DNS and permissions unit tests In-Reply-To: <1334651785.12753.19.camel@balmora.brq.redhat.com> References: <1334651785.12753.19.camel@balmora.brq.redhat.com> Message-ID: <4F8D5BD9.4030602@redhat.com> On 04/17/2012 10:36 AM, Martin Kosek wrote: > Amend unit tests to match the latest changes in DNS (tickets 2627, > 2628) and hardened exception error message checks. > ACK -- Petr? From mkosek at redhat.com Tue Apr 17 12:05:35 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 14:05:35 +0200 Subject: [Freeipa-devel] [PATCH] 251 Fix DNS and permissions unit tests In-Reply-To: <4F8D5BD9.4030602@redhat.com> References: <1334651785.12753.19.camel@balmora.brq.redhat.com> <4F8D5BD9.4030602@redhat.com> Message-ID: <1334664335.12753.28.camel@balmora.brq.redhat.com> On Tue, 2012-04-17 at 14:02 +0200, Petr Viktorin wrote: > On 04/17/2012 10:36 AM, Martin Kosek wrote: > > Amend unit tests to match the latest changes in DNS (tickets 2627, > > 2628) and hardened exception error message checks. > > > > ACK > Pushed to master, ipa-2-2. Martin From simo at redhat.com Tue Apr 17 12:42:01 2012 From: simo at redhat.com (Simo Sorce) Date: Tue, 17 Apr 2012 08:42:01 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F8CA7B0.2080000@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> Message-ID: <1334666521.16658.171.camel@willson.li.ssimo.org> On Tue, 2012-04-17 at 01:13 +0200, Ondrej Hamada wrote: > >> Sorry for inactivity, I was struggling with a lot of school stuff. > >> > >> I've summed up the main goals, do you agree on them or should I > >> add/remove any? > >> > >> > >> GOALS > >> =========================================== > >> Create Hub and Consumer types of replica with following features: > >> > >> * Hub is read-only > >> > >> * Hub interconnects Masters with Consumers or Masters with Hubs > >> or Hubs with other Hubs > >> > >> * Hub is hidden in the network topology > >> > >> * Consumer is read-only > >> > >> * Consumer interconnects Masters/Hubs with clients > >> > >> * Write operations should be forwarded to Master > >> > >> * Consumer should be able to log users into system without > >> communication with master > > We need to define how this can be done, it will almost certainly mean > > part of the consumer is writable, plus it also means you need additional > > access control and policies, on what the Consumer should be allowed to > > see. > > > >> * Consumer should cache user's credentials > > Ok what credentials ? As I explained earlier Kerberos creds cannot > > really be cached. Either they are transferred with replication or the > > KDC needs to be change to do chaining. Neither I consider as 'caching'. 
> > A password obtained through an LDAP bind could be cached, but I am not > > sure it is worth it. > > > >> * Caching of credentials should be configurable > > See above. > > > >> * CA server should not be allowed on Hubs and Consumers > > > > Missing points: > > - Masters should not transfer KRB keys to HUBs/Consumers by default. > > > > - We need selective replication if you want to allow distributing a > > partial set of Kerberos credentials to consumers. With Hubs it becomes > > complicated to decide what to replicate about credentials. > > > > Simo. > > > > Can you please have a look at this draft and comment it please? > > > Design document draft: More types of replicas in FreeIPA > > GOALS > ================================================================= > > Create Hub and Consumer types of replica with following features: > > * Hub is read-only > > * Hub interconnects Masters with Consumers or Masters with Hubs > or Hubs with other Hubs > > * Hub is hidden in the network topology > > * Consumer is read-only > > * Consumer interconnects Masters/Hubs with clients > > * Write operations should be forwarded to Master Do we need to specify how this is done ? Referrals vs Chain-on-update ? > * Consumer should be able to log users into system without > communication with master > > * Consumer should be able to store user's credentials Can you expand on this ? Do you mean user keys ? > * Storing of credentials should be configurable and disabled by default > > * Credentials expiration on replica should be configurable What does this mean ? > * CA server should not be allowed on Hubs and Consumers > > ISSUES > ================================================================= > > - SSSD is currently supposed to cooperate with one LDAP server only Is this a problem in having an LDAP server that doesn't also have a KDC on the same host ? Or something else ? > - OpenLDAP client and its support for referrals Should we avoid referrals and use chain-on-update ? What does it mean for access control ? How do consumers authenticate to masters ? Should we use s4u2proxy ? > - 389-DS allows replication of whole suffix only What kind of filters do we think we need ? We can already exclude specific attributes from replication. > - Storing credentials and allowing authentication against Consumer server > > > POSSIBLE SOLUTIONS > ================================================================= > > 389-DS allows replication of whole suffix only: > > * Rich said that they are planning to allow the fractional replication > in DS to > use LDAP filters. It will allow us to do selective replication what > is mainly > important for replication of user's credentials. I guess we want to do this to selectively prevent replication of only some kerberos keys ? Based on groups ? Would filters allow that using memberof ? > ______________________________________ > > Forwarding of requests in LDAP: > > * use existing 389-DS plugin "Chain-on-update" - we can try it as a proof of > concept solution, but for real deployment it won't be very cool solution > as it > will increase the demands on Hubs. Why do you think it would increase demands for hubs ? Doesn't the consumer directly contact the masters, skipping the hubs ? > * better way is to use the referrals. The master server(s) to be referred > might be: > 1) specified at install time This is not really useful, as it would break updates every time the specified master is offline, and it would also require some work to reconfigure stuff if the master is retired. 
> 2) looked up in DNS records Probably easier to look up in LDAP, we have a record for each master in the domain. > 3) find master dynamically - Consumers and Hubs will be in fact master > servers (from 389-DS point of view), this means that every > consumer or hub > knows his direct suppliers a they know their suppliers ... Not clear what this means, can you elaborate ? > ISSUE: support for referrals in OpenLDAP client We've had quite some issue with referrals indeed, and a lot of client software does not properly handle referrals. That would leave a bunch of clients unable to modify the Directory. OTOH very few clients need to modify the directory, so maybe that's good enough. > * SSSD must be improved to allow cooperation with more than one LDAP server Can you elaborate on what you think is missing in SSSD ? Is it about the need to fix referrals handling ? Or something else ? > _____________________________________ > > Authentication and replication of credentials: > > * authentication policies, every user must authenticate against master > server by > default If users always contact the master, what are the consumers for ? Need to elaborate on this and explain. > - if the authentication is successful and proper policy is set for > him, the > user will be added into a specific user group. Each consumer will > have one > of these groups. These groups will be used by LDAP filters in > fractional > replication to distribute the Krb creds to the chosen Consumers only. Why should this depend on authentication ?? Keep in mind that changing filters will not cause any replication to occur; replication would occur only when a change happens. Therefore placing a user in one group should happen before the kerberos keys are created. Also, in order to move a user from one group to another, which would theoretically cause deletion of credentials from a group of servers and distribution to another, we will probably need a plugin. This plugin would take care of intercepting this special membership change. On servers that lose membership this plugin would go and delete locally stored keys from the user(s) that lost membership. On servers that gained membership they would have to go to one of the masters and fetch the keys and store them locally; this would need to be done in a way that prevents replication and retains the master modification time so that later replication events will not conflict in any way. There is also the problem of rekeying and having different master keys on hubs/consumers, not an easy problem, and would require quite some custom changes to the replication protocol for these special entries. > - The groups will be created and modified on the master, so they will get > replicated to all Hubs and Consumers. Hubs make it more complicated > as they > must know which groups are relevant for them. Because of that I > suppose that > every Hub will have to know about all its 'subordinates' - this > information > will have to be generated dynamically - probably on every change to the > replication topology (adding/removing replicas is usually not a very > frequent operation) Hubs will simply be made members of these groups just like consumers. All members of a group are authorized to do something with that group membership. The grouping part doesn't seem complicated to me, but I may have missed a detail; care to elaborate on what you see as difficult ? > - The policy must also specify the credentials expiration time. 
If > user tries to authenticate with expired credential, he will be refused and > redirected to Master > server for authentication. How is this different from current status ? All accounts already have password expiration times and account expiration times. What am I missing ? > ISSUE: How to deal with creds. expiration in replication? The replication of > credential to the Consumer could be stopped > by removing the user > from the > Consumer specific user group (mentioned above). The easiest way > would be to > delete him when he tries to auth. See above, we need a plugin IMO. > with expired credentials or do a > regular > check (intervals specified in policy) and delete all expired creds. It's not clear to me what we mean by expired creds; what am I missing ? > Because > of the removal of expired creds. we will have to grant the Consumer the > permission to delete users from the Consumer specific user group > (but only > deleting, adding users will be possible on Masters only). I do not understand this. > Offline authentication: > > * Consumer (and Hub) must allow write operations just for a small set of > attributes: last login date and time, count of unsuccessful logins > and the > lockup of account This shouldn't be a problem; we already do that with masters. The trick is in not replicating those attributes so that they never conflict. > - to be able to do that, both Consumers and Hubs must be Masters (from > 389-DS point of view). This doesn't sound right at all. All servers can always write locally; what prevents them from doing so are referrals/configuration. Consumers and hubs do not and cannot be masters. > When the Master<->Consumer connection is > broken, the > lockup information is saved only locally and will be pushed to Master > on connection restoration. I suppose that only the lockup information > should > be replicated. In case of lockup the user will have to authenticate > against > Master server only. What is the lockup information ? What connection is broken ? There aren't persistent connections between masters and consumers (esp. when hubs are in between there are none). > Transfer of Krb keys: > > * Consumer server will have to have realm krbtgt. I guess you mean "a krbtgt usable in the realm", not 'the' realm krbtgt, right ? > This means that we > will have > to distribute every Consumer's krbtgt to the Master servers. It's the other way around. All keys are generated on the masters just like with any other principal key, and then replicated to consumers. > The > Masters will > need to have a logic for using those keys instead of the normal krbtgt to > perform operations when user's krbtgt are presented to a different > server. Yes, we will need potentially quite invasive changes to the KDC when the 'krbtgt' is involved. We will need to plan this ahead with MIT to validate our idea or see if they have different ideas on how to solve this problem. Simo. 
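P.S. To illustrate the attribute-exclusion mechanism mentioned above: a fractional replication agreement can already be told not to send Kerberos key material to a given hub or consumer. The agreement DN and the attribute list below are only a sketch, not the exact agreement we would ship:

dn: cn=meToconsumer.example.com,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
changetype: modify
replace: nsDS5ReplicatedAttributeList
nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE krbPrincipalKey krbMKey

What this cannot do today is select entries (e.g. by group membership); that is why the LDAP-filter support Rich mentioned would still be needed for per-consumer key distribution.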
-- Simo Sorce * Red Hat, Inc * New York From pviktori at redhat.com Tue Apr 17 13:30:54 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 17 Apr 2012 15:30:54 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8C994E.7070605@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> Message-ID: <4F8D708E.3050701@redhat.com> On 04/17/2012 12:12 AM, Rob Crittenden wrote: > John Dennis wrote: >> On 04/16/2012 04:15 PM, Rob Crittenden wrote: >>> John Dennis wrote: >>>> On 04/16/2012 01:31 PM, Rob Crittenden wrote: >>>>> John Dennis wrote: >>>>>> On 04/13/2012 06:25 AM, Petr Viktorin wrote: >>>>>>>> When the utility sets logging to console, the extra log message >>>>>>>> gets >>>>>>>> printed out there. I agree this isn't optimal. >>>>>>>> Attached patch removes the console log handler before logging the >>>>>>>> result. >>>>>>>> >>>>>>> >>>>>>> I read through log_manager, and found that I can do this more >>>>>>> cleanly >>>>>>> with remove_handler. >>>>>>> >>>>>>> John, is this a good use of the API? >>>>>> >>>>>> The problem you're trying to correct is that under some >>>>>> circumstances a >>>>>> log message is emitted twice to the console, right? >>>>>> >>>>>> Removing the console handler fees like the wrong solution, it's a >>>>>> pretty >>>>>> big hammer, you have to know when to remove it and once removed any >>>>>> expectation that messages will appear on the console will be untrue. >>>>>> >>>>>> Can you give me a brief explanation as to why you're getting >>>>>> duplicate >>>>>> messages on the console. I wonder if there isn't a better way to >>>>>> handle >>>>>> the problem which isn't so invasive and potentially hidden. The patch adds a final log message that says if the command ultimately succeeded or not (https://fedorahosted.org/freeipa/ticket/2071). Some of the commands print an error message to the console independently. When they've configured logging to console, the added message appears as well. The right thing to do is to modify the utility just log once. But this patch treats the utilities as black boxes, otherwise it's too invasive. >>>>>> Or is the issue you don't want a console handler at all? If that's >>>>>> the >>>>>> case then maybe we should provide a configuration that does not >>>>>> create >>>>>> one, that way it will be explicit and obvious there is no console >>>>>> handler. This would mean changing how the commands set up logging. I plan to do that, but it's too invasive to do now. >>>>>> FWIW, the current configuration of logging is historical dating >>>>>> back to >>>>>> the beginning of the project. When I added the log manager the intent >>>>>> was to fix deficiencies in logging, not to modify the existing >>>>>> behavior. >>>>>> I wasn't clear to me the existing configuration was ideal. 
If you're >>>>>> hitting problems because of the existing configuration perhaps we >>>>>> should >>>>>> look at the historical configuration and ask if it needs to be >>>>>> modified. >>>>> >>>>> This patch is standardizing logging the final disposition of a >>>>> number of >>>>> commands. Currently is must be inferred. >>>>> >>>>> The problem is that some commands have had this disposition added but >>>>> open no log files so some error messages are being displayed twice and >>>>> in log format. >>>>> >>>>> I think the easiest solution would simply be to scale this back to >>>>> only >>>>> those commands that open a log file at all. >>>> >>>> You say "command" but command is a loaded word :-) I presume you mean >>>> command line utility and not an IPA Command object. The reason I'm >>>> drawing this distinction is because the environment Commands execute in >>>> are not supposed to change once api.bootstrap() has completed. >>>> >>>> Who or what is aggregating the final disposition of a number of >>>> commands? The reason I ask is because the log_mgr object is global and >>>> deleting a handler from a global resource to satisfy a local need may >>>> have unintentional global side effects. How is the aggregation >>>> accomplished? This is a wrapper around scripts. The final log message is the last thing done before exiting Python. >>> This patch is in the context of the command-line utilities like >>> ipa-server-install, ipa-client-install, etc. and not the ipa tool. Some >>> of them initialize the api, some do not. Also there's ipa-ldap-updater, which sometimes opens a log file and sometimes not. >> This is exactly the type of problem I'm concerned about, a conflict over >> who owns the logging configuration. >> >> These multiple potential "owners" of a global configuration is what lead >> me to introduce the "configure_state" attribute of the log manager, to >> disambiguate who is in control and who initialized the configuration. In >> fact bootstrap() in plugable.py explicitly examines the configure_state >> attribute of the log_manager and if it's been previously configured (for >> example because the api was loaded by a command line utility that has >> configured it's own logging) it then defers to the logging configuration >> established by the environment it's being run in (in other words if a >> command line utility calls standard_logging_setup() before loading the >> api it wins and the logging configuration belongs to the utility, not >> the api). >> >> All of this is to say that isolated code that reaches into a global >> configuration and takes an axe to it and chops out a significant part of >> the configuration that someone else may have established without them >> knowing about it feels wrong. >> >> I suspect there is some deeper problem afoot and unilaterally chopping >> out the console handler is not addressing the fundamental problem and >> likely introduces a new one. But I really need to look at the specific >> instances of the problem, maybe if I dig through the history of this >> patch review more carefully I'll discover it, but if someone wants to >> chime in an point me at the exact issue that would be great :-) > > Like I've said, this is a non-issue in this case. Those utilities that > do not open a log simply don't need to call this log_on_exit function. > > Any remaining logging issues should be logged as a ticket. > > I think you're overstating the logging problem in general though. 
> Basically we just want a way to override the default log that the API > chooses. So the API bootstrap() call defers to the user if logging has > already been configured. It is the responsibility of the developer to > know when to do what but since there is less than a half dozen > exceptions I don't think this is worth a significant time investment. > > At some point we may rewrite all of these utilities to use the API > itself at which time this problem will simply go away. > > rob Attaching a patch that only adds the final log for commands that do standard_logging_setup. It still includes the removing of the console handler, though. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0014-10-Add-final-debug-message-in-installers.patch Type: text/x-patch Size: 23765 bytes Desc: not available URL: From jdennis at redhat.com Tue Apr 17 14:12:19 2012 From: jdennis at redhat.com (John Dennis) Date: Tue, 17 Apr 2012 10:12:19 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8D708E.3050701@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> Message-ID: <4F8D7A43.5060303@redhat.com> There have been so many versions of the patch and various comments attached to it I'm afraid I'm still trying to wrap my head around what the actual problem is we're trying to solve, until I have that understanding I can't evaluate the proposed solution. Could you please state simply what the fundamental problem is the patch is trying to correct for and how it accomplishes it? When I tried to reach this understanding myself about the only thing I figured out was that some of our command line utilities pass a string to the exit() command whose output is out of band with respect to logging resulting in a duplicate message (however before application of the patch I don't see why logging would re-emit the message). -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Tue Apr 17 14:13:57 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 10:13:57 -0400 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <1334652850.12753.24.camel@balmora.brq.redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> <4F8C2000.9000400@redhat.com> <1334652850.12753.24.camel@balmora.brq.redhat.com> Message-ID: <4F8D7AA5.9040400@redhat.com> Martin Kosek wrote: > On Mon, 2012-04-16 at 09:34 -0400, Rob Crittenden wrote: >> Rob Crittenden wrote: >>> Petr Vobornik wrote: >>>> On 04/13/2012 09:28 PM, Rob Crittenden wrote: >>>>> When doing a forms-based login there is no notification that a password >>>>> needs to be reset. We don't currently provide a facility for that but we >>>>> should at least tell users what is going on. 
>>>>> >>>>> This patch adds an LDAP bind to test the password to see if it is >>>>> expired and returns the string "Password Expired" along with the 401 if >>>>> it is. I'm told this is all the UI will need to be able to identify this >>>>> condition. >>>>> >>>>> rob >>>>> >>>> >>>> UI can work with it. I have a patch ready. I'll send it when this will >>>> be ACKed. >>>> >>>> Some notes: >>>> >>>> 1) The error templates and the 'Password Expired' message are hardcoded >>>> to be English. It's fine at the moment. Will we internationalize them >>>> sometime in future? If so, we will run into the same problem again. >>> >>> No plans to. I can update the patch with a comment specifically to not >>> internationalize it if you'd like. >>> >>>> 2) conn.destroy_connection() won't be called if an exception occurs. Not >>>> sure if it is a problem, GC and __del__ should take care of it. >>> >>> Hmm, this is due to a late stage change I made. I originally had this >>> broken out into two blocks where the only thing done in the first >>> try/except block was the connection, so the only exception that could >>> happen was a failed connection. >>> >>> That isn't true any more. I'll update the patch. >> >> And here you go. >> >> rob > > I still think that deciding based on error message string is not right, > we may want to have it internationalized and thus hit the same error. > What about returning a custom HTTP header specifying the reason? > > Something like that: > > X-rejection-reason: password-expired > OR > X-rejection-reason: password-invalid > OR > X-rejection-reason: account-locked > > Web UI could customize next actions based on this header instead of > parsing the error message. > > If you decide to rather stick with current solution and file my proposal > as an enhancement ticket (or discard it entirely), then ACK. > > Martin > I think this is a better solution. I've updated my patch. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1006-3-expired.patch Type: text/x-diff Size: 5426 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 17 14:29:20 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 16:29:20 +0200 Subject: [Freeipa-devel] [PATCH] 20 Fix empty external member processing In-Reply-To: <4F7AF981.2050303@redhat.com> References: <4F7ACF83.3080202@redhat.com> <4F7AF981.2050303@redhat.com> Message-ID: <1334672960.12753.32.camel@balmora.brq.redhat.com> On Tue, 2012-04-03 at 15:22 +0200, Ondrej Hamada wrote: > On 04/03/2012 12:22 PM, Ondrej Hamada wrote: > > https://fedorahosted.org/freeipa/ticket/2447 > > > > Validation of external member was failing for empty strings because > > of > > wrong condition. > > > > > > > > _______________________________________________ > > Freeipa-devel mailing list > > Freeipa-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/freeipa-devel > > Used clearer solution. Thanks to Rob for advice. ACK. Pushed to master, ipa-2-2. I just replaced: + if options.get(membertype,False): with + if options.get(membertype): as it was redundant. Validation of externalHost attribute passed via --setattr or --addattr shall be solved in ticket #2649. 
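The redundancy Martin points out just above comes from dict.get() already returning None, which is false in a boolean test, when the key is missing; the explicit False default therefore changes nothing:

options = {}

if options.get('membertype', False):   # original form
    pass
if options.get('membertype'):          # equivalent: .get() returns None for a
    pass                               # missing key, and None is false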
Martin From pviktori at redhat.com Tue Apr 17 14:37:34 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 17 Apr 2012 16:37:34 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8D7A43.5060303@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> <4F8D7A43.5060303@redhat.com> Message-ID: <4F8D802E.6010201@redhat.com> On 04/17/2012 04:12 PM, John Dennis wrote: > There have been so many versions of the patch and various comments > attached to it I'm afraid I'm still trying to wrap my head around what > the actual problem is we're trying to solve, until I have that > understanding I can't evaluate the proposed solution. > > Could you please state simply what the fundamental problem is the patch > is trying to correct for and how it accomplishes it? The fundamental problem is in https://fedorahosted.org/freeipa/ticket/2071: the install tools need to put a final message in their logs, which states whether the command succeeded or not, and if not, what the error was. The patch needs to do this with minimal modifications to the scripts, so that it doesn't break any functionality. It accomplishes this by wrapping the scripts' top-level code in a context manager. When the script is done, the context manager looks whether an exception was raised and logs an appropriate message. The current problem is that some tools configure logging to the console, so this extra message ends up both in both the log file and on the console. The tools themselves also write their errors to the console (via print, SystemExit, or traceback), so the error appears twice. Since I do not want to add an extra message to the console, but only want it in a log file (if the tool uses one), I remove the console log handler before logging the message. > When I tried to reach this understanding myself about the only thing I > figured out was that some of our command line utilities pass a string to > the exit() command whose output is out of band with respect to logging > resulting in a duplicate message (however before application of the > patch I don't see why logging would re-emit the message). Yes, that's exactly right. The output to console was fine, it should stay as it was before the patch. -- Petr? From pvoborni at redhat.com Tue Apr 17 14:45:59 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 17 Apr 2012 16:45:59 +0200 Subject: [Freeipa-devel] [PATCH] 121 User is notified that password needs to be reset in, forms-based login In-Reply-To: <4F8D351D.4050400@redhat.com> References: <4F8D351D.4050400@redhat.com> Message-ID: <4F8D8227.7080309@redhat.com> Updated patch attached. It's modified according to Rob's patch #1006-3 which uses 'X-rejection-reason' to notify expired password. 
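For illustration, the server side of the header-based approach this patch builds on might look roughly like the following as plain WSGI; the handler and helper names are hypothetical, and the thread later settles on X-IPA-Rejection-Reason as the final header name:

def form_login(environ, start_response):          # hypothetical handler name
    user, password = read_credentials(environ)    # hypothetical helper
    reason = check_password(user, password)       # None on success, otherwise
                                                  # e.g. 'password-expired'
    if reason is None:
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Login successful\n']
    start_response('401 Unauthorized',
                   [('Content-Type', 'text/html'),
                    ('X-IPA-Rejection-Reason', reason)])
    return ['<html><body>Login failed</body></html>\n']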
On 04/17/2012 11:17 AM, Petr Vobornik wrote: > This solution depends on Rob's patch #1006-2 > > Forms-based login procedure detects if 401 unauthorized message contains > 'Expired Password' message. If so it displays an error message that user > needs to reset his password. > > https://fedorahosted.org/freeipa/ticket/2608 > -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0121-1-User-is-notified-that-password-needs-to-be-reset-in-.patch Type: text/x-patch Size: 6569 bytes Desc: not available URL: From mkosek at redhat.com Tue Apr 17 14:51:53 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 16:51:53 +0200 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <4F8D7AA5.9040400@redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> <4F8C2000.9000400@redhat.com> <1334652850.12753.24.camel@balmora.brq.redhat.com> <4F8D7AA5.9040400@redhat.com> Message-ID: <1334674313.12753.37.camel@balmora.brq.redhat.com> On Tue, 2012-04-17 at 10:13 -0400, Rob Crittenden wrote: > Martin Kosek wrote: > > On Mon, 2012-04-16 at 09:34 -0400, Rob Crittenden wrote: > >> Rob Crittenden wrote: > >>> Petr Vobornik wrote: > >>>> On 04/13/2012 09:28 PM, Rob Crittenden wrote: > >>>>> When doing a forms-based login there is no notification that a password > >>>>> needs to be reset. We don't currently provide a facility for that but we > >>>>> should at least tell users what is going on. > >>>>> > >>>>> This patch adds an LDAP bind to test the password to see if it is > >>>>> expired and returns the string "Password Expired" along with the 401 if > >>>>> it is. I'm told this is all the UI will need to be able to identify this > >>>>> condition. > >>>>> > >>>>> rob > >>>>> > >>>> > >>>> UI can work with it. I have a patch ready. I'll send it when this will > >>>> be ACKed. > >>>> > >>>> Some notes: > >>>> > >>>> 1) The error templates and the 'Password Expired' message are hardcoded > >>>> to be English. It's fine at the moment. Will we internationalize them > >>>> sometime in future? If so, we will run into the same problem again. > >>> > >>> No plans to. I can update the patch with a comment specifically to not > >>> internationalize it if you'd like. > >>> > >>>> 2) conn.destroy_connection() won't be called if an exception occurs. Not > >>>> sure if it is a problem, GC and __del__ should take care of it. > >>> > >>> Hmm, this is due to a late stage change I made. I originally had this > >>> broken out into two blocks where the only thing done in the first > >>> try/except block was the connection, so the only exception that could > >>> happen was a failed connection. > >>> > >>> That isn't true any more. I'll update the patch. > >> > >> And here you go. > >> > >> rob > > > > I still think that deciding based on error message string is not right, > > we may want to have it internationalized and thus hit the same error. > > What about returning a custom HTTP header specifying the reason? > > > > Something like that: > > > > X-rejection-reason: password-expired > > OR > > X-rejection-reason: password-invalid > > OR > > X-rejection-reason: account-locked > > > > Web UI could customize next actions based on this header instead of > > parsing the error message. > > > > If you decide to rather stick with current solution and file my proposal > > as an enhancement ticket (or discard it entirely), then ACK. 
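On the cleanup point quoted above (conn.destroy_connection() being skipped when an exception occurs), the usual shape of the fix is to keep connecting separate from the work and guard the work with finally; a sketch with the details elided and the helper names made up:

conn = connect_and_bind(uri, user_dn, password)   # hypothetical setup; only a
                                                  # connection failure can
                                                  # escape from this line
try:
    expired = password_is_expired(conn, user_dn)  # hypothetical check that
                                                  # may raise
finally:
    conn.destroy_connection()                     # runs even when the check
                                                  # raises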
> > > > Martin > > > > I think this is a better solution. I've updated my patch. > > rob ACK. Worked for me nice, header was there. Now, the ball is on WebUI side. I would just rename the header from "X-rejection-reason" to "X-Rejection-Reason" (camel case used) in order to be consistent with other headers. Honza also suggested to add "IPA" prefix so that people know where this headers comes from. So the final header name would be: X-IPA-Rejection-Reason Martin From pspacek at redhat.com Tue Apr 17 15:49:36 2012 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 17 Apr 2012 17:49:36 +0200 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] Message-ID: <4F8D9110.3030703@redhat.com> Hello, there is IPA ticket #2554 "DNS zone serial number is not updated" [1], which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. I think we need to discuss next steps with this issue: Basic support for zone transfers is already done in bind-dyndb-ldap. We need second part - correct behaviour during SOA serial number update. Bind-dyndb-ldap plugin handles dynamic update in correct way (each update increment serial #), so biggest problem lays in IPA for now. Modifying SOA serial number can be pretty hard, because of DS replication. There are potential race conditions, if records are modified/added/deleted on two or more places, replication takes some time (because of network connection latency/problem) and zone transfer is started in meanwhile. Question is: How consistent we want to be? Can we accept these absolutely improbable race conditions? It will be probably corrected by next SOA update = by (any) next record change. It won't affect normal operations, only zone transfers. (IMHO we should consider DNS "nature": In general is not strictly consistent, because of massive caching at every level.) If it's acceptable, we can suppress explicit SOA serial number value in LDAP and derive actual value from latest modifyTimestamp value from all objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP update code and will save problems with manual modifications. Persistent search will be (probably) required for effective implementation. I think it's not a problem, because DNSSEC will require (with very high probability) persistent search for generating NSEC/NSEC3 records. [1] https://fedorahosted.org/freeipa/ticket/2554 [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 Please, post your opinions about DNS consistency strictness. Petr^2 Spacek From simo at redhat.com Tue Apr 17 16:13:31 2012 From: simo at redhat.com (Simo Sorce) Date: Tue, 17 Apr 2012 12:13:31 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8D9110.3030703@redhat.com> References: <4F8D9110.3030703@redhat.com> Message-ID: <1334679211.16658.200.camel@willson.li.ssimo.org> On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: > Hello, > > there is IPA ticket #2554 "DNS zone serial number is not updated" [1], > which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. > > I think we need to discuss next steps with this issue: > > Basic support for zone transfers is already done in bind-dyndb-ldap. We > need second part - correct behaviour during SOA serial number update. > > Bind-dyndb-ldap plugin handles dynamic update in correct way (each > update increment serial #), so biggest problem lays in IPA for now. > > Modifying SOA serial number can be pretty hard, because of DS > replication. 
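As background for the serial discussion that follows: secondaries compare SOA serials using 32-bit serial-number arithmetic (RFC 1982) and only start a zone transfer when the primary's serial compares newer, which is why a serial that "goes backwards" after replication is a real problem. A small illustrative sketch, not bind-dyndb-ldap code:

HALF = 2 ** 31
MODULUS = 2 ** 32

def serial_newer(a, b):
    # True if serial a is newer than serial b under RFC 1982 arithmetic.
    return a != b and ((a > b and a - b < HALF) or
                       (a < b and b - a > HALF))

def bump_serial(current):
    # Monotonic increment with 32-bit wrap-around; zero is skipped by
    # convention.
    nxt = (current + 1) % MODULUS
    return nxt or 1

# serial_newer(2012041701, 2012041700) -> True
# serial_newer(1, 4294967295)          -> True (wrapped around)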
There are potential race conditions, if records are > modified/added/deleted on two or more places, replication takes some > time (because of network connection latency/problem) and zone transfer > is started in meanwhile. > > Question is: How consistent we want to be? Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap and instead update it in a DS plugin. That's because a DS plugin is the only thing that can see entries coming in from multiple servers. If you update the SOA from bind-dyndb-ldap you can potentially set it back in time because last write win in DS. This will require a persistent sarch so bind-dyndb-ldap can be updated with the last SOA serial number, or bind-dyndb-ldap must not cache it and always try to fetch it from ldap. > Can we accept these > absolutely improbable race conditions? It will be probably corrected by > next SOA update = by (any) next record change. It won't affect normal > operations, only zone transfers. Yes and No, the problem is that if 2 servers update the SOA independently you may have the serial go backwards on replication. See above. > (IMHO we should consider DNS "nature": In general is not strictly > consistent, because of massive caching at every level.) True, but the serial is normally considered monotonically increasing. > If it's acceptable, we can suppress explicit SOA serial number value in > LDAP and derive actual value from latest modifyTimestamp value from all > objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP > update code and will save problems with manual modifications. It will cause a big search though. It also will not take in account when there are changes replicated from another replica that are "backdated" relative to the last modifyTimestamp. Also using modifyTimestamp would needlessly increment the SOA if there are changes to the entries that are not relevant to DNS (like admins changing ACIs, or other actions like that). > Persistent search will be (probably) required for effective implementation. > I think it's not a problem, because DNSSEC will require (with very high > probability) persistent search for generating NSEC/NSEC3 records. > > [1] https://fedorahosted.org/freeipa/ticket/2554 > > [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 > > > Please, post your opinions about DNS consistency strictness. I think we need to try to be more consistent than what we are now. There may always be minor races, but the current races are too big to pass on IMHO. Simo. -- Simo Sorce * Red Hat, Inc * New York From pvoborni at redhat.com Tue Apr 17 16:37:29 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 17 Apr 2012 18:37:29 +0200 Subject: [Freeipa-devel] [PATCH] 122 Added permission field to delegation Message-ID: <4F8D9C49.4050207@redhat.com> Permission field is missing in delegation so it can't be set/modified. It was added to delegation details facet and adder dialog. The field is using checkboxes instead of multivalued textbox because it can have only two effective values: 'read' and 'write'. https://fedorahosted.org/freeipa/ticket/2635 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pvoborni-0122-Added-permission-field-to-delegation.patch Type: text/x-patch Size: 1867 bytes Desc: not available URL: From jdennis at redhat.com Tue Apr 17 16:46:21 2012 From: jdennis at redhat.com (John Dennis) Date: Tue, 17 Apr 2012 12:46:21 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8D802E.6010201@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> <4F8D7A43.5060303@redhat.com> <4F8D802E.6010201@redhat.com> Message-ID: <4F8D9E5D.9000901@redhat.com> Thank you for the explanation Petr, it's very much appreciated. I do have a problem with this patch and I'm inclined to NAK it, but I'm open to discussion. Here's my thoughts, if I've made mistakes in my reasoning please point them out. The fundamental problem is many of our command line utilities do not do logging correctly. Fixing the logging is not terribly difficult but it raises concerns over invasive changes at the last minute. To address the problem we're going to introduce what can only be called a "hack" to compensate for the original deficiency. The hack is a bit obscure and convoluted (and I'm not sure entirely robust). It introduces enough complexity it's not obvious or easy to see what is going on. Code that obscures should be treated with skepticism and be justified by important needs. I'm also afraid what should really be a short term work-around will get ensconced in the code and never cleaned up, it will be duplicated, and used as an example of how things are supposed to work. So my question is, "Is the output of the command line utilities so broken that it justifies introducing this?" and "Is this any less invasive than simply fixing the messages in the utilities cleanly and not introducing a hack?" -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From pviktori at redhat.com Tue Apr 17 17:13:24 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 17 Apr 2012 19:13:24 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8D9E5D.9000901@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> <4F8D7A43.5060303@redhat.com> <4F8D802E.6010201@redhat.com> <4F8D9E5D.9000901@redhat.com> Message-ID: <4F8DA4B4.1010409@redhat.com> On 04/17/2012 06:46 PM, John Dennis wrote: > Thank you for the explanation Petr, it's very much appreciated. > > I do have a problem with this patch and I'm inclined to NAK it, but I'm > open to discussion. Here's my thoughts, if I've made mistakes in my > reasoning please point them out. > > The fundamental problem is many of our command line utilities do not do > logging correctly. > > Fixing the logging is not terribly difficult but it raises concerns over > invasive changes at the last minute. > > To address the problem we're going to introduce what can only be called > a "hack" to compensate for the original deficiency. The hack is a bit > obscure and convoluted (and I'm not sure entirely robust). It introduces > enough complexity it's not obvious or easy to see what is going on. Code > that obscures should be treated with skepticism and be justified by > important needs. I'm also afraid what should really be a short term > work-around will get ensconced in the code and never cleaned up, it will > be duplicated, and used as an example of how things are supposed to work. > > So my question is, "Is the output of the command line utilities so > broken that it justifies introducing this?" and "Is this any less > invasive than simply fixing the messages in the utilities cleanly and > not introducing a hack?" > > Yes, it's a hack, because it needs to be non-invasive. It does that by not modifying the scripts themselves, but just wrapping them in the context handler. So no functionality is broken, there are just problems with extra messages printed or not printed. I think that's the least invasive thing to do. So the answer to your second question is yes. I'm not experienced enough to answer the first one. I opened https://fedorahosted.org/freeipa/ticket/2652 to track the larger issue that needs solving: removing the code duplication in these tools. This includes a common way to configure logging and report errors. -- Petr? From mkosek at redhat.com Tue Apr 17 18:45:09 2012 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 17 Apr 2012 20:45:09 +0200 Subject: [Freeipa-devel] [PATCH] 252 Do not fail migration because of duplicate groups Message-ID: <1334688309.2460.0.camel@priserak> When 2 groups in a remote LDAP server share the same GID number, the migration may fail entirely with incomprehensible message. 
This should not be taken as unrecoverable error - GID number check is just a sanity check, a warning is enough. This patch also makes sure that GID check warnings include a user name to make an investigation easier. https://fedorahosted.org/freeipa/ticket/2644 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-252-do-not-fail-migration-because-of-duplicate-groups.patch Type: text/x-patch Size: 1902 bytes Desc: not available URL: From rcritten at redhat.com Tue Apr 17 19:03:31 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 15:03:31 -0400 Subject: [Freeipa-devel] [PATCH] 1006 detect expired passwords in forms login In-Reply-To: <1334674313.12753.37.camel@balmora.brq.redhat.com> References: <4F887E4F.6060805@redhat.com> <4F8BE018.4070108@redhat.com> <4F8C1BB2.1040604@redhat.com> <4F8C2000.9000400@redhat.com> <1334652850.12753.24.camel@balmora.brq.redhat.com> <4F8D7AA5.9040400@redhat.com> <1334674313.12753.37.camel@balmora.brq.redhat.com> Message-ID: <4F8DBE83.6060709@redhat.com> Martin Kosek wrote: > On Tue, 2012-04-17 at 10:13 -0400, Rob Crittenden wrote: >> Martin Kosek wrote: >>> On Mon, 2012-04-16 at 09:34 -0400, Rob Crittenden wrote: >>>> Rob Crittenden wrote: >>>>> Petr Vobornik wrote: >>>>>> On 04/13/2012 09:28 PM, Rob Crittenden wrote: >>>>>>> When doing a forms-based login there is no notification that a password >>>>>>> needs to be reset. We don't currently provide a facility for that but we >>>>>>> should at least tell users what is going on. >>>>>>> >>>>>>> This patch adds an LDAP bind to test the password to see if it is >>>>>>> expired and returns the string "Password Expired" along with the 401 if >>>>>>> it is. I'm told this is all the UI will need to be able to identify this >>>>>>> condition. >>>>>>> >>>>>>> rob >>>>>>> >>>>>> >>>>>> UI can work with it. I have a patch ready. I'll send it when this will >>>>>> be ACKed. >>>>>> >>>>>> Some notes: >>>>>> >>>>>> 1) The error templates and the 'Password Expired' message are hardcoded >>>>>> to be English. It's fine at the moment. Will we internationalize them >>>>>> sometime in future? If so, we will run into the same problem again. >>>>> >>>>> No plans to. I can update the patch with a comment specifically to not >>>>> internationalize it if you'd like. >>>>> >>>>>> 2) conn.destroy_connection() won't be called if an exception occurs. Not >>>>>> sure if it is a problem, GC and __del__ should take care of it. >>>>> >>>>> Hmm, this is due to a late stage change I made. I originally had this >>>>> broken out into two blocks where the only thing done in the first >>>>> try/except block was the connection, so the only exception that could >>>>> happen was a failed connection. >>>>> >>>>> That isn't true any more. I'll update the patch. >>>> >>>> And here you go. >>>> >>>> rob >>> >>> I still think that deciding based on error message string is not right, >>> we may want to have it internationalized and thus hit the same error. >>> What about returning a custom HTTP header specifying the reason? >>> >>> Something like that: >>> >>> X-rejection-reason: password-expired >>> OR >>> X-rejection-reason: password-invalid >>> OR >>> X-rejection-reason: account-locked >>> >>> Web UI could customize next actions based on this header instead of >>> parsing the error message. >>> >>> If you decide to rather stick with current solution and file my proposal >>> as an enhancement ticket (or discard it entirely), then ACK. 
>>> >>> Martin >>> >> >> I think this is a better solution. I've updated my patch. >> >> rob > > ACK. Worked for me nice, header was there. Now, the ball is on WebUI > side. > > I would just rename the header from "X-rejection-reason" to > "X-Rejection-Reason" (camel case used) in order to be consistent with > other headers. > > Honza also suggested to add "IPA" prefix so that people know where this > headers comes from. So the final header name would be: > X-IPA-Rejection-Reason > > Martin > Good suggestions. Applied both and pushed to ipa-2-2 and master rob From rcritten at redhat.com Tue Apr 17 19:04:06 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 15:04:06 -0400 Subject: [Freeipa-devel] [PATCH] 121 User is notified that password needs to be reset in, forms-based login In-Reply-To: <4F8D8227.7080309@redhat.com> References: <4F8D351D.4050400@redhat.com> <4F8D8227.7080309@redhat.com> Message-ID: <4F8DBEA6.9010405@redhat.com> Petr Vobornik wrote: > Updated patch attached. It's modified according to Rob's patch #1006-3 > which uses 'X-rejection-reason' to notify expired password. ACK In patch 1006 it was suggested to use X-IPA-Rejection-Reason instead. Updated this patch and pushed to master and ipa-2-2 rob > > On 04/17/2012 11:17 AM, Petr Vobornik wrote: >> This solution depends on Rob's patch #1006-2 >> >> Forms-based login procedure detects if 401 unauthorized message contains >> 'Expired Password' message. If so it displays an error message that user >> needs to reset his password. >> >> https://fedorahosted.org/freeipa/ticket/2608 >> >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel From rcritten at redhat.com Tue Apr 17 19:36:52 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 15:36:52 -0400 Subject: [Freeipa-devel] [PATCH 73] don't append basedn to container if it is included In-Reply-To: <4F8CA328.4080609@redhat.com> References: <4F8CA328.4080609@redhat.com> Message-ID: <4F8DC654.3060800@redhat.com> ACK, pushed to master and ipa-2-2 rob From simo at redhat.com Tue Apr 17 19:43:46 2012 From: simo at redhat.com (Simo Sorce) Date: Tue, 17 Apr 2012 15:43:46 -0400 Subject: [Freeipa-devel] [PATCH] Pushed one-liner to silence coverity Message-ID: <1334691826.16658.208.camel@willson.li.ssimo.org> http://git.fedorahosted.org/git/?p=freeipa.git;a=commit;h=adf16a9b1c52ea8ee1e9989b99ab7da32adddf38 Was ticket #2634 Simo. -- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Tue Apr 17 20:34:15 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 16:34:15 -0400 Subject: [Freeipa-devel] [PATCH] 248 Raise proper exception when LDAP limits are exceeded In-Reply-To: <1334649886.12753.18.camel@balmora.brq.redhat.com> References: <1334048262.7045.19.camel@balmora.brq.redhat.com> <4F86D4B9.7090509@redhat.com> <4F86D833.2010501@redhat.com> <4F8C5C2D.5070909@redhat.com> <1334649886.12753.18.camel@balmora.brq.redhat.com> Message-ID: <4F8DD3C7.3060201@redhat.com> Martin Kosek wrote: > On Mon, 2012-04-16 at 13:51 -0400, Rob Crittenden wrote: >> Rob Crittenden wrote: >>> Jan Cholasta wrote: >>>> On 10.4.2012 10:57, Martin Kosek wrote: >>>>> Few test hints are attached to the ticket. >>>>> >>>>> --- >>>>> >>>>> ldap2 plugin returns NotFound error for find_entries/get_entry >>>>> queries when the server did not manage to return an entry >>>>> due to time limits. 
This may be confusing for user when the >>>>> entry he searches actually exists. >>>>> >>>>> This patch fixes the behavior in ldap2 plugin to return >>>>> LimitsExceeded exception instead. This way, user would know >>>>> that his time/size limits are set too low and can amend them to >>>>> get correct results. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/2606 >>>>> >>>> >>>> ACK. >>>> >>>> Honza >>>> >>> >>> Before pushing I'd like to look at this more. truncated is supposed to >>> indicate a limits problem. I want to see if the caller should be >>> responsible for returning a limits error instead. >>> >>> rob >> >> This is what I had in mind. >> >> diff --git a/ipaserver/plugins/ldap2.py b/ipaserver/plugins/ldap2.py >> index 61341b0..447e738 100644 >> --- a/ipaserver/plugins/ldap2.py >> +++ b/ipaserver/plugins/ldap2.py >> @@ -754,7 +754,7 @@ class ldap2(CrudBackend, Encoder): >> except _ldap.LDAPError, e: >> _handle_errors(e) >> >> - if not res: >> + if not res and not truncated: >> raise errors.NotFound(reason='no such entry') >> >> if attrs_list and ('memberindirect' in attrs_list or '*' in >> attrs_list) >> : >> @@ -801,7 +801,10 @@ class ldap2(CrudBackend, Encoder): >> if len(entries)> 1: >> raise errors.SingleMatchExpected(found=len(entries)) >> else: >> - return entries[0] >> + if truncated: >> + raise errors.LimitsExceeded() >> + else: >> + return entries[0] >> >> def get_entry(self, dn, attrs_list=None, time_limit=None, >> size_limit=None, normalize=True): >> @@ -811,10 +814,13 @@ class ldap2(CrudBackend, Encoder): >> Keyword arguments: >> attrs_list - list of attributes to return, all if None >> (default None) >> """ >> - return self.find_entries( >> + (entry, truncated) = self.find_entries( >> None, attrs_list, dn, self.SCOPE_BASE, time_limit=time_limit, >> size_limit=size_limit, normalize=normalize >> - )[0][0] >> + ) >> + if truncated: >> + raise errors.LimitsExceeded() >> + return entry[0] >> >> config_defaults = {'ipasearchtimelimit': [2], >> 'ipasearchrecordslimit': [0]} >> def get_ipa_config(self, attrs_list=None): >> > > Thanks for the patch. I had similar patch planned in my mind. > > We just need to be more careful, this patch will change an assumption > that ldap2.find_entries will always either raise NotFound error or > return at least one result. > > I checked all ldap2.find_entries calls and did few fixes, it should be > OK now. No regression found in unit tests. The updated patch is > attached. > > Martin ACK, pushed to master and ipa-2-2 rob From rcritten at redhat.com Tue Apr 17 21:31:16 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 17 Apr 2012 17:31:16 -0400 Subject: [Freeipa-devel] [PATCH] 252 Do not fail migration because of duplicate groups In-Reply-To: <1334688309.2460.0.camel@priserak> References: <1334688309.2460.0.camel@priserak> Message-ID: <4F8DE124.7000605@redhat.com> Martin Kosek wrote: > When 2 groups in a remote LDAP server share the same GID number, > the migration may fail entirely with incomprehensible message. This > should not be taken as unrecoverable error - GID number check is > just a sanity check, a warning is enough. This patch also makes > sure that GID check warnings include a user name to make > an investigation easier. 
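The essential shape of the change described above is to demote the remote GID lookup from a hard failure to a per-user warning; a sketch with made-up names, not the actual migration plugin code:

import logging

log = logging.getLogger('ipa.migration')     # stands in for the plugin logger

def warn_if_gid_unverified(lookup_group_by_gid, login, gid):
    # The GID lookup is only a sanity check: any failure (group missing,
    # duplicate GID numbers on the remote server, ...) becomes a warning
    # that names the migrated user instead of aborting migrate-ds.
    try:
        lookup_group_by_gid(gid)             # hypothetical remote LDAP lookup
    except Exception, e:
        log.warning('%s: GID number %s could not be verified on the remote '
                    'server: %s', login, gid, e)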
> > https://fedorahosted.org/freeipa/ticket/2644 ACK, pushed to master and ipa-2-2 rob From dpal at redhat.com Tue Apr 17 22:38:38 2012 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 17 Apr 2012 18:38:38 -0400 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8DA4B4.1010409@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> <4F8D7A43.5060303@redhat.com> <4F8D802E.6010201@redhat.com> <4F8D9E5D.9000901@redhat.com> <4F8DA4B4.1010409@redhat.com> Message-ID: <4F8DF0EE.9020205@redhat.com> On 04/17/2012 01:13 PM, Petr Viktorin wrote: > On 04/17/2012 06:46 PM, John Dennis wrote: >> Thank you for the explanation Petr, it's very much appreciated. >> >> I do have a problem with this patch and I'm inclined to NAK it, but I'm >> open to discussion. Here's my thoughts, if I've made mistakes in my >> reasoning please point them out. >> >> The fundamental problem is many of our command line utilities do not do >> logging correctly. >> >> Fixing the logging is not terribly difficult but it raises concerns over >> invasive changes at the last minute. >> >> To address the problem we're going to introduce what can only be called >> a "hack" to compensate for the original deficiency. The hack is a bit >> obscure and convoluted (and I'm not sure entirely robust). It introduces >> enough complexity it's not obvious or easy to see what is going on. Code >> that obscures should be treated with skepticism and be justified by >> important needs. I'm also afraid what should really be a short term >> work-around will get ensconced in the code and never cleaned up, it will >> be duplicated, and used as an example of how things are supposed to >> work. >> >> So my question is, "Is the output of the command line utilities so >> broken that it justifies introducing this?" and "Is this any less >> invasive than simply fixing the messages in the utilities cleanly and >> not introducing a hack?" >> >> > > Yes, it's a hack, because it needs to be non-invasive. It does that by > not modifying the scripts themselves, but just wrapping them in the > context handler. So no functionality is broken, there are just > problems with extra messages printed or not printed. I think that's > the least invasive thing to do. So the answer to your second question > is yes. I'm not experienced enough to answer the first one. > > I opened https://fedorahosted.org/freeipa/ticket/2652 to track the > larger issue that needs solving: removing the code duplication in > these tools. This includes a common way to configure logging and > report errors. > > > Let us do the hack and pick the cleanup task in 3.0 so that we have things done correctly for future. If this task is not enough let us open other tasks to make sure that we track correctly the need to the right fix. We can even revert the change in 3.0 and go a different path. -- Thank you, Dmitri Pal Sr. 
Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From dpal at redhat.com Tue Apr 17 22:42:36 2012 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 17 Apr 2012 18:42:36 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334679211.16658.200.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> Message-ID: <4F8DF1DC.1010605@redhat.com> On 04/17/2012 12:13 PM, Simo Sorce wrote: > On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: >> Hello, >> >> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], >> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. >> >> I think we need to discuss next steps with this issue: >> >> Basic support for zone transfers is already done in bind-dyndb-ldap. We >> need second part - correct behaviour during SOA serial number update. >> >> Bind-dyndb-ldap plugin handles dynamic update in correct way (each >> update increment serial #), so biggest problem lays in IPA for now. >> >> Modifying SOA serial number can be pretty hard, because of DS >> replication. There are potential race conditions, if records are >> modified/added/deleted on two or more places, replication takes some >> time (because of network connection latency/problem) and zone transfer >> is started in meanwhile. >> >> Question is: How consistent we want to be? > Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap > and instead update it in a DS plugin. That's because a DS plugin is the > only thing that can see entries coming in from multiple servers. > If you update the SOA from bind-dyndb-ldap you can potentially set it > back in time because last write win in DS. > > This will require a persistent sarch so bind-dyndb-ldap can be updated > with the last SOA serial number, or bind-dyndb-ldap must not cache it > and always try to fetch it from ldap. > >> Can we accept these >> absolutely improbable race conditions? It will be probably corrected by >> next SOA update = by (any) next record change. It won't affect normal >> operations, only zone transfers. > Yes and No, the problem is that if 2 servers update the SOA > independently you may have the serial go backwards on replication. See > above. > >> (IMHO we should consider DNS "nature": In general is not strictly >> consistent, because of massive caching at every level.) > True, but the serial is normally considered monotonically increasing. > >> If it's acceptable, we can suppress explicit SOA serial number value in >> LDAP and derive actual value from latest modifyTimestamp value from all >> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP >> update code and will save problems with manual modifications. > It will cause a big search though. It also will not take in account when > there are changes replicated from another replica that are "backdated" > relative to the last modifyTimestamp. > Also using modifyTimestamp would needlessly increment the SOA if there > are changes to the entries that are not relevant to DNS (like admins > changing ACIs, or other actions like that). > >> Persistent search will be (probably) required for effective implementation. >> I think it's not a problem, because DNSSEC will require (with very high >> probability) persistent search for generating NSEC/NSEC3 records. 
>> >> [1] https://fedorahosted.org/freeipa/ticket/2554 >> >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 >> >> >> Please, post your opinions about DNS consistency strictness. > I think we need to try to be more consistent than what we are now. There > may always be minor races, but the current races are too big to pass on > IMHO. > > Simo. > Are you saying that the update should be moved to the DS replication plugin? Do I get it right? -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jcholast at redhat.com Wed Apr 18 07:00:28 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 18 Apr 2012 09:00:28 +0200 Subject: [Freeipa-devel] [PATCH] 75 Fix internal error when renaming user with an empty string In-Reply-To: <1334232763.23788.34.camel@balmora.brq.redhat.com> References: <4F86BDAA.9070102@redhat.com> <1334232763.23788.34.camel@balmora.brq.redhat.com> Message-ID: <4F8E668C.8020306@redhat.com> On 12.4.2012 14:12, Martin Kosek wrote: > On Thu, 2012-04-12 at 13:34 +0200, Jan Cholasta wrote: >> https://fedorahosted.org/freeipa/ticket/2629 >> >> Honza > > ACK. I will wait with push until the ticket is triaged. > > Martin > Push please :-) Honza -- Jan Cholasta From mkosek at redhat.com Wed Apr 18 07:05:33 2012 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 18 Apr 2012 09:05:33 +0200 Subject: [Freeipa-devel] [PATCH] 75 Fix internal error when renaming user with an empty string In-Reply-To: <4F8E668C.8020306@redhat.com> References: <4F86BDAA.9070102@redhat.com> <1334232763.23788.34.camel@balmora.brq.redhat.com> <4F8E668C.8020306@redhat.com> Message-ID: <1334732733.23954.4.camel@balmora.brq.redhat.com> On Wed, 2012-04-18 at 09:00 +0200, Jan Cholasta wrote: > On 12.4.2012 14:12, Martin Kosek wrote: > > On Thu, 2012-04-12 at 13:34 +0200, Jan Cholasta wrote: > >> https://fedorahosted.org/freeipa/ticket/2629 > >> > >> Honza > > > > ACK. I will wait with push until the ticket is triaged. > > > > Martin > > > > Push please :-) > > Honza > Pushed to master, ipa-2-2. Martin From pviktori at redhat.com Wed Apr 18 10:31:18 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 18 Apr 2012 12:31:18 +0200 Subject: [Freeipa-devel] [PATCH] 0038 Do not use extra command options in the automount plugin Message-ID: <4F8E97F6.2070703@redhat.com> Part of the work for https://fedorahosted.org/freeipa/ticket/2509 Validation doesn't complain about extra command options, which means e.g. that misspelling an option's name will cause it to be ignored. This allows errors in custom RPC clients (such as our test suite) to go unnoticed. Since I expect the review to take some time, and the master branch is a moving target, I'll post this in several patches. This patch is for the automount plugin; a patch for ACI and a patch to wrap up and enable the validation will follow later. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0038-Do-not-use-extra-command-options-in-the-automount-pl.patch Type: text/x-patch Size: 5277 bytes Desc: not available URL: From pviktori at redhat.com Wed Apr 18 11:33:05 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 18 Apr 2012 13:33:05 +0200 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F8C81DA.5030800@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> Message-ID: <4F8EA671.1000006@redhat.com> On 04/16/2012 10:32 PM, John Dennis wrote: > On 04/12/2012 09:26 AM, Petr Viktorin wrote: >> On 03/30/2012 03:45 AM, John Dennis wrote: >>> Translatable strings have certain requirements for proper translation >>> and run time behaviour. We should routinely validate those strings. A >>> recent checkin to install/po/test_i18n.py makes it possible to validate >>> the contents of our current pot file by searching for problematic >>> strings. >>> >>> However, we only occasionally generate a new pot file, far less >>> frequently than translatable strings change in the source base. Just >>> like other checkin's to the source which are tested for correctness we >>> should also validate new or modified translation strings when they are >>> introduced and not accumulate problems to fix at the last minute. This >>> would also raise the awareness of developers as to the requirements for >>> proper string translation. >>> >>> The top level Makefile should be enhanced to create a temporary pot >>> files from the current sources and validate it. We need to use a >>> temporary pot file because we do not want to modify the pot file under >>> source code control and exported to Transifex. >>> >> >> NACK >> >> install/po/Makefile is not created early enough when running `make rpms` >> from a clean checkout. >> >> # git clean -fx >> ... >> # make rpms >> rm -rf /rpmbuild >> mkdir -p /rpmbuild/BUILD >> mkdir -p /rpmbuild/RPMS >> mkdir -p /rpmbuild/SOURCES >> mkdir -p /rpmbuild/SPECS >> mkdir -p /rpmbuild/SRPMS >> mkdir -p dist/rpms >> mkdir -p dist/srpms >> if [ ! -e RELEASE ]; then echo 0> RELEASE; fi >> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >> freeipa.spec.in> freeipa.spec >> sed -e s/__VERSION__/2.99.0GITde16a82/ version.m4.in \ >> > version.m4 >> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/setup.py.in \ >> > ipapython/setup.py >> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/version.py.in \ >> > ipapython/version.py >> perl -pi -e "s:__NUM_VERSION__:2990:" ipapython/version.py >> perl -pi -e "s:__API_VERSION__:2.34:" ipapython/version.py >> sed -e s/__VERSION__/2.99.0GITde16a82/ daemons/ipa-version.h.in \ >> > daemons/ipa-version.h >> perl -pi -e "s:__NUM_VERSION__:2990:" daemons/ipa-version.h >> perl -pi -e "s:__DATA_VERSION__:20100614120000:" daemons/ipa-version.h >> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >> ipa-client/ipa-client.spec.in> ipa-client/ipa-client.spec >> sed -e s/__VERSION__/2.99.0GITde16a82/ ipa-client/version.m4.in \ >> > ipa-client/version.m4 >> if [ "redhat" != "" ]; then \ >> sed -e s/SUPPORTED_PLATFORM/redhat/ ipapython/services.py.in \ >> > ipapython/services.py; \ >> fi >> if [ "" != "yes" ]; then \ >> ./makeapi --validate; \ >> fi >> make -C install/po validate-src-strings >> make[1]: Entering directory `/home/pviktori/freeipa/install/po' >> make[1]: *** No rule to make target `validate-src-strings'. Stop. 
>> make[1]: Leaving directory `/home/pviktori/freeipa/install/po' >> make: *** [validate-src-strings] Error 2 > > Updated patch attached. > > The fundamental problem is that we were trying to run code before > configure had ever been run. We have dependencies on the output and side > effects of configure. Therefore the solution is to add the > bootstrap-autogen target as a dependency. > > FWIW, I tried an approach that did not require having bootstrap-autogen > run first by having the validation occur as part of the normal "make > all". But that had some undesirable properties, first we really only > want to run validation for developers, not during a normal build, > secondly the file generation requires a git repo (see below). > > But there is another reason to require running bootstrap-autogen prior > to any validation (e.g. makeapi, make-lint, etc.) Those validation > utilities need access to generated source files and those generated > source files are supposed to be generated by the configure step. Right > now we're generating them as part of updating version information. I > will post a long email describing the problem on the devel list. So > we've created a situation we we must run configure early on, even as > part of "making the distribution" because part of "making the > distribution" is validating the distribution and that requires the full > content of the distribution be available. > > Also while trying to determine if the i18n validation step executed > correctly I realized the i18n validation only emitted a message if an > error was detected. That made it impossible to distinguish between empty > input (a problem when not running in a development tree) and successful > validation. Therefore I also updated i18n.py to output counts of > messages checked which also caused me to fix some validations that were > missing on plural forms. > > This still doesn't solve the problem: Make doesn't guarantee the order of dependencies, so with the rule: > lint: bootstrap-autogen validate-src-strings > ./make-lint $(LINT_OPTIONS) the validate-src-strings can fire before bootstrap-autogen: $ make lint if [ ! -e RELEASE ]; then echo 0 > RELEASE; fi /usr/bin/make -C install/po validate-src-strings make[1]: Entering directory `/home/pviktori/freeipa/install/po' make[1]: *** No rule to make target `validate-src-strings'. Stop. make[1]: Leaving directory `/home/pviktori/freeipa/install/po' make: *** [validate-src-strings] Error 2 make: *** Waiting for unfinished jobs.... When I move the bootstrap-autogen dependency to validate-src-strings, it works. (Quite a lot of needed for a lint now, but discussion about that is elsewhere.) Now that there are warnings, is pedantic mode necessary? The validate_file function still contains some `return 1` lines, you should update them all. Or just raise exceptions in these cases. $ tests/i18n.py --validate-pot nonexisting_file file does not exist "nonexisting_file" Traceback (most recent call last): File "tests/i18n.py", line 817, in sys.exit(main()) File "tests/i18n.py", line 761, in main n_entries, n_msgids, n_msgstrs, n_warnings, n_errors = validate_file(f, validation_mode) TypeError: 'int' object is not iterable I'd also return a namedtuple instead of a plain tuple, and have the caller access the items by name. That way, if validate_file decides to return more items in the future, we won't have to rewrite all calls to it. -- Petr? 
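The namedtuple suggestion at the end of the review might look roughly like this (illustrative, not the actual tests/i18n.py code); missing files would then raise instead of returning a bare integer:

from collections import namedtuple

ValidationResult = namedtuple(
    'ValidationResult', 'n_entries n_msgids n_msgstrs n_warnings n_errors')

def validate_file(path, validation_mode):
    # ... open the pot/po file, run the checks, count what was seen ...
    # (the counts below are placeholders)
    return ValidationResult(n_entries=10, n_msgids=10, n_msgstrs=8,
                            n_warnings=1, n_errors=0)

result = validate_file('install/po/ipa.pot', 'pot')
if result.n_errors:          # callers read fields by name, so adding a field
    raise SystemExit(1)      # later does not break existing call sites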
From pviktori at redhat.com Wed Apr 18 11:51:27 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 18 Apr 2012 13:51:27 +0200 Subject: [Freeipa-devel] Generating files from .in template files In-Reply-To: <4F8C835A.2070107@redhat.com> References: <4F8C835A.2070107@redhat.com> Message-ID: <4F8EAABF.4000600@redhat.com> On 04/16/2012 10:38 PM, John Dennis wrote: > Traditionally when using autotools it's job of the configure script to > generate files from .in template files. The reason it's done this way is > because configure has to probe the system it's running on to discover > the correct substitutions to be used in the .in templates. > > In our top level Makefile we generate several files from .in templates > under the "version-update" target. Most of this logic applies to > inserting version information into specific files. In the context of > release specific data this is probably a valid methodology although > other projects have incorporated the version munging into configure. For > version strings I don't think there is necessarily a correct approach > (e.g. part of configure or external to configure). Technically updating > the version strings is part of "make dist" step, i.e. the act of > creating the distribution, which is independent of configure. > > However, any template file that depends on the target system where > configure is being run should be created by configure. Why? Because > that's the design of autotools and how people expect things will work in > an autotools environment. Another reason is you might mistakenly use a > generated file on a target it was not configured for. When generated > files are not created by configure the likely hood of that mistake > increases because the distinction between a "distribution" file and a > file designed to be generated on the target is lost (in this case the > .in template is the distribution file). > > Currently we create ipapython/services.py from ipapython/services.py.in > in the top level Makefile, not from configure. I believe that's a > mistake, ipapython/services.py is a file that depends on the target it's > being built on and should be produced by configure. Also, it's > generation is part of the 'version-update' target in the top level > Makefile, but generating ipapython/services.py is completely divorced > from updating version information for the distribution snapshot, it > logically doesn't belong there. > > FWIW, I discovered this issue when trying to fix the build for the i18n > validation which is now run with the lint check (because it too has > dependencies on files generated by configure). > > Some of our test/validation utilities (e.g. makeapi, make-lint, etc) > locate all our source files. But if our set of source files is not > complete until we generate them (or not syntactically complete until > processed) then those validation steps will be incomplete or erroneous. > > We shouldn't run the validation code until AFTER we've generated all our > source files from their respective template files. Which means we > shouldn't run them until after configure has executed, which means > making bootstrap-autogen a dependency of the validation targets. > > The only reason it's worked so far is because everything is dependent on > updating the version information which as a side-effect is generating > other source code files. > > The problem manifests itself in other ways as well. Here is a more > general explanation: > > Many of our tools depend on knowing our set of source files. 
Some of our > tools locate them by searching, some have a hardcoded enumerated list, > and some derive the list by interrogating git. > > Anything that depends on git fails after a distribution tarball is > created because the git repo does not exist in the distribution. > Anything that depends on search is probably O.K. when building in an > antiseptic environment known to be clean (i.e. building an RPM) but is > likely to fail in a development environment that picks up cruft along > the way. Hardcoded lists fail when developers forget to update them. > > I discovered this problem when I tried to move the i18n validation > checks into the normal build process. It fails because it uses git to > generate the list of files to validate. Perhaps a better approach would > be to generate the file list using git and include the file list in the > distribution. > > Anyway, all of this is to say we need to be careful how and when we > generate files as well as our dependencies on those generated files as > well as what's in the distribution, where the build is occurring, and > how, when and what we choose to validate. > I'll add my own gripe: we only run autogen.sh in a directory (install, ipa-client, daemons) if there is no Makefile in that directory. This means that if e.g. install/Makefile.in changes, make doesn't rebuild install/Makefile. Is that standard autoconf practice? -- Petr? From simo at redhat.com Wed Apr 18 13:20:23 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 09:20:23 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8DF1DC.1010605@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8DF1DC.1010605@redhat.com> Message-ID: <1334755223.16658.228.camel@willson.li.ssimo.org> On Tue, 2012-04-17 at 18:42 -0400, Dmitri Pal wrote: > On 04/17/2012 12:13 PM, Simo Sorce wrote: > > On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: > >> Hello, > >> > >> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], > >> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. > >> > >> I think we need to discuss next steps with this issue: > >> > >> Basic support for zone transfers is already done in bind-dyndb-ldap. We > >> need second part - correct behaviour during SOA serial number update. > >> > >> Bind-dyndb-ldap plugin handles dynamic update in correct way (each > >> update increment serial #), so biggest problem lays in IPA for now. > >> > >> Modifying SOA serial number can be pretty hard, because of DS > >> replication. There are potential race conditions, if records are > >> modified/added/deleted on two or more places, replication takes some > >> time (because of network connection latency/problem) and zone transfer > >> is started in meanwhile. > >> > >> Question is: How consistent we want to be? > > Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap > > and instead update it in a DS plugin. That's because a DS plugin is the > > only thing that can see entries coming in from multiple servers. > > If you update the SOA from bind-dyndb-ldap you can potentially set it > > back in time because last write win in DS. > > > > This will require a persistent sarch so bind-dyndb-ldap can be updated > > with the last SOA serial number, or bind-dyndb-ldap must not cache it > > and always try to fetch it from ldap. > > > >> Can we accept these > >> absolutely improbable race conditions? 
It will be probably corrected by > >> next SOA update = by (any) next record change. It won't affect normal > >> operations, only zone transfers. > > Yes and No, the problem is that if 2 servers update the SOA > > independently you may have the serial go backwards on replication. See > > above. > > > >> (IMHO we should consider DNS "nature": In general is not strictly > >> consistent, because of massive caching at every level.) > > True, but the serial is normally considered monotonically increasing. > > > >> If it's acceptable, we can suppress explicit SOA serial number value in > >> LDAP and derive actual value from latest modifyTimestamp value from all > >> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP > >> update code and will save problems with manual modifications. > > It will cause a big search though. It also will not take in account when > > there are changes replicated from another replica that are "backdated" > > relative to the last modifyTimestamp. > > Also using modifyTimestamp would needlessly increment the SOA if there > > are changes to the entries that are not relevant to DNS (like admins > > changing ACIs, or other actions like that). > > > >> Persistent search will be (probably) required for effective implementation. > >> I think it's not a problem, because DNSSEC will require (with very high > >> probability) persistent search for generating NSEC/NSEC3 records. > >> > >> [1] https://fedorahosted.org/freeipa/ticket/2554 > >> > >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 > >> > >> > >> Please, post your opinions about DNS consistency strictness. > > I think we need to try to be more consistent than what we are now. There > > may always be minor races, but the current races are too big to pass on > > IMHO. > > > > Simo. > > > > Are you saying that the update should be moved to the DS replication plugin? > Do I get it right? not the DS replication plugin per se. But probably the best way to handle it is 'a' DS plugin. Simo. -- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Wed Apr 18 13:26:37 2012 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 18 Apr 2012 09:26:37 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334755223.16658.228.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8DF1DC.1010605@redhat.com> <1334755223.16658.228.camel@willson.li.ssimo.org> Message-ID: <4F8EC10D.3080509@redhat.com> On 04/18/2012 09:20 AM, Simo Sorce wrote: > On Tue, 2012-04-17 at 18:42 -0400, Dmitri Pal wrote: >> On 04/17/2012 12:13 PM, Simo Sorce wrote: >>> On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: >>>> Hello, >>>> >>>> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], >>>> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. >>>> >>>> I think we need to discuss next steps with this issue: >>>> >>>> Basic support for zone transfers is already done in bind-dyndb-ldap. We >>>> need second part - correct behaviour during SOA serial number update. >>>> >>>> Bind-dyndb-ldap plugin handles dynamic update in correct way (each >>>> update increment serial #), so biggest problem lays in IPA for now. >>>> >>>> Modifying SOA serial number can be pretty hard, because of DS >>>> replication. 
There are potential race conditions, if records are >>>> modified/added/deleted on two or more places, replication takes some >>>> time (because of network connection latency/problem) and zone transfer >>>> is started in meanwhile. >>>> >>>> Question is: How consistent we want to be? >>> Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap >>> and instead update it in a DS plugin. That's because a DS plugin is the >>> only thing that can see entries coming in from multiple servers. >>> If you update the SOA from bind-dyndb-ldap you can potentially set it >>> back in time because last write win in DS. >>> >>> This will require a persistent sarch so bind-dyndb-ldap can be updated >>> with the last SOA serial number, or bind-dyndb-ldap must not cache it >>> and always try to fetch it from ldap. >>> >>>> Can we accept these >>>> absolutely improbable race conditions? It will be probably corrected by >>>> next SOA update = by (any) next record change. It won't affect normal >>>> operations, only zone transfers. >>> Yes and No, the problem is that if 2 servers update the SOA >>> independently you may have the serial go backwards on replication. See >>> above. >>> >>>> (IMHO we should consider DNS "nature": In general is not strictly >>>> consistent, because of massive caching at every level.) >>> True, but the serial is normally considered monotonically increasing. >>> >>>> If it's acceptable, we can suppress explicit SOA serial number value in >>>> LDAP and derive actual value from latest modifyTimestamp value from all >>>> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP >>>> update code and will save problems with manual modifications. >>> It will cause a big search though. It also will not take in account when >>> there are changes replicated from another replica that are "backdated" >>> relative to the last modifyTimestamp. >>> Also using modifyTimestamp would needlessly increment the SOA if there >>> are changes to the entries that are not relevant to DNS (like admins >>> changing ACIs, or other actions like that). >>> >>>> Persistent search will be (probably) required for effective implementation. >>>> I think it's not a problem, because DNSSEC will require (with very high >>>> probability) persistent search for generating NSEC/NSEC3 records. >>>> >>>> [1] https://fedorahosted.org/freeipa/ticket/2554 >>>> >>>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 >>>> >>>> >>>> Please, post your opinions about DNS consistency strictness. >>> I think we need to try to be more consistent than what we are now. There >>> may always be minor races, but the current races are too big to pass on >>> IMHO. >>> >>> Simo. >>> >> Are you saying that the update should be moved to the DS replication plugin? >> Do I get it right? > not the DS replication plugin per se. But probably the best way to > handle it is 'a' DS plugin. > > Simo. > I am not sure I follow. It is not clear to me how a DS plugin (if it is not a replication plugin) will help with the race condition that might be caused by changes made on different servers. The entries that were added on the other servers go only through the replication plugin. To the best of my knowledge the normal stack of plugins is not invoked for those entries as this stack is called when the entry is created in the first place on the other replica. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From pspacek at redhat.com Wed Apr 18 13:29:44 2012 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 18 Apr 2012 15:29:44 +0200 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334679211.16658.200.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> Message-ID: <4F8EC1C8.4090204@redhat.com> Hello, first of all - snippet moved from the end: > I think we need to try to be more consistent than what we are now. There > may always be minor races, but the current races are too big to pass on > IMHO. > I definitely agree. Current state = completely broken zone transfer. Rest is in-line. On 04/17/2012 06:13 PM, Simo Sorce wrote: > On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: >> Hello, >> >> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], >> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. >> >> I think we need to discuss next steps with this issue: >> >> Basic support for zone transfers is already done in bind-dyndb-ldap. We >> need second part - correct behaviour during SOA serial number update. >> >> Bind-dyndb-ldap plugin handles dynamic update in correct way (each >> update increment serial #), so biggest problem lays in IPA for now. >> >> Modifying SOA serial number can be pretty hard, because of DS >> replication. There are potential race conditions, if records are >> modified/added/deleted on two or more places, replication takes some >> time (because of network connection latency/problem) and zone transfer >> is started in meanwhile. >> >> Question is: How consistent we want to be? > > Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap > and instead update it in a DS plugin. That's because a DS plugin is the > only thing that can see entries coming in from multiple servers. > If you update the SOA from bind-dyndb-ldap you can potentially set it > back in time because last write win in DS. > > This will require a persistent sarch so bind-dyndb-ldap can be updated > with the last SOA serial number, or bind-dyndb-ldap must not cache it > and always try to fetch it from ldap. Bind-dyndb-ldap has users with OpenLDAP. I googled a bit and OpenLDAP should support Netscape SLAPI [3][4], but I don't know how hard is to code interoperable plugin. Accidentally I found existing SLAPI plugin for "concurrent" BIND-LDAP backend project [5]. Can we think for a while about another ways? I would like to find some (even sub-optimal) solution without DS plugin, if it's possible and comparable hard to code. >> Can we accept these >> absolutely improbable race conditions? It will be probably corrected by >> next SOA update = by (any) next record change. It won't affect normal >> operations, only zone transfers. > > Yes and No, the problem is that if 2 servers update the SOA > independently you may have the serial go backwards on replication. See > above. > >> (IMHO we should consider DNS "nature": In general is not strictly >> consistent, because of massive caching at every level.) > > True, but the serial is normally considered monotonically increasing. I agree. How DS will handle collisions? When same attribute is modified independently at two places? It's simply overwritten by one of values? I can't find information about this at directory.fedoraproject.org. (Side question: It's a real big problem? If it's result of very improbable race condition? 
It will broke zone transfer, but next zone update will correct this. As result of this failure last change is not transferred to slave DNS servers, before another zone update takes place.) >> If it's acceptable, we can suppress explicit SOA serial number value in >> LDAP and derive actual value from latest modifyTimestamp value from all >> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP >> update code and will save problems with manual modifications. > > It will cause a big search though. It also will not take in account when If we use persistent search it's not a problem. Persistent search dumps whole DB to RBT in BIND's memory and only changes are transferred after this initial query. We can compute maximal modifyTimestamp from all idnsRecord objects during initial query. Any change later in time will add only single attribute to transfer + max(currvalue, newvalue) operation. This way (with psearch) has reasonably small overhead, I think. > there are changes replicated from another replica that are "backdated" > relative to the last modifyTimestamp. If we maintain max(modifyTimestamp) value whole time, new backdated values will not backdate SOA, because max(modifyTimestamp) can't move back. It's not correct behaviour also, I know. But again: It's result of improbable race condition and next zone update will correct this situation. > Also using modifyTimestamp would needlessly increment the SOA if there > are changes to the entries that are not relevant to DNS (like admins > changing ACIs, or other actions like that). I think it's not a problem. Only consequence is unnecessary zone transfer. How often admin changes ACI? If we want to save this overhead, we can count max(modifyTimestamp) only for idnsRecord objects (and only for known attributes) - but I think it's not necessary. There are still problems to solve without DS plugin (specifically mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? Petr^2 Spacek >> Persistent search will be (probably) required for effective implementation. >> I think it's not a problem, because DNSSEC will require (with very high >> probability) persistent search for generating NSEC/NSEC3 records. >> >> [1] https://fedorahosted.org/freeipa/ticket/2554 >> >> [2] https://bugzilla.redhat.com/show_bug.cgi?id=766233 >> >> >> Please, post your opinions about DNS consistency strictness. > > I think we need to try to be more consistent than what we are now. There > may always be minor races, but the current races are too big to pass on > IMHO. > > Simo. > [3] http://opensource.apple.com/source/OpenLDAP/OpenLDAP-143/OpenLDAP/include/slapi-plugin.h [4] http://www.openldap.org/lists/openldap-devel/200111/msg00124.html [5] http://thewalter.net/stef/software/slapi-dnsnotify/ From simo at redhat.com Wed Apr 18 13:40:07 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 09:40:07 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8EC10D.3080509@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8DF1DC.1010605@redhat.com> <1334755223.16658.228.camel@willson.li.ssimo.org> <4F8EC10D.3080509@redhat.com> Message-ID: <1334756407.16658.233.camel@willson.li.ssimo.org> On Wed, 2012-04-18 at 09:26 -0400, Dmitri Pal wrote: > On 04/18/2012 09:20 AM, Simo Sorce wrote: > >> Are you saying that the update should be moved to the DS replication plugin? > >> Do I get it right? > > not the DS replication plugin per se. 
But probably the best way to > > handle it is 'a' DS plugin. > > > > Simo. > > > I am not sure I follow. It is not clear to me how a DS plugin (if it is > not a replication plugin) will help with the race condition that might > be caused by changes made on different servers. The entries that were > added on the other servers go only through the replication plugin. Every thing you store in DS can be intercepted by a plugin, even when it comes through replication. In fact we need to explicitly ignore replicated entries in some plugin before acting. > To > the best of my knowledge the normal stack of plugins is not invoked for > those entries as this stack is called when the entry is created in the > first place on the other replica. Nope, there is no such default exclusion. All plugins are called when something comes in independent of the source. Simo. -- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Wed Apr 18 13:55:32 2012 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 18 Apr 2012 15:55:32 +0200 Subject: [Freeipa-devel] IP address check during IPA install Message-ID: <4F8EC7D4.6050007@redhat.com> Hello, please, can somebody explain to me, why our installer strictly checks IP addresses? I wonder about it from yesterday's IPA meeting and still can't get it. My naive insight is: "It's a network layer problem and application shouldn't care." Of course, there are many protocols with endpoint address inside application messages (like SIP or RTSP) for various reasons. Where are these addresses in our case? HTTP, LDAP, DNS and NTP should be Ok, I think. Or they aren't? It's Kerberos problem? I know about client IP address inside Kerberos ticket, but AFAIK it's usually filled with some constant with "ANY_ADDRESS meaning". I often travel with tickets in credentials cache and these tickets still work, when I change location and IP address. So - what I missed? Why pure NAT should create a problem? Thanks for clarification! Petr^2 Spacek From simo at redhat.com Wed Apr 18 14:04:24 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 10:04:24 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8EC1C8.4090204@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> Message-ID: <1334757864.16658.258.camel@willson.li.ssimo.org> On Wed, 2012-04-18 at 15:29 +0200, Petr Spacek wrote: > Hello, > > first of all - snippet moved from the end: > > I think we need to try to be more consistent than what we are now. There > > may always be minor races, but the current races are too big to pass on > > IMHO. > > > I definitely agree. Current state = completely broken zone transfer. > > Rest is in-line. > > On 04/17/2012 06:13 PM, Simo Sorce wrote: > > On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: > >> Hello, > >> > >> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], > >> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. > >> > >> I think we need to discuss next steps with this issue: > >> > >> Basic support for zone transfers is already done in bind-dyndb-ldap. We > >> need second part - correct behaviour during SOA serial number update. > >> > >> Bind-dyndb-ldap plugin handles dynamic update in correct way (each > >> update increment serial #), so biggest problem lays in IPA for now. > >> > >> Modifying SOA serial number can be pretty hard, because of DS > >> replication. 
There are potential race conditions, if records are > >> modified/added/deleted on two or more places, replication takes some > >> time (because of network connection latency/problem) and zone transfer > >> is started in meanwhile. > >> > >> Question is: How consistent we want to be? > > > > Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap > > and instead update it in a DS plugin. That's because a DS plugin is the > > only thing that can see entries coming in from multiple servers. > > If you update the SOA from bind-dyndb-ldap you can potentially set it > > back in time because last write win in DS. > > > > This will require a persistent sarch so bind-dyndb-ldap can be updated > > with the last SOA serial number, or bind-dyndb-ldap must not cache it > > and always try to fetch it from ldap. > > Bind-dyndb-ldap has users with OpenLDAP. I googled a bit and OpenLDAP should > support Netscape SLAPI [3][4], but I don't know how hard is to code > interoperable plugin. > Accidentally I found existing SLAPI plugin for "concurrent" BIND-LDAP backend > project [5]. I don't think we need to provide plugins for other platforms, we just need an optiono in bind-dyndb-ldap to tell it to assume the SOA is being handled by the LDAP server. For servers that do not have a suitable plugin bind-dyndb-ldap will keep working as it does now. In those cases I would suggest people to use a single master, but up to the integrator of the other solution. > Can we think for a while about another ways? I would like to find some (even > sub-optimal) solution without DS plugin, if it's possible and comparable hard > to code. Yes, as I said you may still do something with a persistent search, but I do not know if persistent searches are available in OpenLDAP either. However with a persistent search you would see entries coming in in "real time" even replicated ones from other replicas, so you could always issue a SOA serial update. Of course you still need to check for SOA serial updates from other DNS master servers where another bind-dyndb-ldap plugin is running. You have N servers potentially updating the serial at the same time. As long as you do not update the serial just because the serial was itself updated you are just going to eat one or more serial numbers off. We also do not need to make it a requirement to have the serial updated atomically. If 2 servers both update the number to the same value it is ok because they will basically be both in sync in terms of hosted entries. Otherwise one of the servers will update the serial again as soon as other entries are received. If this happens, it is possible that on one of the masters the serial will be updated twice even though no other change was performed on the entry-set. That is not a big deal though, at most it will cause a useless zone transfer, but zone transfer should already be somewhat rate limited anyway, because our zones do change frequently due to DNS updates from clients. > >> Can we accept these > >> absolutely improbable race conditions? It will be probably corrected by > >> next SOA update = by (any) next record change. It won't affect normal > >> operations, only zone transfers. > > > > Yes and No, the problem is that if 2 servers update the SOA > > independently you may have the serial go backwards on replication. See > > above. > > > >> (IMHO we should consider DNS "nature": In general is not strictly > >> consistent, because of massive caching at every level.) 
> > > > True, but the serial is normally considered monotonically increasing. > I agree. How DS will handle collisions? When same attribute is modified > independently at two places? It's simply overwritten by one of values? I can't > find information about this at directory.fedoraproject.org. Last update wins. > (Side question: It's a real big problem? If it's result of very improbable > race condition? It will broke zone transfer, but next zone update will correct > this. As result of this failure last change is not transferred to slave DNS > servers, before another zone update takes place.) If there are no new updates the next zone transfer will see again a serial in the past and not update at all. So, yeah I think it is a big deal if the SOA goes backwards. > >> If it's acceptable, we can suppress explicit SOA serial number value in > >> LDAP and derive actual value from latest modifyTimestamp value from all > >> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP > >> update code and will save problems with manual modifications. > > > > It will cause a big search though. It also will not take in account when > If we use persistent search it's not a problem. Persistent search dumps whole > DB to RBT in BIND's memory and only changes are transferred after this initial > query. You still need to search the whole cache and save additional data. (I sure hope you do not keep in memory the whole ldap object but a parsed version of it, if you keep the whole LDAP object I think we just found another place for enhancement. Wasting all that memory is not a good idea IMO). > We can compute maximal modifyTimestamp from all idnsRecord objects during > initial query. Any change later in time will add only single attribute to > transfer + max(currvalue, newvalue) operation. This way (with psearch) has > reasonably small overhead, I think. The problem is that max(currvalue, newvalue) is not useful. This is the scenario: time 1: server A receives updates time 2: server B receives updates time 3: bind-dyndb-ldap B computes new SOA time 4: server A sends its updates to server B time 5: bind-dyndb-ldap B see that the max timestamp has not changed (all new entries are older than 'time 2' as they were generated at time 1). This is with perfectly synchronized clocks. If A has a clock slightly in the past compared to B then you could eve swap time 1 and 2 in absolute time and still get entries "in the past" at point 4. This is why using the modifyTimestamp is not workable in this case. > > there are changes replicated from another replica that are "backdated" > > relative to the last modifyTimestamp. > If we maintain max(modifyTimestamp) value whole time, new backdated values > will not backdate SOA, because max(modifyTimestamp) can't move back. no the problem is not backdating the SOA serial, the problem is *not* updating it when new entris become available because they were "in the past". So if no other changes are made to DNS a zone transfer may not kick at all indefinitely even though the master has new/changed entries. This would cause a long term de-synchronization of the slaves I think is not really acceptable. > It's not correct behaviour also, I know. But again: It's result of improbable > race condition and next zone update will correct this situation. It is not improbable at all, I think it would be a pretty common situation when you have different masters updating the same zone (common on the main zone), see explanation above. 
> > Also using modifyTimestamp would needlessly increment the SOA if there > > are changes to the entries that are not relevant to DNS (like admins > > changing ACIs, or other actions like that). > I think it's not a problem. Only consequence is unnecessary zone transfer. How > often admin changes ACI? Not often, so I concede the point. > If we want to save this overhead, we can count max(modifyTimestamp) only for > idnsRecord objects (and only for known attributes) - but I think it's not > necessary. I was already expecting that, but you cannot distinguish modifyTimestamp per attribute, only per object, so if modifyTimestamp is changed for an attribute you do not care about you still have to count it. > There are still problems to solve without DS plugin (specifically > mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? Well I am not sure we need to use a YYYYMMDDNN convention to start with. I expect with DYNDNS updates that a 2 digit NN will never be enough, plus it is never updated by a human so we do not need to keep it readable. But I do not care eiither way, as long as the serial can handle thousands of updates per day I am fine (if this is an issue we need to understand how to update the serial in time intervals). Simo. -- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Wed Apr 18 15:02:05 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 18 Apr 2012 11:02:05 -0400 Subject: [Freeipa-devel] [PATCH] 122 Added permission field to delegation In-Reply-To: <4F8D9C49.4050207@redhat.com> References: <4F8D9C49.4050207@redhat.com> Message-ID: <4F8ED76D.3050904@redhat.com> Petr Vobornik wrote: > Permission field is missing in delegation so it can't be set/modified. > > It was added to delegation details facet and adder dialog. > > The field is using checkboxes instead of multivalued textbox because it > can have only two effective values: 'read' and 'write'. > > https://fedorahosted.org/freeipa/ticket/2635 > Works great. ACK, pushed to master and ipa-2-2 rob From dpal at redhat.com Wed Apr 18 15:02:43 2012 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 18 Apr 2012 11:02:43 -0400 Subject: [Freeipa-devel] IP address check during IPA install In-Reply-To: <4F8EC7D4.6050007@redhat.com> References: <4F8EC7D4.6050007@redhat.com> Message-ID: <4F8ED793.2070401@redhat.com> On 04/18/2012 09:55 AM, Petr Spacek wrote: > Hello, > > please, can somebody explain to me, why our installer strictly checks > IP addresses? I wonder about it from yesterday's IPA meeting and still > can't get it. > > My naive insight is: "It's a network layer problem and application > shouldn't care." > > Of course, there are many protocols with endpoint address inside > application messages (like SIP or RTSP) for various reasons. Where are > these addresses in our case? > > HTTP, LDAP, DNS and NTP should be Ok, I think. Or they aren't? > > It's Kerberos problem? I know about client IP address inside Kerberos > ticket, but AFAIK it's usually filled with some constant with > "ANY_ADDRESS meaning". > > I often travel with tickets in credentials cache and these tickets > still work, when I change location and IP address. > > So - what I missed? Why pure NAT should create a problem? > The problem is not the specific address. The problem is badly configured system. If the host <-> IP can't be resolved cleanly you get a problem with Kerberos and install will fail. This is why we make sure the name resolves properly and reverse lookups work at the install time. 
It does not matter what IP you have as long as it properly resolves. > > Thanks for clarification! > > Petr^2 Spacek > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Wed Apr 18 15:18:52 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 11:18:52 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334757864.16658.258.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> Message-ID: <1334762332.16658.264.camel@willson.li.ssimo.org> On Wed, 2012-04-18 at 10:04 -0400, Simo Sorce wrote: > > We can compute maximal modifyTimestamp from all idnsRecord objects > during > > initial query. Any change later in time will add only single > attribute to > > transfer + max(currvalue, newvalue) operation. This way (with > psearch) has > > reasonably small overhead, I think. > > The problem is that max(currvalue, newvalue) is not useful. > > Ah btw about this point I forgot about entryUSN which is a DS extension we use in FreeIPA. with emtryUSn we can count on using max(curvalue, newvalue) as entryUSN always increases monotonically even for replicated entries independently of the timestamp they carry. However you _cannot_ use entryUSN as an absolute value as it is a per-server value and it can also be reset with a reinitialization. But you can use it as a trigger for a serial increment. Simo. -- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Wed Apr 18 15:21:04 2012 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 18 Apr 2012 17:21:04 +0200 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334757864.16658.258.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> Message-ID: <4F8EDBE0.7090108@redhat.com> On 04/18/2012 04:04 PM, Simo Sorce wrote: > On Wed, 2012-04-18 at 15:29 +0200, Petr Spacek wrote: >> Hello, >> >> first of all - snippet moved from the end: >> > I think we need to try to be more consistent than what we are now. There >> > may always be minor races, but the current races are too big to pass on >> > IMHO. >> > >> I definitely agree. Current state = completely broken zone transfer. >> >> Rest is in-line. >> >> On 04/17/2012 06:13 PM, Simo Sorce wrote: >>> On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: >>>> Hello, >>>> >>>> there is IPA ticket #2554 "DNS zone serial number is not updated" [1], >>>> which is required by RFE "Support zone transfers in bind-dyndb-ldap" [2]. >>>> >>>> I think we need to discuss next steps with this issue: >>>> >>>> Basic support for zone transfers is already done in bind-dyndb-ldap. We >>>> need second part - correct behaviour during SOA serial number update. >>>> >>>> Bind-dyndb-ldap plugin handles dynamic update in correct way (each >>>> update increment serial #), so biggest problem lays in IPA for now. >>>> >>>> Modifying SOA serial number can be pretty hard, because of DS >>>> replication. 
There are potential race conditions, if records are >>>> modified/added/deleted on two or more places, replication takes some >>>> time (because of network connection latency/problem) and zone transfer >>>> is started in meanwhile. >>>> >>>> Question is: How consistent we want to be? >>> >>> Enough, what we want to do is stop updating the SOA from bind-dyndb-ldap >>> and instead update it in a DS plugin. That's because a DS plugin is the >>> only thing that can see entries coming in from multiple servers. >>> If you update the SOA from bind-dyndb-ldap you can potentially set it >>> back in time because last write win in DS. >>> >>> This will require a persistent sarch so bind-dyndb-ldap can be updated >>> with the last SOA serial number, or bind-dyndb-ldap must not cache it >>> and always try to fetch it from ldap. >> >> Bind-dyndb-ldap has users with OpenLDAP. I googled a bit and OpenLDAP should >> support Netscape SLAPI [3][4], but I don't know how hard is to code >> interoperable plugin. >> Accidentally I found existing SLAPI plugin for "concurrent" BIND-LDAP backend >> project [5]. > > I don't think we need to provide plugins for other platforms, we just > need an optiono in bind-dyndb-ldap to tell it to assume the SOA is being > handled by the LDAP server. > For servers that do not have a suitable plugin bind-dyndb-ldap will keep > working as it does now. In those cases I would suggest people to use a > single master, but up to the integrator of the other solution. > > >> Can we think for a while about another ways? I would like to find some (even >> sub-optimal) solution without DS plugin, if it's possible and comparable hard >> to code. > > Yes, as I said you may still do something with a persistent search, but > I do not know if persistent searches are available in OpenLDAP either. OpenLDAP has support for (newer and standardized) SyncRepl [6][7]. I plan to look into it and consider writing some compatibility layer for psearch/syncrepl in bind-dyndb-ldap. It should not be hard, I think. > However with a persistent search you would see entries coming in in > "real time" even replicated ones from other replicas, so you could > always issue a SOA serial update. Of course you still need to check for > SOA serial updates from other DNS master servers where another > bind-dyndb-ldap plugin is running. > > You have N servers potentially updating the serial at the same time. As > long as you do not update the serial just because the serial was itself > updated you are just going to eat one or more serial numbers off. > > We also do not need to make it a requirement to have the serial updated > atomically. If 2 servers both update the number to the same value it is > ok because they will basically be both in sync in terms of hosted > entries. > > Otherwise one of the servers will update the serial again as soon as > other entries are received. > > If this happens, it is possible that on one of the masters the serial > will be updated twice even though no other change was performed on the > entry-set. That is not a big deal though, at most it will cause a > useless zone transfer, but zone transfer should already be somewhat rate > limited anyway, because our zones do change frequently due to DNS > updates from clients. SOA record has also refresh, retry and expiry fields. These define how often zone transfer should happen. It's nicely described in [8]. 
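To make those timing fields concrete, a generic zone-file SOA record looks roughly like this; the names and values are illustrative only, not FreeIPA's defaults:

    example.com.  86400  IN  SOA  ns1.example.com. hostmaster.example.com. (
            2012041801  ; serial  - slaves only transfer when this increases
            3600        ; refresh - how often slaves poll the master's SOA
            900         ; retry   - poll interval after a failed refresh
            1209600     ; expire  - slaves drop the zone after this long without contact
            3600 )      ; minimum / negative-caching TTL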
There is next problem with transfers: Currently we support only full zone transfers (AXFR), not incremental updates (IXFR), because there is no "last record change"<->SOA# information. For now it's postponed, because nobody wanted it. >>>> Can we accept these >>>> absolutely improbable race conditions? It will be probably corrected by >>>> next SOA update = by (any) next record change. It won't affect normal >>>> operations, only zone transfers. >>> >>> Yes and No, the problem is that if 2 servers update the SOA >>> independently you may have the serial go backwards on replication. See >>> above. >>> >>>> (IMHO we should consider DNS "nature": In general is not strictly >>>> consistent, because of massive caching at every level.) >>> >>> True, but the serial is normally considered monotonically increasing. > >> I agree. How DS will handle collisions? When same attribute is modified >> independently at two places? It's simply overwritten by one of values? I can't >> find information about this at directory.fedoraproject.org. > > Last update wins. Good to know, thanks. I wrongly expected some kind of warning mechanism. (Special operational attribute or something like that.) >> (Side question: It's a real big problem? If it's result of very improbable >> race condition? It will broke zone transfer, but next zone update will correct >> this. As result of this failure last change is not transferred to slave DNS >> servers, before another zone update takes place.) > > If there are no new updates the next zone transfer will see again a > serial in the past and not update at all. So, yeah I think it is a big > deal if the SOA goes backwards. > >>>> If it's acceptable, we can suppress explicit SOA serial number value in >>>> LDAP and derive actual value from latest modifyTimestamp value from all >>>> objects in cn=dns subtree. This approach saves some hooks in IPA's LDAP >>>> update code and will save problems with manual modifications. >>> >>> It will cause a big search though. It also will not take in account when >> If we use persistent search it's not a problem. Persistent search dumps whole >> DB to RBT in BIND's memory and only changes are transferred after this initial >> query. > > You still need to search the whole cache and save additional data. (I > sure hope you do not keep in memory the whole ldap object but a parsed > version of it, if you keep the whole LDAP object I think we just found > another place for enhancement. Wasting all that memory is not a good > idea IMO). Only DNS records are stored, i.e. parsed objects. Please, can you explain "You still need to search the whole cache and save additional data."? I probably missed some important point. >> We can compute maximal modifyTimestamp from all idnsRecord objects during >> initial query. Any change later in time will add only single attribute to >> transfer + max(currvalue, newvalue) operation. This way (with psearch) has >> reasonably small overhead, I think. > > The problem is that max(currvalue, newvalue) is not useful. > > This is the scenario: > > time 1: server A receives updates > time 2: server B receives updates > time 3: bind-dyndb-ldap B computes new SOA > time 4: server A sends its updates to server B > time 5: bind-dyndb-ldap B see that the max timestamp has not changed > (all new entries are older than 'time 2' as they were generated at time > 1). > > This is with perfectly synchronized clocks. 
If A has a clock slightly in > the past compared to B then you could eve swap time 1 and 2 in absolute > time and still get entries "in the past" at point 4. > > This is why using the modifyTimestamp is not workable in this case. Ok, I didn't realized these problems. Now I know why DNS has single master :-D >>> there are changes replicated from another replica that are "backdated" >>> relative to the last modifyTimestamp. >> If we maintain max(modifyTimestamp) value whole time, new backdated values >> will not backdate SOA, because max(modifyTimestamp) can't move back. > > no the problem is not backdating the SOA serial, the problem is *not* > updating it when new entris become available because they were "in the > past". So if no other changes are made to DNS a zone transfer may not > kick at all indefinitely even though the master has new/changed entries. > This would cause a long term de-synchronization of the slaves I think is > not really acceptable. I agree with your long-term de-synchronization point, but with dynamic updates is not really probable. >> It's not correct behaviour also, I know. But again: It's result of improbable >> race condition and next zone update will correct this situation. > > It is not improbable at all, I think it would be a pretty common > situation when you have different masters updating the same zone (common > on the main zone), see explanation above. > >>> Also using modifyTimestamp would needlessly increment the SOA if there >>> are changes to the entries that are not relevant to DNS (like admins >>> changing ACIs, or other actions like that). >> I think it's not a problem. Only consequence is unnecessary zone transfer. How >> often admin changes ACI? > > Not often, so I concede the point. > >> If we want to save this overhead, we can count max(modifyTimestamp) only for >> idnsRecord objects (and only for known attributes) - but I think it's not >> necessary. > > I was already expecting that, but you cannot distinguish modifyTimestamp > per attribute, only per object, so if modifyTimestamp is changed for an > attribute you do not care about you still have to count it. AFAIK you can watch changes only for selected attributes (through psearch). >> There are still problems to solve without DS plugin (specifically >> mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? > > Well I am not sure we need to use a YYYYMMDDNN convention to start with. > I expect with DYNDNS updates that a 2 digit NN will never be enough, > plus it is never updated by a human so we do not need to keep it > readable. But I do not care eiither way, as long as the serial can > handle thousands of updates per day I am fine (if this is an issue we > need to understand how to update the serial in time intervals). Current BIND implementation handles overflow in one day gracefully: 2012041899 -> 2012041900 So SOA# can be in far future, if you changes zone too often :-) AFAIK this format is traditional, but not required by standard, if arithmetic works. [9] defines arithmetic for SOA serials, so DS plugin should follow it. It says "The maximum defined increment is 2147483647 (2^31 - 1)" This limit applies inside to one SOA TTL time window (so it shouldn't be a problem, I think). I didn't looked into in this RFC deeply. Some practical recommendations can be found in [10]. Thanks for your time. 
Petr^2 Spacek [6] http://www.openldap.org/doc/admin24/replication.html [7] http://tools.ietf.org/html/rfc4533 [8] http://www.zytrax.com/books/dns/ch8/soa.html [9] http://tools.ietf.org/html/rfc1982 [10] http://www.zytrax.com/books/dns/ch9/serial.html > Simo. From rmeggins at redhat.com Wed Apr 18 15:28:18 2012 From: rmeggins at redhat.com (Rich Megginson) Date: Wed, 18 Apr 2012 09:28:18 -0600 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8EDBE0.7090108@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> <4F8EDBE0.7090108@redhat.com> Message-ID: <4F8EDD92.3010306@redhat.com> On 04/18/2012 09:21 AM, Petr Spacek wrote: > On 04/18/2012 04:04 PM, Simo Sorce wrote: >> On Wed, 2012-04-18 at 15:29 +0200, Petr Spacek wrote: >>> Hello, >>> >>> first of all - snippet moved from the end: >>> > I think we need to try to be more consistent than what we are >>> now. There >>> > may always be minor races, but the current races are too big to >>> pass on >>> > IMHO. >>> > >>> I definitely agree. Current state = completely broken zone transfer. >>> >>> Rest is in-line. >>> >>> On 04/17/2012 06:13 PM, Simo Sorce wrote: >>>> On Tue, 2012-04-17 at 17:49 +0200, Petr Spacek wrote: >>>>> Hello, >>>>> >>>>> there is IPA ticket #2554 "DNS zone serial number is not updated" >>>>> [1], >>>>> which is required by RFE "Support zone transfers in >>>>> bind-dyndb-ldap" [2]. >>>>> >>>>> I think we need to discuss next steps with this issue: >>>>> >>>>> Basic support for zone transfers is already done in >>>>> bind-dyndb-ldap. We >>>>> need second part - correct behaviour during SOA serial number update. >>>>> >>>>> Bind-dyndb-ldap plugin handles dynamic update in correct way (each >>>>> update increment serial #), so biggest problem lays in IPA for now. >>>>> >>>>> Modifying SOA serial number can be pretty hard, because of DS >>>>> replication. There are potential race conditions, if records are >>>>> modified/added/deleted on two or more places, replication takes some >>>>> time (because of network connection latency/problem) and zone >>>>> transfer >>>>> is started in meanwhile. >>>>> >>>>> Question is: How consistent we want to be? >>>> >>>> Enough, what we want to do is stop updating the SOA from >>>> bind-dyndb-ldap >>>> and instead update it in a DS plugin. That's because a DS plugin is >>>> the >>>> only thing that can see entries coming in from multiple servers. >>>> If you update the SOA from bind-dyndb-ldap you can potentially set it >>>> back in time because last write win in DS. >>>> >>>> This will require a persistent sarch so bind-dyndb-ldap can be updated >>>> with the last SOA serial number, or bind-dyndb-ldap must not cache it >>>> and always try to fetch it from ldap. >>> >>> Bind-dyndb-ldap has users with OpenLDAP. I googled a bit and >>> OpenLDAP should >>> support Netscape SLAPI [3][4], but I don't know how hard is to code >>> interoperable plugin. >>> Accidentally I found existing SLAPI plugin for "concurrent" >>> BIND-LDAP backend >>> project [5]. >> >> I don't think we need to provide plugins for other platforms, we just >> need an optiono in bind-dyndb-ldap to tell it to assume the SOA is being >> handled by the LDAP server. >> For servers that do not have a suitable plugin bind-dyndb-ldap will keep >> working as it does now. 
In those cases I would suggest people to use a >> single master, but up to the integrator of the other solution. >> >> >>> Can we think for a while about another ways? I would like to find >>> some (even >>> sub-optimal) solution without DS plugin, if it's possible and >>> comparable hard >>> to code. >> >> Yes, as I said you may still do something with a persistent search, but >> I do not know if persistent searches are available in OpenLDAP either. > OpenLDAP has support for (newer and standardized) SyncRepl [6][7]. I > plan to look into it and consider writing some compatibility layer for > psearch/syncrepl in bind-dyndb-ldap. It should not be hard, I think. > >> However with a persistent search you would see entries coming in in >> "real time" even replicated ones from other replicas, so you could >> always issue a SOA serial update. Of course you still need to check for >> SOA serial updates from other DNS master servers where another >> bind-dyndb-ldap plugin is running. >> >> You have N servers potentially updating the serial at the same time. As >> long as you do not update the serial just because the serial was itself >> updated you are just going to eat one or more serial numbers off. >> >> We also do not need to make it a requirement to have the serial updated >> atomically. If 2 servers both update the number to the same value it is >> ok because they will basically be both in sync in terms of hosted >> entries. >> >> Otherwise one of the servers will update the serial again as soon as >> other entries are received. >> >> If this happens, it is possible that on one of the masters the serial >> will be updated twice even though no other change was performed on the >> entry-set. That is not a big deal though, at most it will cause a >> useless zone transfer, but zone transfer should already be somewhat rate >> limited anyway, because our zones do change frequently due to DNS >> updates from clients. > SOA record has also refresh, retry and expiry fields. These define how > often zone transfer should happen. It's nicely described in [8]. > > There is next problem with transfers: Currently we support only full > zone transfers (AXFR), not incremental updates (IXFR), because there > is no "last record change"<->SOA# information. For now it's postponed, > because nobody wanted it. > >>>>> Can we accept these >>>>> absolutely improbable race conditions? It will be probably >>>>> corrected by >>>>> next SOA update = by (any) next record change. It won't affect normal >>>>> operations, only zone transfers. >>>> >>>> Yes and No, the problem is that if 2 servers update the SOA >>>> independently you may have the serial go backwards on replication. See >>>> above. >>>> >>>>> (IMHO we should consider DNS "nature": In general is not strictly >>>>> consistent, because of massive caching at every level.) >>>> >>>> True, but the serial is normally considered monotonically increasing. >> >>> I agree. How DS will handle collisions? When same attribute is modified >>> independently at two places? It's simply overwritten by one of >>> values? I can't >>> find information about this at directory.fedoraproject.org. >> >> Last update wins. > Good to know, thanks. I wrongly expected some kind of warning > mechanism. (Special operational attribute or something like that.) Only if there are "real" conflicts that cannot be solved by the simple resolution algorithm. 
See http://docs.redhat.com/docs/en-US/Red_Hat_Directory_Server/9.0/html/Administration_Guide/Managing_Replication-Solving_Common_Replication_Conflicts.html 389 handling of replication conflicts is somewhat problematic for IPA - see https://fedorahosted.org/389/ticket/160 > >>> (Side question: It's a real big problem? If it's result of very >>> improbable >>> race condition? It will broke zone transfer, but next zone update >>> will correct >>> this. As result of this failure last change is not transferred to >>> slave DNS >>> servers, before another zone update takes place.) >> >> If there are no new updates the next zone transfer will see again a >> serial in the past and not update at all. So, yeah I think it is a big >> deal if the SOA goes backwards. >> >>>>> If it's acceptable, we can suppress explicit SOA serial number >>>>> value in >>>>> LDAP and derive actual value from latest modifyTimestamp value >>>>> from all >>>>> objects in cn=dns subtree. This approach saves some hooks in IPA's >>>>> LDAP >>>>> update code and will save problems with manual modifications. >>>> >>>> It will cause a big search though. It also will not take in account >>>> when >>> If we use persistent search it's not a problem. Persistent search >>> dumps whole >>> DB to RBT in BIND's memory and only changes are transferred after >>> this initial >>> query. >> >> You still need to search the whole cache and save additional data. (I >> sure hope you do not keep in memory the whole ldap object but a parsed >> version of it, if you keep the whole LDAP object I think we just found >> another place for enhancement. Wasting all that memory is not a good >> idea IMO). > Only DNS records are stored, i.e. parsed objects. > > Please, can you explain "You still need to search the whole cache and > save additional data."? I probably missed some important point. > >>> We can compute maximal modifyTimestamp from all idnsRecord objects >>> during >>> initial query. Any change later in time will add only single >>> attribute to >>> transfer + max(currvalue, newvalue) operation. This way (with >>> psearch) has >>> reasonably small overhead, I think. >> >> The problem is that max(currvalue, newvalue) is not useful. >> >> This is the scenario: >> >> time 1: server A receives updates >> time 2: server B receives updates >> time 3: bind-dyndb-ldap B computes new SOA >> time 4: server A sends its updates to server B >> time 5: bind-dyndb-ldap B see that the max timestamp has not changed >> (all new entries are older than 'time 2' as they were generated at time >> 1). >> >> This is with perfectly synchronized clocks. If A has a clock slightly in >> the past compared to B then you could eve swap time 1 and 2 in absolute >> time and still get entries "in the past" at point 4. >> >> This is why using the modifyTimestamp is not workable in this case. > Ok, I didn't realized these problems. Now I know why DNS has single > master :-D > >>>> there are changes replicated from another replica that are "backdated" >>>> relative to the last modifyTimestamp. >>> If we maintain max(modifyTimestamp) value whole time, new backdated >>> values >>> will not backdate SOA, because max(modifyTimestamp) can't move back. >> >> no the problem is not backdating the SOA serial, the problem is *not* >> updating it when new entris become available because they were "in the >> past". So if no other changes are made to DNS a zone transfer may not >> kick at all indefinitely even though the master has new/changed entries. 
>> This would cause a long term de-synchronization of the slaves I think is >> not really acceptable. > I agree with your long-term de-synchronization point, but with dynamic > updates is not really probable. > >>> It's not correct behaviour also, I know. But again: It's result of >>> improbable >>> race condition and next zone update will correct this situation. >> >> It is not improbable at all, I think it would be a pretty common >> situation when you have different masters updating the same zone (common >> on the main zone), see explanation above. >> >>>> Also using modifyTimestamp would needlessly increment the SOA if there >>>> are changes to the entries that are not relevant to DNS (like admins >>>> changing ACIs, or other actions like that). >>> I think it's not a problem. Only consequence is unnecessary zone >>> transfer. How >>> often admin changes ACI? >> >> Not often, so I concede the point. >> >>> If we want to save this overhead, we can count max(modifyTimestamp) >>> only for >>> idnsRecord objects (and only for known attributes) - but I think >>> it's not >>> necessary. >> >> I was already expecting that, but you cannot distinguish modifyTimestamp >> per attribute, only per object, so if modifyTimestamp is changed for an >> attribute you do not care about you still have to count it. > AFAIK you can watch changes only for selected attributes (through > psearch). > >>> There are still problems to solve without DS plugin (specifically >>> mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? >> >> Well I am not sure we need to use a YYYYMMDDNN convention to start with. >> I expect with DYNDNS updates that a 2 digit NN will never be enough, >> plus it is never updated by a human so we do not need to keep it >> readable. But I do not care eiither way, as long as the serial can >> handle thousands of updates per day I am fine (if this is an issue we >> need to understand how to update the serial in time intervals). > Current BIND implementation handles overflow in one day gracefully: > 2012041899 -> 2012041900 > So SOA# can be in far future, if you changes zone too often :-) > > AFAIK this format is traditional, but not required by standard, if > arithmetic works. [9] defines arithmetic for SOA serials, so DS plugin > should follow it. > > It says "The maximum defined increment is 2147483647 (2^31 - 1)" > This limit applies inside to one SOA TTL time window (so it shouldn't > be a problem, I think). I didn't looked into in this RFC deeply. Some > practical recommendations can be found in [10]. > > Thanks for your time. > > Petr^2 Spacek > > [6] http://www.openldap.org/doc/admin24/replication.html > [7] http://tools.ietf.org/html/rfc4533 > [8] http://www.zytrax.com/books/dns/ch8/soa.html > [9] http://tools.ietf.org/html/rfc1982 > [10] http://www.zytrax.com/books/dns/ch9/serial.html > > >> Simo. > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel From edewata at redhat.com Wed Apr 18 15:38:40 2012 From: edewata at redhat.com (Endi Sukma Dewata) Date: Wed, 18 Apr 2012 10:38:40 -0500 Subject: [Freeipa-devel] [PATCH] 122 Added permission field to delegation In-Reply-To: <4F8ED76D.3050904@redhat.com> References: <4F8D9C49.4050207@redhat.com> <4F8ED76D.3050904@redhat.com> Message-ID: <4F8EE000.6000403@redhat.com> On 4/18/2012 10:02 AM, Rob Crittenden wrote: > Petr Vobornik wrote: >> Permission field is missing in delegation so it can't be set/modified. 
>> >> It was added to delegation details facet and adder dialog. >> >> The field is using checkboxes instead of multivalued textbox because it >> can have only two effective values: 'read' and 'write'. >> >> https://fedorahosted.org/freeipa/ticket/2635 >> > > Works great. ACK, pushed to master and ipa-2-2 > > rob Some possible enhancements: During delegation add the permission is optional but it defaults to "write". The UI doesn't say anything about the default value, it's only noted in the delegation-add help doc. During modify the help doc also says the default is "write" but permission is actually required (can't set to empty), so no need for a default value. 1. Would it be better in the UI during add we make the permission required but the "write" checkbox is checked by default? This way it's more consistent with the edit page and it also shows the default value. 2. Should we remove the "Default is write" from delegation-mod help doc? -- Endi S. Dewata From simo at redhat.com Wed Apr 18 15:53:24 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 11:53:24 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8EDBE0.7090108@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> <4F8EDBE0.7090108@redhat.com> Message-ID: <1334764404.16658.277.camel@willson.li.ssimo.org> On Wed, 2012-04-18 at 17:21 +0200, Petr Spacek wrote: > > If this happens, it is possible that on one of the masters the serial > > will be updated twice even though no other change was performed on the > > entry-set. That is not a big deal though, at most it will cause a > > useless zone transfer, but zone transfer should already be somewhat rate > > limited anyway, because our zones do change frequently due to DNS > > updates from clients. > SOA record has also refresh, retry and expiry fields. These define how often > zone transfer should happen. It's nicely described in [8]. Sure but we have 2 opposing requirements. On one hand we want to avoid doing too many transfers too often. On the other we want to reflect dynamic Updates fast enough to avoid stale entries in the slaves. So the problem is that we need to carefully balance and tradeoff. > There is next problem with transfers: Currently we support only full zone > transfers (AXFR), not incremental updates (IXFR), because there is no "last > record change"<->SOA# information. For now it's postponed, because nobody > wanted it. We can support IXFR, using entryUSN, all entries that have a entryUSN higher than the max we recorded at last trabfer time would need to be transfered. It's not urgent, but we have a way to do it relatively easily. I think the main issue here would be deleted records. We might need to access the tombstone for it, or keep some record of deleted entries somewhere, needs more investigation. > > You still need to search the whole cache and save additional data. (I > > sure hope you do not keep in memory the whole ldap object but a parsed > > version of it, if you keep the whole LDAP object I think we just found > > another place for enhancement. Wasting all that memory is not a good > > idea IMO). > Only DNS records are stored, i.e. parsed objects. Excellent. > Please, can you explain "You still need to search the whole cache and save > additional data."? I probably missed some important point. I was referring to your proposal to store the modifyTimestamp in the cache. 
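The entryUSN idea mentioned above could look roughly like the sketch below (assuming python-ldap, a hypothetical zone base DN, and a server where entryUSN ordering searches work; deleted entries/tombstones are ignored here, as noted):

    import ldap

    BASE_DN = 'idnsname=example.com,cn=dns,dc=example,dc=com'  # hypothetical

    def changed_records(conn, last_usn):
        # Return DNS record entries changed since the last recorded
        # entryUSN, together with the new maximum to remember.
        results = conn.search_s(
            BASE_DN, ldap.SCOPE_SUBTREE,
            '(&(objectClass=idnsRecord)(entryusn>=%d))' % (last_usn + 1),
            ['entryusn', 'idnsName', 'aRecord'])
        max_usn = last_usn
        for dn, attrs in results:
            usn = attrs.get('entryusn') or attrs.get('entryUSN') or []
            if usn:
                max_usn = max(max_usn, int(usn[0]))
        return results, max_usn

    # usage sketch:
    # conn = ldap.initialize('ldap://ipa.example.com')
    # conn.simple_bind_s()     # GSSAPI in a real deployment
    # changes, last_usn = changed_records(conn, last_usn)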
But I think we agree modifyTimestamp is not the way to go so we can ignore the rest. > > This is why using the modifyTimestamp is not workable in this case. > Ok, I didn't realized these problems. Now I know why DNS has single master :-D Heh, hard problems need courage, it is easy to hide under the single-master rock :-) > > no the problem is not backdating the SOA serial, the problem is *not* > > updating it when new entris become available because they were "in the > > past". So if no other changes are made to DNS a zone transfer may not > > kick at all indefinitely even though the master has new/changed entries. > > This would cause a long term de-synchronization of the slaves I think is > > not really acceptable. > I agree with your long-term de-synchronization point, but with dynamic updates > is not really probable. Keep in mind that dynamic updates can be disabled, we cannot count on them. If we have them it will "help" (but also cause more load), but we need to work well enough when they are completely absent. > > I was already expecting that, but you cannot distinguish modifyTimestamp > > per attribute, only per object, so if modifyTimestamp is changed for an > > attribute you do not care about you still have to count it. > AFAIK you can watch changes only for selected attributes (through psearch). True, but if the connection drops, then you need a new full search and there you'll pick up all the new timestamps. So you can have inconsistent behavior depending on DS restarts, named restarts, and so on. Not a nice corner to paint ourselves in. > >> There are still problems to solve without DS plugin (specifically > >> mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? > > > > Well I am not sure we need to use a YYYYMMDDNN convention to start with. > > I expect with DYNDNS updates that a 2 digit NN will never be enough, > > plus it is never updated by a human so we do not need to keep it > > readable. But I do not care eiither way, as long as the serial can > > handle thousands of updates per day I am fine (if this is an issue we > > need to understand how to update the serial in time intervals). > Current BIND implementation handles overflow in one day gracefully: > 2012041899 -> 2012041900 > So SOA# can be in far future, if you changes zone too often :-) Sure, but then what's the point of keeping it in date format ? :-) > AFAIK this format is traditional, but not required by standard, if arithmetic > works. [9] defines arithmetic for SOA serials, so DS plugin should follow it. > > It says "The maximum defined increment is 2147483647 (2^31 - 1)" > This limit applies inside to one SOA TTL time window (so it shouldn't be a > problem, I think). I didn't looked into in this RFC deeply. Some practical > recommendations can be found in [10]. Yeah 2^31 is large enough for practical deployments if you start small, if you start close to the top (2012 is not the far from 2147) then you have substantially reduced the window. > Thanks for your time. YW Simo. 
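For reference, the arithmetic defined in [9] is addition modulo 2^32 with any single increment capped at 2^31 - 1, which is where the quoted limit comes from; a small Python illustration (not taken from any patch discussed here):

    SERIAL_BITS = 32
    MOD = 2 ** SERIAL_BITS
    HALF = 2 ** (SERIAL_BITS - 1)          # max increment is HALF - 1

    def serial_add(s, n):
        if not 0 <= n <= HALF - 1:
            raise ValueError('increment out of range (RFC 1982)')
        return (s + n) % MOD

    def serial_gt(s1, s2):
        # RFC 1982 "greater than": well defined even across wrap-around
        return s1 != s2 and (
            (s1 < s2 and s2 - s1 > HALF) or
            (s1 > s2 and s1 - s2 < HALF))

    # a slave still sees the wrapped serial as newer, so the zone transfers
    assert serial_gt(serial_add(4294967295, 10), 4294967295)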
-- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Wed Apr 18 16:34:14 2012 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 18 Apr 2012 12:34:14 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334764404.16658.277.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> <4F8EDBE0.7090108@redhat.com> <1334764404.16658.277.camel@willson.li.ssimo.org> Message-ID: <4F8EED06.2070901@redhat.com> On 04/18/2012 11:53 AM, Simo Sorce wrote: > On Wed, 2012-04-18 at 17:21 +0200, Petr Spacek wrote: > >>> If this happens, it is possible that on one of the masters the serial >>> will be updated twice even though no other change was performed on the >>> entry-set. That is not a big deal though, at most it will cause a >>> useless zone transfer, but zone transfer should already be somewhat rate >>> limited anyway, because our zones do change frequently due to DNS >>> updates from clients. >> SOA record has also refresh, retry and expiry fields. These define how often >> zone transfer should happen. It's nicely described in [8]. > Sure but we have 2 opposing requirements. On one hand we want to avoid > doing too many transfers too often. On the other we want to reflect > dynamic Updates fast enough to avoid stale entries in the slaves. So the > problem is that we need to carefully balance and tradeoff. > >> There is next problem with transfers: Currently we support only full zone >> transfers (AXFR), not incremental updates (IXFR), because there is no "last >> record change"<->SOA# information. For now it's postponed, because nobody >> wanted it. > We can support IXFR, using entryUSN, all entries that have a entryUSN > higher than the max we recorded at last trabfer time would need to be > transfered. It's not urgent, but we have a way to do it relatively > easily. I think the main issue here would be deleted records. We might > need to access the tombstone for it, or keep some record of deleted > entries somewhere, needs more investigation. > >>> You still need to search the whole cache and save additional data. (I >>> sure hope you do not keep in memory the whole ldap object but a parsed >>> version of it, if you keep the whole LDAP object I think we just found >>> another place for enhancement. Wasting all that memory is not a good >>> idea IMO). >> Only DNS records are stored, i.e. parsed objects. > Excellent. > >> Please, can you explain "You still need to search the whole cache and save >> additional data."? I probably missed some important point. > I was referring to your proposal to store the modifyTimestamp in the > cache. But I think we agree modifyTimestamp is not the way to go so we > can ignore the rest. > >>> This is why using the modifyTimestamp is not workable in this case. >> Ok, I didn't realized these problems. Now I know why DNS has single master :-D > Heh, hard problems need courage, it is easy to hide under the > single-master rock :-) > >>> no the problem is not backdating the SOA serial, the problem is *not* >>> updating it when new entris become available because they were "in the >>> past". So if no other changes are made to DNS a zone transfer may not >>> kick at all indefinitely even though the master has new/changed entries. >>> This would cause a long term de-synchronization of the slaves I think is >>> not really acceptable. 
>> I agree with your long-term de-synchronization point, but with dynamic updates >> is not really probable. > Keep in mind that dynamic updates can be disabled, we cannot count on > them. If we have them it will "help" (but also cause more load), but we > need to work well enough when they are completely absent. > >>> I was already expecting that, but you cannot distinguish modifyTimestamp >>> per attribute, only per object, so if modifyTimestamp is changed for an >>> attribute you do not care about you still have to count it. >> AFAIK you can watch changes only for selected attributes (through psearch). > True, but if the connection drops, then you need a new full search and > there you'll pick up all the new timestamps. So you can have > inconsistent behavior depending on DS restarts, named restarts, and so > on. Not a nice corner to paint ourselves in. > >>>> There are still problems to solve without DS plugin (specifically >>>> mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? >>> Well I am not sure we need to use a YYYYMMDDNN convention to start with. >>> I expect with DYNDNS updates that a 2 digit NN will never be enough, >>> plus it is never updated by a human so we do not need to keep it >>> readable. But I do not care eiither way, as long as the serial can >>> handle thousands of updates per day I am fine (if this is an issue we >>> need to understand how to update the serial in time intervals). >> Current BIND implementation handles overflow in one day gracefully: >> 2012041899 -> 2012041900 >> So SOA# can be in far future, if you changes zone too often :-) > Sure, but then what's the point of keeping it in date format ? :-) > >> AFAIK this format is traditional, but not required by standard, if arithmetic >> works. [9] defines arithmetic for SOA serials, so DS plugin should follow it. >> >> It says "The maximum defined increment is 2147483647 (2^31 - 1)" >> This limit applies inside to one SOA TTL time window (so it shouldn't be a >> problem, I think). I didn't looked into in this RFC deeply. Some practical >> recommendations can be found in [10]. > Yeah 2^31 is large enough for practical deployments if you start small, > if you start close to the top (2012 is not the far from 2147) then you > have substantially reduced the window. > >> Thanks for your time. > YW > > Simo. > And this all complexity for the case when we want to support not IPA based DNS slaves. Is this correct? If so is it really a big use case and something that must be solved? May be instead we should focus on the IPA DNS slave configuration that does not have anything other than read only DS and a DNS server that would get the data over ldap instead of the DNS transfers. Would that be a reasonable alternative? I see a lot of complexity and challenges for a use case that might not be that significant and can be solved in a different way. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From simo at redhat.com Wed Apr 18 16:53:08 2012 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Apr 2012 12:53:08 -0400 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <4F8EED06.2070901@redhat.com> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> <4F8EDBE0.7090108@redhat.com> <1334764404.16658.277.camel@willson.li.ssimo.org> <4F8EED06.2070901@redhat.com> Message-ID: <1334767988.16658.294.camel@willson.li.ssimo.org> On Wed, 2012-04-18 at 12:34 -0400, Dmitri Pal wrote: > And this all complexity for the case when we want to support not IPA > based DNS slaves. Is this correct? If so is it really a big use case > and > something that must be solved? Yes, I think we need to allow zone transfers. they are used not just for slaves but for other functions too. > May be instead we should focus on the IPA DNS slave configuration that > does not have anything other than read only DS and a DNS server that > would get the data over ldap instead of the DNS transfers. > Would that be a reasonable alternative? Would be a much bigger job imo. > I see a lot of complexity and challenges for a use case that might not > be that significant and can be solved in a different way. It's easier and much more flexible to support the standard zone transfer mechanism. The matter is complex, but the actual technical solution will not be a lot of code. Simo. -- Simo Sorce * Red Hat, Inc * New York From rmeggins at redhat.com Wed Apr 18 18:30:36 2012 From: rmeggins at redhat.com (Rich Megginson) Date: Wed, 18 Apr 2012 12:30:36 -0600 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334666521.16658.171.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> Message-ID: <4F8F084C.5090106@redhat.com> On 04/17/2012 06:42 AM, Simo Sorce wrote: > On Tue, 2012-04-17 at 01:13 +0200, Ondrej Hamada wrote: >>>> Sorry for inactivity, I was struggling with a lot of school stuff. >>>> >>>> I've summed up the main goals, do you agree on them or should I >>>> add/remove any? 
>>>> >>>> >>>> GOALS >>>> =========================================== >>>> Create Hub and Consumer types of replica with following features: >>>> >>>> * Hub is read-only >>>> >>>> * Hub interconnects Masters with Consumers or Masters with Hubs >>>> or Hubs with other Hubs >>>> >>>> * Hub is hidden in the network topology >>>> >>>> * Consumer is read-only >>>> >>>> * Consumer interconnects Masters/Hubs with clients >>>> >>>> * Write operations should be forwarded to Master >>>> >>>> * Consumer should be able to log users into system without >>>> communication with master >>> We need to define how this can be done, it will almost certainly mean >>> part of the consumer is writable, plus it also means you need additional >>> access control and policies, on what the Consumer should be allowed to >>> see. >>> >>>> * Consumer should cache user's credentials >>> Ok what credentials ? As I explained earlier Kerberos creds cannot >>> really be cached. Either they are transferred with replication or the >>> KDC needs to be change to do chaining. Neither I consider as 'caching'. >>> A password obtained through an LDAP bind could be cached, but I am not >>> sure it is worth it. >>> >>>> * Caching of credentials should be configurable >>> See above. >>> >>>> * CA server should not be allowed on Hubs and Consumers >>> Missing points: >>> - Masters should not transfer KRB keys to HUBs/Consumers by default. >>> >>> - We need selective replication if you want to allow distributing a >>> partial set of Kerberos credentials to consumers. With Hubs it becomes >>> complicated to decide what to replicate about credentials. >>> >>> Simo. >>> >> Can you please have a look at this draft and comment it please? >> >> >> Design document draft: More types of replicas in FreeIPA >> >> GOALS >> ================================================================= >> >> Create Hub and Consumer types of replica with following features: >> >> * Hub is read-only >> >> * Hub interconnects Masters with Consumers or Masters with Hubs >> or Hubs with other Hubs >> >> * Hub is hidden in the network topology >> >> * Consumer is read-only >> >> * Consumer interconnects Masters/Hubs with clients >> >> * Write operations should be forwarded to Master > Do we need to specify how this is done ? Referrals vs Chain-on-update ? > >> * Consumer should be able to log users into system without >> communication with master >> >> * Consumer should be able to store user's credentials > Can you expand on this ? Do you mean user keys ? > >> * Storing of credentials should be configurable and disabled by default >> >> * Credentials expiration on replica should be configurable > What does this mean ? > >> * CA server should not be allowed on Hubs and Consumers >> >> ISSUES >> ================================================================= >> >> - SSSD is currently supposed to cooperate with one LDAP server only > Is this a problem in having an LDAP server that doesn't also have a KDC > on the same host ? Or something else ? > >> - OpenLDAP client and its support for referrals > Should we avoid referrals and use chain-on-update ? > What does it mean for access control ? > How do consumers authenticate to masters ? > Should we use s4u2proxy ? > >> - 389-DS allows replication of whole suffix only > What kind of filters do we think we need ? We can already exclude > specific attributes from replication. 
fractional replication had originally planned to support search filters in addition to attribute lists - I think Ondrej wants to include or exclude certain entries from being replicated > >> - Storing credentials and allowing authentication against Consumer server >> >> >> POSSIBLE SOLUTIONS >> ================================================================= >> >> 389-DS allows replication of whole suffix only: >> >> * Rich said that they are planning to allow the fractional replication >> in DS to >> use LDAP filters. It will allow us to do selective replication what >> is mainly >> important for replication of user's credentials. > I guess we want to do this to selectively prevent replication of only > some kerberos keys ? Based on groups ? Would filtes allow that using > memberof ? Using filters with fractional replication would allow you to include or exclude anything that can be expressed as an LDAP search filter > >> ______________________________________ >> >> Forwarding of requests in LDAP: >> >> * use existing 389-DS plugin "Chain-on-update" - we can try it as a proof of >> concept solution, but for real deployment it won't be very cool solution >> as it >> will increase the demands on Hubs. > Why do you think it would increase demands for hubs ? Doesn't the > consumer directly contact the masters skipping the hubs ? Yeah, not sure what you mean here, unless you are taking the document http://port389.org/wiki/Howto:ChainOnUpdate as the only way to implement chain on update - it is not - that document was taken from an early proof-of-concept for a planned deployment at a customer many years ago. > >> * better way is to use the referrals. The master server(s) to be referred >> might be: >> 1) specified at install time > This is not really useful, as it would break updates every time the > specified master is offline, it would also require some work to > reconfigure stuff if the mastrer is retired. > >> 2) looked up in DNS records > Probably easier to look up in LDAP, we have a record for each master in > the domain. > >> 3) find master dynamically - Consumers and Hubs will be in fact master >> servers (from 389-DS point of view), this means that every >> consumer or hub >> knows his direct suppliers a they know their suppliers ... > Not clear what this means, can you elaborate ? > >> ISSUE: support for referrals in OpenLDAP client > We've had quite some issue with referrals indeed, and a lot of client > software dopes not properly handle referrals. That would leave a bunch > of clients unable to modify the Directory. OTOH very few clients need to > modify the directory, so maybe that's good enough. > >> * SSSD must be improved to allow cooperation with more than one LDAP server > Can you elaborate what you think is missing in SSSD ? Is it about the > need to fix referrals handling ? Or something else ? > >> _____________________________________ >> >> Authentication and replication of credentials: >> >> * authentication policies, every user must authenticate against master >> server by >> default > If users always contact the master, what are the consumers for ? > Need to elaborate on this and explain. > >> - if the authentication is successful and proper policy is set for >> him, the >> user will be added into a specific user group. Each consumer will >> have one >> of these groups. These groups will be used by LDAP filters in >> fractional >> replication to distribute the Krb creds to the chosen Consumers only. > Why should this depend on authentication ?? 
> Keep in mind that changing filters will not cause any replication to > occur, replication would occur only when a change happens. Therefore > placing a user in one group should happen before the kerberos keys are > created. > Also in order to move a user from one group to another, which would > theoretically cause deletion of credentials from a group of servers and > distribution to another we will probably need a plugin. > This plugin would take care of intercepting this special membership > change. > On servers that loos membership this plugin would go and delete locally > stored keys from the user(s) that lost membership. > On servers that gained membership they would have to go to one of the > master and fetch the keys and store them locally, this would need to be > in a way that prevent replication and retain the master modification > time so that later replication events will not conflict in any way. > There i also the problem of rekeying and having different master keys on > hubs/consumers, not an easy problem, and would require quite some custom > changes to the replication protocol for these special entries. > >> - The groups will be created and modified on the master, so they will get >> replicated to all Hubs and Consumers. Hubs make it more complicated >> as they >> must know which groups are relevant for them. Because of that I >> suppose that >> every Hub will have to know about all its 'subordinates' - this >> information >> will have to be generated dynamically - probably on every change to the >> replication topology (adding/removing replicas is usually not a very >> frequent operation) > Hubs will simply be made members of these groups just like consumers. > All members of a group are authorized to do something with that group > membership. The grouping part doesn't seem complicated to me, but I may > have missed a detail, care to elaborate what you see as difficult ? > >> - The policy must also specify the credentials expiration time. If >> user tries to >> authenticate with expired credential, he will be refused and >> redirected to Master >> server for authentication. > How is this different from current status ? All accounts already have > password expiration times and account expiration times. What am I > missing ? > >> ISSUE: How to deal with creds. expiration in replication? The replication of >> credential to the Consumer could be stopped by removing the user >> from the >> Consumer specific user group (mentioned above). The easiest way >> would be to >> delete him when he tries to auth. > See above, we need a plugin IMO. > >> with expired credentials or do a >> regular >> check (intervals specified in policy) and delete all expired creds. > It's not clear to me what we mean by expired creds, what am I missing ? > >> Because >> of the removal of expired creds. we will have to grant the Consumer the >> permission to delete users from the Consumer specific user group >> (but only >> deleting, adding users will be possible on Masters only). > I do not understand this. > >> Offline authentication: >> >> * Consumer (and Hub) must allow write operations just for a small set of >> attributes: last login date and time, count of unsuccessful logins >> and the >> lockup of account > This shouldn't be a problem, we already do that with masters, the trick > is in non replicating those attributes so that they never conflict. > >> - to be able to do that, both Consumers and Hubs must be Masters(from >> 389-DS point of view). > This doesn't sound right at all. 
All server can always write locally, > what prevents them from doing so are referrals/configuration. Consumers > and hubs do not and cannot be masters. > >> When the Master<->Consumer connection is >> broken, the >> lockup information is saved only locally and will be pushed to Master >> on connection restoration. I suppose that only the lockup information >> should >> be replicated. In case of lockup the user will have to authenticate >> against >> Master server only. > What is the lookup information ? What connection is broken ? There > aren't persistent connections between masters and consumers (esp. when > hubs are in between there are none). > > >> Transfer of Krb keys: >> >> * Consumer server will have to have realm krbtgt. > I guess you mean "a krbtgt usuable in the realm", not 'the' realm > krbtgt, right ? > >> This means that we >> will have >> to distribute every Consumer's krbtgt to the Master servers. > It's the other way around. All keys are generated on the masters just > like with any other principal key, and then replicated to consumers. > >> The >> Masters will >> need to have a logic for using those keys instead of the normal krbtgt to >> perform operations when user's krbtgt are presented to a different >> server. > Yes, we will need potentially quite invasive changes to the KDC when the > 'krbtgt' is involved. We will need to plan this ahead with MIT to > validate our idea or see if they have different ideas on how to solve > this problem. > > Simo. > From jdennis at redhat.com Wed Apr 18 19:32:29 2012 From: jdennis at redhat.com (John Dennis) Date: Wed, 18 Apr 2012 15:32:29 -0400 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F8EA671.1000006@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> <4F8EA671.1000006@redhat.com> Message-ID: <4F8F16CD.3070105@redhat.com> On 04/18/2012 07:33 AM, Petr Viktorin wrote: > On 04/16/2012 10:32 PM, John Dennis wrote: >> On 04/12/2012 09:26 AM, Petr Viktorin wrote: >>> On 03/30/2012 03:45 AM, John Dennis wrote: >>>> Translatable strings have certain requirements for proper translation >>>> and run time behaviour. We should routinely validate those strings. A >>>> recent checkin to install/po/test_i18n.py makes it possible to validate >>>> the contents of our current pot file by searching for problematic >>>> strings. >>>> >>>> However, we only occasionally generate a new pot file, far less >>>> frequently than translatable strings change in the source base. Just >>>> like other checkin's to the source which are tested for correctness we >>>> should also validate new or modified translation strings when they are >>>> introduced and not accumulate problems to fix at the last minute. This >>>> would also raise the awareness of developers as to the requirements for >>>> proper string translation. >>>> >>>> The top level Makefile should be enhanced to create a temporary pot >>>> files from the current sources and validate it. We need to use a >>>> temporary pot file because we do not want to modify the pot file under >>>> source code control and exported to Transifex. >>>> >>> >>> NACK >>> >>> install/po/Makefile is not created early enough when running `make rpms` >>> from a clean checkout. >>> >>> # git clean -fx >>> ... 
>>> # make rpms >>> rm -rf /rpmbuild >>> mkdir -p /rpmbuild/BUILD >>> mkdir -p /rpmbuild/RPMS >>> mkdir -p /rpmbuild/SOURCES >>> mkdir -p /rpmbuild/SPECS >>> mkdir -p /rpmbuild/SRPMS >>> mkdir -p dist/rpms >>> mkdir -p dist/srpms >>> if [ ! -e RELEASE ]; then echo 0> RELEASE; fi >>> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >>> freeipa.spec.in> freeipa.spec >>> sed -e s/__VERSION__/2.99.0GITde16a82/ version.m4.in \ >>>> version.m4 >>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/setup.py.in \ >>>> ipapython/setup.py >>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/version.py.in \ >>>> ipapython/version.py >>> perl -pi -e "s:__NUM_VERSION__:2990:" ipapython/version.py >>> perl -pi -e "s:__API_VERSION__:2.34:" ipapython/version.py >>> sed -e s/__VERSION__/2.99.0GITde16a82/ daemons/ipa-version.h.in \ >>>> daemons/ipa-version.h >>> perl -pi -e "s:__NUM_VERSION__:2990:" daemons/ipa-version.h >>> perl -pi -e "s:__DATA_VERSION__:20100614120000:" daemons/ipa-version.h >>> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >>> ipa-client/ipa-client.spec.in> ipa-client/ipa-client.spec >>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipa-client/version.m4.in \ >>>> ipa-client/version.m4 >>> if [ "redhat" != "" ]; then \ >>> sed -e s/SUPPORTED_PLATFORM/redhat/ ipapython/services.py.in \ >>>> ipapython/services.py; \ >>> fi >>> if [ "" != "yes" ]; then \ >>> ./makeapi --validate; \ >>> fi >>> make -C install/po validate-src-strings >>> make[1]: Entering directory `/home/pviktori/freeipa/install/po' >>> make[1]: *** No rule to make target `validate-src-strings'. Stop. >>> make[1]: Leaving directory `/home/pviktori/freeipa/install/po' >>> make: *** [validate-src-strings] Error 2 >> >> Updated patch attached. >> >> The fundamental problem is that we were trying to run code before >> configure had ever been run. We have dependencies on the output and side >> effects of configure. Therefore the solution is to add the >> bootstrap-autogen target as a dependency. >> >> FWIW, I tried an approach that did not require having bootstrap-autogen >> run first by having the validation occur as part of the normal "make >> all". But that had some undesirable properties, first we really only >> want to run validation for developers, not during a normal build, >> secondly the file generation requires a git repo (see below). >> >> But there is another reason to require running bootstrap-autogen prior >> to any validation (e.g. makeapi, make-lint, etc.) Those validation >> utilities need access to generated source files and those generated >> source files are supposed to be generated by the configure step. Right >> now we're generating them as part of updating version information. I >> will post a long email describing the problem on the devel list. So >> we've created a situation we we must run configure early on, even as >> part of "making the distribution" because part of "making the >> distribution" is validating the distribution and that requires the full >> content of the distribution be available. >> >> Also while trying to determine if the i18n validation step executed >> correctly I realized the i18n validation only emitted a message if an >> error was detected. That made it impossible to distinguish between empty >> input (a problem when not running in a development tree) and successful >> validation. Therefore I also updated i18n.py to output counts of >> messages checked which also caused me to fix some validations that were >> missing on plural forms. 
>> >> > > This still doesn't solve the problem: Make doesn't guarantee the order > of dependencies, so with the rule: > > lint: bootstrap-autogen validate-src-strings > > ./make-lint $(LINT_OPTIONS) > the validate-src-strings can fire before bootstrap-autogen: Fixed by having bootstrap-autogen be a dependency of the lint target and adding validate-src-strings to the lint rules. > > $ make lint > if [ ! -e RELEASE ]; then echo 0> RELEASE; fi > /usr/bin/make -C install/po validate-src-strings > make[1]: Entering directory `/home/pviktori/freeipa/install/po' > make[1]: *** No rule to make target `validate-src-strings'. Stop. > make[1]: Leaving directory `/home/pviktori/freeipa/install/po' > make: *** [validate-src-strings] Error 2 > make: *** Waiting for unfinished jobs.... > > When I move the bootstrap-autogen dependency to validate-src-strings, it > works. (Quite a lot of needed for a lint now, but discussion about that > is elsewhere.) I'm not sure if you're concern is with having to run bootstrap-autogen for lint or adding the string validation to the lint check. As I tried to point out in my email irrespective of validating the i18n strings we should have run bootstrap-autogen prior to lint because that is what is supposed to create the generated Python file(s) that we're asking pylint to validate. If the concern is with validating the i18n strings as part of lint then I'll make these observations: 1) validating the source strings is logically part of the lint check. Why? Because lint validates source files. The i18n string validation is also validating the contents of the same source files, it's just doing a job traditional lint can't perform, thus it's a logical extension of a lint type validation. 2) If the concern is with performing extra steps, performance, elapsed time to run the lint check etc. then a) On my laptop i18n validation takes 0.77s elapsed time b) On my laptop make-lint takes 1m17s or 77s elapsed time. c) Validating i18n strings is 100x faster than lint'ing the source code, or put another way adding i18n validation adds a mere 1% to the elapsed time needed to run lint. > Now that there are warnings, is pedantic mode necessary? Great question, I also pondered that as well. My conclusion was there was value in separating aggressiveness of error checking from the verbosity of the output. Also I didn't think we wanted warnings showing in normal checking for things which are suspicious but not known to be incorrect. So under the current scheme pedantic mode enables reporting of suspicious constructs. You can still get a warning in the normal mode for things which aren't fatal but are provably incorrect. An example of this would be missing plural translations, it won't cause a run time failure and we can be definite about their absence, however they should be fixed, but it's not mandatory they be fixed, a warning in this case seems appropriate. > The validate_file function still contains some `return 1` lines, you > should update them all. Or just raise exceptions in these cases. > > $ tests/i18n.py --validate-pot nonexisting_file > file does not exist "nonexisting_file" > Traceback (most recent call last): > File "tests/i18n.py", line 817, in > sys.exit(main()) > File "tests/i18n.py", line 761, in main > n_entries, n_msgids, n_msgstrs, n_warnings, n_errors = > validate_file(f, validation_mode) > TypeError: 'int' object is not iterable Good catch, thank you! (see below) > I'd also return a namedtuple instead of a plain tuple, and have the > caller access the items by name. 
That way, if validate_file decides to > return more items in the future, we won't have to rewrite all calls to it. Yeah, FWIW I thought about making the return values named tuple when I modified the code (are you a mind reader?). Not sure why I didn't, probably because I wanted to limit the engineering effort. Anyway, it's now a named tuple and the return values have been sanitized. Revised patch attached. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0070-2-validate-i18n-strings-when-running-make-lint.patch Type: text/x-patch Size: 11325 bytes Desc: not available URL: From pviktori at redhat.com Thu Apr 19 10:33:15 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 19 Apr 2012 12:33:15 +0200 Subject: [Freeipa-devel] [PATCH] 0039 Remove duplicate and unused utility code Message-ID: <4F8FE9EB.6030602@redhat.com> IPA has some unused code from abandoned features (Radius, ipa 1.x user input, command-line tab completion), as well as some duplicate utilities. This patch cleans up the utility modules. https://fedorahosted.org/freeipa/ticket/2650 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0039-Remove-duplicate-and-unused-utility-code.patch Type: text/x-patch Size: 53410 bytes Desc: not available URL: From pviktori at redhat.com Thu Apr 19 11:04:08 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 19 Apr 2012 13:04:08 +0200 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F8F16CD.3070105@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> <4F8EA671.1000006@redhat.com> <4F8F16CD.3070105@redhat.com> Message-ID: <4F8FF128.1060601@redhat.com> On 04/18/2012 09:32 PM, John Dennis wrote: > On 04/18/2012 07:33 AM, Petr Viktorin wrote: >> On 04/16/2012 10:32 PM, John Dennis wrote: >>> On 04/12/2012 09:26 AM, Petr Viktorin wrote: >>>> On 03/30/2012 03:45 AM, John Dennis wrote: >>>>> Translatable strings have certain requirements for proper translation >>>>> and run time behaviour. We should routinely validate those strings. A >>>>> recent checkin to install/po/test_i18n.py makes it possible to >>>>> validate >>>>> the contents of our current pot file by searching for problematic >>>>> strings. >>>>> >>>>> However, we only occasionally generate a new pot file, far less >>>>> frequently than translatable strings change in the source base. Just >>>>> like other checkin's to the source which are tested for correctness we >>>>> should also validate new or modified translation strings when they are >>>>> introduced and not accumulate problems to fix at the last minute. This >>>>> would also raise the awareness of developers as to the requirements >>>>> for >>>>> proper string translation. >>>>> >>>>> The top level Makefile should be enhanced to create a temporary pot >>>>> files from the current sources and validate it. We need to use a >>>>> temporary pot file because we do not want to modify the pot file under >>>>> source code control and exported to Transifex. >>>>> >>>> >>>> NACK >>>> >>>> install/po/Makefile is not created early enough when running `make >>>> rpms` >>>> from a clean checkout. >>>> >>>> # git clean -fx >>>> ... 
>>>> # make rpms >>>> rm -rf /rpmbuild >>>> mkdir -p /rpmbuild/BUILD >>>> mkdir -p /rpmbuild/RPMS >>>> mkdir -p /rpmbuild/SOURCES >>>> mkdir -p /rpmbuild/SPECS >>>> mkdir -p /rpmbuild/SRPMS >>>> mkdir -p dist/rpms >>>> mkdir -p dist/srpms >>>> if [ ! -e RELEASE ]; then echo 0> RELEASE; fi >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >>>> freeipa.spec.in> freeipa.spec >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ version.m4.in \ >>>>> version.m4 >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/setup.py.in \ >>>>> ipapython/setup.py >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipapython/version.py.in \ >>>>> ipapython/version.py >>>> perl -pi -e "s:__NUM_VERSION__:2990:" ipapython/version.py >>>> perl -pi -e "s:__API_VERSION__:2.34:" ipapython/version.py >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ daemons/ipa-version.h.in \ >>>>> daemons/ipa-version.h >>>> perl -pi -e "s:__NUM_VERSION__:2990:" daemons/ipa-version.h >>>> perl -pi -e "s:__DATA_VERSION__:20100614120000:" daemons/ipa-version.h >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ -e s/__RELEASE__/0/ \ >>>> ipa-client/ipa-client.spec.in> ipa-client/ipa-client.spec >>>> sed -e s/__VERSION__/2.99.0GITde16a82/ ipa-client/version.m4.in \ >>>>> ipa-client/version.m4 >>>> if [ "redhat" != "" ]; then \ >>>> sed -e s/SUPPORTED_PLATFORM/redhat/ ipapython/services.py.in \ >>>>> ipapython/services.py; \ >>>> fi >>>> if [ "" != "yes" ]; then \ >>>> ./makeapi --validate; \ >>>> fi >>>> make -C install/po validate-src-strings >>>> make[1]: Entering directory `/home/pviktori/freeipa/install/po' >>>> make[1]: *** No rule to make target `validate-src-strings'. Stop. >>>> make[1]: Leaving directory `/home/pviktori/freeipa/install/po' >>>> make: *** [validate-src-strings] Error 2 >>> >>> Updated patch attached. >>> >>> The fundamental problem is that we were trying to run code before >>> configure had ever been run. We have dependencies on the output and side >>> effects of configure. Therefore the solution is to add the >>> bootstrap-autogen target as a dependency. >>> >>> FWIW, I tried an approach that did not require having bootstrap-autogen >>> run first by having the validation occur as part of the normal "make >>> all". But that had some undesirable properties, first we really only >>> want to run validation for developers, not during a normal build, >>> secondly the file generation requires a git repo (see below). >>> >>> But there is another reason to require running bootstrap-autogen prior >>> to any validation (e.g. makeapi, make-lint, etc.) Those validation >>> utilities need access to generated source files and those generated >>> source files are supposed to be generated by the configure step. Right >>> now we're generating them as part of updating version information. I >>> will post a long email describing the problem on the devel list. So >>> we've created a situation we we must run configure early on, even as >>> part of "making the distribution" because part of "making the >>> distribution" is validating the distribution and that requires the full >>> content of the distribution be available. >>> >>> Also while trying to determine if the i18n validation step executed >>> correctly I realized the i18n validation only emitted a message if an >>> error was detected. That made it impossible to distinguish between empty >>> input (a problem when not running in a development tree) and successful >>> validation. 
Therefore I also updated i18n.py to output counts of >>> messages checked which also caused me to fix some validations that were >>> missing on plural forms. >>> >>> >> >> This still doesn't solve the problem: Make doesn't guarantee the order >> of dependencies, so with the rule: >> > lint: bootstrap-autogen validate-src-strings >> > ./make-lint $(LINT_OPTIONS) >> the validate-src-strings can fire before bootstrap-autogen: > > Fixed by having bootstrap-autogen be a dependency of the lint target and > adding validate-src-strings to the lint rules. > >> >> $ make lint >> if [ ! -e RELEASE ]; then echo 0> RELEASE; fi >> /usr/bin/make -C install/po validate-src-strings >> make[1]: Entering directory `/home/pviktori/freeipa/install/po' >> make[1]: *** No rule to make target `validate-src-strings'. Stop. >> make[1]: Leaving directory `/home/pviktori/freeipa/install/po' >> make: *** [validate-src-strings] Error 2 >> make: *** Waiting for unfinished jobs.... >> >> When I move the bootstrap-autogen dependency to validate-src-strings, it >> works. (Quite a lot of needed for a lint now, but discussion about that >> is elsewhere.) > > I'm not sure if you're concern is with having to run bootstrap-autogen > for lint or adding the string validation to the lint check. > > As I tried to point out in my email irrespective of validating the i18n > strings we should have run bootstrap-autogen prior to lint because that > is what is supposed to create the generated Python file(s) that we're > asking pylint to validate. > > If the concern is with validating the i18n strings as part of lint then > I'll make these observations: > > 1) validating the source strings is logically part of the lint check. > Why? Because lint validates source files. The i18n string validation is > also validating the contents of the same source files, it's just doing a > job traditional lint can't perform, thus it's a logical extension of a > lint type validation. > > 2) If the concern is with performing extra steps, performance, elapsed > time to run the lint check etc. then > > a) On my laptop i18n validation takes 0.77s elapsed time > > b) On my laptop make-lint takes 1m17s or 77s elapsed time. > > c) Validating i18n strings is 100x faster than lint'ing the source > code, or put another way adding i18n validation adds a mere 1% to > the elapsed time needed to run lint. Right, your solution is correct here. >> Now that there are warnings, is pedantic mode necessary? > > Great question, I also pondered that as well. My conclusion was there > was value in separating aggressiveness of error checking from the > verbosity of the output. Also I didn't think we wanted warnings showing > in normal checking for things which are suspicious but not known to be > incorrect. So under the current scheme pedantic mode enables reporting > of suspicious constructs. You can still get a warning in the normal mode > for things which aren't fatal but are provably incorrect. An example of > this would be missing plural translations, it won't cause a run time > failure and we can be definite about their absence, however they should > be fixed, but it's not mandatory they be fixed, a warning in this case > seems appropriate. If they should be fixed, we should fix them, and treat them as errors the same way we treat lint's warnings as errors. If the pedantic mode is an obscure option of some test module, I worry that nobody will ever run it. Separating aggressiveness of checking from verbosity is not a bad idea. 
But since we now have two severity levels, and the checking is cheap, I'm not convinced that the aggressiveness should be tunable. How about always counting the pedantic warnings, but not showing the details? Then, if such warnings are found, have the tool say how to run it to get a full report. That way people will notice it. >> The validate_file function still contains some `return 1` lines, you >> should update them all. Or just raise exceptions in these cases. >> >> $ tests/i18n.py --validate-pot nonexisting_file >> file does not exist "nonexisting_file" >> Traceback (most recent call last): >> File "tests/i18n.py", line 817, in >> sys.exit(main()) >> File "tests/i18n.py", line 761, in main >> n_entries, n_msgids, n_msgstrs, n_warnings, n_errors = >> validate_file(f, validation_mode) >> TypeError: 'int' object is not iterable > > Good catch, thank you! (see below) > >> I'd also return a namedtuple instead of a plain tuple, and have the >> caller access the items by name. That way, if validate_file decides to >> return more items in the future, we won't have to rewrite all calls to >> it. > > Yeah, FWIW I thought about making the return values named tuple when I > modified the code (are you a mind reader?). There should, after all, be one obvious way to do it :) > Not sure why I didn't, > probably because I wanted to limit the engineering effort. Anyway, it's > now a named tuple and the return values have been sanitized. > > Revised patch attached. -- Petr? From pviktori at redhat.com Thu Apr 19 11:40:49 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 19 Apr 2012 13:40:49 +0200 Subject: [Freeipa-devel] [PATCH] 0014 Add final debug message in installers In-Reply-To: <4F8DF0EE.9020205@redhat.com> References: <4F44B860.9050809@redhat.com> <4F4BB48C.7010200@redhat.com> <1330535684.32367.30.camel@balmora.brq.redhat.com> <4F4E728C.2070109@redhat.com> <4F4F5363.4030408@redhat.com> <4F61C4BC.60809@redhat.com> <4F6C9049.3030507@redhat.com> <4F708335.7030400@redhat.com> <4F708CBA.9000608@redhat.com> <4F758939.4060503@redhat.com> <4F761EF5.4050704@redhat.com> <4F7C3864.3070505@redhat.com> <4F849AE5.5030409@redhat.com> <4F86BCDE.1080508@redhat.com> <4F87FF2B.90108@redhat.com> <4F8C5270.5030605@redhat.com> <4F8C5773.4030200@redhat.com> <4F8C7CB3.2080309@redhat.com> <4F8C7DE7.8080609@redhat.com> <4F8C9438.8090805@redhat.com> <4F8C994E.7070605@redhat.com> <4F8D708E.3050701@redhat.com> <4F8D7A43.5060303@redhat.com> <4F8D802E.6010201@redhat.com> <4F8D9E5D.9000901@redhat.com> <4F8DA4B4.1010409@redhat.com> <4F8DF0EE.9020205@redhat.com> Message-ID: <4F8FF9C1.8020005@redhat.com> On 04/18/2012 12:38 AM, Dmitri Pal wrote: > On 04/17/2012 01:13 PM, Petr Viktorin wrote: >> On 04/17/2012 06:46 PM, John Dennis wrote: >>> Thank you for the explanation Petr, it's very much appreciated. >>> >>> I do have a problem with this patch and I'm inclined to NAK it, but I'm >>> open to discussion. Here's my thoughts, if I've made mistakes in my >>> reasoning please point them out. >>> >>> The fundamental problem is many of our command line utilities do not do >>> logging correctly. >>> >>> Fixing the logging is not terribly difficult but it raises concerns over >>> invasive changes at the last minute. >>> >>> To address the problem we're going to introduce what can only be called >>> a "hack" to compensate for the original deficiency. The hack is a bit >>> obscure and convoluted (and I'm not sure entirely robust). It introduces >>> enough complexity it's not obvious or easy to see what is going on. 
Code >>> that obscures should be treated with skepticism and be justified by >>> important needs. I'm also afraid what should really be a short term >>> work-around will get ensconced in the code and never cleaned up, it will >>> be duplicated, and used as an example of how things are supposed to >>> work. >>> >>> So my question is, "Is the output of the command line utilities so >>> broken that it justifies introducing this?" and "Is this any less >>> invasive than simply fixing the messages in the utilities cleanly and >>> not introducing a hack?" >>> >>> >> >> Yes, it's a hack, because it needs to be non-invasive. It does that by >> not modifying the scripts themselves, but just wrapping them in the >> context handler. So no functionality is broken, there are just >> problems with extra messages printed or not printed. I think that's >> the least invasive thing to do. So the answer to your second question >> is yes. I'm not experienced enough to answer the first one. >> >> I opened https://fedorahosted.org/freeipa/ticket/2652 to track the >> larger issue that needs solving: removing the code duplication in >> these tools. This includes a common way to configure logging and >> report errors. >> >> >> > Let us do the hack and pick the cleanup task in 3.0 so that we have > things done correctly for future. > If this task is not enough let us open other tasks to make sure that we > track correctly the need to the right fix. > We can even revert the change in 3.0 and go a different path. > > NACK We have agreed outside this list to drop the issue for 2.2, and fix it properly for 3.0. -- Petr? From ohamada at redhat.com Thu Apr 19 12:18:15 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Thu, 19 Apr 2012 14:18:15 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F8F084C.5090106@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> Message-ID: <4F900287.8080606@redhat.com> On 04/18/2012 08:30 PM, Rich Megginson wrote: > On 04/17/2012 06:42 AM, Simo Sorce wrote: >> On Tue, 2012-04-17 at 01:13 +0200, Ondrej Hamada wrote: >>>>> Sorry for inactivity, I was struggling with a lot of school stuff. >>>>> >>>>> I've summed up the main goals, do you agree on them or should I >>>>> add/remove any? 
>>>>> >>>>> >>>>> GOALS >>>>> =========================================== >>>>> Create Hub and Consumer types of replica with following features: >>>>> >>>>> * Hub is read-only >>>>> >>>>> * Hub interconnects Masters with Consumers or Masters with Hubs >>>>> or Hubs with other Hubs >>>>> >>>>> * Hub is hidden in the network topology >>>>> >>>>> * Consumer is read-only >>>>> >>>>> * Consumer interconnects Masters/Hubs with clients >>>>> >>>>> * Write operations should be forwarded to Master >>>>> >>>>> * Consumer should be able to log users into system without >>>>> communication with master >>>> We need to define how this can be done, it will almost certainly mean >>>> part of the consumer is writable, plus it also means you need >>>> additional >>>> access control and policies, on what the Consumer should be allowed to >>>> see. >>>> >>>>> * Consumer should cache user's credentials >>>> Ok what credentials ? As I explained earlier Kerberos creds cannot >>>> really be cached. Either they are transferred with replication or the >>>> KDC needs to be change to do chaining. Neither I consider as >>>> 'caching'. >>>> A password obtained through an LDAP bind could be cached, but I am not >>>> sure it is worth it. >>>> >>>>> * Caching of credentials should be configurable >>>> See above. >>>> >>>>> * CA server should not be allowed on Hubs and Consumers >>>> Missing points: >>>> - Masters should not transfer KRB keys to HUBs/Consumers by default. >>>> >>>> - We need selective replication if you want to allow distributing a >>>> partial set of Kerberos credentials to consumers. With Hubs it becomes >>>> complicated to decide what to replicate about credentials. >>>> >>>> Simo. >>>> >>> Can you please have a look at this draft and comment it please? >>> >>> >>> Design document draft: More types of replicas in FreeIPA >>> >>> GOALS >>> ================================================================= >>> >>> Create Hub and Consumer types of replica with following features: >>> >>> * Hub is read-only >>> >>> * Hub interconnects Masters with Consumers or Masters with Hubs >>> or Hubs with other Hubs >>> >>> * Hub is hidden in the network topology >>> >>> * Consumer is read-only >>> >>> * Consumer interconnects Masters/Hubs with clients >>> >>> * Write operations should be forwarded to Master >> Do we need to specify how this is done ? Referrals vs Chain-on-update ? Both options are in game. >> >>> * Consumer should be able to log users into system without >>> communication with master >>> >>> * Consumer should be able to store user's credentials >> Can you expand on this ? Do you mean user keys ? Yes, the consumer should be able to store all data necessary for user being authenticated. >> >>> * Storing of credentials should be configurable and disabled by default >>> >>> * Credentials expiration on replica should be configurable >> What does this mean ? We should store credentials for a subset of users only. As this subset might change over time, we should flush the credentials for users that haven't showed up for some while (even despite the credentials are not expired yet). >> >>> * CA server should not be allowed on Hubs and Consumers >>> >>> ISSUES >>> ================================================================= >>> >>> - SSSD is currently supposed to cooperate with one LDAP server only >> Is this a problem in having an LDAP server that doesn't also have a KDC >> on the same host ? Or something else ? 
>> >>> - OpenLDAP client and its support for referrals >> Should we avoid referrals and use chain-on-update ? Maybe. I've come across several mentions that referral support in the OpenLDAP client is not working properly.
>> What does it mean for access control ? >> How do consumers authenticate to masters ? >> Should we use s4u2proxy ? >> >>> - 389-DS allows replication of whole suffix only >> What kind of filters do we think we need ? We can already exclude >> specific attributes from replication. > > fractional replication had originally planned to support search > filters in addition to attribute lists - I think Ondrej wants to > include or exclude certain entries from being replicated Yes, my point is that the Consumer should store credentials only for users that are authenticating against it, so we need to exclude some attributes, but just for a specific subset of users.
> >> >>> - Storing credentials and allowing authentication against Consumer >>> server >>> >>> >>> POSSIBLE SOLUTIONS >>> ================================================================= >>> >>> 389-DS allows replication of whole suffix only: >>> >>> * Rich said that they are planning to allow the fractional replication >>> in DS to >>> use LDAP filters. It will allow us to do selective replication, which >>> is mainly >>> important for replication of user's credentials. >> I guess we want to do this to selectively prevent replication of only >> some kerberos keys ? Based on groups ? Would filters allow that using >> memberof ? > > Using filters with fractional replication would allow you to include > or exclude anything that can be expressed as an LDAP search filter
> >> >>> ______________________________________ >>> >>> Forwarding of requests in LDAP: >>> >>> * use existing 389-DS plugin "Chain-on-update" - we can try it as a >>> proof of >>> concept solution, but for real deployment it won't be a very good >>> solution >>> as it >>> will increase the demands on Hubs. >> Why do you think it would increase demands for hubs ? Doesn't the >> consumer directly contact the masters skipping the hubs ? > > Yeah, not sure what you mean here, unless you are taking the document > http://port389.org/wiki/Howto:ChainOnUpdate as the only way to > implement chain on update - it is not - that document was taken from > an early proof-of-concept for a planned deployment at a customer many > years ago. > You're right, my fault - I took that document too dogmatically.
>> >>> * a better way is to use referrals. The master server(s) to be >>> referred >>> might be: >>> 1) specified at install time >> This is not really useful, as it would break updates every time the >> specified master is offline, it would also require some work to >> reconfigure stuff if the master is retired. Agreed, I just wanted to list the option to be sure it's not the way to go.
>> >>> 2) looked up in DNS records >> Probably easier to look up in LDAP, we have a record for each master in >> the domain. >> >>> 3) find master dynamically - Consumers and Hubs will be in fact >>> master >>> servers (from 389-DS point of view), this means that every >>> consumer or hub >>> knows its direct suppliers and they know their suppliers ... >> Not clear what this means, can you elaborate ? Replication agreements possess the information about suppliers. It means we can dynamically discover where the masters are by going through all nodes and asking who their suppliers are. Thinking about it again, it would probably be very slow and less reliable. Looking up the DNS records in LDAP would be better.
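Just to illustrate what the SRV-record variant of the lookup could look like on the consumer side (a rough sketch only, assuming the dnspython package; whether the records ultimately come from a plain DNS server or from the LDAP-backed zone does not change the client side, and the domain name is an example):

import dns.resolver

def discover_masters(domain='example.com'):
    # Query the LDAP SRV records for the IPA domain and return
    # (host, port) pairs ordered by priority and weight.
    masters = []
    try:
        answers = dns.resolver.query('_ldap._tcp.' + domain, 'SRV')
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return masters
    for srv in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        masters.append((str(srv.target).rstrip('.'), srv.port))
    return masters

Whichever lookup we end up with, the consumer would only need it when a write has to be forwarded, so the extra query should be rare.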
>> >>> ISSUE: support for referrals in OpenLDAP client >> We've had quite some issues with referrals indeed, and a lot of client >> software does not properly handle referrals. That would leave a bunch >> of clients unable to modify the Directory. OTOH very few clients need to >> modify the directory, so maybe that's good enough.
>> >>> * SSSD must be improved to allow cooperation with more than one LDAP >>> server >> Can you elaborate what you think is missing in SSSD ? Is it about the >> need to fix referrals handling ? Or something else ? I'm afraid of the situation when a user authenticates and the information is not present on the Consumer. If we use referrals and the authentication has to be done against the master, would SSSD be able to handle it?
>> >>> _____________________________________ >>> >>> Authentication and replication of credentials: >>> >>> * authentication policies, every user must authenticate against master >>> server by >>> default >> If users always contact the master, what are the consumers for ? >> Need to elaborate on this and explain. As was mentioned earlier in the discussion, there are two scenarios - in the first one the consumer serves only as a source of information (dns, ntp, accounts...), the second one allows distribution of credentials and thus enables authentication against the consumer locally. The first one is more secure since the creds are not stored on consumers that might be more easily corrupted.
>> >>> - if the authentication is successful and proper policy is set for >>> him, the >>> user will be added into a specific user group. Each consumer will >>> have one >>> of these groups. These groups will be used by LDAP filters in >>> fractional >>> replication to distribute the Krb creds to the chosen >>> Consumers only. >> Why should this depend on authentication ?? >> Keep in mind that changing filters will not cause any replication to >> occur, replication would occur only when a change happens. Therefore >> placing a user in one group should happen before the kerberos keys are >> created. >> Also, in order to move a user from one group to another, which would >> theoretically cause deletion of credentials from a group of servers and >> distribution to another, we will probably need a plugin. >> This plugin would take care of intercepting this special membership >> change. >> On servers that lose membership this plugin would go and delete locally >> stored keys from the user(s) that lost membership. >> On servers that gained membership they would have to go to one of the >> masters and fetch the keys and store them locally; this would need to be >> done in a way that prevents replication and retains the master modification >> time so that later replication events will not conflict in any way. >> There is also the problem of rekeying and having different master keys on >> hubs/consumers, not an easy problem, and it would require quite some custom >> changes to the replication protocol for these special entries. >> I agree, it seems like a good idea. >>> - The groups will be created and modified on the master, so they >>> will get >>> replicated to all Hubs and Consumers. Hubs make it more >>> complicated >>> as they >>> must know which groups are relevant for them.
Because of that I >>> suppose that >>> every Hub will have to know about all its 'subordinates' - this >>> information >>> will have to be generated dynamically - probably on every >>> change to the >>> replication topology (adding/removing replicas is usually not >>> a very >>> frequent operation) >> Hubs will simply be made members of these groups just like consumers. >> All members of a group are authorized to do something with that group >> membership. The grouping part doesn't seem complicated to me, but I may >> have missed a detail, care to elaborate what you see as difficult ?
>> >>> - The policy must also specify the credentials expiration time. If >>> a user tries to >>> authenticate with an expired credential, he will be refused and >>> redirected to the Master >>> server for authentication. >> How is this different from current status ? All accounts already have >> password expiration times and account expiration times. What am I >> missing ? Sorry, I didn't put it clearly. I meant that the credentials we store on the Consumer should only be available there for a specified period of time. After that time they should be flushed away (meaning they are still valid, just no longer stored on the Consumer), no matter what their expiration time is. This is mainly for the scenario when someone authenticates against our Consumer on some occasion and then never comes back - it's pointless to keep storing his credentials, so I think it should be possible to define some limit for storing credentials.
>> >>> ISSUE: How to deal with creds. expiration in replication? The >>> replication of >>> credentials to the Consumer could be stopped by removing the user >>> from the >>> Consumer specific user group (mentioned above). The easiest way >>> would be to >>> delete him when he tries to auth. >> See above, we need a plugin IMO. >> >>> with expired credentials or do a >>> regular >>> check (intervals specified in policy) and delete all expired >>> creds. >> It's not clear to me what we mean by expired creds, what am I missing ? see above
>> >>> Because >>> of the removal of expired creds. we will have to grant the >>> Consumer the >>> permission to delete users from the Consumer specific user group >>> (but only >>> deleting, adding users will be possible on Masters only). >> I do not understand this. When a user hasn't authenticated against the Consumer for a long time and his credentials were flushed from the Consumer, his credentials should also be excluded from replication to the Consumer. This might be solved by the proposed plugin as well.
>> >>> Offline authentication: >>> >>> * Consumer (and Hub) must allow write operations just for a small >>> set of >>> attributes: last login date and time, count of unsuccessful logins >>> and the >>> lockup of account >> This shouldn't be a problem, we already do that with masters, the trick >> is in not replicating those attributes so that they never conflict. >> >>> - to be able to do that, both Consumers and Hubs must be >>> Masters (from >>> 389-DS point of view). >> This doesn't sound right at all. All servers can always write locally, >> what prevents them from doing so are referrals/configuration. Consumers >> and hubs do not and cannot be masters. But what about the information about an account getting locked? We need to lock the account locally immediately and also tell the master (and thus every other node) that the specific account is locked.
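To make the "small set of attributes" concrete, something like the following is what a consumer-side hook would have to write locally on a failed bind. This is illustrative only, not an existing IPA plugin; python-ldap is assumed and the attribute names are the usual Kerberos schema ones, so treat them as examples:

import ldap

def record_failed_login(conn, user_dn, failed_count, timestamp):
    # conn is an already bound python-ldap connection to the local consumer.
    # timestamp is generalized time, e.g. '20120419120000Z'.
    mods = [
        (ldap.MOD_REPLACE, 'krbLoginFailedCount', [str(failed_count)]),
        (ldap.MOD_REPLACE, 'krbLastFailedAuth', [timestamp]),
    ]
    conn.modify_s(user_dn, mods)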
>> >>> When the Master<->Consumer connection is >>> broken, the >>> lockup information is saved only locally and will be pushed to >>> Master >>> on connection restoration. I suppose that only the lockup >>> information >>> should >>> be replicated. In case of lockup the user will have to authenticate >>> against >>> Master server only. >> What is the lookup information ? What connection is broken ? There >> aren't persistent connections between masters and consumers (esp. when >> hubs are in between there are none). typo there probably - by lock up information I mean reporting the situation, when the account gets locked due to too many failed logins. Broken connection - when it is not possible to tell the master, that the account got locked. >> >> >>> Transfer of Krb keys: >>> >>> * Consumer server will have to have realm krbtgt. >> I guess you mean "a krbtgt usuable in the realm", not 'the' realm >> krbtgt, right ? right >> >>> This means that we >>> will have >>> to distribute every Consumer's krbtgt to the Master servers. >> It's the other way around. All keys are generated on the masters just >> like with any other principal key, and then replicated to consumers. >> >>> The >>> Masters will >>> need to have a logic for using those keys instead of the normal >>> krbtgt to >>> perform operations when user's krbtgt are presented to a different >>> server. >> Yes, we will need potentially quite invasive changes to the KDC when the >> 'krbtgt' is involved. We will need to plan this ahead with MIT to >> validate our idea or see if they have different ideas on how to solve >> this problem. >> >> Simo. >> > -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From simo at redhat.com Thu Apr 19 13:03:51 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 09:03:51 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F900287.8080606@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> Message-ID: <1334840631.16658.325.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 14:18 +0200, Ondrej Hamada wrote: > On 04/18/2012 08:30 PM, Rich Megginson wrote: > >>> * Credentials expiration on replica should be configurable > >> What does this mean ? > We should store credentials for a subset of users only. As this subset > might change over time, we should flush the credentials for users that > haven't showed up for some while (even despite the credentials are not > expired yet). This should be determined through group membership or similar mechanism, talking about 'expiration' seem wrong and confusing, perhaps just a language problem ? 
> > fractional replication had originally planned to support search > > filters in addition to attribute lists - I think Ondrej wants to > > include or exclude certain entries from being replicated > > Yes, my point is, that the Consumer should strore credentials only for > users that are authenticating against him, so we need to exclude some > attributes, but just for specific subset of users. I am not sure we can achieve this, with just a fractional replication filter, not easily anyway. A search filter singles out entire entries. In order to have different sets of attributes replicated we need an additional, per-filter attribute exclusion list. > >>> 3) find master dynamically - Consumers and Hubs will be in fact > >>> master > >>> servers (from 389-DS point of view), this means that every > >>> consumer or hub > >>> knows his direct suppliers a they know their suppliers ... > >> Not clear what this means, can you elaborate ? > Replication agreements posses the information about suppliers. It means > we can dynamically discover where are the masters by going through all > nodes and asking who's their supplier. Thinking about it again, it will > be probably very slow and less reliable. The lookup of dns records in > LDAP would be better. Neither, we have the list of masters in LDAP in the cn=etc subtree for these uses, it's a simple search, and it is the authoritative list. Remember we may not always control the DNS, so relying on a manually maintained DNS would be bad. > >>> * SSSD must be improved to allow cooperation with more than one LDAP > >>> server > >> Can you elaborate what you think is missing in SSSD ? Is it about the > >> need to fix referrals handling ? Or something else ? > I'm afraid of the situation when user authenticates and the information > is not present on Consumer. If we'll use referrals and the > authentication will have to be done against master, would the SSSD be > able to handle it? Currently SSSD can handle referrals, although it does so poorly due to issues with the openldap libraries. Stephen tells me there are plans to handle referrals in the SSSD code directly instead of deferring to openldap libs. When that is done we should have no more issues. However, for authentication purposes I am not sure referrals are the way to go. For the Kerberos case referrals won't work, because we will not let a consumer have read access to keys in a master (besides the consumer will not have the same master key so will not be able to decrypt them), so we will need to handle the Krb case differently. For ldap binds, we might do referrals, or we could chain binds and avoid that issue entirely. If we chain binds we can also temporarily cache credentials in the same way we do in SSSD so that if the server get cut off the network it can keep serving requests. I am not thrilled about caching users passwords this way and should probably not enabled by default, but we'd have the option. > >>> * authentication policies, every user must authenticate against master > >>> server by > >>> default > >> If users always contact the master, what are the consumers for ? > >> Need to elaborate on this and explain. > As was mentioned earlier in the discussion, there are two scenarios - in > the first one the consumer serves only as a source of > information(dns,ntp,accounts...), the second one allows distribution of > credentials and thus enables the authentication against the consumer > locally. 
The first one is more secure since the creds are not stored on > consumers that might be more easily corrupted. Ok, makes sense, but I would handle this transparently to the clients, as noted above. Trying to build knowledge in clients or rely on referrals is going to work poorly with a lot of clients, making the solution not really useful in real deployments where a mix of machines that do not use SSSD is present. > >>> - The policy must also specify the credentials expiration time. If > >>> user tries to > >>> authenticate with expired credential, he will be refused and > >>> redirected to Master > >>> server for authentication. > >> How is this different from current status ? All accounts already have > >> password expiration times and account expiration times. What am I > >> missing ? > Sorry, I wrote it unclear. I meant that the credentials, we store on > Consumer should be there available only for a specified period of time. Why ? > After that time they should be flushed away (means they are still valid, > just not stored on the Consumer), no matter what is their expiration > time. I do not see what's the point. If we are replicating the keys to a consumer why would it try to delete them after a while ? > This is mainly for the scenario when someone authenticates against > our Consumer on some occasion and then he never gets back - it's > worthless storing his credentials any more, so I think that the it > should be possible to define some limit for storing credentials. Ok so we are mixing things here. I guess the scenario you are referring to here, is the one where we do not replicate any key to the consumer, and some user does an ldap bind against with chained-binds and we do decide to cache the password as a hash if auth is successful. This is a rare corner case in my mind. Note that we cannot do this with Kerberos. we can't "cache" kerberos keys, we either have a copy permanently (or until the policy/group membership of the consumer is changed) or never have them. > >>> Because > >>> of the removal of expired creds. we will have to grant the > >>> Consumer the > >>> permission to delete users from the Consumer specific user group > >>> (but only > >>> deleting, adding users will be possible on Masters only). > >> I do not understand this. > When user hasn't authenticated against Consumer for a long time and his > credentials were flushed from Consumer, his credentials should be also > omitted from being replicated to the Consumer. This might be solved by > the proposed plugin as well. Either the user is marked as part of the location server by this consumer, and therefore we replicate keys or we do not. We cannot delete keys, as nothing would replicate them back to us until a password change occurs. Also, you have no way to tell the master what it should replicate, dynamically. I would remove this point, it is not something we need or want, I think. > >>> - to be able to do that, both Consumers and Hubs must be > >>> Masters(from > >>> 389-DS point of view). > >> This doesn't sound right at all. All server can always write locally, > >> what prevents them from doing so are referrals/configuration. Consumers > >> and hubs do not and cannot be masters. > But what about the information of account getting locked? We need to > lock the account locally immediately and also tell the master (and thus > every other node) that the specific account is locked. For account lockouts we will need to do an explicit write to a master. 
(probably yet another plugin or an option to the password plugin). We cannot use replication to forward this information, as consumers do not have a replication agreement that goes from consumer to master.
> >>> When the Master<->Consumer connection is > >>> broken, the > >>> lockup information is saved only locally and will be pushed to > >>> Master > >>> on connection restoration. I suppose that only the lockup > >>> information > >>> should > >>> be replicated. In case of lockup the user will have to authenticate > >>> against > >>> Master server only. > >> What is the lookup information ? What connection is broken ? There > >> aren't persistent connections between masters and consumers (esp. when > >> hubs are in between there are none). > typo there probably - by lock up information I mean reporting the > situation when the account gets locked due to too many failed logins. This will be hard indeed.
> Broken connection - when it is not possible to tell the master that the > account got locked. Ok, replace with "when the masters are unreachable". Simo. -- Simo Sorce * Red Hat, Inc * New York
From jdennis at redhat.com Thu Apr 19 13:04:34 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 19 Apr 2012 09:04:34 -0400 Subject: [Freeipa-devel] [PATCH 74] Fix name error in hbactest Message-ID: <201204191304.q3JD4YV8021634@int-mx10.intmail.prod.int.phx2.redhat.com> Ticket #2512
In hbactest.py there is a name error wrapped inside a try/except block that ignores all errors, so the code block exits prematurely, leaving a critical variable uninitialized. The name error is the result of a cut-n-paste error that references a variable that had never been initialized in the scope of the code block. Python generates an exception when this variable is referenced, but because it's wrapped in a try/except block that catches all errors and ignores all errors, there is no evidence that something went wrong. The fix is to use the correct variables.
At some point we may want to revisit whether ignoring all errors and proceeding as if nothing happened is actually correct. Alexander tells me this mimics what SSSD does in the hbac rule processing, thus the ignoring of errors is intentional. But in a plugin whose purpose is to test and exercise hbac rules I'm not sure ignoring all errors is really the right behavior. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jdennis-0074-Fix-name-error-in-hbactest.patch Type: text/x-patch Size: 2576 bytes Desc: not available URL:
From pspacek at redhat.com Thu Apr 19 13:10:39 2012 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 19 Apr 2012 15:10:39 +0200 Subject: [Freeipa-devel] IP address check during IPA install In-Reply-To: <4F8ED793.2070401@redhat.com> References: <4F8EC7D4.6050007@redhat.com> <4F8ED793.2070401@redhat.com> Message-ID: <4F900ECF.7060104@redhat.com> On 04/18/2012 05:02 PM, Dmitri Pal wrote: > On 04/18/2012 09:55 AM, Petr Spacek wrote: >> Hello, >> >> please, can somebody explain to me why our installer strictly checks >> IP addresses? I have been wondering about it since yesterday's IPA meeting and still >> can't get it. >> >> My naive insight is: "It's a network layer problem and the application >> shouldn't care." >> >> Of course, there are many protocols with endpoint addresses inside >> application messages (like SIP or RTSP) for various reasons. Where are >> these addresses in our case? >> >> HTTP, LDAP, DNS and NTP should be Ok, I think.
Or they aren't? >> >> It's Kerberos problem? I know about client IP address inside Kerberos >> ticket, but AFAIK it's usually filled with some constant with >> "ANY_ADDRESS meaning". >> >> I often travel with tickets in credentials cache and these tickets >> still work, when I change location and IP address. >> >> So - what I missed? Why pure NAT should create a problem? >> > > The problem is not the specific address. The problem is badly configured > system. If the host<-> IP can't be resolved cleanly you get a problem > with Kerberos and install will fail. This is why we make sure the name > resolves properly and reverse lookups work at the install time. It does > not matter what IP you have as long as it properly resolves. Ok, I understand that. Error message "No network interface matches the provided IP address and netmask" confused me. I thought it was explicit IP address check, not a DNS check. There should be absolutely clear error message about that, not something cryptic like current message. (It is extraordinarily confusing in situation when you didn't provide any address explicitly :-) I created ticket for this: https://fedorahosted.org/freeipa/ticket/2654 >> Thanks for clarification! Petr^2 Spacek From abokovoy at redhat.com Thu Apr 19 13:17:29 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 19 Apr 2012 16:17:29 +0300 Subject: [Freeipa-devel] [PATCH 74] Fix name error in hbactest In-Reply-To: <201204191304.q3JD4YV8021634@int-mx10.intmail.prod.int.phx2.redhat.com> References: <201204191304.q3JD4YV8021634@int-mx10.intmail.prod.int.phx2.redhat.com> Message-ID: <20120419131729.GC21946@redhat.com> On Thu, 19 Apr 2012, John Dennis wrote: >Ticket #2512 > >In hbactest.py there is a name error wrapped inside a try/except block >that ignores all errors so the code block exits prematurely leaving a >critical variable uninitialized. > >The name error is the result of a cut-n-paste error that references a >variable that had never been initialized in the scope of the code >block. Python generates an exception when this variable is referenced >but because it's wrapped in a try/except block that catches all errors >and ignores all errors there is no evidence that something went wrong. > >The fix is to use the correct variables. > >At some point we may want to revist if ignoring all errors and >proceding as if nothing happened is actually correct. Alexander tells >me this mimics what SSSD does in the hbac rule processing, thus the >ignoring of errors is intentional. But in a plugin whose purpose is to >test and exercise hbac rules I'm not sure ignoring all errors is >really the right behavior. ACK. Could you please file a ticket for these enhancements for error reporting? It would require additional UI work to present the errors from missing entries different to rules' errors themselves. 
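For readers following the hbactest issue above, the safer shape John is describing is roughly the following. This is a schematic sketch, not the actual hbactest.py code; lookup and warnings are hypothetical stand-ins for whatever the plugin really uses:

def collect_rules(lookup, names, warnings):
    # Initialize the result up front so callers never see an unset variable,
    # and catch only the exceptions we actually expect instead of everything,
    # so a NameError from a typo is reported rather than silently swallowed.
    rules = []
    for name in names:
        try:
            rules.append(lookup(name))
        except KeyError as err:
            warnings.append('rule %s skipped: %s' % (name, err))
    return rules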
-- / Alexander Bokovoy From mkosek at redhat.com Thu Apr 19 13:24:18 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 19 Apr 2012 15:24:18 +0200 Subject: [Freeipa-devel] [PATCH 74] Fix name error in hbactest In-Reply-To: <20120419131729.GC21946@redhat.com> References: <201204191304.q3JD4YV8021634@int-mx10.intmail.prod.int.phx2.redhat.com> <20120419131729.GC21946@redhat.com> Message-ID: <1334841858.14621.1.camel@balmora.brq.redhat.com> On Thu, 2012-04-19 at 16:17 +0300, Alexander Bokovoy wrote: > On Thu, 19 Apr 2012, John Dennis wrote: > >Ticket #2512 > > > >In hbactest.py there is a name error wrapped inside a try/except block > >that ignores all errors so the code block exits prematurely leaving a > >critical variable uninitialized. > > > >The name error is the result of a cut-n-paste error that references a > >variable that had never been initialized in the scope of the code > >block. Python generates an exception when this variable is referenced > >but because it's wrapped in a try/except block that catches all errors > >and ignores all errors there is no evidence that something went wrong. > > > >The fix is to use the correct variables. > > > >At some point we may want to revist if ignoring all errors and > >proceding as if nothing happened is actually correct. Alexander tells > >me this mimics what SSSD does in the hbac rule processing, thus the > >ignoring of errors is intentional. But in a plugin whose purpose is to > >test and exercise hbac rules I'm not sure ignoring all errors is > >really the right behavior. > ACK. > > Could you please file a ticket for these enhancements for error > reporting? It would require additional UI work to present the errors > from missing entries different to rules' errors themselves. > Pushed to master, ipa-2-2. Martin From dpal at redhat.com Thu Apr 19 14:10:25 2012 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 19 Apr 2012 10:10:25 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334840631.16658.325.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> Message-ID: <4F901CD1.5050703@redhat.com> On 04/19/2012 09:03 AM, Simo Sorce wrote: > On Thu, 2012-04-19 at 14:18 +0200, Ondrej Hamada wrote: >> On 04/18/2012 08:30 PM, Rich Megginson wrote: >>>>> * Credentials expiration on replica should be configurable >>>> What does this mean ? >> We should store credentials for a subset of users only. As this subset >> might change over time, we should flush the credentials for users that >> haven't showed up for some while (even despite the credentials are not >> expired yet). 
> This should be determined through group membership or similar mechanism, > talking about 'expiration' seem wrong and confusing, perhaps just a > language problem ? > >>> fractional replication had originally planned to support search >>> filters in addition to attribute lists - I think Ondrej wants to >>> include or exclude certain entries from being replicated >> Yes, my point is, that the Consumer should strore credentials only for >> users that are authenticating against him, so we need to exclude some >> attributes, but just for specific subset of users. > I am not sure we can achieve this, with just a fractional replication > filter, not easily anyway. A search filter singles out entire entries. > In order to have different sets of attributes replicated we need an > additional, per-filter attribute exclusion list. > >>>>> 3) find master dynamically - Consumers and Hubs will be in fact >>>>> master >>>>> servers (from 389-DS point of view), this means that every >>>>> consumer or hub >>>>> knows his direct suppliers a they know their suppliers ... >>>> Not clear what this means, can you elaborate ? >> Replication agreements posses the information about suppliers. It means >> we can dynamically discover where are the masters by going through all >> nodes and asking who's their supplier. Thinking about it again, it will >> be probably very slow and less reliable. The lookup of dns records in >> LDAP would be better. > Neither, we have the list of masters in LDAP in the cn=etc subtree for > these uses, it's a simple search, and it is the authoritative list. > Remember we may not always control the DNS, so relying on a manually > maintained DNS would be bad. > >>>>> * SSSD must be improved to allow cooperation with more than one LDAP >>>>> server >>>> Can you elaborate what you think is missing in SSSD ? Is it about the >>>> need to fix referrals handling ? Or something else ? >> I'm afraid of the situation when user authenticates and the information >> is not present on Consumer. If we'll use referrals and the >> authentication will have to be done against master, would the SSSD be >> able to handle it? > Currently SSSD can handle referrals, although it does so poorly due to > issues with the openldap libraries. Stephen tells me there are plans to > handle referrals in the SSSD code directly instead of deferring to > openldap libs. When that is done we should have no more issues. > However, for authentication purposes I am not sure referrals are the way > to go. > For the Kerberos case referrals won't work, because we will not let a > consumer have read access to keys in a master (besides the consumer will > not have the same master key so will not be able to decrypt them), so we > will need to handle the Krb case differently. > For ldap binds, we might do referrals, or we could chain binds and avoid > that issue entirely. If we chain binds we can also temporarily cache > credentials in the same way we do in SSSD so that if the server get cut > off the network it can keep serving requests. I am not thrilled about > caching users passwords this way and should probably not enabled by > default, but we'd have the option. > >>>>> * authentication policies, every user must authenticate against master >>>>> server by >>>>> default >>>> If users always contact the master, what are the consumers for ? >>>> Need to elaborate on this and explain. 
>> As was mentioned earlier in the discussion, there are two scenarios - in >> the first one the consumer serves only as a source of >> information(dns,ntp,accounts...), the second one allows distribution of >> credentials and thus enables the authentication against the consumer >> locally. The first one is more secure since the creds are not stored on >> consumers that might be more easily corrupted. > Ok, makes sense, but I would handle this transparently to the clients, > as noted above. Trying to build knowledge in clients or rely on > referrals is going to work poorly with a lot of clients, making the > solution not really useful in real deployments where a mix of machines > that do not use SSSD is present. > >>>>> - The policy must also specify the credentials expiration time. If >>>>> user tries to >>>>> authenticate with expired credential, he will be refused and >>>>> redirected to Master >>>>> server for authentication. >>>> How is this different from current status ? All accounts already have >>>> password expiration times and account expiration times. What am I >>>> missing ? >> Sorry, I wrote it unclear. I meant that the credentials, we store on >> Consumer should be there available only for a specified period of time. > Why ? > >> After that time they should be flushed away (means they are still valid, >> just not stored on the Consumer), no matter what is their expiration >> time. > I do not see what's the point. If we are replicating the keys to a > consumer why would it try to delete them after a while ? > >> This is mainly for the scenario when someone authenticates against >> our Consumer on some occasion and then he never gets back - it's >> worthless storing his credentials any more, so I think that the it >> should be possible to define some limit for storing credentials. > Ok so we are mixing things here. > > I guess the scenario you are referring to here, is the one where we do > not replicate any key to the consumer, and some user does an ldap bind > against with chained-binds and we do decide to cache the password as a > hash if auth is successful. This is a rare corner case in my mind. > > Note that we cannot do this with Kerberos. we can't "cache" kerberos > keys, we either have a copy permanently (or until the policy/group > membership of the consumer is changed) or never have them. > > >>>>> Because >>>>> of the removal of expired creds. we will have to grant the >>>>> Consumer the >>>>> permission to delete users from the Consumer specific user group >>>>> (but only >>>>> deleting, adding users will be possible on Masters only). >>>> I do not understand this. >> When user hasn't authenticated against Consumer for a long time and his >> credentials were flushed from Consumer, his credentials should be also >> omitted from being replicated to the Consumer. This might be solved by >> the proposed plugin as well. > Either the user is marked as part of the location server by this > consumer, and therefore we replicate keys or we do not. We cannot delete > keys, as nothing would replicate them back to us until a password change > occurs. Also, you have no way to tell the master what it should > replicate, dynamically. > I would remove this point, it is not something we need or want, I think. > >>>>> - to be able to do that, both Consumers and Hubs must be >>>>> Masters(from >>>>> 389-DS point of view). >>>> This doesn't sound right at all. All server can always write locally, >>>> what prevents them from doing so are referrals/configuration. 
Consumers >>>> and hubs do not and cannot be masters. >> But what about the information about an account getting locked? We need to >> lock the account locally immediately and also tell the master (and thus >> every other node) that the specific account is locked. > For account lockouts we will need to do an explicit write to a master. > (probably yet another plugin or an option to the password plugin). We > cannot use replication to forward this information, as consumers do not > have a replication agreement that goes from consumer to master.
> >>>>> When the Master<->Consumer connection is >>>>> broken, the >>>>> lockup information is saved only locally and will be pushed to >>>>> Master >>>>> on connection restoration. I suppose that only the lockup >>>>> information >>>>> should >>>>> be replicated. In case of lockup the user will have to authenticate >>>>> against >>>>> Master server only. >>>> What is the lookup information ? What connection is broken ? There >>>> aren't persistent connections between masters and consumers (esp. when >>>> hubs are in between there are none). >> typo there probably - by lock up information I mean reporting the >> situation when the account gets locked due to too many failed logins. > This will be hard indeed. > >> Broken connection - when it is not possible to tell the master that the >> account got locked. > Ok, replace with "when the masters are unreachable". > > Simo. > >
There is one aspect that is missing in this discussion. If we are talking about a remote office and about a Consumer that serves this office, we need to understand not only the flow of the initial authentication but also whether there are other authentications happening. I mean, if we are just talking about logging into the machines in the remote office, then LDAP auth with pass-through and caching would be sufficient on the consumer (I will explain how it could be done below); or is there an eSSO involved and expected?
I guess if the eSSO is required, for example to access NFS shares, there should be a local IPA server with a KDC in the remote office. In this case it probably makes sense to make it just a normal replica but with limited modification capabilities and potentially with a subset of users and other entries replicated to that location.
If the eSSO is not required and we are talking about the initial login only, we can have a DS instance as a consumer; it does not need to be a whole IPA server because the KDC, CA and management frameworks are not needed. This DS can replicate a subset of the users, groups and other data using fractional replication for the identity lookups, and it can use the PAM pass-through feature with SSSD configured to go to the real master for authentication.
So effectively there are two different use cases: 1) eSSO server in the remote office 2) Login server in the remote office The solutions seem completely different so I suggest starting with one or another. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs?
www.redhat.com/carveoutcosts/ From ohamada at redhat.com Thu Apr 19 15:26:23 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Thu, 19 Apr 2012 17:26:23 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F901CD1.5050703@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> Message-ID: <4F902E9F.9030106@redhat.com> On 04/19/2012 04:10 PM, Dmitri Pal wrote: > On 04/19/2012 09:03 AM, Simo Sorce wrote: >> On Thu, 2012-04-19 at 14:18 +0200, Ondrej Hamada wrote: >>> On 04/18/2012 08:30 PM, Rich Megginson wrote: >>>>>> * Credentials expiration on replica should be configurable >>>>> What does this mean ? >>> We should store credentials for a subset of users only. As this subset >>> might change over time, we should flush the credentials for users that >>> haven't showed up for some while (even despite the credentials are not >>> expired yet). >> This should be determined through group membership or similar mechanism, >> talking about 'expiration' seem wrong and confusing, perhaps just a >> language problem ? Right, thanks for correction. >> >>>> fractional replication had originally planned to support search >>>> filters in addition to attribute lists - I think Ondrej wants to >>>> include or exclude certain entries from being replicated >>> Yes, my point is, that the Consumer should strore credentials only for >>> users that are authenticating against him, so we need to exclude some >>> attributes, but just for specific subset of users. >> I am not sure we can achieve this, with just a fractional replication >> filter, not easily anyway. A search filter singles out entire entries. >> In order to have different sets of attributes replicated we need an >> additional, per-filter attribute exclusion list. >> >>>>>> 3) find master dynamically - Consumers and Hubs will be in fact >>>>>> master >>>>>> servers (from 389-DS point of view), this means that every >>>>>> consumer or hub >>>>>> knows his direct suppliers a they know their suppliers ... >>>>> Not clear what this means, can you elaborate ? >>> Replication agreements posses the information about suppliers. It means >>> we can dynamically discover where are the masters by going through all >>> nodes and asking who's their supplier. Thinking about it again, it will >>> be probably very slow and less reliable. The lookup of dns records in >>> LDAP would be better. >> Neither, we have the list of masters in LDAP in the cn=etc subtree for >> these uses, it's a simple search, and it is the authoritative list. >> Remember we may not always control the DNS, so relying on a manually >> maintained DNS would be bad. Good point, i forget about the master entries. 
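For the record, the "simple search" Simo refers to is roughly this (a sketch using python-ldap; the suffix is an example and error handling is left out):

import ldap

def list_masters(conn, suffix='dc=example,dc=com'):
    # Each IPA master has an entry under cn=masters,cn=ipa,cn=etc,$SUFFIX
    # whose cn is the master's fully qualified host name.
    base = 'cn=masters,cn=ipa,cn=etc,' + suffix
    entries = conn.search_s(base, ldap.SCOPE_ONELEVEL, '(objectClass=*)', ['cn'])
    return sorted(attrs['cn'][0] for dn, attrs in entries)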
>>>>>> * SSSD must be improved to allow cooperation with more than one LDAP >>>>>> server >>>>> Can you elaborate what you think is missing in SSSD ? Is it about the >>>>> need to fix referrals handling ? Or something else ? >>> I'm afraid of the situation when user authenticates and the information >>> is not present on Consumer. If we'll use referrals and the >>> authentication will have to be done against master, would the SSSD be >>> able to handle it? >> Currently SSSD can handle referrals, although it does so poorly due to >> issues with the openldap libraries. Stephen tells me there are plans to >> handle referrals in the SSSD code directly instead of deferring to >> openldap libs. When that is done we should have no more issues. >> However, for authentication purposes I am not sure referrals are the way >> to go. >> For the Kerberos case referrals won't work, because we will not let a >> consumer have read access to keys in a master (besides the consumer will >> not have the same master key so will not be able to decrypt them), so we >> will need to handle the Krb case differently. >> For ldap binds, we might do referrals, or we could chain binds and avoid >> that issue entirely. If we chain binds we can also temporarily cache >> credentials in the same way we do in SSSD so that if the server get cut >> off the network it can keep serving requests. I am not thrilled about >> caching users passwords this way and should probably not enabled by >> default, but we'd have the option. >> >>>>>> * authentication policies, every user must authenticate against master >>>>>> server by >>>>>> default >>>>> If users always contact the master, what are the consumers for ? >>>>> Need to elaborate on this and explain. >>> As was mentioned earlier in the discussion, there are two scenarios - in >>> the first one the consumer serves only as a source of >>> information(dns,ntp,accounts...), the second one allows distribution of >>> credentials and thus enables the authentication against the consumer >>> locally. The first one is more secure since the creds are not stored on >>> consumers that might be more easily corrupted. >> Ok, makes sense, but I would handle this transparently to the clients, >> as noted above. Trying to build knowledge in clients or rely on >> referrals is going to work poorly with a lot of clients, making the >> solution not really useful in real deployments where a mix of machines >> that do not use SSSD is present. >> >>>>>> - The policy must also specify the credentials expiration time. If >>>>>> user tries to >>>>>> authenticate with expired credential, he will be refused and >>>>>> redirected to Master >>>>>> server for authentication. >>>>> How is this different from current status ? All accounts already have >>>>> password expiration times and account expiration times. What am I >>>>> missing ? >>> Sorry, I wrote it unclear. I meant that the credentials, we store on >>> Consumer should be there available only for a specified period of time. >> Why ? >> >>> After that time they should be flushed away (means they are still valid, >>> just not stored on the Consumer), no matter what is their expiration >>> time. >> I do not see what's the point. If we are replicating the keys to a >> consumer why would it try to delete them after a while ? Security? My original idea was, that if consumer gets corrupted, there should be stored as less credentials as possible. This behaviour should mainly flush credentials of users, who don't auth against the consumer regularly. 
All the paragraphs about flushing credentials below were inspired by this idea. >> >>> This is mainly for the scenario when someone authenticates against >>> our Consumer on some occasion and then he never gets back - it's >>> worthless storing his credentials any more, so I think that the it >>> should be possible to define some limit for storing credentials. >> Ok so we are mixing things here. >> >> I guess the scenario you are referring to here, is the one where we do >> not replicate any key to the consumer, and some user does an ldap bind >> against with chained-binds and we do decide to cache the password as a >> hash if auth is successful. This is a rare corner case in my mind. >> >> Note that we cannot do this with Kerberos. we can't "cache" kerberos >> keys, we either have a copy permanently (or until the policy/group >> membership of the consumer is changed) or never have them. >> >> >>>>>> Because >>>>>> of the removal of expired creds. we will have to grant the >>>>>> Consumer the >>>>>> permission to delete users from the Consumer specific user group >>>>>> (but only >>>>>> deleting, adding users will be possible on Masters only). >>>>> I do not understand this. >>> When user hasn't authenticated against Consumer for a long time and his >>> credentials were flushed from Consumer, his credentials should be also >>> omitted from being replicated to the Consumer. This might be solved by >>> the proposed plugin as well. >> Either the user is marked as part of the location server by this >> consumer, and therefore we replicate keys or we do not. We cannot delete >> keys, as nothing would replicate them back to us until a password change >> occurs. Also, you have no way to tell the master what it should >> replicate, dynamically. >> I would remove this point, it is not something we need or want, I think. The point was that not only the credentials will be removed, but also the user will be unmarked. >> >>>>>> - to be able to do that, both Consumers and Hubs must be >>>>>> Masters(from >>>>>> 389-DS point of view). >>>>> This doesn't sound right at all. All server can always write locally, >>>>> what prevents them from doing so are referrals/configuration. Consumers >>>>> and hubs do not and cannot be masters. >>> But what about the information of account getting locked? We need to >>> lock the account locally immediately and also tell the master (and thus >>> every other node) that the specific account is locked. >> For account lockouts we will need to do an explicit write to a master. >> (probably yet another plugin or an option to the password plugin). We >> cannot use replication to forward this information, as consumers do not >> have a replication agreement that go from consumer to master. If the consumers and hubs were masters in fact, it would be possible to replicate it. But if we can manage it via plugin and thus keep then consumers/hubs to be really read-only, then it's definitely a better solution. >>>>>> When the Master<->Consumer connection is >>>>>> broken, the >>>>>> lockup information is saved only locally and will be pushed to >>>>>> Master >>>>>> on connection restoration. I suppose that only the lockup >>>>>> information >>>>>> should >>>>>> be replicated. In case of lockup the user will have to authenticate >>>>>> against >>>>>> Master server only. >>>>> What is the lookup information ? What connection is broken ? There >>>>> aren't persistent connections between masters and consumers (esp. when >>>>> hubs are in between there are none). 
>>> typo there probably - by lock up information I mean reporting the >>> situation, when the account gets locked due to too many failed logins. >> This will be hard indeed. >> >>> Broken connection - when it is not possible to tell the master, that the >>> account got locked. >> Ok replace with "when the masters are unreachable". >> >> Simo. >> >> > There is one aspect that is missing in this discussion. If we are > talking about a remote office and about a Consumer that serves this > office we need to understand not only the flow of the initial > authentication but are there other authentications happening. I mean are > we just talking about logging into the machines in the remote office > then LDAP auth with pass-through and caching would be sufficient on the > consumer (I will explain how it could be done below) or there is an eSSO > involved and expected? > > I guess if the eSSO is required for example to access NFS shares there > should be a local IPA server with KDC in the remote office. In this case > it probably makes sense to make it just a normal replica but with > limited modification capabilities and potentially with a subset of users > and other entries replicated to that location. > > If the eSSO is not required and we talk about the initial login only we > can have a DS instance as a consumer do not need to have the whole IPA > becuase KDC, CA and management frameworks are not needed. This DS can > replicate a subset of the users, groups and other data using fractional > replication for the identity lookups can and use PAM pass-through > feature with SSSD configured to go to the real master for authentication. > > So effectively there are two different use cases: > 1) eSSO server in the remote office > 2) Login server in the remote office > > The solutions seem completely different so I suggest starting with one > or another. So far the discussion seems to be more about the second option (login server in the remote office), so I would prefer to stick with it for now. -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From dpal at redhat.com Thu Apr 19 15:58:02 2012 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 19 Apr 2012 11:58:02 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F902E9F.9030106@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <4F902E9F.9030106@redhat.com> Message-ID: <4F90360A.7090908@redhat.com> On 04/19/2012 11:26 AM, Ondrej Hamada wrote: >> There is one aspect that is missing in this discussion. 
If we are >> talking about a remote office and about a Consumer that serves this >> office we need to understand not only the flow of the initial >> authentication but are there other authentications happening. I mean are >> we just talking about logging into the machines in the remote office >> then LDAP auth with pass-through and caching would be sufficient on the >> consumer (I will explain how it could be done below) or there is an eSSO >> involved and expected? >> >> I guess if the eSSO is required for example to access NFS shares there >> should be a local IPA server with KDC in the remote office. In this case >> it probably makes sense to make it just a normal replica but with >> limited modification capabilities and potentially with a subset of users >> and other entries replicated to that location. >> >> If the eSSO is not required and we talk about the initial login only we >> can have a DS instance as a consumer do not need to have the whole IPA >> becuase KDC, CA and management frameworks are not needed. This DS can >> replicate a subset of the users, groups and other data using fractional >> replication for the identity lookups can and use PAM pass-through >> feature with SSSD configured to go to the real master for >> authentication. >> >> So effectively there are two different use cases: >> 1) eSSO server in the remote office >> 2) Login server in the remote office >> >> The solutions seem completely different so I suggest starting with one >> or another. > > So far the discussion seems to be more about the second option (login > server in the remote office), so I would prefer to stick with it for now. Then you probably does not need a full IPA server there but rather a special Read Only replica that is configured as described above. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Thu Apr 19 16:33:53 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 12:33:53 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F901CD1.5050703@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> Message-ID: <1334853233.16658.345.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 10:10 -0400, Dmitri Pal wrote: > If the eSSO is not required and we talk about the initial login only > we > can have a DS instance as a consumer do not need to have the whole IPA > becuase KDC, CA and management frameworks are not needed. 
This DS can > replicate a subset of the users, groups and other data using > fractional > replication for the identity lookups can and use PAM pass-through > feature with SSSD configured to go to the real master for > authentication. > What's the point of a "login" server if SSSD then has to go and talk to a different remote server ? I mean what is the use case ? Also we need to keep in mind that we cannot assume SSSD to be always available on clients. Also replicating a subset of user is easy, but once you try to decide about other data it becomes progressively more difficult, do you replicate all the groups that are referenced by the users you replicated ? (not possible with current fractional replication) What about HBAC and hosts groups and sudo rules ... ? Or do we let admins decide what to replicate and rely on referrals to resolve missing parts referenced by the users ? Referrals have 2 issues: a lot of clients do not support them properly and they would probably break the compat plugins (although I guess we can fix the plugins to add referrals as well for the ldap part). Simo. -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Thu Apr 19 16:48:32 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 12:48:32 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F902E9F.9030106@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <4F902E9F.9030106@redhat.com> Message-ID: <1334854112.16658.351.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 17:26 +0200, Ondrej Hamada wrote: > >>> Sorry, I wrote it unclear. I meant that the credentials, we store on > >>> Consumer should be there available only for a specified period of time. > >> Why ? > >> > >>> After that time they should be flushed away (means they are still valid, > >>> just not stored on the Consumer), no matter what is their expiration > >>> time. > >> I do not see what's the point. If we are replicating the keys to a > >> consumer why would it try to delete them after a while ? > Security? My original idea was, that if consumer gets corrupted, there > should be stored as less credentials as possible. This behaviour should > mainly flush credentials of users, who don't auth against the consumer > regularly. All the paragraphs about flushing credentials below were > inspired by this idea. Ah but you may be worsening security if you do that, because now you need to add a pull mechanism where you allow consumer to read keys off of a master. A better way would be to gather statistics and have available reports admins can check to trim consumer membership lists (these lists determine what users should have their credentials replicated to the consumer). 
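For illustration only, that kind of report could start from the last-successful-authentication timestamps the KDC already records; a rough sketch, assuming krbLastSuccessfulAuth is populated and readable by the admin, with the host and base DN as placeholders:

    # List users and when they last authenticated against this consumer, so an
    # admin can decide whose credentials really need to be replicated here.
    ldapsearch -Y GSSAPI -H ldap://consumer1.branch1.example.com \
        -b "cn=users,cn=accounts,dc=example,dc=com" \
        "(objectclass=krbprincipalaux)" uid krbLastSuccessfulAuth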
> >> Either the user is marked as part of the location server by this > >> consumer, and therefore we replicate keys or we do not. We cannot delete > >> keys, as nothing would replicate them back to us until a password change > >> occurs. Also, you have no way to tell the master what it should > >> replicate, dynamically. > >> I would remove this point, it is not something we need or want, I think. The point was that not only the credentials will be removed, but also the user will be unmarked. Which means the consumer needs to be able to change group memberships on the master, which means it will probably also be able to add them, so in the end we open up an avenue for an attacker to play with these lists and potentially force a master to send us all keys. Sounds brittle to me. > >> For account lockouts we will need to do an explicit write to a master. > >> (probably yet another plugin or an option to the password plugin). We > >> cannot use replication to forward this information, as consumers do not > >> have a replication agreement that go from consumer to master. If the consumers and hubs were masters in fact, it would be possible to replicate it. But if we can manage it via a plugin and thus keep the consumers/hubs really read-only, then it's definitely a better solution. If the consumers were masters and they could replicate we'd have no security, as they would be able to change all the rules that block replication back easily enough. Or at the very least poison the data in the masters. I think rule 0 here is that consumers do not write to masters, ever, and when they have to for some reason (like locking accounts) they should go through a very strict and limited extended operation that does the operation on their behalf. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Thu Apr 19 17:58:46 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 19 Apr 2012 19:58:46 +0200 Subject: [Freeipa-devel] [PATCH] 253 Fix help of --hostname option in ipa-client-install Message-ID: <1334858326.2484.1.camel@priserak> Fix issue found during QE testing. Pushed to master, ipa-2-2 under the one-liner rule. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-253-fix-help-of-hostname-option-in-ipa-client-install.patch Type: text/x-patch Size: 1583 bytes Desc: not available URL: From jdennis at redhat.com Thu Apr 19 18:35:12 2012 From: jdennis at redhat.com (John Dennis) Date: Thu, 19 Apr 2012 14:35:12 -0400 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F8FF128.1060601@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> <4F8EA671.1000006@redhat.com> <4F8F16CD.3070105@redhat.com> <4F8FF128.1060601@redhat.com> Message-ID: <4F905AE0.2080109@redhat.com> On 04/19/2012 07:04 AM, Petr Viktorin wrote: > On 04/18/2012 09:32 PM, John Dennis wrote: >>> Now that there are warnings, is pedantic mode necessary? >> >> Great question, I also pondered that as well. My conclusion was there >> was value in separating aggressiveness of error checking from the >> verbosity of the output. Also I didn't think we wanted warnings showing >> in normal checking for things which are suspicious but not known to be >> incorrect. So under the current scheme pedantic mode enables reporting >> of suspicious constructs. You can still get a warning in the normal mode >> for things which aren't fatal but are provably incorrect.
An example of >> this would be missing plural translations, it won't cause a run time >> failure and we can be definite about their absence, however they should >> be fixed, but it's not mandatory they be fixed, a warning in this case >> seems appropriate. > > If they should be fixed, we should fix them, and treat them as errors > the same way we treat lint's warnings as errors. If the pedantic mode is > an obscure option of some test module, I worry that nobody will ever run it. The value of pedantic mode is for the person maintaining the translations (at the moment that's me). It's not normally needed, but when something goes wrong it may be helpful to diagnose what might be amiss, in this case false positives are tolerable, in normal mode false positives should be silenced (a request of yours from an earlier review). Another thing to note is that a number of the warnings are limited to po checking, once again this is a translation maintainer feature, not a general test/developer feature. > Separating aggressiveness of checking from verbosity is not a bad idea. > But since we now have two severity levels, and the checking is cheap, > I'm not convinced that the aggressiveness should be tunable. > How about always counting the pedantic warnings, but not showing the > details? Then, if such warnings are found, have the tool say how to run > it to get a full report. That way people will notice it. In an earlier review you asked to limit the output to only what is an actual provable error. I agreed with you and modified the code. One reason not to modify it again is the amount of time being devoted to polishing what is an internal developer tool. I've tweaked the reporting 3-4 times already, I don't think it's time well spent to go back and do it again. After all, this is an internal tool, it will never be seen by a customer, if as we get experience with it we discover it's needs tweaking because it's not doing the job we hoped it would then that's the moment to invest more engineering resources on the output, validation, or whatever the deficiency is. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From dpal at redhat.com Thu Apr 19 19:00:32 2012 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 19 Apr 2012 15:00:32 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334853233.16658.345.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> Message-ID: <4F9060D0.3010204@redhat.com> On 04/19/2012 12:33 PM, Simo Sorce wrote: > On Thu, 2012-04-19 at 10:10 -0400, Dmitri Pal wrote: >> If the eSSO is not required and we talk about the initial login only >> we >> can have a DS instance as a consumer do not need to have the whole IPA >> becuase KDC, CA and management frameworks are not needed. This DS can >> replicate a subset of the users, groups and other data using >> fractional >> replication for the identity lookups can and use PAM pass-through >> feature with SSSD configured to go to the real master for >> authentication. >> > What's the point of a "login" server if SSSD then has to go and talk to > a different remote server ? > I mean what is the use case ? Local server is the central hub for the authentications in the remote office. The client machines with SSSD or LDAP clients might not have access to the central datacenter directly. Another reason for having such login server is to reduce the traffic between the remote office and central data center by pointing to the clients to the login server for the identity lookups and cached authentication via LDAP with pam proxy and SSSD under it. The SSSD will be on the server. > Also we need to keep in mind that we cannot assume SSSD to be always > available on clients. We can assume that SSSD can be used on this login server via pam proxy. > Also replicating a subset of user is easy, but once you try to decide > about other data it becomes progressively more difficult, do you > replicate all the groups that are referenced by the users you > replicated ? (not possible with current fractional replication) > What about HBAC and hosts groups and sudo rules ... ? > Or do we let admins decide what to replicate and rely on referrals to > resolve missing parts referenced by the users ? > > Referrals have 2 issues: a lot of clients do not support them properly > and they would probably break the compat plugins (although I guess we > can fix the plugins to add referrals as well for the ldap part). I agree with the questions and challenges. This is something for Ondrej to research as an owner of the thesis but I would think that it should be up to the admin to define what to synch. We have a finite list of the things that we can pre-create filter for. Think about some kind of the container that would define what to replicate. 
The object would contain following attributes: uuid - unique rule id name - descriptive name description - comment filter - ldap filter base DN - base DN scope - scope of the search attrs - list of attributes to sync enabled - whether the rule is enabled we can define (but disable by default) rules for SUDO, HBAC, SELinux, Hosts etc and tel admin to enable an tweak. > Simo. > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Thu Apr 19 19:44:42 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 15:44:42 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F9060D0.3010204@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330529763.25597.55.camel@willson.li.ssimo.org> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> Message-ID: <1334864682.16658.358.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote: > Local server is the central hub for the authentications in the remote > office. The client machines with SSSD or LDAP clients might not have > access to the central datacenter directly. Another reason for having > such login server is to reduce the traffic between the remote office > and > central data center by pointing to the clients to the login server for > the identity lookups and cached authentication via LDAP with pam proxy > and SSSD under it. > > The SSSD will be on the server. > > > Also we need to keep in mind that we cannot assume SSSD to be always > > available on clients. > > We can assume that SSSD can be used on this login server via pam > proxy. Sorry I do not get how SSSD is going to make any difference on the server. > I agree with the questions and challenges. This is something for > Ondrej > to research as an owner of the thesis but I would think that it should > be up to the admin to define what to synch. We have a finite list of > the > things that we can pre-create filter for. > > Think about some kind of the container that would define what to > replicate. > The object would contain following attributes: > > uuid - unique rule id > name - descriptive name > description - comment > filter - ldap filter > base DN - base DN > scope - scope of the search > attrs - list of attributes to sync > enabled - whether the rule is enabled > > we can define (but disable by default) rules for SUDO, HBAC, SELinux, > Hosts etc and tel admin to enable an tweak. > This would be really complicated to manage. At the very least we need to define what's the expected user experience and data set access, then we can start devising technical solutions. 
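(For reference, the kind of rule object proposed above would amount to something like the following LDIF; the container, object class and attribute names are all invented for illustration and exist nowhere in the current schema, which gives an idea of how much new schema and management tooling it implies.)

    # Hypothetical replication-filter rule entry, disabled by default.
    dn: cn=sync-sudo-rules,cn=replication filters,cn=etc,dc=example,dc=com
    objectClass: top
    objectClass: ipaReplicationFilterRule
    cn: sync-sudo-rules
    description: Replicate sudo rules to branch office consumers
    ipaFilterBaseDN: ou=sudoers,dc=example,dc=com
    ipaFilterScope: sub
    ipaFilter: (objectClass=sudoRole)
    ipaFilterAttrs: cn sudoUser sudoHost sudoCommand
    ipaFilterEnabled: FALSE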
What we really want to protect here, at the core, is user keys, for user and services not located in the branch office. All the info that is accessible from the branch office and is not super sensitive can always be replicated. The cost of choosing what to replicate and what not should be deferred to a later "enhancement". I would try to get the mechanism right for the authentication subset first, and deal with more fine-grained 'exclusions' later, once we solved that problem and gained experience with at least one aspect of data segregation. Simo. -- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Thu Apr 19 20:29:13 2012 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 19 Apr 2012 16:29:13 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334864682.16658.358.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> Message-ID: <4F907599.6060609@redhat.com> On 04/19/2012 03:44 PM, Simo Sorce wrote: > On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote: >> Local server is the central hub for the authentications in the remote >> office. The client machines with SSSD or LDAP clients might not have >> access to the central datacenter directly. Another reason for having >> such login server is to reduce the traffic between the remote office >> and >> central data center by pointing to the clients to the login server for >> the identity lookups and cached authentication via LDAP with pam proxy >> and SSSD under it. >> >> The SSSD will be on the server. >> >>> Also we need to keep in mind that we cannot assume SSSD to be always >>> available on clients. >> We can assume that SSSD can be used on this login server via pam >> proxy. > Sorry I do not get how SSSD is going to make any difference on the > server. > >> I agree with the questions and challenges. This is something for >> Ondrej >> to research as an owner of the thesis but I would think that it should >> be up to the admin to define what to synch. We have a finite list of >> the >> things that we can pre-create filter for. >> >> Think about some kind of the container that would define what to >> replicate. >> The object would contain following attributes: >> >> uuid - unique rule id >> name - descriptive name >> description - comment >> filter - ldap filter >> base DN - base DN >> scope - scope of the search >> attrs - list of attributes to sync >> enabled - whether the rule is enabled >> >> we can define (but disable by default) rules for SUDO, HBAC, SELinux, >> Hosts etc and tel admin to enable an tweak. >> > This would be really complicated to manage. 
At the very least we need to > define what's the expected user experience and data set access, then we > can start devising technical solutions. > > What we really want to protect here, at the core, is user keys, for user > and services not located in the branch office. All the info that is > accessible from the branch office and is not super sensitive can always > be replicated. The cost of choosing what to replicate and what not > should be deferred to a later "enhancement". I would try to get the > mechanism right for the authentication subset first, and deal with more > fine-grained 'exclusions' later, once we solved that problem and gained > experience with at least one aspect of data segregation. > > Simo. > I do not see a problem and it looks like you do not see the solution that I see. So let me try again: 1) Consumer (remote office server) will sync in whatever user and other data that we decide is safe to sync (may be configurable but this is a separate question). 2) It will not sync the kerberos keys . 3) The authentication between clients and Consumer will be done using LDAP since eSSO is not a requirement so Kerberos tickets are not needed 4) The consumer would use pam proxy to do the authentication against the real master on the client behalf 5) We will configure SSSD under the pam proxy to point to the real master. If access to the realm master is broken the authentication in the remote office would still be successful as they would be performed against credentials cached by SSSD. Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> pam_sss -> SSSD -> real master Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> pam_sss -> SSSD -> SSSD cached credentials 6) We can assume that the Consumer is the latest RHEL and this can leverage SSSD capabilities regardles of what OS or versions the actual clients in the remote office run. If we need Kerberos SSO in the remote office this is a different use case. Does this make sense now? -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From simo at redhat.com Thu Apr 19 21:28:29 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 17:28:29 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F907599.6060609@redhat.com> References: <4F4E41FC.6020606@redhat.com> <4F4F7A71.7060708@redhat.com> <4F52AA32.5070200@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> Message-ID: <1334870909.16658.437.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 16:29 -0400, Dmitri Pal wrote: > On 04/19/2012 03:44 PM, Simo Sorce wrote: > > On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote: > >> Local server is the central hub for the authentications in the remote > >> office. The client machines with SSSD or LDAP clients might not have > >> access to the central datacenter directly. Another reason for having > >> such login server is to reduce the traffic between the remote office > >> and > >> central data center by pointing to the clients to the login server for > >> the identity lookups and cached authentication via LDAP with pam proxy > >> and SSSD under it. > >> > >> The SSSD will be on the server. > >> > >>> Also we need to keep in mind that we cannot assume SSSD to be always > >>> available on clients. > >> We can assume that SSSD can be used on this login server via pam > >> proxy. > > Sorry I do not get how SSSD is going to make any difference on the > > server. > > > >> I agree with the questions and challenges. This is something for > >> Ondrej > >> to research as an owner of the thesis but I would think that it should > >> be up to the admin to define what to synch. We have a finite list of > >> the > >> things that we can pre-create filter for. > >> > >> Think about some kind of the container that would define what to > >> replicate. > >> The object would contain following attributes: > >> > >> uuid - unique rule id > >> name - descriptive name > >> description - comment > >> filter - ldap filter > >> base DN - base DN > >> scope - scope of the search > >> attrs - list of attributes to sync > >> enabled - whether the rule is enabled > >> > >> we can define (but disable by default) rules for SUDO, HBAC, SELinux, > >> Hosts etc and tel admin to enable an tweak. > >> > > This would be really complicated to manage. At the very least we need to > > define what's the expected user experience and data set access, then we > > can start devising technical solutions. > > > > What we really want to protect here, at the core, is user keys, for user > > and services not located in the branch office. All the info that is > > accessible from the branch office and is not super sensitive can always > > be replicated. 
The cost of choosing what to replicate and what not > > should be deferred to a later "enhancement". I would try to get the > > mechanism right for the authentication subset first, and deal with more > > fine-grained 'exclusions' later, once we solved that problem and gained > > experience with at least one aspect of data segregation. > > > > Simo. > > > I do not see a problem and it looks like you do not see the solution > that I see. So let me try again: > > 1) Consumer (remote office server) will sync in whatever user and other > data that we decide is safe to sync (may be configurable but this is a > separate question). > 2) It will not sync the kerberos keys . > 3) The authentication between clients and Consumer will be done using > LDAP since eSSO is not a requirement so Kerberos tickets are not needed > 4) The consumer would use pam proxy to do the authentication against the > real master on the client behalf > 5) We will configure SSSD under the pam proxy to point to the real > master. If access to the realm master is broken the authentication in > the remote office would still be successful as they would be performed > against credentials cached by SSSD. > Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> > pam_sss -> SSSD -> real master > Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> > pam_sss -> SSSD -> SSSD cached credentials > 6) We can assume that the Consumer is the latest RHEL and this can > leverage SSSD capabilities regardles of what OS or versions the actual > clients in the remote office run. Well if we have to go through all these steps just to cache the user password we can also simply replicate the userPassword attribute for select users and just not the kerberos keys. Much simpler architecture and gives you the same results w/o any chaining at all. > If we need Kerberos SSO in the remote office this is a different use case. > Does this make sense now? Yes, but I was assuming Kerberos to be naturally necessary. If an organization deploys kerberized services then naturally users will have to use them. Case in point the Web UI for FreeIPA itself. Proxying pure LDAP is a solved problem, there are a number of meta-directories that could probably also allow caching LDAP binds, I think we want to concentrate on a solution that gives you the full functionality of a freeIPA realm, but that's just me. I am not opposed to a lesser solution, just not sure we should concentrate too many energies into it. Simo. 
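For what it is worth, the attribute-level half of that is already expressible with plain 389-ds fractional replication: the agreement can strip the Kerberos key material while still carrying userPassword. What it cannot do today is pick which user entries get replicated. A rough sketch of the relevant agreement fragment, with DNs, hostname and the exact exclusion list as placeholders that would need review:

    # Fragment of a replication agreement entry on the supplier side; the
    # nsDS5ReplicatedAttributeList line removes the listed attributes from
    # updates sent to this consumer.
    dn: cn=toBranchConsumer1,cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config
    objectClass: top
    objectClass: nsds5replicationagreement
    cn: toBranchConsumer1
    nsDS5ReplicaHost: consumer1.branch1.example.com
    nsDS5ReplicaPort: 389
    nsDS5ReplicatedAttributeList: (objectclass=*) $ EXCLUDE krbPrincipalKey krbMKey krbPwdHistory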
-- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Thu Apr 19 22:25:53 2012 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 19 Apr 2012 18:25:53 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334870909.16658.437.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> Message-ID: <4F9090F1.1000005@redhat.com> On 04/19/2012 05:28 PM, Simo Sorce wrote: > On Thu, 2012-04-19 at 16:29 -0400, Dmitri Pal wrote: >> On 04/19/2012 03:44 PM, Simo Sorce wrote: >>> On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote: >>>> Local server is the central hub for the authentications in the remote >>>> office. The client machines with SSSD or LDAP clients might not have >>>> access to the central datacenter directly. Another reason for having >>>> such login server is to reduce the traffic between the remote office >>>> and >>>> central data center by pointing to the clients to the login server for >>>> the identity lookups and cached authentication via LDAP with pam proxy >>>> and SSSD under it. >>>> >>>> The SSSD will be on the server. >>>> >>>>> Also we need to keep in mind that we cannot assume SSSD to be always >>>>> available on clients. >>>> We can assume that SSSD can be used on this login server via pam >>>> proxy. >>> Sorry I do not get how SSSD is going to make any difference on the >>> server. >>> >>>> I agree with the questions and challenges. This is something for >>>> Ondrej >>>> to research as an owner of the thesis but I would think that it should >>>> be up to the admin to define what to synch. We have a finite list of >>>> the >>>> things that we can pre-create filter for. >>>> >>>> Think about some kind of the container that would define what to >>>> replicate. >>>> The object would contain following attributes: >>>> >>>> uuid - unique rule id >>>> name - descriptive name >>>> description - comment >>>> filter - ldap filter >>>> base DN - base DN >>>> scope - scope of the search >>>> attrs - list of attributes to sync >>>> enabled - whether the rule is enabled >>>> >>>> we can define (but disable by default) rules for SUDO, HBAC, SELinux, >>>> Hosts etc and tel admin to enable an tweak. >>>> >>> This would be really complicated to manage. At the very least we need to >>> define what's the expected user experience and data set access, then we >>> can start devising technical solutions. >>> >>> What we really want to protect here, at the core, is user keys, for user >>> and services not located in the branch office. All the info that is >>> accessible from the branch office and is not super sensitive can always >>> be replicated. 
The cost of choosing what to replicate and what not >>> should be deferred to a later "enhancement". I would try to get the >>> mechanism right for the authentication subset first, and deal with more >>> fine-grained 'exclusions' later, once we solved that problem and gained >>> experience with at least one aspect of data segregation. >>> >>> Simo. >>> >> I do not see a problem and it looks like you do not see the solution >> that I see. So let me try again: >> >> 1) Consumer (remote office server) will sync in whatever user and other >> data that we decide is safe to sync (may be configurable but this is a >> separate question). >> 2) It will not sync the kerberos keys . >> 3) The authentication between clients and Consumer will be done using >> LDAP since eSSO is not a requirement so Kerberos tickets are not needed >> 4) The consumer would use pam proxy to do the authentication against the >> real master on the client behalf >> 5) We will configure SSSD under the pam proxy to point to the real >> master. If access to the realm master is broken the authentication in >> the remote office would still be successful as they would be performed >> against credentials cached by SSSD. >> Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> >> pam_sss -> SSSD -> real master >> Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> >> pam_sss -> SSSD -> SSSD cached credentials >> 6) We can assume that the Consumer is the latest RHEL and this can >> leverage SSSD capabilities regardles of what OS or versions the actual >> clients in the remote office run. > Well if we have to go through all these steps just to cache the user > password we can also simply replicate the userPassword attribute for > select users and just not the kerberos keys. Much simpler architecture > and gives you the same results w/o any chaining at all. Even better. >> If we need Kerberos SSO in the remote office this is a different use case. >> Does this make sense now? > Yes, but I was assuming Kerberos to be naturally necessary. If an > organization deploys kerberized services then naturally users will have > to use them. Case in point the Web UI for FreeIPA itself. > > Proxying pure LDAP is a solved problem, there are a number of > meta-directories that could probably also allow caching LDAP binds, I > think we want to concentrate on a solution that gives you the full > functionality of a freeIPA realm, but that's just me. I am not opposed > to a lesser solution, just not sure we should concentrate too many > energies into it. > > Simo. > I asked the question earlier in the thread. I said that there are two options: login server and eSSO server. Ondrej indicated that he wants to focus on the pure login server. This is why I continued exploring this option. IMO these are two different use cases and two different solutions that we should provide. One solution a striped down DS replica like we discussed above. The eSSO solution is more challenging. I agree. So far I think I have a way to solve the problem if we allow storing user passwords using symmetric keys in the LDAP. This would allow us to decrypt it in memory and generate a kerberos hash at need. If this is a non starter I do not see a good way to provide a solution that allow someone to get Kerberos tickets in the remote office and do not replicate kerberos hashes. But let us assume that it is OK to encrypt user passwords with a long key (like master kerberos key) and store encrypted passwords in another subtype of the userPassword attribute. 
So when a user password is set or updated such attribute is created. Now the admin decides to create a special "remote" replica. As a part of this operation a new master key, specific for that replica, will be generated. It is a Kerberos key different from the Kerberos key used for the rest of the masters in the domain. This key is installed on the remote replica. As replication starts (first push) between some normal master and remote replica the replication plugin will detect that one side of this agreement is a normal master while the other side is a special remote server. In this case the replication plugin, instead of taking the userPassword attribute as is from the normal master, would take the decryptable userPassword, use the master key of the remote replica to generate a password hash on the fly, and inject that hash into the replication stream instead of the userPassword attribute stored on the real master. Using this approach we have several benefits: 1) Remote office can enjoy the eSSO 2) If the remote replica is jeopardized there is nothing in it that can be used to attack central server without cracking the kerberos hashes. 3) Authentication against the remote server would not be respected against the resources in the central server, creating a confined environment 4) Remote office can operate with eSSO even if there is no connection to the central server. So we get security and convenience at the same time. For the password changes the remote server kpasswd should be updated to proxy the password change to the real server. So the change will happen on the real master and then be replicated back to the remote server using the logic described above. To avoid the latency we can generate the new password hash right away and save it locally without waiting for the full replication cycle to get back to the remote server. The remote server would not have a CA. It might have DNS. There will be a one-way replication agreement from the central server to that remote server and the ACIs would be configured in such a way that no change can be done locally except probably local failure counts and password updates by the KDC. The data replicated will be controlled by the flexible set of filters that can be defined by the admin when the replication agreement is created, as we discussed earlier in the thread. If the remote office needs more than one server for redundancy purposes another remote server can be created in the same way. Since there are no data updates there is no need to replicate anything between the two remote servers. Seems like a decent approach. In future there might also be an option to use one-way domain trusts and subdomains but I am not exactly sure how that might work. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs?
www.redhat.com/carveoutcosts/ From simo at redhat.com Thu Apr 19 23:43:34 2012 From: simo at redhat.com (Simo Sorce) Date: Thu, 19 Apr 2012 19:43:34 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F9090F1.1000005@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1330896212.26197.32.camel@willson.li.ssimo.org> <4F5633AE.2090102@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> Message-ID: <1334879014.16658.499.camel@willson.li.ssimo.org> On Thu, 2012-04-19 at 18:25 -0400, Dmitri Pal wrote: > On 04/19/2012 05:28 PM, Simo Sorce wrote: > > On Thu, 2012-04-19 at 16:29 -0400, Dmitri Pal wrote: > >> On 04/19/2012 03:44 PM, Simo Sorce wrote: > >>> On Thu, 2012-04-19 at 15:00 -0400, Dmitri Pal wrote: > >>>> Local server is the central hub for the authentications in the remote > >>>> office. The client machines with SSSD or LDAP clients might not have > >>>> access to the central datacenter directly. Another reason for having > >>>> such login server is to reduce the traffic between the remote office > >>>> and > >>>> central data center by pointing to the clients to the login server for > >>>> the identity lookups and cached authentication via LDAP with pam proxy > >>>> and SSSD under it. > >>>> > >>>> The SSSD will be on the server. > >>>> > >>>>> Also we need to keep in mind that we cannot assume SSSD to be always > >>>>> available on clients. > >>>> We can assume that SSSD can be used on this login server via pam > >>>> proxy. > >>> Sorry I do not get how SSSD is going to make any difference on the > >>> server. > >>> > >>>> I agree with the questions and challenges. This is something for > >>>> Ondrej > >>>> to research as an owner of the thesis but I would think that it should > >>>> be up to the admin to define what to synch. We have a finite list of > >>>> the > >>>> things that we can pre-create filter for. > >>>> > >>>> Think about some kind of the container that would define what to > >>>> replicate. > >>>> The object would contain following attributes: > >>>> > >>>> uuid - unique rule id > >>>> name - descriptive name > >>>> description - comment > >>>> filter - ldap filter > >>>> base DN - base DN > >>>> scope - scope of the search > >>>> attrs - list of attributes to sync > >>>> enabled - whether the rule is enabled > >>>> > >>>> we can define (but disable by default) rules for SUDO, HBAC, SELinux, > >>>> Hosts etc and tel admin to enable an tweak. > >>>> > >>> This would be really complicated to manage. At the very least we need to > >>> define what's the expected user experience and data set access, then we > >>> can start devising technical solutions. 
> >>> > >>> What we really want to protect here, at the core, is user keys, for user > >>> and services not located in the branch office. All the info that is > >>> accessible from the branch office and is not super sensitive can always > >>> be replicated. The cost of choosing what to replicate and what not > >>> should be deferred to a later "enhancement". I would try to get the > >>> mechanism right for the authentication subset first, and deal with more > >>> fine-grained 'exclusions' later, once we solved that problem and gained > >>> experience with at least one aspect of data segregation. > >>> > >>> Simo. > >>> > >> I do not see a problem and it looks like you do not see the solution > >> that I see. So let me try again: > >> > >> 1) Consumer (remote office server) will sync in whatever user and other > >> data that we decide is safe to sync (may be configurable but this is a > >> separate question). > >> 2) It will not sync the kerberos keys . > >> 3) The authentication between clients and Consumer will be done using > >> LDAP since eSSO is not a requirement so Kerberos tickets are not needed > >> 4) The consumer would use pam proxy to do the authentication against the > >> real master on the client behalf > >> 5) We will configure SSSD under the pam proxy to point to the real > >> master. If access to the realm master is broken the authentication in > >> the remote office would still be successful as they would be performed > >> against credentials cached by SSSD. > >> Online case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> > >> pam_sss -> SSSD -> real master > >> Offline case: client -> ldap auth -> Consumer -> pam proxy -> PAM -> > >> pam_sss -> SSSD -> SSSD cached credentials > >> 6) We can assume that the Consumer is the latest RHEL and this can > >> leverage SSSD capabilities regardles of what OS or versions the actual > >> clients in the remote office run. > > Well if we have to go through all these steps just to cache the user > > password we can also simply replicate the userPassword attribute for > > select users and just not the kerberos keys. Much simpler architecture > > and gives you the same results w/o any chaining at all. > > Even better. > > >> If we need Kerberos SSO in the remote office this is a different use case. > >> Does this make sense now? > > Yes, but I was assuming Kerberos to be naturally necessary. If an > > organization deploys kerberized services then naturally users will have > > to use them. Case in point the Web UI for FreeIPA itself. > > > > Proxying pure LDAP is a solved problem, there are a number of > > meta-directories that could probably also allow caching LDAP binds, I > > think we want to concentrate on a solution that gives you the full > > functionality of a freeIPA realm, but that's just me. I am not opposed > > to a lesser solution, just not sure we should concentrate too many > > energies into it. > > > > Simo. > > > I asked the question earlier in the thread. I said that there are two > options: login server and eSSO server. Ondrej indicated that he wants to > focus on the pure login server. This is why I continued exploring this > option. IMO these are two different use cases and two different > solutions that we should provide. One solution a striped down DS replica > like we discussed above. Ok. > The eSSO solution is more challenging. I agree. > So far I think I have a way to solve the problem if we allow storing > user passwords using symmetric keys in the LDAP. 
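To illustrate what not needing the clear text means in practice: the key material we already store can simply be unwrapped and re-wrapped under a per-branch master key at replication time (more on this below). A rough Python sketch, purely illustrative -- the helper names and the flat key blob are invented, real kdb keys are structured data and the real logic would live in the replication plugin:

    import os
    from Crypto.Cipher import AES

    def wrap(master_key, blob):
        # Encrypt a key blob under a 16/24/32-byte master key (AES-CFB, random IV).
        iv = os.urandom(16)
        return iv + AES.new(master_key, AES.MODE_CFB, iv).encrypt(blob)

    def unwrap(master_key, wrapped):
        iv, data = wrapped[:16], wrapped[16:]
        return AES.new(master_key, AES.MODE_CFB, iv).decrypt(data)

    def rewrap_for_branch(main_master_key, branch_master_key, stored_wrapped_key):
        # What the supplier would do when feeding a branch consumer: re-wrap the
        # key blob it already has, never handling a clear-text password at all.
        return wrap(branch_master_key, unwrap(main_master_key, stored_wrapped_key))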
This would allow us to > decrypt it in memory and generate a kerberos hash at need. > If this is a non starter I do not see a good way to provide a solution > that allow someone to get Kerberos tickets in the remote office and do > not replicate kerberos hashes. I think it is a non-starter to store a clear-text password even if reversibly encrypted, but I also think we do not need to do that all. > But let us assume that it is OK to encrypt user passwords with a long > key (like master kerberos key) and store encrypted passwords in another > subtype of the userPassword attribute. > So when a user password is set or updated such attribute is created. > > Now admin decides to create a special "remote" replica. As a part of > this operation a new master key, specific for that replica, will be > generated. It is different a Kerberos key from the Kerberos key used for > the rest of masters in the domain. This key is installed on the remote > replica. > As replication starts (first push) between some normal master and remote > replica the replication plugin will detect that one side of this > agreement is a normal master while the other side is a special remote > server. In this case the replication plugin would instead of taking > userPassword attribute as is from the normal master would get the > userPassword that can be decrypted will use the master key for the > remote replica to generate password hash on the fly and inject into the > replication stream instaed of the userPassword attribute stored on real > master. Ok, this come close to a proper solution but not quite. So first of all, kerberos keys are available in the master, we do not need to also store the clear txt password and regenerate them, but we do need to be able to convey them to the replica wrapped in a different master key. Let me explain what I had in mind to attack this problem (I hinted at it in a previous email). The simplest way is to use a principal named after the branch office replica as the 'local'-master and (possibly a different one) 'local'-krbtgt key, this way the key is available both to the FreeIPA masters and to the branch replica. Assume a realm named IPA.COM we would have principals named something like krbtgt-branch1.ipa.com at IPA.COM As for transmitting these keys we have 2 options. a) similar to what you describe above, unwrap kerberos keys before replication and re-wrap them in the branch office master key. b) store multiple copies of the keys already wrapped with the various branch office master keys so that the replication plugin doesn't need to do expensive crypto but only select the right pair With a) the disadvantage are that it is an expensive operation, and also makes it hard to deal with hubs (if we want to prevent hubs from getting access to access krb keys). With b) the disadvantage is that you have to create all other copies and keep them around. So any principal may end up having as many copies are there are branch offices in the degenerate case. It also make it difficult to deal with master key changes (both main master key and branch office master keys). Both approaches complicate a bit the initial replica setup, but not terribly so. > Using this approach we have several benefits: > > 1) Remote office can enjoy the eSSO > 2) If the remote replica is jeopardized there is nothing in it that can > be used to attack central server without cracking the kerberos hashes. 
> 3) Authentication against the remote server would not be respected > against the resources in the central server creating confined environment > 4) Remote office can operate with eSSO even if there is no connection to > the central server. Not so fast! :-) User keys are not all is needed. Re-encrypting user keys is only the first (and simplest) step. Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket granting Service problem. They are solvable problems (also because MS neatly demonstrated it can be solved in their RODC), but requires some more work on the KDC side. So problem (a): the krbtgt. When a user in a branch office obtain a ticket from the branch office KDC it will get this ticket encrypted with the branch office krbtgt key which is different from the other branch offices or the main ipa server krbtgt key. This means that if the user then tries to obtain a ticket from another KDC, this other KDC needs to be able to recognize that a different krbtgt key was used and find out the right key to decrypt the user ticket to verify its TGT is valid before it can proceed. We also have a worse reverse problem, where a user obtained a krbtgt for the ipa masters but then tries to get a service ticket from the branch office TGS which does not have access to the main krbtgt at all (or it could impersonate any user in the realm). So for all these case we need a mechanism in the KDC to i) recognize the situation ii) find out if it can decrypt the user tgt iii) either redirect the user to a TGS server that can decrypt the tgt or 'proxy' the request to a KDC that can do that. The problem in (b) is of the same nature, but stems from the problem that the branch office KDC will not have the keys for a lot (or all) of the service principal for the various servers in the organization. So after having solved problem (a) now we need again to either redirect the user to a TGS that have access to those keys so it can provide a ticket, or we need 'proxy' the ticket request to a server that can. In general we should try to avoid tring to 'redirect' because referrals are not really well supported (and not really well standardized yet) by many clients and I am not sure they could actually convey the information we need to anyways, so trying to redirect would end up being an inferior solution as it would work only for a tiny subset of clients that have 'enhanced' libraries. OTOH proxying has a couple of technical problems we need to solve. The first is that we need a protocol to handle that, but using something derived from the s4u2 extensions may work out. The second problem is that it requires network operations in the KDC, and our KDC is not currently multithreaded. However thanks to the recent changes to make the KDC more async friendly this may not be a problem after all. As the KDC could shelve the request until a timeout occurs or the remote KDC replies with the answer we need. > So we get security and convenience at the same time. I am all for it :) > For the password changes the remote server kpasswd should be updated to > proxy the password change to the real server. We do not need this, kerberos differentiate between kdcs and kpasswd servers, so all we need is to not start any kadmin/kpasswd server on the branch replica and set SRV records for kpasswd that point exclusively to masters. > So change will happen on > the real master and then replicated back to the remote server using the > logic described above. 
Yes, but w/o the proxying :) > To avoid the latency we can generate the new > password hash right away and save it locally without waiting for the > full replication cycle to get back to the remote server. Replication latency is another issue I was not going to address now, but we will definitely need to deal with it somehow, the simplest way when SSSD is involved is to point krb libs temporarily to the master where we just operated the password change, and revert back to the local branch office only later. But I would defer thinking about this problem for now. > Remote server would not have a CA. Agreed. > It might have DNS. Agreed. > There will be one > way replication agreement from the central server to that remote server > and the ACIs would be configured in such a way that no change can be > done locally except probably local failure counts and password update by > KDC. We may need some other changes, but this is a technicality, we will start as strict as possible and relax when needed. > The data replicated will be controlled by the flexible set of > filters that can be controlled by the admin when the replication > agreement is defined as we discussed earlier in the thread. Yeah well we should make it simple at start, I would probably default to replicate almost everything except krb keys. If people have a valid need for replicating less we will hear soon enough. But I would avoid getting too fancy with fractional replication because LDAP clients are generally not very smart and we do not want to have them broken by default because they were not able to chase referrals. Also a branch office is effectively a cache for most purposes so we should copy locally as much as possible. > If the remote office needs more than one server for redundancy purpose > another remote server can be created in the same way. Since there is no > data updates there is no need to replicate anything between the two > remote servers. True but to keep down bandwidth consumption it may be desirable to have one replicate from the central masters and the other from the first server. The krbtgt and master keys will not be a real problem as we can define these keys per location instead of per server, so multiple servers in one location would use the same set of keys (which is what you want anyways so clients can use either server w/o issues cause by different krbtgt keys per server). > Seems like a decent approach. > In future there might also be an option to use one way domains trusts > and subdomains but I am not exactly sure how that might work. I wasn't going to mention it, but using a "subdomain" would make the kerberos part easier to manage to some degree, at least from the pov of managing to provide tickets for services not in the branch replica, as it would basically give a way to 'redirect' clients by pretending the masters that have the keys are another realm. However this introduces a different set of issues I am not going to delve into deeply now, but that I think would make the problem probably more irksome to solve. As for 'real' subdomains, I am not sure I see much value in differentiating between a 'subdomain' and a separated normal trusted realm that just happen to live in a DNS subdomain. Simo. 
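To make the kpasswd point above concrete, that split is purely a matter of DNS SRV records, for example (zone and host names are placeholders): ticket traffic is directed at the branch KDC while password changes always land on the central masters.

    ; branch clients find their local KDC for ticket (AS/TGS) traffic
    _kerberos._udp.branch1.example.com. 86400 IN SRV 0 100 88  replica1.branch1.example.com.
    _kerberos._tcp.branch1.example.com. 86400 IN SRV 0 100 88  replica1.branch1.example.com.
    ; kpasswd records point only at the central masters
    _kpasswd._udp.example.com.          86400 IN SRV 0 100 464 master1.example.com.
    _kpasswd._tcp.example.com.          86400 IN SRV 0 100 464 master1.example.com.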
-- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Fri Apr 20 06:39:40 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 20 Apr 2012 08:39:40 +0200 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <1334243807.777.6.camel@balmora.brq.redhat.com> References: <20120403104135.GD23171@redhat.com> <20120403145749.GD8996@localhost.localdomain> <1334242615.777.3.camel@balmora.brq.redhat.com> <20120412150803.GA24623@redhat.com> <1334243807.777.6.camel@balmora.brq.redhat.com> Message-ID: <1334903980.2985.6.camel@balmora.brq.redhat.com> On Thu, 2012-04-12 at 17:16 +0200, Martin Kosek wrote: > On Thu, 2012-04-12 at 18:08 +0300, Alexander Bokovoy wrote: > > Hi Martin! > > > > On Thu, 12 Apr 2012, Martin Kosek wrote: > ... > > >3) I would not try to import ipaserver.dcerpc every time the command is > > >executed: > > >+ try: > > >+ import ipaserver.dcerpc > > >+ except Exception, e: > > >+ raise errors.NotFound(name=_('AD Trust setup'), > > >+ reason=_('Cannot perform join operation without Samba > > >4 python bindings installed')) > > > > > >I would rather do it once in the beginning and set a flag: > > > > > >try: > > > import ipaserver.dcerpc > > > _bindings_installed = True > > >except Exception: > > > _bindings_installed = False > > > > > >... > > The idea was that this code is only executed on the server. We need to > > differentiate between: > > - running on client > > - running on server, no samba4 python bindings > > - running on server with samba4 python bindings > > > > By making it executed all time you are affecting the client code as > > well while with current approach it only affects server side. > > Across our code base, this situation is currently solved with this > condition: > > if api.env.in_server and api.env.context in ['lite', 'server']: > # try-import block > > > > > > > >+ def execute(self, *keys, **options): > > >+ # Join domain using full credentials and with random trustdom > > >+ # secret (will be generated by the join method) > > >+ trustinstance = None > > >+ if not _bindings_installed: > > >+ raise errors.NotFound(name=_('AD Trust setup'), > > >+ reason=_('Cannot perform join operation without Samba > > >4 python bindings installed')) > > > > > > > > >4) Another import inside a function: > > >+ def arcfour_encrypt(key, data): > > >+ from Crypto.Cipher import ARC4 > > >+ c = ARC4.new(key) > > >+ return c.encrypt(data) > > Same here, it is only needed on server side. > > > > Let us get consensus over 3) and 4) and I'll fix patches altogether (and > > push). > > > > Yeah, I would fix in the same way as 3). > I am running another run of test to finish my review of your patches, but I stumbled in 389-ds error when I was installing IPA server from package built from your git tree: git://fedorapeople.org/home/fedora/abbra/public_git/freeipa.git # rpm -q freeipa-server 389-ds-base freeipa-server-2.99.0GITc30f375-0.fc17.x86_64 389-ds-base-1.2.11-0.1.a1.fc17.x86_64 # ipa-server-install -p kokos123 -a kokos123 ... [16/18]: issuing RA agent certificate [17/18]: adding RA agent as a trusted user [18/18]: Configure HTTP to proxy connections done configuring pki-cad. 
Configuring directory server: Estimated time 1 minute [1/35]: creating directory server user [2/35]: creating directory server instance [3/35]: adding default schema [4/35]: enabling memberof plugin [5/35]: enabling referential integrity plugin [6/35]: enabling winsync plugin [7/35]: configuring replication version plugin [8/35]: enabling IPA enrollment plugin [9/35]: enabling ldapi [10/35]: configuring uniqueness plugin [11/35]: configuring uuid plugin [12/35]: configuring modrdn plugin [13/35]: enabling entryUSN plugin [14/35]: configuring lockout plugin [15/35]: creating indices [16/35]: configuring ssl for ds instance [17/35]: configuring certmap.conf [18/35]: configure autobind for root [19/35]: configure new location for managed entries [20/35]: restarting directory server [21/35]: adding default layout [22/35]: adding delegation layout ipa : CRITICAL Failed to load delegation.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpdXcWF3 -x -D cn=Directory Manager -y /tmp/tmp8qtnOS' returned non-zero exit status 255 [23/35]: adding replication acis ipa : CRITICAL Failed to load replica-acis.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmptivfJ_ -x -D cn=Directory Manager -y /tmp/tmpr_Z1lp' returned non-zero exit status 255 [24/35]: creating container for managed entries ipa : CRITICAL Failed to load managed-entries.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpNkmoDk -x -D cn=Directory Manager -y /tmp/tmpXU0lbx' returned non-zero exit status 255 [25/35]: configuring user private groups ipa : CRITICAL Failed to load user_private_groups.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmp7uDqaG -x -D cn=Directory Manager -y /tmp/tmp6E_uPl' returned non-zero exit status 255 [26/35]: configuring netgroups from hostgroups ipa : CRITICAL Failed to load host_nis_groups.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmphxoVQf -x -D cn=Directory Manager -y /tmp/tmpsAhhwd' returned non-zero exit status 255 [27/35]: creating default Sudo bind user ipa : CRITICAL Failed to load sudobind.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpCVpYqT -x -D cn=Directory Manager -y /tmp/tmp97b_6d' returned non-zero exit status 255 [28/35]: creating default Auto Member layout ipa : CRITICAL Failed to load automember.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpvcFbwK -x -D cn=Directory Manager -y /tmp/tmpSUownE' returned non-zero exit status 255 [29/35]: creating default HBAC rule allow_all ipa : CRITICAL Failed to load default-hbac.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpYoYkBy -x -D cn=Directory Manager -y /tmp/tmp_9le4C' returned non-zero exit status 255 [30/35]: initializing group membership ipa : CRITICAL Failed to load memberof-task.ldif: Command '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v -f /tmp/tmpD9mIxC -x -D cn=Directory Manager -y /tmp/tmpeTqozO' returned non-zero exit status 255 Unexpected error - see ipaserver-install.log for details: {'desc': "Can't contact LDAP server"} # tail /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors [20/Apr/2012:02:19:16 -0400] - 389-Directory/1.2.11.a1 B2012.090.2135 starting up [20/Apr/2012:02:19:16 -0400] attrcrypt - No symmetric key found for cipher AES in backend userRoot, attempting to create one... 
[20/Apr/2012:02:19:16 -0400] attrcrypt - Key for cipher AES successfully generated and stored [20/Apr/2012:02:19:16 -0400] attrcrypt - No symmetric key found for cipher 3DES in backend userRoot, attempting to create one... [20/Apr/2012:02:19:16 -0400] attrcrypt - Key for cipher 3DES successfully generated and stored [20/Apr/2012:02:19:16 -0400] - slapd started. Listening on All Interfaces port 389 for LDAP requests [20/Apr/2012:02:19:16 -0400] - Listening on All Interfaces port 636 for LDAPS requests [20/Apr/2012:02:19:16 -0400] - Listening on /var/run/slapd-IDM-LAB-BOS-REDHAT-COM.socket for LDAPI requests [20/Apr/2012:02:19:17 -0400] - Skipping CoS Definition cn=Password Policy,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com--no CoS Templates found, which should be added before the CoS Definition. [20/Apr/2012:02:19:17 -0400] entryrdn-index - _entryrdn_put_data: Adding the self link (62) failed: BDB0068 DB_LOCK_DEADLOCK: Locker killed to resolve a deadlock (-30993) Martin From pviktori at redhat.com Fri Apr 20 08:14:00 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 20 Apr 2012 10:14:00 +0200 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F905AE0.2080109@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> <4F8EA671.1000006@redhat.com> <4F8F16CD.3070105@redhat.com> <4F8FF128.1060601@redhat.com> <4F905AE0.2080109@redhat.com> Message-ID: <4F911AC8.8050004@redhat.com> On 04/19/2012 08:35 PM, John Dennis wrote: > On 04/19/2012 07:04 AM, Petr Viktorin wrote: >> On 04/18/2012 09:32 PM, John Dennis wrote: >>>> Now that there are warnings, is pedantic mode necessary? >>> >>> Great question, I also pondered that as well. My conclusion was there >>> was value in separating aggressiveness of error checking from the >>> verbosity of the output. Also I didn't think we wanted warnings showing >>> in normal checking for things which are suspicious but not known to be >>> incorrect. So under the current scheme pedantic mode enables reporting >>> of suspicious constructs. You can still get a warning in the normal mode >>> for things which aren't fatal but are provably incorrect. An example of >>> this would be missing plural translations, it won't cause a run time >>> failure and we can be definite about their absence, however they should >>> be fixed, but it's not mandatory they be fixed, a warning in this case >>> seems appropriate. >> >> If they should be fixed, we should fix them, and treat them as errors >> the same way we treat lint's warnings as errors. If the pedantic mode is >> an obscure option of some test module, I worry that nobody will ever >> run it. > > The value of pedantic mode is for the person maintaining the > translations (at the moment that's me). It's not normally needed, but > when something goes wrong it may be helpful to diagnose what might be > amiss, in this case false positives are tolerable, in normal mode false > positives should be silenced (a request of yours from an earlier > review). Another thing to note is that a number of the warnings are > limited to po checking, once again this is a translation maintainer > feature, not a general test/developer feature. Thanks for the clarification. Now I see the use. >> Separating aggressiveness of checking from verbosity is not a bad idea. >> But since we now have two severity levels, and the checking is cheap, >> I'm not convinced that the aggressiveness should be tunable. 
>> How about always counting the pedantic warnings, but not showing the >> details? Then, if such warnings are found, have the tool say how to run >> it to get a full report. That way people will notice it. > > In an earlier review you asked to limit the output to only what is an > actual provable error. I agreed with you and modified the code. One > reason not to modify it again is the amount of time being devoted to > polishing what is an internal developer tool. I've tweaked the reporting > 3-4 times already, I don't think it's time well spent to go back and do > it again. After all, this is an internal tool, it will never be seen by > a customer, if as we get experience with it we discover it's needs > tweaking because it's not doing the job we hoped it would then that's > the moment to invest more engineering resources on the output, > validation, or whatever the deficiency is. > ACK (it's a 3.0 task, please push to master only) -- Petr? From rcritten at redhat.com Fri Apr 20 15:10:01 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 20 Apr 2012 11:10:01 -0400 Subject: [Freeipa-devel] [PATCH] 1008 use mixed-case for dns permission name Message-ID: <4F917C49.9060703@redhat.com> A new DNS permission that went into 2.2 uses all lower case to be consistent with existing DNS Permissions. This switches it to use mixed case as well. We'll investigate renaming the existing entries as well. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1008-permissions.patch Type: text/x-diff Size: 4195 bytes Desc: not available URL: From jdennis at redhat.com Fri Apr 20 16:37:08 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 20 Apr 2012 12:37:08 -0400 Subject: [Freeipa-devel] samba4 woes Message-ID: <4F9190B4.1080104@redhat.com> We're supposed to be working on master now, not 2.2. But master has dependencies on samba4. Those dependencies can only be resolved on F17, an unreleased platform. I think it's reasonable for IPA developers to work on the current Fedora release (F16) and not have to resort to trying to develop inside a vm or run builds inside of mock, etc. Even the most basic developer task such as running make-lint won't work because it requires you run make to populate the generated files. But you can't run make because the configure step fails, it fails because it can't satisfy the samba4 dependencies. The samba4 dependencies can't be satisfied on F16 (I know, I tried). FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock on my local F16 machine and it fails because it depends on libldb 1.1.4 which is only in F17, and so on and so on. So how are others working on master and how are you getting around this problem? -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From abokovoy at redhat.com Fri Apr 20 17:35:16 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 20 Apr 2012 20:35:16 +0300 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <4F9190B4.1080104@redhat.com> References: <4F9190B4.1080104@redhat.com> Message-ID: <20120420173516.GE21946@redhat.com> Hi John, On Fri, 20 Apr 2012, John Dennis wrote: >We're supposed to be working on master now, not 2.2. But master has >dependencies on samba4. Those dependencies can only be resolved on >F17, an unreleased platform. > >I think it's reasonable for IPA developers to work on the current >Fedora release (F16) and not have to resort to trying to develop >inside a vm or run builds inside of mock, etc. 
> >Even the most basic developer task such as running make-lint won't >work because it requires you run make to populate the generated >files. But you can't run make because the configure step fails, it >fails because it can't satisfy the samba4 dependencies. The samba4 >dependencies can't be satisfied on F16 (I know, I tried). Please try http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 We are trying to make sure samba4 development packages and libraries are installable in parallel to samba3 libraries so that other F16 components would continue working. Unfortunately, it is a bit more complicated than needed due to versioning symbols in multiple libraries. The update above should solve the issues. >FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock on >my local F16 machine and it fails because it depends on libldb 1.1.4 >which is only in F17, and so on and so on. > >So how are others working on master and how are you getting around >this problem? Updated F16 build is http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 -- / Alexander Bokovoy From abokovoy at redhat.com Fri Apr 20 17:42:52 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 20 Apr 2012 20:42:52 +0300 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <20120420173516.GE21946@redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> Message-ID: <20120420174252.GF21946@redhat.com> ... and this build failed due to issues in Koji DEBUG util.py:257: http://kojipkgs.fedoraproject.org/packages/authconfig/6.1.16/2.fc16/x86_64/authconfig-6.1.16-2.fc16.x86_64.rpm: [Errno 14] PYCURL ERROR 7 - "couldn't connect to host" DEBUG util.py:257: Trying other mirror. Looks like some networking issues. On Fri, 20 Apr 2012, Alexander Bokovoy wrote: >Hi John, > >On Fri, 20 Apr 2012, John Dennis wrote: >>We're supposed to be working on master now, not 2.2. But master has >>dependencies on samba4. Those dependencies can only be resolved on >>F17, an unreleased platform. >> >>I think it's reasonable for IPA developers to work on the current >>Fedora release (F16) and not have to resort to trying to develop >>inside a vm or run builds inside of mock, etc. >> >>Even the most basic developer task such as running make-lint won't >>work because it requires you run make to populate the generated >>files. But you can't run make because the configure step fails, it >>fails because it can't satisfy the samba4 dependencies. The samba4 >>dependencies can't be satisfied on F16 (I know, I tried). >Please try http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 > >We are trying to make sure samba4 development packages and libraries are >installable in parallel to samba3 libraries so that other F16 components >would continue working. Unfortunately, it is a bit more complicated than >needed due to versioning symbols in multiple libraries. The update above >should solve the issues. > >>FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock >>on my local F16 machine and it fails because it depends on libldb >>1.1.4 which is only in F17, and so on and so on. >> >>So how are others working on master and how are you getting around >>this problem? 
>Updated F16 build is >http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 > >-- >/ Alexander Bokovoy > >_______________________________________________ >Freeipa-devel mailing list >Freeipa-devel at redhat.com >https://www.redhat.com/mailman/listinfo/freeipa-devel -- / Alexander Bokovoy From jdennis at redhat.com Fri Apr 20 17:43:49 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 20 Apr 2012 13:43:49 -0400 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <20120420173516.GE21946@redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> Message-ID: <4F91A055.3010603@redhat.com> On 04/20/2012 01:35 PM, Alexander Bokovoy wrote: > Hi John, > > On Fri, 20 Apr 2012, John Dennis wrote: >> We're supposed to be working on master now, not 2.2. But master has >> dependencies on samba4. Those dependencies can only be resolved on >> F17, an unreleased platform. >> >> I think it's reasonable for IPA developers to work on the current >> Fedora release (F16) and not have to resort to trying to develop >> inside a vm or run builds inside of mock, etc. >> >> Even the most basic developer task such as running make-lint won't >> work because it requires you run make to populate the generated >> files. But you can't run make because the configure step fails, it >> fails because it can't satisfy the samba4 dependencies. The samba4 >> dependencies can't be satisfied on F16 (I know, I tried). > Please try http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 > > We are trying to make sure samba4 development packages and libraries are > installable in parallel to samba3 libraries so that other F16 components > would continue working. Unfortunately, it is a bit more complicated than > needed due to versioning symbols in multiple libraries. The update above > should solve the issues. > >> FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock on >> my local F16 machine and it fails because it depends on libldb 1.1.4 >> which is only in F17, and so on and so on. >> >> So how are others working on master and how are you getting around >> this problem? > Updated F16 build is > http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 Did you notice the koji builds you referenced failed to build? This is why I tried to do a local mock build of the srpm, which also failed due to libldb dependencies. Which is what caused me to write the original mail :-) -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jdennis at redhat.com Fri Apr 20 18:07:31 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 20 Apr 2012 14:07:31 -0400 Subject: [Freeipa-devel] [PATCH 75] log dogtag errors Message-ID: <201204201807.q3KI7VuR019115@int-mx11.intmail.prod.int.phx2.redhat.com> Ticket #2622 If we get an error from dogtag we always did raise a CertificateOperationError exception with a message describing the problem. Unfortuanately that error message did not go into the log, just sent back to the caller. The fix is to format the error message and send the same message to both the log and use it to initialize the CertificateOperationError exception. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jdennis-0075-log-dogtag-errors.patch Type: text/x-patch Size: 6976 bytes Desc: not available URL: From abokovoy at redhat.com Fri Apr 20 19:27:40 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 20 Apr 2012 22:27:40 +0300 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <4F91A055.3010603@redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> <4F91A055.3010603@redhat.com> Message-ID: <20120420192740.GG21946@redhat.com> On Fri, 20 Apr 2012, John Dennis wrote: >On 04/20/2012 01:35 PM, Alexander Bokovoy wrote: >>Hi John, >> >>On Fri, 20 Apr 2012, John Dennis wrote: >>>We're supposed to be working on master now, not 2.2. But master has >>>dependencies on samba4. Those dependencies can only be resolved on >>>F17, an unreleased platform. >>> >>>I think it's reasonable for IPA developers to work on the current >>>Fedora release (F16) and not have to resort to trying to develop >>>inside a vm or run builds inside of mock, etc. >>> >>>Even the most basic developer task such as running make-lint won't >>>work because it requires you run make to populate the generated >>>files. But you can't run make because the configure step fails, it >>>fails because it can't satisfy the samba4 dependencies. The samba4 >>>dependencies can't be satisfied on F16 (I know, I tried). >>Please try http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 >> >>We are trying to make sure samba4 development packages and libraries are >>installable in parallel to samba3 libraries so that other F16 components >>would continue working. Unfortunately, it is a bit more complicated than >>needed due to versioning symbols in multiple libraries. The update above >>should solve the issues. >> >>>FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock on >>>my local F16 machine and it fails because it depends on libldb 1.1.4 >>>which is only in F17, and so on and so on. >>> >>>So how are others working on master and how are you getting around >>>this problem? >>Updated F16 build is >>http://koji.fedoraproject.org/koji/taskinfo?taskID=4009116 > >Did you notice the koji builds you referenced failed to build? This >is why I tried to do a local mock build of the srpm, which also >failed due to libldb dependencies. Which is what caused me to write >the original mail :-) :) It failed to build due to koji issues, not the build issues. We had also incompatible libldb in F16/F15 that prevented us going to alpha18 instead of alpha16 in those distributions. I hope Andreas (CC:) will be able to look at the issue soon... -- / Alexander Bokovoy From sgallagh at redhat.com Fri Apr 20 19:39:56 2012 From: sgallagh at redhat.com (Stephen Gallagher) Date: Fri, 20 Apr 2012 15:39:56 -0400 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <20120420192740.GG21946@redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> <4F91A055.3010603@redhat.com> <20120420192740.GG21946@redhat.com> Message-ID: <1334950796.4949.24.camel@sgallagh520.sgallagh.bos.redhat.com> On Fri, 2012-04-20 at 22:27 +0300, Alexander Bokovoy wrote: > :) > > It failed to build due to koji issues, not the build issues. > > We had also incompatible libldb in F16/F15 that prevented us going to > alpha18 instead of alpha16 in those distributions. > > I hope Andreas (CC:) will be able to look at the issue soon... > FWIW, I own libldb, so if you need it updated, I'm your man. 
Just so you know, rebuilding libldb ALSO forces a rebuild of libtalloc, libtdb, libtevent, SSSD and possibly openchange, so an update like that has to be carefully coordinated. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From abokovoy at redhat.com Fri Apr 20 19:50:49 2012 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 20 Apr 2012 22:50:49 +0300 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <1334950796.4949.24.camel@sgallagh520.sgallagh.bos.redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> <4F91A055.3010603@redhat.com> <20120420192740.GG21946@redhat.com> <1334950796.4949.24.camel@sgallagh520.sgallagh.bos.redhat.com> Message-ID: <20120420195049.GH21946@redhat.com> On Fri, 20 Apr 2012, Stephen Gallagher wrote: >On Fri, 2012-04-20 at 22:27 +0300, Alexander Bokovoy wrote: >> :) >> >> It failed to build due to koji issues, not the build issues. >> >> We had also incompatible libldb in F16/F15 that prevented us going to >> alpha18 instead of alpha16 in those distributions. >> >> I hope Andreas (CC:) will be able to look at the issue soon... >> > >FWIW, I own libldb, so if you need it updated, I'm your man. Just so you >know, rebuilding libldb ALSO forces a rebuild of libtalloc, libtdb, >libtevent, SSSD and possibly openchange, so an update like that has to >be carefully coordinated. We'll probably go that way once samba4 build against system libkrb5 will be ready as that will cause code changes we can't apply to alpha16 anymore. Before that forcing full circle of updates will not give us anything other than heating the air. -- / Alexander Bokovoy From jdennis at redhat.com Fri Apr 20 19:55:05 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 20 Apr 2012 15:55:05 -0400 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <1334950796.4949.24.camel@sgallagh520.sgallagh.bos.redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420173516.GE21946@redhat.com> <4F91A055.3010603@redhat.com> <20120420192740.GG21946@redhat.com> <1334950796.4949.24.camel@sgallagh520.sgallagh.bos.redhat.com> Message-ID: <4F91BF19.7030808@redhat.com> On 04/20/2012 03:39 PM, Stephen Gallagher wrote: > On Fri, 2012-04-20 at 22:27 +0300, Alexander Bokovoy wrote: >> :) >> >> It failed to build due to koji issues, not the build issues. Yes, the koji build failed due to koji issues, but since koji was bonked I tried to build it in mock on my machine with an f16-i386 profile, it failed with this error: ERROR: System library ldb of version 1.1.4 not found, and bundling disabled Since I don't see a 1.1.4 version ldb in koji for f16 I assumed had the koji build proceeded it too would have hit the same error. >> >> We had also incompatible libldb in F16/F15 that prevented us going to >> alpha18 instead of alpha16 in those distributions. >> >> I hope Andreas (CC:) will be able to look at the issue soon... >> > > FWIW, I own libldb, so if you need it updated, I'm your man. Just so you > know, rebuilding libldb ALSO forces a rebuild of libtalloc, libtdb, > libtevent, SSSD and possibly openchange, so an update like that has to > be carefully coordinated. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From dpal at redhat.com Fri Apr 20 20:09:33 2012 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 20 Apr 2012 16:09:33 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334879014.16658.499.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> Message-ID: <4F91C27D.6070507@redhat.com> On 04/19/2012 07:43 PM, Simo Sorce wrote: > Ok, this come close to a proper solution but not quite. > So first of all, kerberos keys are available in the master, we do not > need to also store the clear txt password and regenerate them, but we do > need to be able to convey them to the replica wrapped in a different > master key. > > Let me explain what I had in mind to attack this problem (I hinted at it > in a previous email). > > The simplest way is to use a principal named after the branch office > replica as the 'local'-master and (possibly a different one) > 'local'-krbtgt key, this way the key is available both to the FreeIPA > masters and to the branch replica. > > Assume a realm named IPA.COM we would have principals named something > like krbtgt-branch1.ipa.com at IPA.COM > > As for transmitting these keys we have 2 options. > a) similar to what you describe above, unwrap kerberos keys before > replication and re-wrap them in the branch office master key. > b) store multiple copies of the keys already wrapped with the various > branch office master keys so that the replication plugin doesn't need to > do expensive crypto but only select the right pair I was under the assumption that to be able to wrap things properly you need both user password in clear that you have only at the moment the hashes are created and the key for the branch office replica. Is this the wrong assumption? If you do not need raw password you can rewrap things later but you should do it only on the fly when the attribute is replicated. You do not want to create re-wrapped hashes when the replica is added - that will be DDoS attack. If the assumption is correct then you would have to force password change every time you add a branch office replica. > With a) the disadvantage are that it is an expensive operation, and also > makes it hard to deal with hubs (if we want to prevent hubs from getting > access to access krb keys). Why this is a problem? > With b) the disadvantage is that you have to create all other copies and > keep them around. So any principal may end up having as many copies are > there are branch offices in the degenerate case. It also make it > difficult to deal with master key changes (both main master key and > branch office master keys). 
> > Both approaches complicate a bit the initial replica setup, but not > terribly so. So which one do you propose? It is not clear. >> > Using this approach we have several benefits: >> > >> > 1) Remote office can enjoy the eSSO >> > 2) If the remote replica is jeopardized there is nothing in it that can >> > be used to attack central server without cracking the kerberos hashes. >> > 3) Authentication against the remote server would not be respected >> > against the resources in the central server creating confined environment >> > 4) Remote office can operate with eSSO even if there is no connection to >> > the central server. > Not so fast! :-) > User keys are not all is needed. > Re-encrypting user keys is only the first (and simplest) step. > Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket > granting Service problem. They are solvable problems (also because MS > neatly demonstrated it can be solved in their RODC), but requires some > more work on the KDC side. > > So problem (a): the krbtgt. > When a user in a branch office obtain a ticket from the branch office > KDC it will get this ticket encrypted with the branch office krbtgt key > which is different from the other branch offices or the main ipa server > krbtgt key. > This means that if the user then tries to obtain a ticket from another > KDC, this other KDC needs to be able to recognize that a different > krbtgt key was used and find out the right key to decrypt the user > ticket to verify its TGT is valid before it can proceed. I am not sure this needs to be solved. It depends what kinds of central services are required for the local office users to access. Can we defer this? > We also have a worse reverse problem, where a user obtained a krbtgt for > the ipa masters but then tries to get a service ticket from the branch > office TGS which does not have access to the main krbtgt at all (or it > could impersonate any user in the realm). > So for all these case we need a mechanism in the KDC to i) recognize the > situation ii) find out if it can decrypt the user tgt iii) either > redirect the user to a TGS server that can decrypt the tgt or 'proxy' > the request to a KDC that can do that. Seems very complex and too generic. May be we should slice it into several use cases and address them one at a time. > The problem in (b) is of the same nature, but stems from the problem > that the branch office KDC will not have the keys for a lot (or all) of > the service principal for the various servers in the organization. > So after having solved problem (a) now we need again to either redirect > the user to a TGS that have access to those keys so it can provide a > ticket, or we need 'proxy' the ticket request to a server that can. > > > In general we should try to avoid tring to 'redirect' because referrals > are not really well supported (and not really well standardized yet) by > many clients and I am not sure they could actually convey the > information we need to anyways, so trying to redirect would end up being > an inferior solution as it would work only for a tiny subset of clients > that have 'enhanced' libraries. > OTOH proxying has a couple of technical problems we need to solve. > The first is that we need a protocol to handle that, but using something > derived from the s4u2 extensions may work out. The second problem is > that it requires network operations in the KDC, and our KDC is not > currently multithreaded. 
However thanks to the recent changes to make > the KDC more async friendly this may not be a problem after all. As the > KDC could shelve the request until a timeout occurs or the remote KDC > replies with the answer we need. > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Fri Apr 20 20:29:52 2012 From: simo at redhat.com (Simo Sorce) Date: Fri, 20 Apr 2012 16:29:52 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F91C27D.6070507@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1331049562.26197.82.camel@willson.li.ssimo.org> <4F563F8F.5080504@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> Message-ID: <1334953792.16658.526.camel@willson.li.ssimo.org> On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: > I was under the assumption that to be able to wrap things properly you > need both user password in clear that you have only at the moment the > hashes are created and the key for the branch office replica. Is this > the wrong assumption? If you do not need raw password you can rewrap > things later but you should do it only on the fly when the attribute is > replicated. You do not want to create re-wrapped hashes when the > replica is added - that will be DDoS attack. > If the assumption is correct then you would have to force password > change every time you add a branch office replica. The assumption is incorrect. We can rewrap at replication tiem in a different master key, and that doesn't require the user clear text password, only access to the old and the new master key. A similar mechanism is used internally > > With a) the disadvantage are that it is an expensive operation, and also > > makes it hard to deal with hubs (if we want to prevent hubs from getting > > access to access krb keys). > > Why this is a problem? Performance wise it probably isn't a big deal after all, it will slow down replication a bit, but except for mass enrollments it will not be that big of an issue. The main issue I see with hubs is that I was under the impression we didn't want to let hubs be able to decrypt keys. In that case we can handle only replicating to hubs that have a single 'branch office' they replicate to because they will not be able to do an re-wrapping. That is why I mentioned option (b) where you pre-generate copies for the target branches. If we are ok giving hubs the ability to re-wrap keys, then it is not an issue, as we can provide hubs with their own master key and then we can re-wrap master -> hub and then again hub -> branch by giving the hub also the branch master key. 
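The re-wrap step itself is cheap: conceptually it is nothing more than decrypting the stored key blob with the wrapping key the sender holds and immediately re-encrypting it with the wrapping key of the target (hub or branch office). A rough sketch of that step, using PyCrypto purely for illustration; the real key blobs carry enctype, kvno and salt metadata, and IV/padding handling is deliberately glossed over here:

    from Crypto.Cipher import AES

    def rewrap_key(wrapped_key, src_master_key, dst_master_key, iv):
        # decrypt with the wrapping key we already hold ...
        plain = AES.new(src_master_key, AES.MODE_CBC, iv).decrypt(wrapped_key)
        # ... and re-encrypt with the target's wrapping key; the clear
        # text key only ever exists in memory during this step
        return AES.new(dst_master_key, AES.MODE_CBC, iv).encrypt(plain)

Neither the user's clear text password nor a password change is needed for this, only access to both wrapping keys on the box doing the re-wrap.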
> > With b) the disadvantage is that you have to create all other copies and > > keep them around. So any principal may end up having as many copies are > > there are branch offices in the degenerate case. It also make it > > difficult to deal with master key changes (both main master key and > > branch office master keys). > > > > Both approaches complicate a bit the initial replica setup, but not > > terribly so. > > So which one do you propose? It is not clear. I am still unsure, both have advantages and drawbacks, see above about hubs for example. > >> > Using this approach we have several benefits: > >> > > >> > 1) Remote office can enjoy the eSSO > >> > 2) If the remote replica is jeopardized there is nothing in it that can > >> > be used to attack central server without cracking the kerberos hashes. > >> > 3) Authentication against the remote server would not be respected > >> > against the resources in the central server creating confined environment > >> > 4) Remote office can operate with eSSO even if there is no connection to > >> > the central server. > > Not so fast! :-) > > User keys are not all is needed. > > Re-encrypting user keys is only the first (and simplest) step. > > Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket > > granting Service problem. They are solvable problems (also because MS > > neatly demonstrated it can be solved in their RODC), but requires some > > more work on the KDC side. > > > > So problem (a): the krbtgt. > > When a user in a branch office obtain a ticket from the branch office > > KDC it will get this ticket encrypted with the branch office krbtgt key > > which is different from the other branch offices or the main ipa server > > krbtgt key. > > This means that if the user then tries to obtain a ticket from another > > KDC, this other KDC needs to be able to recognize that a different > > krbtgt key was used and find out the right key to decrypt the user > > ticket to verify its TGT is valid before it can proceed. > > I am not sure this needs to be solved. It depends what kinds of central > services are required for the local office users to access. Can we defer > this? No. It is key to have this working, having a krbtgt that cannot be used to acquire tickets is useless. > > We also have a worse reverse problem, where a user obtained a krbtgt for > > the ipa masters but then tries to get a service ticket from the branch > > office TGS which does not have access to the main krbtgt at all (or it > > could impersonate any user in the realm). > > So for all these case we need a mechanism in the KDC to i) recognize the > > situation ii) find out if it can decrypt the user tgt iii) either > > redirect the user to a TGS server that can decrypt the tgt or 'proxy' > > the request to a KDC that can do that. > > Seems very complex and too generic. May be we should slice it into > several use cases and address them one at a time. It's a single use case, acquiring a ticket. Nothing we can really split into discrete pieces. Simo. -- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Fri Apr 20 21:00:18 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 20 Apr 2012 17:00:18 -0400 Subject: [Freeipa-devel] [PATCH] 1009 improve user-status Message-ID: <4F91CE62.7040800@redhat.com> Make some minor improvements to user-status. This beefs up the docs a bit, adds nsaccountlock to the output and includes the time we checked each master. 
It is a bit of a kludge to put nsaccountlock into the summary but I don't want to return it per-server, it should be the same across them all. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1009-status.patch Type: text/x-diff Size: 3833 bytes Desc: not available URL: From sbose at redhat.com Fri Apr 20 21:49:41 2012 From: sbose at redhat.com (Sumit Bose) Date: Fri, 20 Apr 2012 23:49:41 +0200 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <4F9190B4.1080104@redhat.com> References: <4F9190B4.1080104@redhat.com> Message-ID: <20120420214941.GD2361@localhost.localdomain> On Fri, Apr 20, 2012 at 12:37:08PM -0400, John Dennis wrote: > We're supposed to be working on master now, not 2.2. But master has > dependencies on samba4. Those dependencies can only be resolved on > F17, an unreleased platform. > > I think it's reasonable for IPA developers to work on the current > Fedora release (F16) and not have to resort to trying to develop > inside a vm or run builds inside of mock, etc. > > Even the most basic developer task such as running make-lint won't > work because it requires you run make to populate the generated > files. But you can't run make because the configure step fails, it > fails because it can't satisfy the samba4 dependencies. The samba4 > dependencies can't be satisfied on F16 (I know, I tried). > > FWIW, I tried to build samba4-4.0.0-42alpha18.fc17.src.rpm in mock > on my local F16 machine and it fails because it depends on libldb > 1.1.4 which is only in F17, and so on and so on. > > So how are others working on master and how are you getting around > this problem? I take samba4 and libldb from the ipa-devel repo. There are even versions for my very old F15 devel system. HTH bye, Sumit > > -- > John Dennis > > Looking to carve out IT costs? > www.redhat.com/carveoutcosts/ > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel From jdennis at redhat.com Fri Apr 20 23:20:32 2012 From: jdennis at redhat.com (John Dennis) Date: Fri, 20 Apr 2012 19:20:32 -0400 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <20120420214941.GD2361@localhost.localdomain> References: <4F9190B4.1080104@redhat.com> <20120420214941.GD2361@localhost.localdomain> Message-ID: <4F91EF40.7070107@redhat.com> On 04/20/2012 05:49 PM, Sumit Bose wrote: > I take samba4 and libldb from the ipa-devel repo. There are even > versions for my very old F15 devel system. Yup, one of the first things I tried. But those conflict with the libsmbclient in f16. If you try to remove or update libsmbclient you'll discover there are number of packages that require libsmbclient on a normal f16 system. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From sbose at redhat.com Sat Apr 21 10:38:04 2012 From: sbose at redhat.com (Sumit Bose) Date: Sat, 21 Apr 2012 12:38:04 +0200 Subject: [Freeipa-devel] samba4 woes In-Reply-To: <4F91EF40.7070107@redhat.com> References: <4F9190B4.1080104@redhat.com> <20120420214941.GD2361@localhost.localdomain> <4F91EF40.7070107@redhat.com> Message-ID: <20120421103804.GE2361@localhost.localdomain> On Fri, Apr 20, 2012 at 07:20:32PM -0400, John Dennis wrote: > On 04/20/2012 05:49 PM, Sumit Bose wrote: > >I take samba4 and libldb from the ipa-devel repo. There are even > >versions for my very old F15 devel system. > > Yup, one of the first things I tried. > > But those conflict with the libsmbclient in f16. 
If you try to > remove or update libsmbclient you'll discover there are number of > packages that require libsmbclient on a normal f16 system. ah, sorry, I didn't see this issue because I have a separate devel VM on my notebook, with a minimal install. If I read the related thread on samba-technical correctly Andreas agreed with upstream on a way how to solve it. I will talk to him on Monday morning and hopefully there are updated packages on ipa-devel without this conflict when you start working on Monday morning. bye, Sumit > > > -- > John Dennis > > Looking to carve out IT costs? > www.redhat.com/carveoutcosts/ From mkosek at redhat.com Mon Apr 23 08:09:06 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 23 Apr 2012 10:09:06 +0200 Subject: [Freeipa-devel] [PATCH] 1008 use mixed-case for dns permission name In-Reply-To: <4F917C49.9060703@redhat.com> References: <4F917C49.9060703@redhat.com> Message-ID: <1335168546.32391.0.camel@balmora.brq.redhat.com> On Fri, 2012-04-20 at 11:10 -0400, Rob Crittenden wrote: > A new DNS permission that went into 2.2 uses all lower case to be > consistent with existing DNS Permissions. This switches it to use mixed > case as well. We'll investigate renaming the existing entries as well. > > rob ACK. Pushed to master, ipa-2-2. Martin From mkosek at redhat.com Mon Apr 23 08:24:15 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 23 Apr 2012 10:24:15 +0200 Subject: [Freeipa-devel] [PATCH] 1009 improve user-status In-Reply-To: <4F91CE62.7040800@redhat.com> References: <4F91CE62.7040800@redhat.com> Message-ID: <1335169455.32391.4.camel@balmora.brq.redhat.com> On Fri, 2012-04-20 at 17:00 -0400, Rob Crittenden wrote: > Make some minor improvements to user-status. > > This beefs up the docs a bit, adds nsaccountlock to the output and > includes the time we checked each master. > > It is a bit of a kludge to put nsaccountlock into the summary but I > don't want to return it per-server, it should be the same across them all. > > rob ACK, this approach is OK for now. I just did a minor tweak to "now" output attribute before pushing to make it's time format consistent both in standard and raw mode. Pushed to master, ipa-2-2. Martin From pviktori at redhat.com Mon Apr 23 09:19:51 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 23 Apr 2012 11:19:51 +0200 Subject: [Freeipa-devel] [PATCH] 0040 Move install script error handling to a common function Message-ID: <4F951EB7.9050908@redhat.com> This fixes https://fedorahosted.org/freeipa/ticket/2071 (Add final debug message in installers). I submitted an earlier version of this patch before (0014), but it was too much to include in 2.2. Hopefully now there's more space for restructuring. I think it's better to start a new thread with this approach. The try/except blocks at the end of installers/management scripts are replaced by a call to a common function, which includes the final message. For each specific error, the error handlers in all scripts was almost the same, but each script handled a different selection of errors. Instead of having this copy/pasted code (with subtle differences creeping in over time), this patch consolidates it all in one place. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0040-Move-install-script-error-handling-to-a-common-funct.patch Type: text/x-patch Size: 35289 bytes Desc: not available URL: From atkac at redhat.com Mon Apr 23 09:23:41 2012 From: atkac at redhat.com (Adam Tkac) Date: Mon, 23 Apr 2012 11:23:41 +0200 Subject: [Freeipa-devel] DNS zone serial number updates [#2554] In-Reply-To: <1334764404.16658.277.camel@willson.li.ssimo.org> References: <4F8D9110.3030703@redhat.com> <1334679211.16658.200.camel@willson.li.ssimo.org> <4F8EC1C8.4090204@redhat.com> <1334757864.16658.258.camel@willson.li.ssimo.org> <4F8EDBE0.7090108@redhat.com> <1334764404.16658.277.camel@willson.li.ssimo.org> Message-ID: <20120423092340.GC1586@redhat.com> On Wed, Apr 18, 2012 at 11:53:24AM -0400, Simo Sorce wrote: > On Wed, 2012-04-18 at 17:21 +0200, Petr Spacek wrote: > > > > If this happens, it is possible that on one of the masters the serial > > > will be updated twice even though no other change was performed on the > > > entry-set. That is not a big deal though, at most it will cause a > > > useless zone transfer, but zone transfer should already be somewhat rate > > > limited anyway, because our zones do change frequently due to DNS > > > updates from clients. > > SOA record has also refresh, retry and expiry fields. These define how often > > zone transfer should happen. It's nicely described in [8]. > > Sure but we have 2 opposing requirements. On one hand we want to avoid > doing too many transfers too often. On the other we want to reflect > dynamic Updates fast enough to avoid stale entries in the slaves. So the > problem is that we need to carefully balance and tradeoff. Just FYI, BIND has some limits set by default. BIND slaves don't transfer zone more than once for 5 minutes (the "min-refresh-time" named.conf parameter). So even when you modify zone often, BIND slave just fetches all updates after 5 minutes. If the SOA's "refresh" attribute is lower than "min-refresh-time" option, the "min-refresh-time" is used. > > >> There are still problems to solve without DS plugin (specifically > > >> mapping/updating NN part from YYYYMMDDNN), but: Sounds this reasonable? > > > > > > Well I am not sure we need to use a YYYYMMDDNN convention to start with. > > > I expect with DYNDNS updates that a 2 digit NN will never be enough, > > > plus it is never updated by a human so we do not need to keep it > > > readable. But I do not care eiither way, as long as the serial can > > > handle thousands of updates per day I am fine (if this is an issue we > > > need to understand how to update the serial in time intervals). > > Current BIND implementation handles overflow in one day gracefully: > > 2012041899 -> 2012041900 > > So SOA# can be in far future, if you changes zone too often :-) > > Sure, but then what's the point of keeping it in date format ? :-) > > > AFAIK this format is traditional, but not required by standard, if arithmetic > > works. [9] defines arithmetic for SOA serials, so DS plugin should follow it. > > > > It says "The maximum defined increment is 2147483647 (2^31 - 1)" > > This limit applies inside to one SOA TTL time window (so it shouldn't be a > > problem, I think). I didn't looked into in this RFC deeply. Some practical > > recommendations can be found in [10]. > > Yeah 2^31 is large enough for practical deployments if you start small, > if you start close to the top (2012 is not the far from 2147) then you > have substantially reduced the window. I think the easiest way is not to use serial in date format. 
We can simply create zone with serial "1" and then increment it every time when we modify the zone. Regards, Adam -- Adam Tkac, Red Hat, Inc. From dpal at redhat.com Mon Apr 23 13:54:25 2012 From: dpal at redhat.com (Dmitri Pal) Date: Mon, 23 Apr 2012 09:54:25 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1334953792.16658.526.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> Message-ID: <4F955F11.1000003@redhat.com> On 04/20/2012 04:29 PM, Simo Sorce wrote: > On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: > >> I was under the assumption that to be able to wrap things properly you >> need both user password in clear that you have only at the moment the >> hashes are created and the key for the branch office replica. Is this >> the wrong assumption? If you do not need raw password you can rewrap >> things later but you should do it only on the fly when the attribute is >> replicated. You do not want to create re-wrapped hashes when the >> replica is added - that will be DDoS attack. >> If the assumption is correct then you would have to force password >> change every time you add a branch office replica. > The assumption is incorrect. > We can rewrap at replication tiem in a different master key, and that > doesn't require the user clear text password, only access to the old and > the new master key. A similar mechanism is used internally > >>> With a) the disadvantage are that it is an expensive operation, and also >>> makes it hard to deal with hubs (if we want to prevent hubs from getting >>> access to access krb keys). >> Why this is a problem? > Performance wise it probably isn't a big deal after all, it will slow > down replication a bit, but except for mass enrollments it will not be > that big of an issue. > The main issue I see with hubs is that I was under the impression we > didn't want to let hubs be able to decrypt keys. In that case we can > handle only replicating to hubs that have a single 'branch office' they > replicate to because they will not be able to do an re-wrapping. That is > why I mentioned option (b) where you pre-generate copies for the target > branches. If we are ok giving hubs the ability to re-wrap keys, then it > is not an issue, as we can provide hubs with their own master key and > then we can re-wrap master -> hub and then again hub -> branch by giving > the hub also the branch master key. > >>> With b) the disadvantage is that you have to create all other copies and >>> keep them around. So any principal may end up having as many copies are >>> there are branch offices in the degenerate case. 
It also make it >>> difficult to deal with master key changes (both main master key and >>> branch office master keys). >>> >>> Both approaches complicate a bit the initial replica setup, but not >>> terribly so. >> So which one do you propose? It is not clear. > I am still unsure, both have advantages and drawbacks, see above about > hubs for example. > >>>>> Using this approach we have several benefits: >>>>> >>>>> 1) Remote office can enjoy the eSSO >>>>> 2) If the remote replica is jeopardized there is nothing in it that can >>>>> be used to attack central server without cracking the kerberos hashes. >>>>> 3) Authentication against the remote server would not be respected >>>>> against the resources in the central server creating confined environment >>>>> 4) Remote office can operate with eSSO even if there is no connection to >>>>> the central server. >>> Not so fast! :-) >>> User keys are not all is needed. >>> Re-encrypting user keys is only the first (and simplest) step. >>> Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket >>> granting Service problem. They are solvable problems (also because MS >>> neatly demonstrated it can be solved in their RODC), but requires some >>> more work on the KDC side. >>> >>> So problem (a): the krbtgt. >>> When a user in a branch office obtain a ticket from the branch office >>> KDC it will get this ticket encrypted with the branch office krbtgt key >>> which is different from the other branch offices or the main ipa server >>> krbtgt key. >>> This means that if the user then tries to obtain a ticket from another >>> KDC, this other KDC needs to be able to recognize that a different >>> krbtgt key was used and find out the right key to decrypt the user >>> ticket to verify its TGT is valid before it can proceed. >> I am not sure this needs to be solved. It depends what kinds of central >> services are required for the local office users to access. Can we defer >> this? > No. It is key to have this working, having a krbtgt that cannot be used > to acquire tickets is useless. > >>> We also have a worse reverse problem, where a user obtained a krbtgt for >>> the ipa masters but then tries to get a service ticket from the branch >>> office TGS which does not have access to the main krbtgt at all (or it >>> could impersonate any user in the realm). >>> So for all these case we need a mechanism in the KDC to i) recognize the >>> situation ii) find out if it can decrypt the user tgt iii) either >>> redirect the user to a TGS server that can decrypt the tgt or 'proxy' >>> the request to a KDC that can do that. >> Seems very complex and too generic. May be we should slice it into >> several use cases and address them one at a time. > It's a single use case, acquiring a ticket. Nothing we can really split > into discrete pieces. > > Simo. > I disagree. I see at least several use cases: 1) User has TGT form the branch office replica and needs to access a resource in the branch office 2) User has TGT form the branch office replica and needs to access a resource in the central data center 3) User has TGT form the central replica and needs to access a resource in the branch office Not all of the scenarios need to be addressed day 1. May be we just focus on 1and then extend to 2 and defer 3 till later? -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From simo at redhat.com Mon Apr 23 14:02:38 2012 From: simo at redhat.com (Simo Sorce) Date: Mon, 23 Apr 2012 10:02:38 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F955F11.1000003@redhat.com> References: <4F4E41FC.6020606@redhat.com> <4F5657B8.6080409@redhat.com> <4F58D620.90107@redhat.com> <4F5E50AF.5090701@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> Message-ID: <1335189758.16658.611.camel@willson.li.ssimo.org> On Mon, 2012-04-23 at 09:54 -0400, Dmitri Pal wrote: > On 04/20/2012 04:29 PM, Simo Sorce wrote: > > On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: > > > >> I was under the assumption that to be able to wrap things properly you > >> need both user password in clear that you have only at the moment the > >> hashes are created and the key for the branch office replica. Is this > >> the wrong assumption? If you do not need raw password you can rewrap > >> things later but you should do it only on the fly when the attribute is > >> replicated. You do not want to create re-wrapped hashes when the > >> replica is added - that will be DDoS attack. > >> If the assumption is correct then you would have to force password > >> change every time you add a branch office replica. > > The assumption is incorrect. > > We can rewrap at replication tiem in a different master key, and that > > doesn't require the user clear text password, only access to the old and > > the new master key. A similar mechanism is used internally > > > >>> With a) the disadvantage are that it is an expensive operation, and also > >>> makes it hard to deal with hubs (if we want to prevent hubs from getting > >>> access to access krb keys). > >> Why this is a problem? > > Performance wise it probably isn't a big deal after all, it will slow > > down replication a bit, but except for mass enrollments it will not be > > that big of an issue. > > The main issue I see with hubs is that I was under the impression we > > didn't want to let hubs be able to decrypt keys. In that case we can > > handle only replicating to hubs that have a single 'branch office' they > > replicate to because they will not be able to do an re-wrapping. That is > > why I mentioned option (b) where you pre-generate copies for the target > > branches. If we are ok giving hubs the ability to re-wrap keys, then it > > is not an issue, as we can provide hubs with their own master key and > > then we can re-wrap master -> hub and then again hub -> branch by giving > > the hub also the branch master key. > > > >>> With b) the disadvantage is that you have to create all other copies and > >>> keep them around. 
So any principal may end up having as many copies are > >>> there are branch offices in the degenerate case. It also make it > >>> difficult to deal with master key changes (both main master key and > >>> branch office master keys). > >>> > >>> Both approaches complicate a bit the initial replica setup, but not > >>> terribly so. > >> So which one do you propose? It is not clear. > > I am still unsure, both have advantages and drawbacks, see above about > > hubs for example. > > > >>>>> Using this approach we have several benefits: > >>>>> > >>>>> 1) Remote office can enjoy the eSSO > >>>>> 2) If the remote replica is jeopardized there is nothing in it that can > >>>>> be used to attack central server without cracking the kerberos hashes. > >>>>> 3) Authentication against the remote server would not be respected > >>>>> against the resources in the central server creating confined environment > >>>>> 4) Remote office can operate with eSSO even if there is no connection to > >>>>> the central server. > >>> Not so fast! :-) > >>> User keys are not all is needed. > >>> Re-encrypting user keys is only the first (and simplest) step. > >>> Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket > >>> granting Service problem. They are solvable problems (also because MS > >>> neatly demonstrated it can be solved in their RODC), but requires some > >>> more work on the KDC side. > >>> > >>> So problem (a): the krbtgt. > >>> When a user in a branch office obtain a ticket from the branch office > >>> KDC it will get this ticket encrypted with the branch office krbtgt key > >>> which is different from the other branch offices or the main ipa server > >>> krbtgt key. > >>> This means that if the user then tries to obtain a ticket from another > >>> KDC, this other KDC needs to be able to recognize that a different > >>> krbtgt key was used and find out the right key to decrypt the user > >>> ticket to verify its TGT is valid before it can proceed. > >> I am not sure this needs to be solved. It depends what kinds of central > >> services are required for the local office users to access. Can we defer > >> this? > > No. It is key to have this working, having a krbtgt that cannot be used > > to acquire tickets is useless. > > > >>> We also have a worse reverse problem, where a user obtained a krbtgt for > >>> the ipa masters but then tries to get a service ticket from the branch > >>> office TGS which does not have access to the main krbtgt at all (or it > >>> could impersonate any user in the realm). > >>> So for all these case we need a mechanism in the KDC to i) recognize the > >>> situation ii) find out if it can decrypt the user tgt iii) either > >>> redirect the user to a TGS server that can decrypt the tgt or 'proxy' > >>> the request to a KDC that can do that. > >> Seems very complex and too generic. May be we should slice it into > >> several use cases and address them one at a time. > > It's a single use case, acquiring a ticket. Nothing we can really split > > into discrete pieces. > > > > Simo. > > > I disagree. I see at least several use cases: > > 1) User has TGT form the branch office replica and needs to access a > resource in the branch office This will work > 2) User has TGT form the branch office replica and needs to access a > resource in the central data center This will fail > 3) User has TGT form the central replica and needs to access a resource > in the branch office This will fail > Not all of the scenarios need to be addressed day 1. 
> May be we just focus on 1and then extend to 2 and defer 3 till later? The problem is that you have no fallback case. If the central office have kerberized resources you may simply get access denied and that's it. Having the wrong ticket is not always the same as not having a ticket. Simo. -- Simo Sorce * Red Hat, Inc * New York From jcholast at redhat.com Mon Apr 23 14:33:32 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 16:33:32 +0200 Subject: [Freeipa-devel] [PATCH] 76 Refactor exc_callback invocation Message-ID: <4F95683C.8020700@redhat.com> Hi, this patch replaces _call_exc_callbacks with a function wrapper, which will automatically call exception callbacks when an exception is raised from the function. This removes the need to specify the function and its arguments twice (once in the function call itself and once in _call_exc_callbacks). Code like this: try: # original call ret = func(arg, kwarg=0) except ExecutionError, e: try: # the function and its arguments need to be specified again! ret = self._call_exc_callbacks(args, options, e, func, arg, kwarg=0) except ExecutionErrorSubclass, e: handle_error(e) becomes this: try: ret = self._exc_wrapper(args, options, func)(arg, kwarg=0) except ExecutionErrorSubclass, e: handle_error(e) As you can see, the resulting code is shorter and you don't have to remember to make changes to the arguments in two places. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-76-refactor-exc-callback.patch Type: text/x-patch Size: 22314 bytes Desc: not available URL: From jcholast at redhat.com Mon Apr 23 14:40:11 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 16:40:11 +0200 Subject: [Freeipa-devel] [PATCH] index fqdn and macAddress attributes In-Reply-To: <20120416203221.GC8158@redhat.com> References: <20120416203221.GC8158@redhat.com> Message-ID: <4F9569CB.2050206@redhat.com> On 16.4.2012 22:32, Nalin Dahyabhai wrote: > When we implement ticket #2259, indexing fqdn and macAddress should help > the Schema Compatibility and NIS Server plugins locate relevant computer > entries more easily. > > Nalin > Please add the indices to install/share/indices.ldif as well. Honza -- Jan Cholasta From jcholast at redhat.com Mon Apr 23 15:03:28 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 17:03:28 +0200 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <20120416203918.GD8158@redhat.com> References: <20120416203918.GD8158@redhat.com> Message-ID: <4F956F40.6010801@redhat.com> On 16.4.2012 22:39, Nalin Dahyabhai wrote: > This bit of configuration creates a cn=computers area under cn=compat > which we populate with ieee802Device entries corresponding to any > ipaHost entries which have both fqdn and macAddress values. > > Nalin > Please add this to install/updates/10-schema_compat.update as well. Honza -- Jan Cholasta From jdennis at redhat.com Mon Apr 23 15:05:27 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 23 Apr 2012 11:05:27 -0400 Subject: [Freeipa-devel] [PATCH] 0040 Move install script error handling to a common function In-Reply-To: <4F951EB7.9050908@redhat.com> References: <4F951EB7.9050908@redhat.com> Message-ID: <4F956FB7.3040000@redhat.com> On 04/23/2012 05:19 AM, Petr Viktorin wrote: > This fixes https://fedorahosted.org/freeipa/ticket/2071 (Add final debug > message in installers). 
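A minimal sketch of the "common function" approach described in this thread, assuming a hypothetical run_script() helper and ScriptError type -- the names, messages and exact set of handled exceptions in the actual patch may differ:

import logging
import sys

class ScriptError(Exception):
    """Illustrative stand-in for the 'expected' installer error type."""
    def __init__(self, msg='', rval=1):
        Exception.__init__(self, msg)
        self.msg = msg
        self.rval = rval

def run_script(main_function, operation_name):
    """Run an installer's main() and funnel every failure through one exit path."""
    try:
        return main_function()
    except KeyboardInterrupt:
        logging.info('%s was interrupted by the user', operation_name)
        sys.exit(1)
    except ScriptError, e:
        # "Expected" failures: log the message, exit with the requested code.
        if e.msg:
            logging.error('%s', e.msg)
        sys.exit(e.rval)
    except Exception:
        logging.exception('Unexpected error')
        sys.exit('%s failed, see the log file for details' % operation_name)

Each script's __main__ block would then shrink to something like run_script(main, 'ipa-server-install'), so the final message and the per-exception handling live in exactly one place instead of being copy/pasted into every installer.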
> > I submitted an earlier version of this patch before (0014), but it was > too much to include in 2.2. Hopefully now there's more space for > restructuring. I think it's better to start a new thread with this approach. > > The try/except blocks at the end of installers/management scripts are > replaced by a call to a common function, which includes the final message. > For each specific error, the error handlers in all scripts was almost > the same, but each script handled a different selection of errors. > Instead of having this copy/pasted code (with subtle differences > creeping in over time), this patch consolidates it all in one place. I like this approach much better than the earlier patch, great, thanks. I'm a big fan of calling into common code instead of copying code to my mind the refactoring to utilize common code is great approach. I also like the fact the logging configuration is not modified after it's established. At some point we may want to revist how the log messages are generated. For example should all communication to the console pass through the console handler? Is there a logger established for the script? Should the format of messages emitted to the console be altered? Should all command line utilities accept the both the verbose and debug flag? Etc. But for now this is fantastic start in the right direction. I have not installed and exercised the patch so I can't comment on any runtime time issues that might be present, but from code inspection only it has my ACK. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jcholast at redhat.com Mon Apr 23 15:21:28 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 17:21:28 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <20120416205154.GE8158@redhat.com> References: <20120416205154.GE8158@redhat.com> Message-ID: <4F957378.1070604@redhat.com> On 16.4.2012 22:51, Nalin Dahyabhai wrote: > The ethers.byname and ethers.byaddr NIS maps pair host names and > hardware network addresses. This should close ticket #2259. > > Nalin > Please add this to install/updates/50-nis.update as well. Besides that, ACK on all 3 patches. I have checked only if ypcat and ypmatch work as expected, I would prefer if someone with more LDAP/NIS knowledge took a look at the patches before pushing them. Honza -- Jan Cholasta From jcholast at redhat.com Mon Apr 23 15:40:27 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 17:40:27 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F957378.1070604@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> Message-ID: <4F9577EB.6040902@redhat.com> On 23.4.2012 17:21, Jan Cholasta wrote: > On 16.4.2012 22:51, Nalin Dahyabhai wrote: >> The ethers.byname and ethers.byaddr NIS maps pair host names and >> hardware network addresses. This should close ticket #2259. >> >> Nalin >> > > Please add this to install/updates/50-nis.update as well. > > Besides that, ACK on all 3 patches. I have checked only if ypcat and > ypmatch work as expected, I would prefer if someone with more LDAP/NIS > knowledge took a look at the patches before pushing them. > > Honza > I have just noticed one issue: we allow the octets in MAC addresses to be separated not only by ":", but also by "|", "\" or "-". 
Your patch doesn't seem to work for MAC addresses not using ":" as a separator: $ ipa host-mod host.example.com --macaddress 00:11:22:33:44:55 $ ypcat ethers 00:11:22:33:44:55 host.example.com $ ipa host-mod host.example.com --macaddress 00-11-22-33-44-55 $ ypcat ethers Honza -- Jan Cholasta From rcritten at redhat.com Mon Apr 23 15:43:00 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 23 Apr 2012 11:43:00 -0400 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F9577EB.6040902@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> Message-ID: <4F957884.3090002@redhat.com> Jan Cholasta wrote: > On 23.4.2012 17:21, Jan Cholasta wrote: >> On 16.4.2012 22:51, Nalin Dahyabhai wrote: >>> The ethers.byname and ethers.byaddr NIS maps pair host names and >>> hardware network addresses. This should close ticket #2259. >>> >>> Nalin >>> >> >> Please add this to install/updates/50-nis.update as well. >> >> Besides that, ACK on all 3 patches. I have checked only if ypcat and >> ypmatch work as expected, I would prefer if someone with more LDAP/NIS >> knowledge took a look at the patches before pushing them. >> >> Honza >> > > I have just noticed one issue: we allow the octets in MAC addresses to > be separated not only by ":", but also by "|", "\" or "-". Your patch > doesn't seem to work for MAC addresses not using ":" as a separator: > > $ ipa host-mod host.example.com --macaddress 00:11:22:33:44:55 > > $ ypcat ethers > 00:11:22:33:44:55 host.example.com > > $ ipa host-mod host.example.com --macaddress 00-11-22-33-44-55 > > $ ypcat ethers > > > Honza > We can always change that if need be. I made the regex rather generous. rob From jdennis at redhat.com Mon Apr 23 15:51:09 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 23 Apr 2012 11:51:09 -0400 Subject: [Freeipa-devel] have you been running master? Message-ID: <4F957A6D.4000809@redhat.com> Just curious, some changes went into master that modified how we call into ldap (for both the installer and normal server operation). But those changes occurred when many of us we working on 2.2 almost exclusively. So has anybody been using master, doing installs, running the server, etc. on a regular basis and have you experienced any issues? Just curious, I want to make sure those changes are rock solid. I haven't heard of any issues, but I thought I would sanity check now that our attention is back on master again. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jcholast at redhat.com Mon Apr 23 16:00:52 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Apr 2012 18:00:52 +0200 Subject: [Freeipa-devel] have you been running master? In-Reply-To: <4F957A6D.4000809@redhat.com> References: <4F957A6D.4000809@redhat.com> Message-ID: <4F957CB4.8020704@redhat.com> On 23.4.2012 17:51, John Dennis wrote: > Just curious, some changes went into master that modified how we call > into ldap (for both the installer and normal server operation). But > those changes occurred when many of us we working on 2.2 almost > exclusively. So has anybody been using master, doing installs, running > the server, etc. on a regular basis and have you experienced any issues? > Just curious, I want to make sure those changes are rock solid. I > haven't heard of any issues, but I thought I would sanity check now that > our attention is back on master again. 
> I usually do several master installs per day and I haven't noticed any issues. Honza -- Jan Cholasta From mareynol at redhat.com Mon Apr 23 16:05:57 2012 From: mareynol at redhat.com (Mark Reynolds) Date: Mon, 23 Apr 2012 12:05:57 -0400 Subject: [Freeipa-devel] please review ticket #337 - improve CLEANRUV functionality Message-ID: <4F957DE5.1040406@redhat.com> https://fedorahosted.org/389/ticket/337 https://fedorahosted.org/389/attachment/ticket/337/0001-Ticket-337-RFE-Improve-CLEANRUV-functionality.patch Previously the steps to remove a replica and its RUV was problematic. I created two new "tasks" to take care of the entire replication environment. [1] The new task "CLEANALLRUV" - run it once on any master * This marks the rid as invalid. Used to reject updates to the changelog, and the database RUV * It then sends a "CLEANRUV" extended operation to each agreement. * Then it cleans its own RUV. * The CLEANRUV extended op then triggers that replica to send the same CLEANRUV extop to its replicas, then it cleans its own RID. Basically this operation cascades through the entire replication environment. [2] The "RELEASERUV" task - run it once on any master * Once the RUV's have been cleaned on all the replicas, you need to "release" the rid so that it can be reused. This operation also cascades through the entire replication environment. This also triggers changelog trimming. For all of this to work correctly, there is a list of steps that needs to be followed. This procedure is attached to the ticket. https://fedorahosted.org/389/attachment/ticket/337/cleanruv-proceedure Thanks, Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdennis at redhat.com Mon Apr 23 16:16:16 2012 From: jdennis at redhat.com (John Dennis) Date: Mon, 23 Apr 2012 12:16:16 -0400 Subject: [Freeipa-devel] have you been running master? In-Reply-To: <4F957CB4.8020704@redhat.com> References: <4F957A6D.4000809@redhat.com> <4F957CB4.8020704@redhat.com> Message-ID: <4F958050.5010306@redhat.com> On 04/23/2012 12:00 PM, Jan Cholasta wrote: > On 23.4.2012 17:51, John Dennis wrote: >> Just curious, some changes went into master that modified how we call >> into ldap (for both the installer and normal server operation). But >> those changes occurred when many of us we working on 2.2 almost >> exclusively. So has anybody been using master, doing installs, running >> the server, etc. on a regular basis and have you experienced any issues? >> Just curious, I want to make sure those changes are rock solid. I >> haven't heard of any issues, but I thought I would sanity check now that >> our attention is back on master again. >> > > I usually do several master installs per day and I haven't noticed any > issues. Wonderful! Thanks! -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From sbose at redhat.com Mon Apr 23 16:19:41 2012 From: sbose at redhat.com (Sumit Bose) Date: Mon, 23 Apr 2012 18:19:41 +0200 Subject: [Freeipa-devel] have you been running master? In-Reply-To: <4F957A6D.4000809@redhat.com> References: <4F957A6D.4000809@redhat.com> Message-ID: <20120423161941.GQ2361@localhost.localdomain> On Mon, Apr 23, 2012 at 11:51:09AM -0400, John Dennis wrote: > Just curious, some changes went into master that modified how we > call into ldap (for both the installer and normal server operation). > But those changes occurred when many of us we working on 2.2 almost > exclusively. So has anybody been using master, doing installs, > running the server, etc. 
on a regular basis and have you experienced > any issues? Just curious, I want to make sure those changes are rock > solid. I haven't heard of any issues, but I thought I would sanity > check now that our attention is back on master again. There was an issue with s4u2proxy which forced you to always use 'ipa --delegate ...' but this was fixed by Simo about a month ago in "Fix MS-PAC checks when using s4u2proxy". I'm currently not aware of any other issues and I only use master. HTH bye, Sumit > > -- > John Dennis > > Looking to carve out IT costs? > www.redhat.com/carveoutcosts/ > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel From pviktori at redhat.com Mon Apr 23 16:47:31 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 23 Apr 2012 18:47:31 +0200 Subject: [Freeipa-devel] [PATCH] 76 Refactor exc_callback invocation In-Reply-To: <4F95683C.8020700@redhat.com> References: <4F95683C.8020700@redhat.com> Message-ID: <4F9587A3.5070803@redhat.com> On 04/23/2012 04:33 PM, Jan Cholasta wrote: > Hi, > > this patch replaces _call_exc_callbacks with a function wrapper, which > will automatically call exception callbacks when an exception is raised > from the function. This removes the need to specify the function and its > arguments twice (once in the function call itself and once in > _call_exc_callbacks). > > Code like this: > > try: > # original call > ret = func(arg, kwarg=0) > except ExecutionError, e: > try: > # the function and its arguments need to be specified again! > ret = self._call_exc_callbacks(args, options, e, func, arg, > kwarg=0) > except ExecutionErrorSubclass, e: > handle_error(e) > > becomes this: > > try: > ret = self._exc_wrapper(args, options, func)(arg, kwarg=0) > except ExecutionErrorSubclass, e: > handle_error(e) > > As you can see, the resulting code is shorter and you don't have to > remember to make changes to the arguments in two places. > > Honza Please add a test, too. I've attached one you can use. See also some style nitpicks below. > -- > Jan Cholasta > > freeipa-jcholast-76-refactor-exc-callback.patch > > >> From 8e070f571472ed5a27339bcc980b67ecca41b337 Mon Sep 17 00:00:00 2001 > From: Jan Cholasta > Date: Thu, 19 Apr 2012 08:06:32 -0400 > Subject: [PATCH] Refactor exc_callback invocation. > > Replace _call_exc_callbacks with a function wrapper, which will automatically > call exception callbacks when an exception is raised from the function. This > removes the need to specify the function and its arguments twice (once in the > function call itself and once in _call_exc_callbacks). 
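For reference, a self-contained sketch (not the attached test_baseldap_plugin.py, whose contents are authoritative) of the behaviour such a test should pin down: a callback that re-raises is skipped in favour of a later one that recovers, and the exception propagates once no callbacks are left. The toy exc_wrapper below only models the shape of _exc_wrapper, without keys/options or bound-method handling:

import unittest

class ExecutionError(Exception):
    pass

def exc_wrapper(callbacks, func):
    """Toy model of CallbackInterface._exc_wrapper: call func and, on
    ExecutionError, give each callback in turn a chance to retry or recover."""
    def wrapped(*args, **kwargs):
        remaining = list(callbacks)
        current = func
        while True:
            try:
                return current(*args, **kwargs)
            except ExecutionError, e:
                if not remaining:
                    raise
                callback = remaining.pop(0)
                # rebind 'current' so the next attempt goes through the callback
                def current(*a, **kw):
                    return callback(e, func, *a, **kw)
    return wrapped

class TestExcWrapper(unittest.TestCase):
    def test_later_callback_recovers(self):
        def fail(value):
            raise ExecutionError(value)
        def reraise(exc, func, value):
            raise ExecutionError('still broken')
        def recover(exc, func, value):
            return 'recovered from %s' % exc
        wrapped = exc_wrapper([reraise, recover], fail)
        self.assertEqual(wrapped('boom'), 'recovered from still broken')

    def test_exception_survives_without_callbacks(self):
        def fail(value):
            raise ExecutionError(value)
        self.assertRaises(ExecutionError, exc_wrapper([], fail), 'boom')

if __name__ == '__main__':
    unittest.main()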
> --- > ipalib/plugins/baseldap.py | 227 ++++++++++++++---------------------------- > ipalib/plugins/entitle.py | 19 ++-- > ipalib/plugins/group.py | 7 +- > ipalib/plugins/permission.py | 27 +++--- > ipalib/plugins/pwpolicy.py | 11 +- > 5 files changed, 109 insertions(+), 182 deletions(-) > > diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py > index f185977..f7a3bbc 100644 > --- a/ipalib/plugins/baseldap.py > +++ b/ipalib/plugins/baseldap.py > @@ -744,26 +744,24 @@ class CallbackInterface(Method): > else: > klass.INTERACTIVE_PROMPT_CALLBACKS.append(callback) > > - def _call_exc_callbacks(self, args, options, exc, call_func, *call_args, **call_kwargs): > - rv = None > - for i in xrange(len(getattr(self, 'EXC_CALLBACKS', []))): > - callback = self.EXC_CALLBACKS[i] > - try: > - if hasattr(callback, 'im_self'): > - rv = callback( > - args, options, exc, call_func, *call_args, **call_kwargs > - ) > - else: > - rv = callback( > - self, args, options, exc, call_func, *call_args, > - **call_kwargs > - ) > - except errors.ExecutionError, e: > - if (i + 1)< len(self.EXC_CALLBACKS): > - exc = e > - continue > - raise e > - return rv > + def _exc_wrapper(self, keys, options, call_func): Consider adding a docstring, e.g. """Function wrapper that automatically calls exception callbacks""" > + def wrapped(*call_args, **call_kwargs): > + func = call_func > + callbacks = list(getattr(self, 'EXC_CALLBACKS', [])) > + while True: > + try: You have some clever code here, rebinding `func` like you do. It'd be nice if there was a comment warning that you're redefining a function, in case someone who's not a Python expert looks at this. Consider: # `func` is either the original function, or the current error callback > + return func(*call_args, **call_kwargs) > + except errors.ExecutionError, e: > + if len(callbacks) == 0: Use just `if not callbacks`, as per PEP8. > + raise > + callback = callbacks.pop(0) > + if hasattr(callback, 'im_self'): > + def func(*args, **kwargs): #pylint: disable=E0102 > + return callback(keys, options, e, call_func, *args, **kwargs) > + else: > + def func(*args, **kwargs): #pylint: disable=E0102 > + return callback(self, keys, options, e, call_func, *args, **kwargs) > + return wrapped > > > class BaseLDAPCommand(CallbackInterface, Command): [...] > diff --git a/ipalib/plugins/entitle.py b/ipalib/plugins/entitle.py > index 28d2c5d..6ade854 100644 > --- a/ipalib/plugins/entitle.py > +++ b/ipalib/plugins/entitle.py > @@ -642,12 +642,12 @@ class entitle_import(LDAPUpdate): > If we are adding the first entry there are no updates so EmptyModlist > will get thrown. Ignore it. > """ > - if isinstance(exc, errors.EmptyModlist): > - if not getattr(context, 'entitle_import', False): > - raise exc > - return (call_args, {}) > - else: > - raise exc > + if call_func.func_name == 'update_entry': > + if isinstance(exc, errors.EmptyModlist): > + if not getattr(context, 'entitle_import', False): You didn't mention the additional checks for 'update_entry' in the commit message. By the way, the need for these checks suggests that a per-class registry of error callbacks might not be the best design. But that's for more long-term thinking. > + raise exc > + return (call_args, {}) > + raise exc > > def execute(self, *keys, **options): > super(entitle_import, self).execute(*keys, **options) [...] -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: test_baseldap_plugin.py Type: text/x-python Size: 2161 bytes Desc: not available URL: From dpal at redhat.com Mon Apr 23 17:04:24 2012 From: dpal at redhat.com (Dmitri Pal) Date: Mon, 23 Apr 2012 13:04:24 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1335189758.16658.611.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> <1335189758.16658.611.camel@willson.li.ssimo.org> Message-ID: <4F958B98.2020702@redhat.com> On 04/23/2012 10:02 AM, Simo Sorce wrote: > On Mon, 2012-04-23 at 09:54 -0400, Dmitri Pal wrote: >> On 04/20/2012 04:29 PM, Simo Sorce wrote: >>> On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: >>> >>>> I was under the assumption that to be able to wrap things properly you >>>> need both user password in clear that you have only at the moment the >>>> hashes are created and the key for the branch office replica. Is this >>>> the wrong assumption? If you do not need raw password you can rewrap >>>> things later but you should do it only on the fly when the attribute is >>>> replicated. You do not want to create re-wrapped hashes when the >>>> replica is added - that will be DDoS attack. >>>> If the assumption is correct then you would have to force password >>>> change every time you add a branch office replica. >>> The assumption is incorrect. >>> We can rewrap at replication tiem in a different master key, and that >>> doesn't require the user clear text password, only access to the old and >>> the new master key. A similar mechanism is used internally >>> >>>>> With a) the disadvantage are that it is an expensive operation, and also >>>>> makes it hard to deal with hubs (if we want to prevent hubs from getting >>>>> access to access krb keys). >>>> Why this is a problem? >>> Performance wise it probably isn't a big deal after all, it will slow >>> down replication a bit, but except for mass enrollments it will not be >>> that big of an issue. >>> The main issue I see with hubs is that I was under the impression we >>> didn't want to let hubs be able to decrypt keys. In that case we can >>> handle only replicating to hubs that have a single 'branch office' they >>> replicate to because they will not be able to do an re-wrapping. That is >>> why I mentioned option (b) where you pre-generate copies for the target >>> branches. If we are ok giving hubs the ability to re-wrap keys, then it >>> is not an issue, as we can provide hubs with their own master key and >>> then we can re-wrap master -> hub and then again hub -> branch by giving >>> the hub also the branch master key. >>> >>>>> With b) the disadvantage is that you have to create all other copies and >>>>> keep them around. 
So any principal may end up having as many copies are >>>>> there are branch offices in the degenerate case. It also make it >>>>> difficult to deal with master key changes (both main master key and >>>>> branch office master keys). >>>>> >>>>> Both approaches complicate a bit the initial replica setup, but not >>>>> terribly so. >>>> So which one do you propose? It is not clear. >>> I am still unsure, both have advantages and drawbacks, see above about >>> hubs for example. >>> >>>>>>> Using this approach we have several benefits: >>>>>>> >>>>>>> 1) Remote office can enjoy the eSSO >>>>>>> 2) If the remote replica is jeopardized there is nothing in it that can >>>>>>> be used to attack central server without cracking the kerberos hashes. >>>>>>> 3) Authentication against the remote server would not be respected >>>>>>> against the resources in the central server creating confined environment >>>>>>> 4) Remote office can operate with eSSO even if there is no connection to >>>>>>> the central server. >>>>> Not so fast! :-) >>>>> User keys are not all is needed. >>>>> Re-encrypting user keys is only the first (and simplest) step. >>>>> Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket >>>>> granting Service problem. They are solvable problems (also because MS >>>>> neatly demonstrated it can be solved in their RODC), but requires some >>>>> more work on the KDC side. >>>>> >>>>> So problem (a): the krbtgt. >>>>> When a user in a branch office obtain a ticket from the branch office >>>>> KDC it will get this ticket encrypted with the branch office krbtgt key >>>>> which is different from the other branch offices or the main ipa server >>>>> krbtgt key. >>>>> This means that if the user then tries to obtain a ticket from another >>>>> KDC, this other KDC needs to be able to recognize that a different >>>>> krbtgt key was used and find out the right key to decrypt the user >>>>> ticket to verify its TGT is valid before it can proceed. >>>> I am not sure this needs to be solved. It depends what kinds of central >>>> services are required for the local office users to access. Can we defer >>>> this? >>> No. It is key to have this working, having a krbtgt that cannot be used >>> to acquire tickets is useless. >>> >>>>> We also have a worse reverse problem, where a user obtained a krbtgt for >>>>> the ipa masters but then tries to get a service ticket from the branch >>>>> office TGS which does not have access to the main krbtgt at all (or it >>>>> could impersonate any user in the realm). >>>>> So for all these case we need a mechanism in the KDC to i) recognize the >>>>> situation ii) find out if it can decrypt the user tgt iii) either >>>>> redirect the user to a TGS server that can decrypt the tgt or 'proxy' >>>>> the request to a KDC that can do that. >>>> Seems very complex and too generic. May be we should slice it into >>>> several use cases and address them one at a time. >>> It's a single use case, acquiring a ticket. Nothing we can really split >>> into discrete pieces. >>> >>> Simo. >>> >> I disagree. I see at least several use cases: >> >> 1) User has TGT form the branch office replica and needs to access a >> resource in the branch office > This will work > >> 2) User has TGT form the branch office replica and needs to access a >> resource in the central data center > This will fail This will fail if we do not what you propose, right? And I think we can do it in iterations. 
May be it can be fixed in the kerberos library on the service side to allow it acquiring its own ticket against different KDCs with different krbtgts in parallel. Then when the user from a branch office accesses the service the service would have multiple tickets and would be able to see which one to use? I am a bit vague here as I do not know exactly what the issue is and how exactly the service deals with the user ticket that is issued by KDCs with different krbtgt so please correct me. >> 3) User has TGT form the central replica and needs to access a resource >> in the branch office > This will fail And this should not be a big deal. If you have access to the branch office service you probably can re-authenticate against branch office replica. IMO this is a corner case. >> Not all of the scenarios need to be addressed day 1. >> May be we just focus on 1and then extend to 2 and defer 3 till later? > The problem is that you have no fallback case. > If the central office have kerberized resources you may simply get > access denied and that's it. > Having the wrong ticket is not always the same as not having a ticket. > > Simo. > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Mon Apr 23 17:35:25 2012 From: simo at redhat.com (Simo Sorce) Date: Mon, 23 Apr 2012 13:35:25 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F958B98.2020702@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1331583365.9238.105.camel@willson.li.ssimo.org> <4F5E6D5E.2080807@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> <1335189758.16658.611.camel@willson.li.ssimo.org> <4F958B98.2020702@redhat.com> Message-ID: <1335202525.16658.640.camel@willson.li.ssimo.org> On Mon, 2012-04-23 at 13:04 -0400, Dmitri Pal wrote: > On 04/23/2012 10:02 AM, Simo Sorce wrote: > > On Mon, 2012-04-23 at 09:54 -0400, Dmitri Pal wrote: > >> On 04/20/2012 04:29 PM, Simo Sorce wrote: > >>> On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: > >>> > >>>> I was under the assumption that to be able to wrap things properly you > >>>> need both user password in clear that you have only at the moment the > >>>> hashes are created and the key for the branch office replica. Is this > >>>> the wrong assumption? If you do not need raw password you can rewrap > >>>> things later but you should do it only on the fly when the attribute is > >>>> replicated. You do not want to create re-wrapped hashes when the > >>>> replica is added - that will be DDoS attack. > >>>> If the assumption is correct then you would have to force password > >>>> change every time you add a branch office replica. > >>> The assumption is incorrect. 
> >>> We can rewrap at replication tiem in a different master key, and that > >>> doesn't require the user clear text password, only access to the old and > >>> the new master key. A similar mechanism is used internally > >>> > >>>>> With a) the disadvantage are that it is an expensive operation, and also > >>>>> makes it hard to deal with hubs (if we want to prevent hubs from getting > >>>>> access to access krb keys). > >>>> Why this is a problem? > >>> Performance wise it probably isn't a big deal after all, it will slow > >>> down replication a bit, but except for mass enrollments it will not be > >>> that big of an issue. > >>> The main issue I see with hubs is that I was under the impression we > >>> didn't want to let hubs be able to decrypt keys. In that case we can > >>> handle only replicating to hubs that have a single 'branch office' they > >>> replicate to because they will not be able to do an re-wrapping. That is > >>> why I mentioned option (b) where you pre-generate copies for the target > >>> branches. If we are ok giving hubs the ability to re-wrap keys, then it > >>> is not an issue, as we can provide hubs with their own master key and > >>> then we can re-wrap master -> hub and then again hub -> branch by giving > >>> the hub also the branch master key. > >>> > >>>>> With b) the disadvantage is that you have to create all other copies and > >>>>> keep them around. So any principal may end up having as many copies are > >>>>> there are branch offices in the degenerate case. It also make it > >>>>> difficult to deal with master key changes (both main master key and > >>>>> branch office master keys). > >>>>> > >>>>> Both approaches complicate a bit the initial replica setup, but not > >>>>> terribly so. > >>>> So which one do you propose? It is not clear. > >>> I am still unsure, both have advantages and drawbacks, see above about > >>> hubs for example. > >>> > >>>>>>> Using this approach we have several benefits: > >>>>>>> > >>>>>>> 1) Remote office can enjoy the eSSO > >>>>>>> 2) If the remote replica is jeopardized there is nothing in it that can > >>>>>>> be used to attack central server without cracking the kerberos hashes. > >>>>>>> 3) Authentication against the remote server would not be respected > >>>>>>> against the resources in the central server creating confined environment > >>>>>>> 4) Remote office can operate with eSSO even if there is no connection to > >>>>>>> the central server. > >>>>> Not so fast! :-) > >>>>> User keys are not all is needed. > >>>>> Re-encrypting user keys is only the first (and simplest) step. > >>>>> Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket > >>>>> granting Service problem. They are solvable problems (also because MS > >>>>> neatly demonstrated it can be solved in their RODC), but requires some > >>>>> more work on the KDC side. > >>>>> > >>>>> So problem (a): the krbtgt. > >>>>> When a user in a branch office obtain a ticket from the branch office > >>>>> KDC it will get this ticket encrypted with the branch office krbtgt key > >>>>> which is different from the other branch offices or the main ipa server > >>>>> krbtgt key. > >>>>> This means that if the user then tries to obtain a ticket from another > >>>>> KDC, this other KDC needs to be able to recognize that a different > >>>>> krbtgt key was used and find out the right key to decrypt the user > >>>>> ticket to verify its TGT is valid before it can proceed. > >>>> I am not sure this needs to be solved. 
It depends what kinds of central > >>>> services are required for the local office users to access. Can we defer > >>>> this? > >>> No. It is key to have this working, having a krbtgt that cannot be used > >>> to acquire tickets is useless. > >>> > >>>>> We also have a worse reverse problem, where a user obtained a krbtgt for > >>>>> the ipa masters but then tries to get a service ticket from the branch > >>>>> office TGS which does not have access to the main krbtgt at all (or it > >>>>> could impersonate any user in the realm). > >>>>> So for all these case we need a mechanism in the KDC to i) recognize the > >>>>> situation ii) find out if it can decrypt the user tgt iii) either > >>>>> redirect the user to a TGS server that can decrypt the tgt or 'proxy' > >>>>> the request to a KDC that can do that. > >>>> Seems very complex and too generic. May be we should slice it into > >>>> several use cases and address them one at a time. > >>> It's a single use case, acquiring a ticket. Nothing we can really split > >>> into discrete pieces. > >>> > >>> Simo. > >>> > >> I disagree. I see at least several use cases: > >> > >> 1) User has TGT form the branch office replica and needs to access a > >> resource in the branch office > > This will work > > > >> 2) User has TGT form the branch office replica and needs to access a > >> resource in the central data center > > This will fail > This will fail if we do not what you propose, right? Right. > And I think we can do it in iterations. Not really. > May be it can be fixed in the kerberos library on the service side to > allow it acquiring its own ticket against different KDCs with different > krbtgts in parallel. I would want to do this, it would require intimate knowledge of stuff a client isn't supposed to know, and would be a big change in the library for a temporary change. But most importantly, this would address only clients you control, and we do not control all of the clients. I do not like having a core feature work only for a subset of machines, and leave the others completely broken. > Then when the user from a branch office accesses > the service the service would have multiple tickets and would be able to > see which one to use? Not really, it would require for the client to know how to distinguish tickets (hint: it can't :-) > I am a bit vague here as I do not know exactly what the issue is and how > exactly the service deals with the user ticket that is issued by KDCs > with different krbtgt so please correct me. To a client the krbtgt issue from on or another KDC of the same realm is indistinguishable. The krbtgt is encrypted and the client has not way to know what's in it. > >> 3) User has TGT form the central replica and needs to access a resource > >> in the branch office > > This will fail > > And this should not be a big deal. If you have access to the branch > office service you probably can re-authenticate against branch office > replica. IMO this is a corner case. The problem here is that you cannot ""re-authenticate"" if you are not a user in the local replica. Even putting aside the ugliness of having to explain your users what to do which will be very difficult. 
A roaming user visiting a branch office (say a manager going to visit an office abroad) never has it's credentials in the branch office, so he will always need to have it's request to the local KDC proxied to one of the master KDCs, even if we could redirect instead of proxy then any access to local resources would still need to go thorough the main KDc because the local branch office does not know how to crack open the user krbtgt. If, we implement proxying instead the central KDC can give back to the branch KDC that is proxying the request a krbtgt that is encrypted with the branch office KDC krbtgt key, so that all following interactions can be done at the branch office level for branch office controlled resources. Let's put it this way, I would rather defer this feature completely by a year than try to do it in steps as you describe. Simo. -- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Mon Apr 23 17:50:57 2012 From: dpal at redhat.com (Dmitri Pal) Date: Mon, 23 Apr 2012 13:50:57 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1335202525.16658.640.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> <1335189758.16658.611.camel@willson.li.ssimo.org> <4F958B98.2020702@redhat.com> <1335202525.16658.640.camel@willson.li.ssimo.org> Message-ID: <4F959681.7090501@redhat.com> On 04/23/2012 01:35 PM, Simo Sorce wrote: > On Mon, 2012-04-23 at 13:04 -0400, Dmitri Pal wrote: >> On 04/23/2012 10:02 AM, Simo Sorce wrote: >>> On Mon, 2012-04-23 at 09:54 -0400, Dmitri Pal wrote: >>>> On 04/20/2012 04:29 PM, Simo Sorce wrote: >>>>> On Fri, 2012-04-20 at 16:09 -0400, Dmitri Pal wrote: >>>>> >>>>>> I was under the assumption that to be able to wrap things properly you >>>>>> need both user password in clear that you have only at the moment the >>>>>> hashes are created and the key for the branch office replica. Is this >>>>>> the wrong assumption? If you do not need raw password you can rewrap >>>>>> things later but you should do it only on the fly when the attribute is >>>>>> replicated. You do not want to create re-wrapped hashes when the >>>>>> replica is added - that will be DDoS attack. >>>>>> If the assumption is correct then you would have to force password >>>>>> change every time you add a branch office replica. >>>>> The assumption is incorrect. >>>>> We can rewrap at replication tiem in a different master key, and that >>>>> doesn't require the user clear text password, only access to the old and >>>>> the new master key. A similar mechanism is used internally >>>>> >>>>>>> With a) the disadvantage are that it is an expensive operation, and also >>>>>>> makes it hard to deal with hubs (if we want to prevent hubs from getting >>>>>>> access to access krb keys). 
>>>>>> Why this is a problem? >>>>> Performance wise it probably isn't a big deal after all, it will slow >>>>> down replication a bit, but except for mass enrollments it will not be >>>>> that big of an issue. >>>>> The main issue I see with hubs is that I was under the impression we >>>>> didn't want to let hubs be able to decrypt keys. In that case we can >>>>> handle only replicating to hubs that have a single 'branch office' they >>>>> replicate to because they will not be able to do an re-wrapping. That is >>>>> why I mentioned option (b) where you pre-generate copies for the target >>>>> branches. If we are ok giving hubs the ability to re-wrap keys, then it >>>>> is not an issue, as we can provide hubs with their own master key and >>>>> then we can re-wrap master -> hub and then again hub -> branch by giving >>>>> the hub also the branch master key. >>>>> >>>>>>> With b) the disadvantage is that you have to create all other copies and >>>>>>> keep them around. So any principal may end up having as many copies are >>>>>>> there are branch offices in the degenerate case. It also make it >>>>>>> difficult to deal with master key changes (both main master key and >>>>>>> branch office master keys). >>>>>>> >>>>>>> Both approaches complicate a bit the initial replica setup, but not >>>>>>> terribly so. >>>>>> So which one do you propose? It is not clear. >>>>> I am still unsure, both have advantages and drawbacks, see above about >>>>> hubs for example. >>>>> >>>>>>>>> Using this approach we have several benefits: >>>>>>>>> >>>>>>>>> 1) Remote office can enjoy the eSSO >>>>>>>>> 2) If the remote replica is jeopardized there is nothing in it that can >>>>>>>>> be used to attack central server without cracking the kerberos hashes. >>>>>>>>> 3) Authentication against the remote server would not be respected >>>>>>>>> against the resources in the central server creating confined environment >>>>>>>>> 4) Remote office can operate with eSSO even if there is no connection to >>>>>>>>> the central server. >>>>>>> Not so fast! :-) >>>>>>> User keys are not all is needed. >>>>>>> Re-encrypting user keys is only the first (and simplest) step. >>>>>>> Then you have to deal with a) the 'krbtgt' key problem and b) the Ticket >>>>>>> granting Service problem. They are solvable problems (also because MS >>>>>>> neatly demonstrated it can be solved in their RODC), but requires some >>>>>>> more work on the KDC side. >>>>>>> >>>>>>> So problem (a): the krbtgt. >>>>>>> When a user in a branch office obtain a ticket from the branch office >>>>>>> KDC it will get this ticket encrypted with the branch office krbtgt key >>>>>>> which is different from the other branch offices or the main ipa server >>>>>>> krbtgt key. >>>>>>> This means that if the user then tries to obtain a ticket from another >>>>>>> KDC, this other KDC needs to be able to recognize that a different >>>>>>> krbtgt key was used and find out the right key to decrypt the user >>>>>>> ticket to verify its TGT is valid before it can proceed. >>>>>> I am not sure this needs to be solved. It depends what kinds of central >>>>>> services are required for the local office users to access. Can we defer >>>>>> this? >>>>> No. It is key to have this working, having a krbtgt that cannot be used >>>>> to acquire tickets is useless. 
>>>>> >>>>>>> We also have a worse reverse problem, where a user obtained a krbtgt for >>>>>>> the ipa masters but then tries to get a service ticket from the branch >>>>>>> office TGS which does not have access to the main krbtgt at all (or it >>>>>>> could impersonate any user in the realm). >>>>>>> So for all these case we need a mechanism in the KDC to i) recognize the >>>>>>> situation ii) find out if it can decrypt the user tgt iii) either >>>>>>> redirect the user to a TGS server that can decrypt the tgt or 'proxy' >>>>>>> the request to a KDC that can do that. >>>>>> Seems very complex and too generic. May be we should slice it into >>>>>> several use cases and address them one at a time. >>>>> It's a single use case, acquiring a ticket. Nothing we can really split >>>>> into discrete pieces. >>>>> >>>>> Simo. >>>>> >>>> I disagree. I see at least several use cases: >>>> >>>> 1) User has TGT form the branch office replica and needs to access a >>>> resource in the branch office >>> This will work >>> >>>> 2) User has TGT form the branch office replica and needs to access a >>>> resource in the central data center >>> This will fail >> This will fail if we do not what you propose, right? > Right. > >> And I think we can do it in iterations. > Not really. > >> May be it can be fixed in the kerberos library on the service side to >> allow it acquiring its own ticket against different KDCs with different >> krbtgts in parallel. > I would want to do this, it would require intimate knowledge of stuff a > client isn't supposed to know, and would be a big change in the library > for a temporary change. > > But most importantly, this would address only clients you control, and > we do not control all of the clients. I do not like having a core > feature work only for a subset of machines, and leave the others > completely broken. > >> Then when the user from a branch office accesses >> the service the service would have multiple tickets and would be able to >> see which one to use? > Not really, it would require for the client to know how to distinguish > tickets (hint: it can't :-) > >> I am a bit vague here as I do not know exactly what the issue is and how >> exactly the service deals with the user ticket that is issued by KDCs >> with different krbtgt so please correct me. > To a client the krbtgt issue from on or another KDC of the same realm is > indistinguishable. The krbtgt is encrypted and the client has not way to > know what's in it. > >>>> 3) User has TGT form the central replica and needs to access a resource >>>> in the branch office >>> This will fail >> And this should not be a big deal. If you have access to the branch >> office service you probably can re-authenticate against branch office >> replica. IMO this is a corner case. > The problem here is that you cannot ""re-authenticate"" if you are not a > user in the local replica. Even putting aside the ugliness of having to > explain your users what to do which will be very difficult. > A roaming user visiting a branch office (say a manager going to visit an > office abroad) never has it's credentials in the branch office, so he > will always need to have it's request to the local KDC proxied to one of > the master KDCs, even if we could redirect instead of proxy then any > access to local resources would still need to go thorough the main KDc > because the local branch office does not know how to crack open the user > krbtgt. 
> If, we implement proxying instead the central KDC can give back to the > branch KDC that is proxying the request a krbtgt that is encrypted with > the branch office KDC krbtgt key, so that all following interactions can > be done at the branch office level for branch office controlled > resources. > > Let's put it this way, I would rather defer this feature completely by a > year than try to do it in steps as you describe. > Ah OK. Another semantic difference. Doing it in phases is one thing and delivering is another. Let us say we identified 10 things that needs to be implemented. The problem is so huge that Ondrej would likely be able to tackle only couple items from the list. So what should be do with the rest if it is not possible to deliver until all 10 items are completed? IMO the work can be started and deferred till someone else can come back and continue what Ondrej have started and bring it to the shape when we are comfortable releasing it. Ondra it time for you to sit down, read this thread thoroughly and craft a design out of it. Then you would be able to focus on a reasonable subset of what is possible to complete in the remaining time frame. > Simo. > -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From simo at redhat.com Mon Apr 23 17:58:45 2012 From: simo at redhat.com (Simo Sorce) Date: Mon, 23 Apr 2012 13:58:45 -0400 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <4F959681.7090501@redhat.com> References: <4F4E41FC.6020606@redhat.com> <1331590243.9238.113.camel@willson.li.ssimo.org> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> <1335189758.16658.611.camel@willson.li.ssimo.org> <4F958B98.2020702@redhat.com> <1335202525.16658.640.camel@willson.li.ssimo.org> <4F959681.7090501@redhat.com> Message-ID: <1335203925.16658.647.camel@willson.li.ssimo.org> On Mon, 2012-04-23 at 13:50 -0400, Dmitri Pal wrote: > Ah OK. Another semantic difference. Doing it in phases is one thing and > delivering is another. Let us say we identified 10 things that needs to > be implemented. The problem is so huge that Ondrej would likely be able > to tackle only couple items from the list. So what should be do with the > rest if it is not possible to deliver until all 10 items are completed? Ok, so most of the work here is in the KDC, so I think we should first go to MIT, present the problem and see what htey think about the solution we have in mind. I will try to have a preliminary discussion With Tom and Greg about the general idea this week to see what they think. Once that is done we can slice the implementation how we want in a private branch until it is fully backed. 
MIT wouldn't, rightly so, accept a half backed solution I would guess, but we also do not need to try to rush patches in. Once cleanup work in the KDC has been done as part of the 1.11 work I think these interfaces will change little so there shouldn't be a risk of wasting too much time to follow upstream while we work on one of these problems at a time. > IMO the work can be started and deferred till someone else can come back > and continue what Ondrej have started and bring it to the shape when we > are comfortable releasing it. Absolutely, esp if we can start after he changes MIT plans to make in 1.11 or at least if we plan together so we know which internal interfaces are going to be destabilized so we can plan ahead. > Ondra it time for you to sit down, read this thread thoroughly and craft > a design out of it. Then you would be able to focus on a reasonable > subset of what is possible to complete in the remaining time frame. Ok. Simo. -- Simo Sorce * Red Hat, Inc * New York From nalin at redhat.com Mon Apr 23 20:45:11 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 23 Apr 2012 16:45:11 -0400 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <4F956F40.6010801@redhat.com> References: <20120416203918.GD8158@redhat.com> <4F956F40.6010801@redhat.com> Message-ID: <20120423204511.GA31145@redhat.com> On Mon, Apr 23, 2012 at 05:03:28PM +0200, Jan Cholasta wrote: > On 16.4.2012 22:39, Nalin Dahyabhai wrote: > >This bit of configuration creates a cn=computers area under cn=compat > >which we populate with ieee802Device entries corresponding to any > >ipaHost entries which have both fqdn and macAddress values. > > Please add this to install/updates/10-schema_compat.update as well. Okay, I think a simple copy is enough, but am not yet sufficiently familiar with the install/{share,update} stuff to be completely sure. 
Nalin -------------- next part -------------- From 9cfbef42a0efa8898caf3454c07b729f58f526ba Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:31:12 -0400 Subject: [PATCH 2/3] - create a "cn=computers" compat area populated with ieee802Device entries corresponding to computers with fqdn and macAddress attributes --- install/share/schema_compat.uldif | 14 ++++++++++++++ install/updates/10-schema_compat.update | 15 +++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/install/share/schema_compat.uldif b/install/share/schema_compat.uldif index f042edf..38bf678 100644 --- a/install/share/schema_compat.uldif +++ b/install/share/schema_compat.uldif @@ -92,6 +92,20 @@ add:schema-compat-entry-attribute: 'sudoRunAsGroup=%{ipaSudoRunAsExtGroup}' add:schema-compat-entry-attribute: 'sudoRunAsGroup=%deref("ipaSudoRunAs","cn")' add:schema-compat-entry-attribute: 'sudoOption=%{ipaSudoOpt}' +dn: cn=computers, cn=Schema Compatibility, cn=plugins, cn=config +default:objectClass: top +default:objectClass: extensibleObject +default:cn: computers +default:schema-compat-container-group: cn=compat, $SUFFIX +default:schema-compat-container-rdn: cn=computers +default:schema-compat-search-base: cn=computers, cn=accounts, $SUFFIX +default:schema-compat-search-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' +default:schema-compat-entry-attribute: objectclass=device +default:schema-compat-entry-attribute: objectclass=ieee802Device +default:schema-compat-entry-attribute: cn=%{fqdn} +default:schema-compat-entry-attribute: macAddress=%{macAddress} + # Enable anonymous VLV browsing for Solaris dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config only:aci: '(targetattr !="aci")(version 3.0; acl "VLV Request Control"; allow (read, search, compare, proxy) userdn = "ldap:///anyone"; )' diff --git a/install/updates/10-schema_compat.update b/install/updates/10-schema_compat.update index 8ef1424..46a94c3 100644 --- a/install/updates/10-schema_compat.update +++ b/install/updates/10-schema_compat.update @@ -4,3 +4,18 @@ replace: schema-compat-entry-attribute:'sudoRunAsGroup=%deref("ipaSudoRunAs","cn # as the original, '' or -. 
dn: cn=ng,cn=Schema Compatibility,cn=plugins,cn=config replace: schema-compat-entry-attribute:'nisNetgroupTriple=(%link("%ifeq(\"hostCategory\",\"all\",\"\",\"%collect(\\\"%{externalHost}\\\",\\\"%deref(\\\\\\\"memberHost\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberHost\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\")\")","-",",","%ifeq(\"userCategory\",\"all\",\"\",\"%collect(\\\"%deref(\\\\\\\"memberUser\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberUser\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\")\")","-"),%{nisDomainName:-})::nisNetgroupTriple=(%link("%ifeq(\"hostCategory\",\"all\",\"\",\"%collect(\\\"%{externalHost}\\\",\\\"%deref(\\\\\\\"memberHost\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberHost\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\")\")","%ifeq(\"hostCategory\",\"all\",\"\",\"-\")",",","%ifeq(\"userCategory\",\"all\",\"\",\"%collect(\\\"%deref(\\\\\\\"memberUser\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberUser\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\")\")","%ifeq(\"userCategory\",\"all\",\"\",\"-\")"),%{nisDomainName:-})' + +dn: cn=computers, cn=Schema Compatibility, cn=plugins, cn=config +default:objectClass: top +default:objectClass: extensibleObject +default:cn: computers +default:schema-compat-container-group: cn=compat, $SUFFIX +default:schema-compat-container-rdn: cn=computers +default:schema-compat-search-base: cn=computers, cn=accounts, $SUFFIX +default:schema-compat-search-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' +default:schema-compat-entry-attribute: objectclass=device +default:schema-compat-entry-attribute: objectclass=ieee802Device +default:schema-compat-entry-attribute: cn=%{fqdn} +default:schema-compat-entry-attribute: macAddress=%{macAddress} + -- 1.7.10 From nalin at redhat.com Mon Apr 23 20:46:17 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 23 Apr 2012 16:46:17 -0400 Subject: [Freeipa-devel] [PATCH] index fqdn and macAddress attributes In-Reply-To: <4F9569CB.2050206@redhat.com> References: <20120416203221.GC8158@redhat.com> <4F9569CB.2050206@redhat.com> Message-ID: <20120423204617.GB31145@redhat.com> On Mon, Apr 23, 2012 at 04:40:11PM +0200, Jan Cholasta wrote: > On 16.4.2012 22:32, Nalin Dahyabhai wrote: > >When we implement ticket #2259, indexing fqdn and macAddress should help > >the Schema Compatibility and NIS Server plugins locate relevant computer > >entries more easily. > > Please add the indices to install/share/indices.ldif as well. It's a bit of guesswork, but this should match the rest of the contents of that file. 
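If it helps double-check the guesswork after an upgrade, the resulting definition can be read back from cn=config with a plain ldapsearch (the bind credentials here are just an example):

    ldapsearch -x -D "cn=Directory Manager" -W -s base \
        -b "cn=fqdn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" nsIndexType

which should list both index types, eq (equality) and pres (presence).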
Nalin -------------- next part -------------- >From 3cdb82a3746931a0f566503c84c474909446de12 Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:26:50 -0400 Subject: [PATCH 1/3] - index the fqdn and macAddress attributes for the sake of the compat plugin --- install/share/indices.ldif | 19 +++++++++++++++++++ install/updates/20-indices.update | 16 ++++++++++++++++ 2 files changed, 35 insertions(+) diff --git a/install/share/indices.ldif b/install/share/indices.ldif index 05c2765..6233d71 100644 --- a/install/share/indices.ldif +++ b/install/share/indices.ldif @@ -91,3 +91,22 @@ dn: cn=ntUserDomainId,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsIndexType nsIndexType: eq,pres + +dn: cn=fqdn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +changetype: add +ObjectClass: top +ObjectClass: nsIndex +cn: fqdn +nsSystemIndex: false +nsIndexType: eq +nsIndexType: pres + +dn: cn=macAddress,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +changetype: add +ObjectClass: top +ObjectClass: nsIndex +cn: macAddress +nsSystemIndex: false +nsIndexType: eq +nsIndexType: pres + diff --git a/install/updates/20-indices.update b/install/updates/20-indices.update index b0e2f36..ecca027 100644 --- a/install/updates/20-indices.update +++ b/install/updates/20-indices.update @@ -32,3 +32,19 @@ default:ObjectClass: top default:ObjectClass: nsIndex default:nsSystemIndex: false default:nsIndexType: eq + +dn: cn=fqdn,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +default:cn: fqdn +default:ObjectClass: top +default:ObjectClass: nsIndex +default:nsSystemIndex: false +default:nsIndexType: eq +default:nsIndexType: pres + +dn: cn=macAddress,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config +default:cn: macAddress +default:ObjectClass: top +default:ObjectClass: nsIndex +default:nsSystemIndex: false +default:nsIndexType: eq +default:nsIndexType: pres -- 1.7.10 From nalin at redhat.com Mon Apr 23 21:18:02 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 23 Apr 2012 17:18:02 -0400 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F9577EB.6040902@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> Message-ID: <20120423211802.GC31145@redhat.com> On Mon, Apr 23, 2012 at 05:40:27PM +0200, Jan Cholasta wrote: > On 23.4.2012 17:21, Jan Cholasta wrote: > >On 16.4.2012 22:51, Nalin Dahyabhai wrote: > >>The ethers.byname and ethers.byaddr NIS maps pair host names and > >>hardware network addresses. This should close ticket #2259. > > > >Please add this to install/updates/50-nis.update as well. > > > >Besides that, ACK on all 3 patches. I have checked only if ypcat and > >ypmatch work as expected, I would prefer if someone with more LDAP/NIS > >knowledge took a look at the patches before pushing them. > > I have just noticed one issue: we allow the octets in MAC addresses > to be separated not only by ":", but also by "|", "\" or "-". Your > patch doesn't seem to work for MAC addresses not using ":" as a > separator: > > $ ipa host-mod host.example.com --macaddress 00:11:22:33:44:55 > > $ ypcat ethers > 00:11:22:33:44:55 host.example.com > > $ ipa host-mod host.example.com --macaddress 00-11-22-33-44-55 > > $ ypcat ethers > Updated patch attached, but I'm skeptical that software which consumes this data will handle anything other than ':', as neither RFC 2307 nor ethers(5) mention it. 
For that reason I'd lean toward either not accepting data in that format, or fixing it up on its way in to the directory -- we can fix it up when the compat plugins are computing the data they'll serve (and I can revise the patch to configure them to do so), but software that looks at the non-compat data won't benefit from it. Nalin -------------- next part -------------- From 7bb76d236db9b9c0ed5b2c8faf959dc34a399a7c Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:33:42 -0400 Subject: [PATCH 3/3] - add a pair of ethers maps for computers with hardware addresses on file --- install/share/nis.uldif | 23 +++++++++++++++++++++++ install/updates/50-nis.update | 23 +++++++++++++++++++++++ 2 files changed, 46 insertions(+) diff --git a/install/share/nis.uldif b/install/share/nis.uldif index 2255541..f9747d5 100644 --- a/install/share/nis.uldif +++ b/install/share/nis.uldif @@ -70,3 +70,26 @@ default:nis-filter: (objectClass=ipanisNetgroup) default:nis-key-format: %{cn} default:nis-value-format:%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\"),%{nisDomainName:-})") default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byaddr +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) (.*)","%1") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byname +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) 
(.*)","%2") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + diff --git a/install/updates/50-nis.update b/install/updates/50-nis.update index 5c72639..6c1ca15 100644 --- a/install/updates/50-nis.update +++ b/install/updates/50-nis.update @@ -12,3 +12,26 @@ replace:nis-value-format: '%merge(" ","%{memberNisNetgroup}","(%link(\"%ifeq(\\\ # https://bugzilla.redhat.com/show_bug.cgi?id=767372 dn: nis-domain=$DOMAIN+nis-map=netgroup, cn=NIS Server, cn=plugins, cn=config replace:nis-value-format: '%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"-\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"-\"),%{nisDomainName:-})")::%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\"),%{nisDomainName:-})")' + +dn: nis-domain=$DOMAIN+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byaddr +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) 
(.*)","%1") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byname +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) (.*)","%2") +default:nis-values-format: %{macAddress} %{fqdn} +default:nis-secure: no + -- 1.7.10 From jcholast at redhat.com Tue Apr 24 08:39:19 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Apr 2012 10:39:19 +0200 Subject: [Freeipa-devel] [PATCH] index fqdn and macAddress attributes In-Reply-To: <20120423204617.GB31145@redhat.com> References: <20120416203221.GC8158@redhat.com> <4F9569CB.2050206@redhat.com> <20120423204617.GB31145@redhat.com> Message-ID: <4F9666B7.9010701@redhat.com> On 23.4.2012 22:46, Nalin Dahyabhai wrote: > On Mon, Apr 23, 2012 at 04:40:11PM +0200, Jan Cholasta wrote: >> On 16.4.2012 22:32, Nalin Dahyabhai wrote: >>> When we implement ticket #2259, indexing fqdn and macAddress should help >>> the Schema Compatibility and NIS Server plugins locate relevant computer >>> entries more easily. >> >> Please add the indices to install/share/indices.ldif as well. > > It's a bit of guesswork, but this should match the rest of the contents > of that file. > > Nalin Thanks. You got it right, ACK. Honza -- Jan Cholasta From ohamada at redhat.com Tue Apr 24 08:47:21 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Tue, 24 Apr 2012 10:47:21 +0200 Subject: [Freeipa-devel] More types of replica in FreeIPA In-Reply-To: <1335203925.16658.647.camel@willson.li.ssimo.org> References: <4F4E41FC.6020606@redhat.com> <4F5E910D.3080604@redhat.com> <4F7B2937.4060300@redhat.com> <1333544545.22628.287.camel@willson.li.ssimo.org> <4F8CA7B0.2080000@redhat.com> <1334666521.16658.171.camel@willson.li.ssimo.org> <4F8F084C.5090106@redhat.com> <4F900287.8080606@redhat.com> <1334840631.16658.325.camel@willson.li.ssimo.org> <4F901CD1.5050703@redhat.com> <1334853233.16658.345.camel@willson.li.ssimo.org> <4F9060D0.3010204@redhat.com> <1334864682.16658.358.camel@willson.li.ssimo.org> <4F907599.6060609@redhat.com> <1334870909.16658.437.camel@willson.li.ssimo.org> <4F9090F1.1000005@redhat.com> <1334879014.16658.499.camel@willson.li.ssimo.org> <4F91C27D.6070507@redhat.com> <1334953792.16658.526.camel@willson.li.ssimo.org> <4F955F11.1000003@redhat.com> <1335189758.16658.611.camel@willson.li.ssimo.org> <4F958B98.2020702@redhat.com> <1335202525.16658.640.camel@willson.li.ssimo.org> <4F959681.7090501@redhat.com> <1335203925.16658.647.camel@willson.li.ssimo.org> Message-ID: <4F966899.9040901@redhat.com> On 04/23/2012 07:58 PM, Simo Sorce wrote: > On Mon, 2012-04-23 at 13:50 -0400, Dmitri Pal wrote: >> Ah OK. Another semantic difference. Doing it in phases is one thing and >> delivering is another. Let us say we identified 10 things that needs to >> be implemented. The problem is so huge that Ondrej would likely be able >> to tackle only couple items from the list. So what should be do with the >> rest if it is not possible to deliver until all 10 items are completed? > Ok, so most of the work here is in the KDC, so I think we should first > go to MIT, present the problem and see what htey think about the > solution we have in mind. 
I will try to have a preliminary discussion > With Tom and Greg about the general idea this week to see what they > think. > > Once that is done we can slice the implementation how we want in a > private branch until it is fully backed. MIT wouldn't, rightly so, > accept a half backed solution I would guess, but we also do not need to > try to rush patches in. Once cleanup work in the KDC has been done as > part of the 1.11 work I think these interfaces will change little so > there shouldn't be a risk of wasting too much time to follow upstream > while we work on one of these problems at a time. > >> IMO the work can be started and deferred till someone else can come back >> and continue what Ondrej have started and bring it to the shape when we >> are comfortable releasing it. > Absolutely, esp if we can start after he changes MIT plans to make in > 1.11 or at least if we plan together so we know which internal > interfaces are going to be destabilized so we can plan ahead. > >> Ondra it time for you to sit down, read this thread thoroughly and craft >> a design out of it. Then you would be able to focus on a reasonable >> subset of what is possible to complete in the remaining time frame. Ok, will do. I would like to start with the login server scenario. It will be possible to use it later as a 'training field' for the fractional replication and help deciding what entries should and shouldn't be replicated. > Ok. > Simo. > -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From jcholast at redhat.com Tue Apr 24 10:03:31 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Apr 2012 12:03:31 +0200 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <20120423204511.GA31145@redhat.com> References: <20120416203918.GD8158@redhat.com> <4F956F40.6010801@redhat.com> <20120423204511.GA31145@redhat.com> Message-ID: <4F967A73.9030001@redhat.com> On 23.4.2012 22:45, Nalin Dahyabhai wrote: > On Mon, Apr 23, 2012 at 05:03:28PM +0200, Jan Cholasta wrote: >> On 16.4.2012 22:39, Nalin Dahyabhai wrote: >>> This bit of configuration creates a cn=computers area under cn=compat >>> which we populate with ieee802Device entries corresponding to any >>> ipaHost entries which have both fqdn and macAddress values. >> >> Please add this to install/updates/10-schema_compat.update as well. > > Okay, I think a simple copy is enough, but am not yet sufficiently > familiar with the install/{share,update} stuff to be completely sure. > > Nalin I did some more testing and found out that this line: default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' needs to be changed to: default:schema-compat-entry-rdn: cn=%first("%{fqdn}") in both install/share/schema_compat.uldif and install/updates/10-schema_compat.update, otherwise we get entries with DN like this: "'cn=test.example.com',cn=computers,cn=compat,dc=example,dc=com". Besides this, both clean installs and upgrades seem to work fine with this patch. 
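For illustration, with the corrected rdn line a host that carries both attributes should come out of the compat tree roughly like this (host name, MAC address and suffix are made-up example values):

    dn: cn=test.example.com,cn=computers,cn=compat,dc=example,dc=com
    objectClass: device
    objectClass: ieee802Device
    cn: test.example.com
    macAddress: 00:11:22:33:44:55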
Honza -- Jan Cholasta From jcholast at redhat.com Tue Apr 24 11:02:44 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Apr 2012 13:02:44 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <20120423211802.GC31145@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> <20120423211802.GC31145@redhat.com> Message-ID: <4F968854.20305@redhat.com> On 23.4.2012 23:18, Nalin Dahyabhai wrote: > On Mon, Apr 23, 2012 at 05:40:27PM +0200, Jan Cholasta wrote: >> On 23.4.2012 17:21, Jan Cholasta wrote: >>> On 16.4.2012 22:51, Nalin Dahyabhai wrote: >>>> The ethers.byname and ethers.byaddr NIS maps pair host names and >>>> hardware network addresses. This should close ticket #2259. >>> >>> Please add this to install/updates/50-nis.update as well. >>> >>> Besides that, ACK on all 3 patches. I have checked only if ypcat and >>> ypmatch work as expected, I would prefer if someone with more LDAP/NIS >>> knowledge took a look at the patches before pushing them. >> >> I have just noticed one issue: we allow the octets in MAC addresses >> to be separated not only by ":", but also by "|", "\" or "-". Your >> patch doesn't seem to work for MAC addresses not using ":" as a >> separator: >> >> $ ipa host-mod host.example.com --macaddress 00:11:22:33:44:55 >> >> $ ypcat ethers >> 00:11:22:33:44:55 host.example.com >> >> $ ipa host-mod host.example.com --macaddress 00-11-22-33-44-55 >> >> $ ypcat ethers >> > > Updated patch attached, but I'm skeptical that software which consumes > this data will handle anything other than ':', as neither RFC 2307 nor > ethers(5) mention it. For that reason I'd lean toward either not > accepting data in that format, or fixing it up on its way in to the > directory -- we can fix it up when the compat plugins are computing the > data they'll serve (and I can revise the patch to configure them to do > so), but software that looks at the non-compat data won't benefit from > it. > > Nalin I agree and IMO fixing the value when the compat plugins are computing the data they'll serve is the best way to go, as someone might already have non-colon separated MAC addresses in their DS. The patch works fine, however it causes an error during IPA installs and upgrades. Excerpt from ipaserver-install.log: INFO New entry: nis-domain=idm.lab.bos.redhat.com+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config ... ERROR Add failure 'NoneType' object is not callable INFO New entry: nis-domain=idm.lab.bos.redhat.com+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config ... ERROR Add failure 'NoneType' object is not callable The error is: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ipaserver/install/ldapupdate.py", line 652, in __update_record self.conn.addEntry(entry) File "/usr/lib/python2.7/site-packages/ipaserver/ipaldap.py", line 495, in addEntry arg_desc = 'entry=%s' % (entry) TypeError: 'NoneType' object is not callable I'm not sure what is causing it. You might be triggering some bug in LDAP updater code (Rob, can you take a look at this please?) I'm just curious, why you do this: default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) (.*)","%1") and not simply this: default:nis-keys-format: ${macAddress} ? 
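Coming back to the updater traceback above, here is a stripped-down guess at the failure mode (this is not the real Entry class, just a minimal Python 2 sketch with a made-up FakeEntry): an old-style class whose __getattr__ returns None for anything it does not know about cannot be used with "%s", because the formatting looks up __str__ through __getattr__, gets None back, and then tries to call it:

    # Python 2; FakeEntry is a hypothetical stand-in, not ipaldap.Entry
    class FakeEntry:
        def __init__(self, data):
            self.data = data
        def __getattr__(self, name):
            # missing attributes silently become None instead of raising
            return self.data.get(name)

    e = FakeEntry({'cn': ['computers']})
    print 'entry=%s' % (e,)   # TypeError: 'NoneType' object is not callable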
Honza -- Jan Cholasta From pviktori at redhat.com Tue Apr 24 11:16:55 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 24 Apr 2012 13:16:55 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F968854.20305@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> <20120423211802.GC31145@redhat.com> <4F968854.20305@redhat.com> Message-ID: <4F968BA7.9050903@redhat.com> On 04/24/2012 01:02 PM, Jan Cholasta wrote: > The error is: > > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/ipaserver/install/ldapupdate.py", > line 652, in __update_record > self.conn.addEntry(entry) > File "/usr/lib/python2.7/site-packages/ipaserver/ipaldap.py", line 495, > in addEntry > arg_desc = 'entry=%s' % (entry) > TypeError: 'NoneType' object is not callable > > I'm not sure what is causing it. You might be triggering some bug in > LDAP updater code (Rob, can you take a look at this please?) > > For "convenience", the old Entry class returns None for missing attributes. This unfortunately applies to __str__ as well, so you can't convert Entries to strings. Surprisingly, exactly that is done in ipaldap itself. https://fedorahosted.org/freeipa/ticket/1880 (description of problem) https://fedorahosted.org/freeipa/ticket/2660 (currently open ticket) A quick fix could be to change `'entry=%s' % (entry)` to `'entry=%s' % entry.toDict()`. -- Petr? From pspacek at redhat.com Tue Apr 24 13:21:40 2012 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 24 Apr 2012 15:21:40 +0200 Subject: [Freeipa-devel] [PATCH 0018] Deadlock detection logic Message-ID: <4F96A8E4.7010002@redhat.com> Hello, this patch adds deadlock detection (based on simple timeout) to current code. If (probable) deadlock is detected, current action is stopped with proper error. It properly detects "Simo's deadlock" with 'connections' parameter == 1. (Described in https://fedorahosted.org/bind-dyndb-ldap/ticket/66) Deadlock itself will be fixed by separate patch. Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: bind-dyndb-ldap-pspacek-0018-Add-simple-semaphore-deadlock-detection-logic.patch Type: text/x-patch Size: 10613 bytes Desc: not available URL: From pspacek at redhat.com Tue Apr 24 13:52:00 2012 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 24 Apr 2012 15:52:00 +0200 Subject: [Freeipa-devel] [PATCH 0018] Deadlock detection logic In-Reply-To: <4F96A8E4.7010002@redhat.com> References: <4F96A8E4.7010002@redhat.com> Message-ID: <4F96B000.204@redhat.com> On 04/24/2012 03:21 PM, Petr Spacek wrote: > Hello, > > this patch adds deadlock detection (based on simple timeout) to current code. > If (probable) deadlock is detected, current action is stopped with proper error. > > It properly detects "Simo's deadlock" with 'connections' parameter == 1. > (Described in https://fedorahosted.org/bind-dyndb-ldap/ticket/66) > > Deadlock itself will be fixed by separate patch. > > Petr^2 Spacek Self-NACK :-) Second version of the patch is attached. ldap_pool_getconnection() and ldap_pool_putconnection() now has same interface and more consistent behaviour. Overall functionality is same. Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... 
Name: bind-dyndb-ldap-pspacek-0018-Add-simple-semaphore-deadlock-detection-logic.patch Type: text/x-patch Size: 11946 bytes Desc: not available URL: From nalin at redhat.com Tue Apr 24 14:21:51 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Tue, 24 Apr 2012 10:21:51 -0400 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <4F967A73.9030001@redhat.com> References: <20120416203918.GD8158@redhat.com> <4F956F40.6010801@redhat.com> <20120423204511.GA31145@redhat.com> <4F967A73.9030001@redhat.com> Message-ID: <20120424142151.GA2015@redhat.com> On Tue, Apr 24, 2012 at 12:03:31PM +0200, Jan Cholasta wrote: > I did some more testing and found out that this line: > > default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' > > needs to be changed to: > > default:schema-compat-entry-rdn: cn=%first("%{fqdn}") > > in both install/share/schema_compat.uldif and > install/updates/10-schema_compat.update, otherwise we get entries > with DN like this: > "'cn=test.example.com',cn=computers,cn=compat,dc=example,dc=com". > > Besides this, both clean installs and upgrades seem to work fine > with this patch. Right, the quoting rules. Revised again, in case you need it. Thanks! Nalin -------------- next part -------------- From 837575de789228428618e1338256321769720abb Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:31:12 -0400 Subject: [PATCH 2/3] - create a "cn=computers" compat area populated with ieee802Device entries corresponding to computers with fqdn and macAddress attributes --- install/share/schema_compat.uldif | 14 ++++++++++++++ install/updates/10-schema_compat.update | 15 +++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/install/share/schema_compat.uldif b/install/share/schema_compat.uldif index f042edf..deca1bb 100644 --- a/install/share/schema_compat.uldif +++ b/install/share/schema_compat.uldif @@ -92,6 +92,20 @@ add:schema-compat-entry-attribute: 'sudoRunAsGroup=%{ipaSudoRunAsExtGroup}' add:schema-compat-entry-attribute: 'sudoRunAsGroup=%deref("ipaSudoRunAs","cn")' add:schema-compat-entry-attribute: 'sudoOption=%{ipaSudoOpt}' +dn: cn=computers, cn=Schema Compatibility, cn=plugins, cn=config +default:objectClass: top +default:objectClass: extensibleObject +default:cn: computers +default:schema-compat-container-group: cn=compat, $SUFFIX +default:schema-compat-container-rdn: cn=computers +default:schema-compat-search-base: cn=computers, cn=accounts, $SUFFIX +default:schema-compat-search-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:schema-compat-entry-rdn: cn=%first("%{fqdn}") +default:schema-compat-entry-attribute: objectclass=device +default:schema-compat-entry-attribute: objectclass=ieee802Device +default:schema-compat-entry-attribute: cn=%{fqdn} +default:schema-compat-entry-attribute: macAddress=%{macAddress} + # Enable anonymous VLV browsing for Solaris dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config only:aci: '(targetattr !="aci")(version 3.0; acl "VLV Request Control"; allow (read, search, compare, proxy) userdn = "ldap:///anyone"; )' diff --git a/install/updates/10-schema_compat.update b/install/updates/10-schema_compat.update index 8ef1424..9835bb8 100644 --- a/install/updates/10-schema_compat.update +++ b/install/updates/10-schema_compat.update @@ -4,3 +4,18 @@ replace: schema-compat-entry-attribute:'sudoRunAsGroup=%deref("ipaSudoRunAs","cn # as the original, '' or -. 
dn: cn=ng,cn=Schema Compatibility,cn=plugins,cn=config replace: schema-compat-entry-attribute:'nisNetgroupTriple=(%link("%ifeq(\"hostCategory\",\"all\",\"\",\"%collect(\\\"%{externalHost}\\\",\\\"%deref(\\\\\\\"memberHost\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberHost\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\")\")","-",",","%ifeq(\"userCategory\",\"all\",\"\",\"%collect(\\\"%deref(\\\\\\\"memberUser\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberUser\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\")\")","-"),%{nisDomainName:-})::nisNetgroupTriple=(%link("%ifeq(\"hostCategory\",\"all\",\"\",\"%collect(\\\"%{externalHost}\\\",\\\"%deref(\\\\\\\"memberHost\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberHost\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"fqdn\\\\\\\")\\\")\")","%ifeq(\"hostCategory\",\"all\",\"\",\"-\")",",","%ifeq(\"userCategory\",\"all\",\"\",\"%collect(\\\"%deref(\\\\\\\"memberUser\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\",\\\"%deref_r(\\\\\\\"memberUser\\\\\\\",\\\\\\\"member\\\\\\\",\\\\\\\"uid\\\\\\\")\\\")\")","%ifeq(\"userCategory\",\"all\",\"\",\"-\")"),%{nisDomainName:-})' + +dn: cn=computers, cn=Schema Compatibility, cn=plugins, cn=config +default:objectClass: top +default:objectClass: extensibleObject +default:cn: computers +default:schema-compat-container-group: cn=compat, $SUFFIX +default:schema-compat-container-rdn: cn=computers +default:schema-compat-search-base: cn=computers, cn=accounts, $SUFFIX +default:schema-compat-search-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:schema-compat-entry-rdn: cn=%first("%{fqdn}") +default:schema-compat-entry-attribute: objectclass=device +default:schema-compat-entry-attribute: objectclass=ieee802Device +default:schema-compat-entry-attribute: cn=%{fqdn} +default:schema-compat-entry-attribute: macAddress=%{macAddress} + -- 1.7.10 From nalin at redhat.com Tue Apr 24 14:57:21 2012 From: nalin at redhat.com (Nalin Dahyabhai) Date: Tue, 24 Apr 2012 10:57:21 -0400 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F968854.20305@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> <20120423211802.GC31145@redhat.com> <4F968854.20305@redhat.com> Message-ID: <20120424145721.GB2015@redhat.com> On Tue, Apr 24, 2012 at 01:02:44PM +0200, Jan Cholasta wrote: > I'm just curious, why you do this: > > default:nis-keys-format: %mregsub("%{macAddress} > %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) > (.*)","%1") > > and not simply this: > > default:nis-keys-format: ${macAddress} > > ? Good eye. It's because of an implementation detail of the server plugin: when computing entries for a NIS map, it has to be able to deal with the list of keys which it computes having a different number of items in it than the list of corresponding values. If an entry has, say, two 'fqdn' values, and three 'macAddress' values, then for keys "%{macAddress}" would produce three values, and for values, "%{fqdn} %{macAddress} would produce six, since it's generating all of the combinations. In that case the plugin, assuming you want to make all six values visible to clients, has to figure out how to match up three keys to six values. 
It can repeat the list of keys as the second (or rightmost) variable changes, like this: key="fqdn1", value="macAddress1 fqdn1" key="fqdn2", value="macAddress1 fqdn2" key="fqdn3", value="macAddress1 fqdn3" key="fqdn1", value="macAddress2 fqdn1" key="fqdn2", value="macAddress2 fqdn2" key="fqdn3", value="macAddress2 fqdn3" or it can repeat the list of keys as the first (or leftmost) variable changes, like this: key="fqdn1", value="macAddress1 fqdn1" key="fqdn2", value="macAddress2 fqdn1" key="fqdn3", value="macAddress1 fqdn2" key="fqdn1", value="macAddress2 fqdn2" key="fqdn2", value="macAddress1 fqdn3" key="fqdn3", value="macAddress2 fqdn3" Now, if your key is the second column, that's not what you want. If it's the first column, the second way actually looks right: key="macAddress1", value="macAddress1 fqdn1" key="macAddress2", value="macAddress2 fqdn1" key="macAddress1", value="macAddress1 fqdn2" key="macAddress2", value="macAddress2 fqdn2" key="macAddress1", value="macAddress1 fqdn3" key="macAddress2", value="macAddress2 fqdn3" The plugin's not smart enough to figure out which way is correct (and at the moment I can't even remember which way I ended up choosing), so the configuration just makes sure that the list of keys starts out at the same length as the list of values, and then uses the regex to strip out the parts we don't want. Revised patch attached. Cheers, Nalin -------------- next part -------------- From 33aea09a1c1b48d6dcc3deef884fd33c938a1d6f Mon Sep 17 00:00:00 2001 From: Nalin Dahyabhai Date: Mon, 16 Apr 2012 15:33:42 -0400 Subject: [PATCH 3/3] - add a pair of ethers maps for computers with hardware addresses on file --- install/share/nis.uldif | 23 +++++++++++++++++++++++ install/updates/50-nis.update | 23 +++++++++++++++++++++++ 2 files changed, 46 insertions(+) diff --git a/install/share/nis.uldif b/install/share/nis.uldif index 2255541..1e54828 100644 --- a/install/share/nis.uldif +++ b/install/share/nis.uldif @@ -70,3 +70,26 @@ default:nis-filter: (objectClass=ipanisNetgroup) default:nis-key-format: %{cn} default:nis-value-format:%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\"),%{nisDomainName:-})") default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byaddr +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) 
+default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6") +default:nis-values-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6 %7") +default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byname +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%7") +default:nis-values-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6 %7") +default:nis-secure: no + diff --git a/install/updates/50-nis.update b/install/updates/50-nis.update index 5c72639..fc61b8d 100644 --- a/install/updates/50-nis.update +++ b/install/updates/50-nis.update @@ -12,3 +12,26 @@ replace:nis-value-format: '%merge(" ","%{memberNisNetgroup}","(%link(\"%ifeq(\\\ # https://bugzilla.redhat.com/show_bug.cgi?id=767372 dn: nis-domain=$DOMAIN+nis-map=netgroup, cn=NIS Server, cn=plugins, cn=config replace:nis-value-format: '%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"-\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"-\"),%{nisDomainName:-})")::%merge(" ","%deref_f(\"member\",\"(objectclass=ipanisNetgroup)\",\"cn\")","(%link(\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%{externalHost}\\\\\\\",\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberHost\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"fqdn\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"hostCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\",\",\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"%collect(\\\\\\\"%deref(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\",\\\\\\\"%deref_r(\\\\\\\\\\\\\\\"memberUser\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"member\\\\\\\\\\\\\\\",\\\\\\\\\\\\\\\"uid\\\\\\\\\\\\\\\")\\\\\\\")\\\")\",\"%ifeq(\\\"userCategory\\\",\\\"all\\\",\\\"\\\",\\\"-\\\")\"),%{nisDomainName:-})")' + +dn: 
nis-domain=$DOMAIN+nis-map=ethers.byaddr, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byaddr +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6") +default:nis-values-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6 %7") +default:nis-secure: no + +dn: nis-domain=$DOMAIN+nis-map=ethers.byname, cn=NIS Server, cn=plugins, cn=config +default:objectclass: top +default:objectclass: extensibleObject +default:nis-domain: $DOMAIN +default:nis-map: ethers.byname +default:nis-base: cn=computers, cn=accounts, $SUFFIX +default:nis-filter: (&(macAddress=*)(fqdn=*)(objectClass=ipaHost)) +default:nis-keys-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%7") +default:nis-values-format: %mregsub("%{macAddress} %{fqdn}","(..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..)[:\\\|-](..) (.*)","%1:%2:%3:%4:%5:%6 %7") +default:nis-secure: no + -- 1.7.10 From jcholast at redhat.com Tue Apr 24 15:38:47 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Apr 2012 17:38:47 +0200 Subject: [Freeipa-devel] [PATCH] 76 Refactor exc_callback invocation In-Reply-To: <4F9587A3.5070803@redhat.com> References: <4F95683C.8020700@redhat.com> <4F9587A3.5070803@redhat.com> Message-ID: <4F96C907.9090906@redhat.com> On 23.4.2012 18:47, Petr Viktorin wrote: > On 04/23/2012 04:33 PM, Jan Cholasta wrote: >> Hi, >> >> this patch replaces _call_exc_callbacks with a function wrapper, which >> will automatically call exception callbacks when an exception is raised >> from the function. This removes the need to specify the function and its >> arguments twice (once in the function call itself and once in >> _call_exc_callbacks). >> >> Code like this: >> >> try: >> # original call >> ret = func(arg, kwarg=0) >> except ExecutionError, e: >> try: >> # the function and its arguments need to be specified again! >> ret = self._call_exc_callbacks(args, options, e, func, arg, >> kwarg=0) >> except ExecutionErrorSubclass, e: >> handle_error(e) >> >> becomes this: >> >> try: >> ret = self._exc_wrapper(args, options, func)(arg, kwarg=0) >> except ExecutionErrorSubclass, e: >> handle_error(e) >> >> As you can see, the resulting code is shorter and you don't have to >> remember to make changes to the arguments in two places. >> >> Honza > > Please add a test, too. I've attached one you can use. OK, thanks. > > See also some style nitpicks below. > >> -- >> Jan Cholasta >> >> freeipa-jcholast-76-refactor-exc-callback.patch >> >> >>> From 8e070f571472ed5a27339bcc980b67ecca41b337 Mon Sep 17 00:00:00 2001 >> From: Jan Cholasta >> Date: Thu, 19 Apr 2012 08:06:32 -0400 >> Subject: [PATCH] Refactor exc_callback invocation. >> >> Replace _call_exc_callbacks with a function wrapper, which will >> automatically >> call exception callbacks when an exception is raised from the >> function. This >> removes the need to specify the function and its arguments twice (once >> in the >> function call itself and once in _call_exc_callbacks). 
>> --- >> ipalib/plugins/baseldap.py | 227 >> ++++++++++++++---------------------------- >> ipalib/plugins/entitle.py | 19 ++-- >> ipalib/plugins/group.py | 7 +- >> ipalib/plugins/permission.py | 27 +++--- >> ipalib/plugins/pwpolicy.py | 11 +- >> 5 files changed, 109 insertions(+), 182 deletions(-) >> >> diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py >> index f185977..f7a3bbc 100644 >> --- a/ipalib/plugins/baseldap.py >> +++ b/ipalib/plugins/baseldap.py >> @@ -744,26 +744,24 @@ class CallbackInterface(Method): >> else: >> klass.INTERACTIVE_PROMPT_CALLBACKS.append(callback) >> >> - def _call_exc_callbacks(self, args, options, exc, call_func, >> *call_args, **call_kwargs): >> - rv = None >> - for i in xrange(len(getattr(self, 'EXC_CALLBACKS', []))): >> - callback = self.EXC_CALLBACKS[i] >> - try: >> - if hasattr(callback, 'im_self'): >> - rv = callback( >> - args, options, exc, call_func, *call_args, **call_kwargs >> - ) >> - else: >> - rv = callback( >> - self, args, options, exc, call_func, *call_args, >> - **call_kwargs >> - ) >> - except errors.ExecutionError, e: >> - if (i + 1)< len(self.EXC_CALLBACKS): >> - exc = e >> - continue >> - raise e >> - return rv >> + def _exc_wrapper(self, keys, options, call_func): > > Consider adding a docstring, e.g. > """Function wrapper that automatically calls exception callbacks""" Added. > >> + def wrapped(*call_args, **call_kwargs): >> + func = call_func >> + callbacks = list(getattr(self, 'EXC_CALLBACKS', [])) >> + while True: >> + try: > > You have some clever code here, rebinding `func` like you do. > It'd be nice if there was a comment warning that you're redefining a > function, in case someone who's not a Python expert looks at this. > Consider: > # `func` is either the original function, or the current error callback Changed the code a bit so that it is more readable. > >> + return func(*call_args, **call_kwargs) >> + except errors.ExecutionError, e: >> + if len(callbacks) == 0: > > Use just `if not callbacks`, as per PEP8. OK. > >> + raise >> + callback = callbacks.pop(0) >> + if hasattr(callback, 'im_self'): >> + def func(*args, **kwargs): #pylint: disable=E0102 >> + return callback(keys, options, e, call_func, *args, **kwargs) >> + else: >> + def func(*args, **kwargs): #pylint: disable=E0102 >> + return callback(self, keys, options, e, call_func, *args, **kwargs) >> + return wrapped >> >> >> class BaseLDAPCommand(CallbackInterface, Command): > [...] >> diff --git a/ipalib/plugins/entitle.py b/ipalib/plugins/entitle.py >> index 28d2c5d..6ade854 100644 >> --- a/ipalib/plugins/entitle.py >> +++ b/ipalib/plugins/entitle.py >> @@ -642,12 +642,12 @@ class entitle_import(LDAPUpdate): >> If we are adding the first entry there are no updates so EmptyModlist >> will get thrown. Ignore it. >> """ >> - if isinstance(exc, errors.EmptyModlist): >> - if not getattr(context, 'entitle_import', False): >> - raise exc >> - return (call_args, {}) >> - else: >> - raise exc >> + if call_func.func_name == 'update_entry': >> + if isinstance(exc, errors.EmptyModlist): >> + if not getattr(context, 'entitle_import', False): > > You didn't mention the additional checks for 'update_entry' in the > commit message. Fixed. > > By the way, the need for these checks suggests that a per-class registry > of error callbacks might not be the best design. But that's for more > long-term thinking. They are not strictly needed, I added them just to be on the safe side. 
> >> + raise exc >> + return (call_args, {}) >> + raise exc >> >> def execute(self, *keys, **options): >> super(entitle_import, self).execute(*keys, **options) > [...] > Updated patch attached. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-76.1-refactor-exc-callback.patch Type: text/x-patch Size: 25151 bytes Desc: not available URL: From jcholast at redhat.com Tue Apr 24 16:42:16 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Apr 2012 18:42:16 +0200 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <20120424142151.GA2015@redhat.com> References: <20120416203918.GD8158@redhat.com> <4F956F40.6010801@redhat.com> <20120423204511.GA31145@redhat.com> <4F967A73.9030001@redhat.com> <20120424142151.GA2015@redhat.com> Message-ID: <4F96D7E8.9060609@redhat.com> On 24.4.2012 16:21, Nalin Dahyabhai wrote: > On Tue, Apr 24, 2012 at 12:03:31PM +0200, Jan Cholasta wrote: >> I did some more testing and found out that this line: >> >> default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' >> >> needs to be changed to: >> >> default:schema-compat-entry-rdn: cn=%first("%{fqdn}") >> >> in both install/share/schema_compat.uldif and >> install/updates/10-schema_compat.update, otherwise we get entries >> with DN like this: >> "'cn=test.example.com',cn=computers,cn=compat,dc=example,dc=com". >> >> Besides this, both clean installs and upgrades seem to work fine >> with this patch. > > Right, the quoting rules. Revised again, in case you need it. > > Thanks! > > Nalin ACK. Honza -- Jan Cholasta From pviktori at redhat.com Wed Apr 25 08:41:59 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 25 Apr 2012 10:41:59 +0200 Subject: [Freeipa-devel] [PATCH] 76 Refactor exc_callback invocation In-Reply-To: <4F96C907.9090906@redhat.com> References: <4F95683C.8020700@redhat.com> <4F9587A3.5070803@redhat.com> <4F96C907.9090906@redhat.com> Message-ID: <4F97B8D7.2080300@redhat.com> On 04/24/2012 05:38 PM, Jan Cholasta wrote: > On 23.4.2012 18:47, Petr Viktorin wrote: >> On 04/23/2012 04:33 PM, Jan Cholasta wrote: >>> Hi, >>> >>> this patch replaces _call_exc_callbacks with a function wrapper, which >>> will automatically call exception callbacks when an exception is raised >>> from the function. This removes the need to specify the function and its >>> arguments twice (once in the function call itself and once in >>> _call_exc_callbacks). >>> >>> Code like this: >>> >>> try: >>> # original call >>> ret = func(arg, kwarg=0) >>> except ExecutionError, e: >>> try: >>> # the function and its arguments need to be specified again! >>> ret = self._call_exc_callbacks(args, options, e, func, arg, >>> kwarg=0) >>> except ExecutionErrorSubclass, e: >>> handle_error(e) >>> >>> becomes this: >>> >>> try: >>> ret = self._exc_wrapper(args, options, func)(arg, kwarg=0) >>> except ExecutionErrorSubclass, e: >>> handle_error(e) >>> >>> As you can see, the resulting code is shorter and you don't have to >>> remember to make changes to the arguments in two places. >>> >>> Honza >> >> Please add a test, too. I've attached one you can use. > > OK, thanks. > >> >> See also some style nitpicks below. >> >>> -- >>> Jan Cholasta >>> >>> freeipa-jcholast-76-refactor-exc-callback.patch >>> >>> >>>> From 8e070f571472ed5a27339bcc980b67ecca41b337 Mon Sep 17 00:00:00 2001 >>> From: Jan Cholasta >>> Date: Thu, 19 Apr 2012 08:06:32 -0400 >>> Subject: [PATCH] Refactor exc_callback invocation. 
>>> >>> Replace _call_exc_callbacks with a function wrapper, which will >>> automatically >>> call exception callbacks when an exception is raised from the >>> function. This >>> removes the need to specify the function and its arguments twice (once >>> in the >>> function call itself and once in _call_exc_callbacks). >>> --- >>> ipalib/plugins/baseldap.py | 227 >>> ++++++++++++++---------------------------- >>> ipalib/plugins/entitle.py | 19 ++-- >>> ipalib/plugins/group.py | 7 +- >>> ipalib/plugins/permission.py | 27 +++--- >>> ipalib/plugins/pwpolicy.py | 11 +- >>> 5 files changed, 109 insertions(+), 182 deletions(-) >>> >>> diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py >>> index f185977..f7a3bbc 100644 >>> --- a/ipalib/plugins/baseldap.py >>> +++ b/ipalib/plugins/baseldap.py >>> @@ -744,26 +744,24 @@ class CallbackInterface(Method): >>> else: >>> klass.INTERACTIVE_PROMPT_CALLBACKS.append(callback) >>> >>> - def _call_exc_callbacks(self, args, options, exc, call_func, >>> *call_args, **call_kwargs): >>> - rv = None >>> - for i in xrange(len(getattr(self, 'EXC_CALLBACKS', []))): >>> - callback = self.EXC_CALLBACKS[i] >>> - try: >>> - if hasattr(callback, 'im_self'): >>> - rv = callback( >>> - args, options, exc, call_func, *call_args, **call_kwargs >>> - ) >>> - else: >>> - rv = callback( >>> - self, args, options, exc, call_func, *call_args, >>> - **call_kwargs >>> - ) >>> - except errors.ExecutionError, e: >>> - if (i + 1)< len(self.EXC_CALLBACKS): >>> - exc = e >>> - continue >>> - raise e >>> - return rv >>> + def _exc_wrapper(self, keys, options, call_func): >> >> Consider adding a docstring, e.g. >> """Function wrapper that automatically calls exception callbacks""" > > Added. > >> >>> + def wrapped(*call_args, **call_kwargs): >>> + func = call_func >>> + callbacks = list(getattr(self, 'EXC_CALLBACKS', [])) >>> + while True: >>> + try: >> >> You have some clever code here, rebinding `func` like you do. >> It'd be nice if there was a comment warning that you're redefining a >> function, in case someone who's not a Python expert looks at this. >> Consider: >> # `func` is either the original function, or the current error callback > > Changed the code a bit so that it is more readable. > >> >>> + return func(*call_args, **call_kwargs) >>> + except errors.ExecutionError, e: >>> + if len(callbacks) == 0: >> >> Use just `if not callbacks`, as per PEP8. > > OK. > >> >>> + raise >>> + callback = callbacks.pop(0) >>> + if hasattr(callback, 'im_self'): >>> + def func(*args, **kwargs): #pylint: disable=E0102 >>> + return callback(keys, options, e, call_func, *args, **kwargs) >>> + else: >>> + def func(*args, **kwargs): #pylint: disable=E0102 >>> + return callback(self, keys, options, e, call_func, *args, **kwargs) >>> + return wrapped >>> >>> >>> class BaseLDAPCommand(CallbackInterface, Command): >> [...] >>> diff --git a/ipalib/plugins/entitle.py b/ipalib/plugins/entitle.py >>> index 28d2c5d..6ade854 100644 >>> --- a/ipalib/plugins/entitle.py >>> +++ b/ipalib/plugins/entitle.py >>> @@ -642,12 +642,12 @@ class entitle_import(LDAPUpdate): >>> If we are adding the first entry there are no updates so EmptyModlist >>> will get thrown. Ignore it. 
>>> """ >>> - if isinstance(exc, errors.EmptyModlist): >>> - if not getattr(context, 'entitle_import', False): >>> - raise exc >>> - return (call_args, {}) >>> - else: >>> - raise exc >>> + if call_func.func_name == 'update_entry': >>> + if isinstance(exc, errors.EmptyModlist): >>> + if not getattr(context, 'entitle_import', False): >> >> You didn't mention the additional checks for 'update_entry' in the >> commit message. > > Fixed. > >> >> By the way, the need for these checks suggests that a per-class registry >> of error callbacks might not be the best design. But that's for more >> long-term thinking. > > They are not strictly needed, I added them just to be on the safe side. > >> >>> + raise exc >>> + return (call_args, {}) >>> + raise exc >>> >>> def execute(self, *keys, **options): >>> super(entitle_import, self).execute(*keys, **options) >> [...] >> > > Updated patch attached. > > Honza > ACK -- Petr? From jcholast at redhat.com Wed Apr 25 09:24:28 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 25 Apr 2012 11:24:28 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <20120424145721.GB2015@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> <20120423211802.GC31145@redhat.com> <4F968854.20305@redhat.com> <20120424145721.GB2015@redhat.com> Message-ID: <4F97C2CC.6090908@redhat.com> On 24.4.2012 16:57, Nalin Dahyabhai wrote: > On Tue, Apr 24, 2012 at 01:02:44PM +0200, Jan Cholasta wrote: >> I'm just curious, why you do this: >> >> default:nis-keys-format: %mregsub("%{macAddress} >> %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) >> (.*)","%1") >> >> and not simply this: >> >> default:nis-keys-format: ${macAddress} >> >> ? > > Good eye. It's because of an implementation detail of the server > plugin: when computing entries for a NIS map, it has to be able to deal > with the list of keys which it computes having a different number of > items in it than the list of corresponding values. > > If an entry has, say, two 'fqdn' values, and three 'macAddress' values, > then for keys "%{macAddress}" would produce three values, and for > values, "%{fqdn} %{macAddress} would produce six, since it's generating > all of the combinations. > > In that case the plugin, assuming you want to make all six values > visible to clients, has to figure out how to match up three keys to six > values. It can repeat the list of keys as the second (or rightmost) > variable changes, like this: > key="fqdn1", value="macAddress1 fqdn1" > key="fqdn2", value="macAddress1 fqdn2" > key="fqdn3", value="macAddress1 fqdn3" > key="fqdn1", value="macAddress2 fqdn1" > key="fqdn2", value="macAddress2 fqdn2" > key="fqdn3", value="macAddress2 fqdn3" > or it can repeat the list of keys as the first (or leftmost) variable > changes, like this: > key="fqdn1", value="macAddress1 fqdn1" > key="fqdn2", value="macAddress2 fqdn1" > key="fqdn3", value="macAddress1 fqdn2" > key="fqdn1", value="macAddress2 fqdn2" > key="fqdn2", value="macAddress1 fqdn3" > key="fqdn3", value="macAddress2 fqdn3" > Now, if your key is the second column, that's not what you want. 
If > it's the first column, the second way actually looks right: > key="macAddress1", value="macAddress1 fqdn1" > key="macAddress2", value="macAddress2 fqdn1" > key="macAddress1", value="macAddress1 fqdn2" > key="macAddress2", value="macAddress2 fqdn2" > key="macAddress1", value="macAddress1 fqdn3" > key="macAddress2", value="macAddress2 fqdn3" > > The plugin's not smart enough to figure out which way is correct (and at > the moment I can't even remember which way I ended up choosing), so the > configuration just makes sure that the list of keys starts out at the > same length as the list of values, and then uses the regex to strip out > the parts we don't want. Thanks for the detailed explanation. > > Revised patch attached. > > Cheers, > > Nalin ACK. Honza -- Jan Cholasta From mkosek at redhat.com Wed Apr 25 14:41:28 2012 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 25 Apr 2012 16:41:28 +0200 Subject: [Freeipa-devel] [PATCH] 254 Sort password policies properly with --pkey-only Message-ID: <1335364888.5633.26.camel@balmora.brq.redhat.com> Password policy plugin sorts password policies by its COS priority. However, when the pwpolicy-find command is run with --pkey-only, the resulting entries do not contain COS priority and the sort function crashes. This patch makes sure that cospriority is present in the time of the result sorting process and removes the cospriority again when the sorting is done. This way, the entries are sorted properly both with and without --pkey-only flag. Previous entries_sortfn member attribute of LDAPSearch class containing custom user sorting function was replaced just with a flag indicating if a sorting in LDAPSearch shall be done at all. This change makes it possible to sort entries in a custom post_callback which is much more powerful (and essential for sorting like in pwpolicy plugin) approach than a plain sorting function. https://fedorahosted.org/freeipa/ticket/2676 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-254-sort-password-policies-properly-with-pkey-only.patch Type: text/x-patch Size: 5102 bytes Desc: not available URL: From mkosek at redhat.com Thu Apr 26 07:03:25 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 09:03:25 +0200 Subject: [Freeipa-devel] [PATCH] index fqdn and macAddress attributes In-Reply-To: <4F9666B7.9010701@redhat.com> References: <20120416203221.GC8158@redhat.com> <4F9569CB.2050206@redhat.com> <20120423204617.GB31145@redhat.com> <4F9666B7.9010701@redhat.com> Message-ID: <1335423805.20837.9.camel@balmora.brq.redhat.com> On Tue, 2012-04-24 at 10:39 +0200, Jan Cholasta wrote: > On 23.4.2012 22:46, Nalin Dahyabhai wrote: > > On Mon, Apr 23, 2012 at 04:40:11PM +0200, Jan Cholasta wrote: > >> On 16.4.2012 22:32, Nalin Dahyabhai wrote: > >>> When we implement ticket #2259, indexing fqdn and macAddress should help > >>> the Schema Compatibility and NIS Server plugins locate relevant computer > >>> entries more easily. > >> > >> Please add the indices to install/share/indices.ldif as well. > > > > It's a bit of guesswork, but this should match the rest of the contents > > of that file. > > > > Nalin > > Thanks. You got it right, ACK. > > Honza > Pushed to master. 
Martin From mkosek at redhat.com Thu Apr 26 07:03:34 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 09:03:34 +0200 Subject: [Freeipa-devel] [PATCH] compat ieee802Device entries for ipaHost entries In-Reply-To: <4F96D7E8.9060609@redhat.com> References: <20120416203918.GD8158@redhat.com> <4F956F40.6010801@redhat.com> <20120423204511.GA31145@redhat.com> <4F967A73.9030001@redhat.com> <20120424142151.GA2015@redhat.com> <4F96D7E8.9060609@redhat.com> Message-ID: <1335423814.20837.10.camel@balmora.brq.redhat.com> On Tue, 2012-04-24 at 18:42 +0200, Jan Cholasta wrote: > On 24.4.2012 16:21, Nalin Dahyabhai wrote: > > On Tue, Apr 24, 2012 at 12:03:31PM +0200, Jan Cholasta wrote: > >> I did some more testing and found out that this line: > >> > >> default:schema-compat-entry-rdn: 'cn=%first("%{fqdn}")' > >> > >> needs to be changed to: > >> > >> default:schema-compat-entry-rdn: cn=%first("%{fqdn}") > >> > >> in both install/share/schema_compat.uldif and > >> install/updates/10-schema_compat.update, otherwise we get entries > >> with DN like this: > >> "'cn=test.example.com',cn=computers,cn=compat,dc=example,dc=com". > >> > >> Besides this, both clean installs and upgrades seem to work fine > >> with this patch. > > > > Right, the quoting rules. Revised again, in case you need it. > > > > Thanks! > > > > Nalin > > ACK. > > Honza > Pushed to master. Martin From mkosek at redhat.com Thu Apr 26 07:03:41 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 09:03:41 +0200 Subject: [Freeipa-devel] [PATCH] add ethers.byname and ethers.byaddr NIS maps In-Reply-To: <4F97C2CC.6090908@redhat.com> References: <20120416205154.GE8158@redhat.com> <4F957378.1070604@redhat.com> <4F9577EB.6040902@redhat.com> <20120423211802.GC31145@redhat.com> <4F968854.20305@redhat.com> <20120424145721.GB2015@redhat.com> <4F97C2CC.6090908@redhat.com> Message-ID: <1335423821.20837.11.camel@balmora.brq.redhat.com> On Wed, 2012-04-25 at 11:24 +0200, Jan Cholasta wrote: > On 24.4.2012 16:57, Nalin Dahyabhai wrote: > > On Tue, Apr 24, 2012 at 01:02:44PM +0200, Jan Cholasta wrote: > >> I'm just curious, why you do this: > >> > >> default:nis-keys-format: %mregsub("%{macAddress} > >> %{fqdn}","(..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..[:\\\|-]..) > >> (.*)","%1") > >> > >> and not simply this: > >> > >> default:nis-keys-format: ${macAddress} > >> > >> ? > > > > Good eye. It's because of an implementation detail of the server > > plugin: when computing entries for a NIS map, it has to be able to deal > > with the list of keys which it computes having a different number of > > items in it than the list of corresponding values. > > > > If an entry has, say, two 'fqdn' values, and three 'macAddress' values, > > then for keys "%{macAddress}" would produce three values, and for > > values, "%{fqdn} %{macAddress} would produce six, since it's generating > > all of the combinations. > > > > In that case the plugin, assuming you want to make all six values > > visible to clients, has to figure out how to match up three keys to six > > values. 
It can repeat the list of keys as the second (or rightmost) > > variable changes, like this: > > key="fqdn1", value="macAddress1 fqdn1" > > key="fqdn2", value="macAddress1 fqdn2" > > key="fqdn3", value="macAddress1 fqdn3" > > key="fqdn1", value="macAddress2 fqdn1" > > key="fqdn2", value="macAddress2 fqdn2" > > key="fqdn3", value="macAddress2 fqdn3" > > or it can repeat the list of keys as the first (or leftmost) variable > > changes, like this: > > key="fqdn1", value="macAddress1 fqdn1" > > key="fqdn2", value="macAddress2 fqdn1" > > key="fqdn3", value="macAddress1 fqdn2" > > key="fqdn1", value="macAddress2 fqdn2" > > key="fqdn2", value="macAddress1 fqdn3" > > key="fqdn3", value="macAddress2 fqdn3" > > Now, if your key is the second column, that's not what you want. If > > it's the first column, the second way actually looks right: > > key="macAddress1", value="macAddress1 fqdn1" > > key="macAddress2", value="macAddress2 fqdn1" > > key="macAddress1", value="macAddress1 fqdn2" > > key="macAddress2", value="macAddress2 fqdn2" > > key="macAddress1", value="macAddress1 fqdn3" > > key="macAddress2", value="macAddress2 fqdn3" > > > > The plugin's not smart enough to figure out which way is correct (and at > > the moment I can't even remember which way I ended up choosing), so the > > configuration just makes sure that the list of keys starts out at the > > same length as the list of values, and then uses the regex to strip out > > the parts we don't want. > > Thanks for the detailed explanation. > > > > > Revised patch attached. > > > > Cheers, > > > > Nalin > > ACK. > > Honza > Pushed to master. Martin From mkosek at redhat.com Thu Apr 26 07:03:53 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 09:03:53 +0200 Subject: [Freeipa-devel] [PATCH] 76 Refactor exc_callback invocation In-Reply-To: <4F97B8D7.2080300@redhat.com> References: <4F95683C.8020700@redhat.com> <4F9587A3.5070803@redhat.com> <4F96C907.9090906@redhat.com> <4F97B8D7.2080300@redhat.com> Message-ID: <1335423833.20837.12.camel@balmora.brq.redhat.com> On Wed, 2012-04-25 at 10:41 +0200, Petr Viktorin wrote: > On 04/24/2012 05:38 PM, Jan Cholasta wrote: > > On 23.4.2012 18:47, Petr Viktorin wrote: > >> On 04/23/2012 04:33 PM, Jan Cholasta wrote: > >>> Hi, > >>> > >>> this patch replaces _call_exc_callbacks with a function wrapper, which > >>> will automatically call exception callbacks when an exception is raised > >>> from the function. This removes the need to specify the function and its > >>> arguments twice (once in the function call itself and once in > >>> _call_exc_callbacks). > >>> > >>> Code like this: > >>> > >>> try: > >>> # original call > >>> ret = func(arg, kwarg=0) > >>> except ExecutionError, e: > >>> try: > >>> # the function and its arguments need to be specified again! > >>> ret = self._call_exc_callbacks(args, options, e, func, arg, > >>> kwarg=0) > >>> except ExecutionErrorSubclass, e: > >>> handle_error(e) > >>> > >>> becomes this: > >>> > >>> try: > >>> ret = self._exc_wrapper(args, options, func)(arg, kwarg=0) > >>> except ExecutionErrorSubclass, e: > >>> handle_error(e) > >>> > >>> As you can see, the resulting code is shorter and you don't have to > >>> remember to make changes to the arguments in two places. > >>> > >>> Honza > >> > >> Please add a test, too. I've attached one you can use. > > > > OK, thanks. > > > >> > >> See also some style nitpicks below. 
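For readers skimming this thread, here is a stripped-down sketch of the wrapper idea under review (hypothetical code, not the actual ipalib/plugins/baseldap.py implementation): the wrapped call retries through a chain of exception callbacks, so the caller no longer has to repeat the function and its arguments.

# Illustrative sketch only -- hypothetical names, not the real ipalib code.
class ExecutionError(Exception):
    pass

def exc_wrapper(keys, options, call_func, callbacks):
    # Return a wrapped call_func that falls back to exception callbacks.
    def wrapped(*args, **kwargs):
        func = call_func
        remaining = list(callbacks)
        while True:
            try:
                return func(*args, **kwargs)
            except ExecutionError as e:
                if not remaining:
                    raise
                callback = remaining.pop(0)
                exc = e
                # Redefine func: the next attempt goes through the callback,
                # which receives the original function and the exception.
                def func(*a, **kw):
                    return callback(keys, options, exc, call_func, *a, **kw)
    return wrapped

# Toy usage: the callback swallows the error, much like entitle_import
# ignores EmptyModlist for the first imported entry.
def update_entry(dn):
    raise ExecutionError("empty modlist")

def ignore_error(keys, options, exc, original, *args, **kwargs):
    return (args, {})

print(exc_wrapper((), {}, update_entry, [ignore_error])("cn=test"))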
> >> > >>> -- > >>> Jan Cholasta > >>> > >>> freeipa-jcholast-76-refactor-exc-callback.patch > >>> > >>> > >>>> From 8e070f571472ed5a27339bcc980b67ecca41b337 Mon Sep 17 00:00:00 2001 > >>> From: Jan Cholasta > >>> Date: Thu, 19 Apr 2012 08:06:32 -0400 > >>> Subject: [PATCH] Refactor exc_callback invocation. > >>> > >>> Replace _call_exc_callbacks with a function wrapper, which will > >>> automatically > >>> call exception callbacks when an exception is raised from the > >>> function. This > >>> removes the need to specify the function and its arguments twice (once > >>> in the > >>> function call itself and once in _call_exc_callbacks). > >>> --- > >>> ipalib/plugins/baseldap.py | 227 > >>> ++++++++++++++---------------------------- > >>> ipalib/plugins/entitle.py | 19 ++-- > >>> ipalib/plugins/group.py | 7 +- > >>> ipalib/plugins/permission.py | 27 +++--- > >>> ipalib/plugins/pwpolicy.py | 11 +- > >>> 5 files changed, 109 insertions(+), 182 deletions(-) > >>> > >>> diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py > >>> index f185977..f7a3bbc 100644 > >>> --- a/ipalib/plugins/baseldap.py > >>> +++ b/ipalib/plugins/baseldap.py > >>> @@ -744,26 +744,24 @@ class CallbackInterface(Method): > >>> else: > >>> klass.INTERACTIVE_PROMPT_CALLBACKS.append(callback) > >>> > >>> - def _call_exc_callbacks(self, args, options, exc, call_func, > >>> *call_args, **call_kwargs): > >>> - rv = None > >>> - for i in xrange(len(getattr(self, 'EXC_CALLBACKS', []))): > >>> - callback = self.EXC_CALLBACKS[i] > >>> - try: > >>> - if hasattr(callback, 'im_self'): > >>> - rv = callback( > >>> - args, options, exc, call_func, *call_args, **call_kwargs > >>> - ) > >>> - else: > >>> - rv = callback( > >>> - self, args, options, exc, call_func, *call_args, > >>> - **call_kwargs > >>> - ) > >>> - except errors.ExecutionError, e: > >>> - if (i + 1)< len(self.EXC_CALLBACKS): > >>> - exc = e > >>> - continue > >>> - raise e > >>> - return rv > >>> + def _exc_wrapper(self, keys, options, call_func): > >> > >> Consider adding a docstring, e.g. > >> """Function wrapper that automatically calls exception callbacks""" > > > > Added. > > > >> > >>> + def wrapped(*call_args, **call_kwargs): > >>> + func = call_func > >>> + callbacks = list(getattr(self, 'EXC_CALLBACKS', [])) > >>> + while True: > >>> + try: > >> > >> You have some clever code here, rebinding `func` like you do. > >> It'd be nice if there was a comment warning that you're redefining a > >> function, in case someone who's not a Python expert looks at this. > >> Consider: > >> # `func` is either the original function, or the current error callback > > > > Changed the code a bit so that it is more readable. > > > >> > >>> + return func(*call_args, **call_kwargs) > >>> + except errors.ExecutionError, e: > >>> + if len(callbacks) == 0: > >> > >> Use just `if not callbacks`, as per PEP8. > > > > OK. > > > >> > >>> + raise > >>> + callback = callbacks.pop(0) > >>> + if hasattr(callback, 'im_self'): > >>> + def func(*args, **kwargs): #pylint: disable=E0102 > >>> + return callback(keys, options, e, call_func, *args, **kwargs) > >>> + else: > >>> + def func(*args, **kwargs): #pylint: disable=E0102 > >>> + return callback(self, keys, options, e, call_func, *args, **kwargs) > >>> + return wrapped > >>> > >>> > >>> class BaseLDAPCommand(CallbackInterface, Command): > >> [...] 
> >>> diff --git a/ipalib/plugins/entitle.py b/ipalib/plugins/entitle.py > >>> index 28d2c5d..6ade854 100644 > >>> --- a/ipalib/plugins/entitle.py > >>> +++ b/ipalib/plugins/entitle.py > >>> @@ -642,12 +642,12 @@ class entitle_import(LDAPUpdate): > >>> If we are adding the first entry there are no updates so EmptyModlist > >>> will get thrown. Ignore it. > >>> """ > >>> - if isinstance(exc, errors.EmptyModlist): > >>> - if not getattr(context, 'entitle_import', False): > >>> - raise exc > >>> - return (call_args, {}) > >>> - else: > >>> - raise exc > >>> + if call_func.func_name == 'update_entry': > >>> + if isinstance(exc, errors.EmptyModlist): > >>> + if not getattr(context, 'entitle_import', False): > >> > >> You didn't mention the additional checks for 'update_entry' in the > >> commit message. > > > > Fixed. > > > >> > >> By the way, the need for these checks suggests that a per-class registry > >> of error callbacks might not be the best design. But that's for more > >> long-term thinking. > > > > They are not strictly needed, I added them just to be on the safe side. > > > >> > >>> + raise exc > >>> + return (call_args, {}) > >>> + raise exc > >>> > >>> def execute(self, *keys, **options): > >>> super(entitle_import, self).execute(*keys, **options) > >> [...] > >> > > > > Updated patch attached. > > > > Honza > > > > ACK > Pushed to master. Martin From pvoborni at redhat.com Thu Apr 26 07:06:01 2012 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 26 Apr 2012 09:06:01 +0200 Subject: [Freeipa-devel] [PATCH] 254 Sort password policies properly with --pkey-only In-Reply-To: <1335364888.5633.26.camel@balmora.brq.redhat.com> References: <1335364888.5633.26.camel@balmora.brq.redhat.com> Message-ID: <4F98F3D9.7020309@redhat.com> On 04/25/2012 04:41 PM, Martin Kosek wrote: > Password policy plugin sorts password policies by its COS priority. > However, when the pwpolicy-find command is run with --pkey-only, > the resulting entries do not contain COS priority and the sort > function crashes. > > This patch makes sure that cospriority is present in the time > of the result sorting process and removes the cospriority again > when the sorting is done. This way, the entries are sorted properly > both with and without --pkey-only flag. > > Previous entries_sortfn member attribute of LDAPSearch class > containing custom user sorting function was replaced just with > a flag indicating if a sorting in LDAPSearch shall be done at all. > This change makes it possible to sort entries in a custom > post_callback which is much more powerful (and essential for > sorting like in pwpolicy plugin) approach than a plain sorting > function. > > https://fedorahosted.org/freeipa/ticket/2676 > Attaching patch which disables paging in password policy page. More details in patch description. -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0123-Paging-disable-for-password-policies.patch Type: text/x-patch Size: 1335 bytes Desc: not available URL: From pviktori at redhat.com Thu Apr 26 09:05:07 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 26 Apr 2012 11:05:07 +0200 Subject: [Freeipa-devel] [PATCH] 254 Sort password policies properly with --pkey-only In-Reply-To: <1335364888.5633.26.camel@balmora.brq.redhat.com> References: <1335364888.5633.26.camel@balmora.brq.redhat.com> Message-ID: <4F990FC3.9030203@redhat.com> On 04/25/2012 04:41 PM, Martin Kosek wrote: > Password policy plugin sorts password policies by its COS priority. 
> However, when the pwpolicy-find command is run with --pkey-only, > the resulting entries do not contain COS priority and the sort > function crashes. > > This patch makes sure that cospriority is present in the time > of the result sorting process and removes the cospriority again > when the sorting is done. This way, the entries are sorted properly > both with and without --pkey-only flag. > > Previous entries_sortfn member attribute of LDAPSearch class > containing custom user sorting function was replaced just with > a flag indicating if a sorting in LDAPSearch shall be done at all. > This change makes it possible to sort entries in a custom > post_callback which is much more powerful (and essential for > sorting like in pwpolicy plugin) approach than a plain sorting > function. > > https://fedorahosted.org/freeipa/ticket/2676 > ACK -- Petr? From pviktori at redhat.com Thu Apr 26 09:13:28 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 26 Apr 2012 11:13:28 +0200 Subject: [Freeipa-devel] [PATCH] 0041 Additional tests for pwpolicy Message-ID: <4F9911B8.4060301@redhat.com> Adding a test I wrote as part of review for Martin's patch 254 (Sort password policies properly with --pkey-only), and a test to check that a pwpolicy gets removed when its group is deleted. Also I renamed the test module to match the other plugin tests (I use a shell shortcut to run the plugin tests, so consistency helps). -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0041-Additional-tests-for-pwpolicy.patch Type: text/x-patch Size: 3382 bytes Desc: not available URL: From mkosek at redhat.com Thu Apr 26 11:58:14 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 13:58:14 +0200 Subject: [Freeipa-devel] [PATCH 70] validate i18n strings when running "make lint" In-Reply-To: <4F911AC8.8050004@redhat.com> References: <4F751025.7090204@redhat.com> <4F86D7F7.9040107@redhat.com> <4F8C81DA.5030800@redhat.com> <4F8EA671.1000006@redhat.com> <4F8F16CD.3070105@redhat.com> <4F8FF128.1060601@redhat.com> <4F905AE0.2080109@redhat.com> <4F911AC8.8050004@redhat.com> Message-ID: <1335441494.20837.14.camel@balmora.brq.redhat.com> On Fri, 2012-04-20 at 10:14 +0200, Petr Viktorin wrote: > On 04/19/2012 08:35 PM, John Dennis wrote: > > On 04/19/2012 07:04 AM, Petr Viktorin wrote: > >> On 04/18/2012 09:32 PM, John Dennis wrote: > >>>> Now that there are warnings, is pedantic mode necessary? > >>> > >>> Great question, I also pondered that as well. My conclusion was there > >>> was value in separating aggressiveness of error checking from the > >>> verbosity of the output. Also I didn't think we wanted warnings showing > >>> in normal checking for things which are suspicious but not known to be > >>> incorrect. So under the current scheme pedantic mode enables reporting > >>> of suspicious constructs. You can still get a warning in the normal mode > >>> for things which aren't fatal but are provably incorrect. An example of > >>> this would be missing plural translations, it won't cause a run time > >>> failure and we can be definite about their absence, however they should > >>> be fixed, but it's not mandatory they be fixed, a warning in this case > >>> seems appropriate. > >> > >> If they should be fixed, we should fix them, and treat them as errors > >> the same way we treat lint's warnings as errors. If the pedantic mode is > >> an obscure option of some test module, I worry that nobody will ever > >> run it. 
> > > > The value of pedantic mode is for the person maintaining the > > translations (at the moment that's me). It's not normally needed, but > > when something goes wrong it may be helpful to diagnose what might be > > amiss, in this case false positives are tolerable, in normal mode false > > positives should be silenced (a request of yours from an earlier > > review). Another thing to note is that a number of the warnings are > > limited to po checking, once again this is a translation maintainer > > feature, not a general test/developer feature. > > Thanks for the clarification. Now I see the use. > > >> Separating aggressiveness of checking from verbosity is not a bad idea. > >> But since we now have two severity levels, and the checking is cheap, > >> I'm not convinced that the aggressiveness should be tunable. > >> How about always counting the pedantic warnings, but not showing the > >> details? Then, if such warnings are found, have the tool say how to run > >> it to get a full report. That way people will notice it. > > > > In an earlier review you asked to limit the output to only what is an > > actual provable error. I agreed with you and modified the code. One > > reason not to modify it again is the amount of time being devoted to > > polishing what is an internal developer tool. I've tweaked the reporting > > 3-4 times already, I don't think it's time well spent to go back and do > > it again. After all, this is an internal tool, it will never be seen by > > a customer, if as we get experience with it we discover it's needs > > tweaking because it's not doing the job we hoped it would then that's > > the moment to invest more engineering resources on the output, > > validation, or whatever the deficiency is. > > > > > ACK (it's a 3.0 task, please push to master only) > > Rebased and pushed to master. Martin From mkosek at redhat.com Thu Apr 26 12:46:39 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 14:46:39 +0200 Subject: [Freeipa-devel] [PATCH] 254 Sort password policies properly with --pkey-only In-Reply-To: <4F990FC3.9030203@redhat.com> References: <1335364888.5633.26.camel@balmora.brq.redhat.com> <4F990FC3.9030203@redhat.com> Message-ID: <1335444399.20837.15.camel@balmora.brq.redhat.com> On Thu, 2012-04-26 at 11:05 +0200, Petr Viktorin wrote: > On 04/25/2012 04:41 PM, Martin Kosek wrote: > > Password policy plugin sorts password policies by its COS priority. > > However, when the pwpolicy-find command is run with --pkey-only, > > the resulting entries do not contain COS priority and the sort > > function crashes. > > > > This patch makes sure that cospriority is present in the time > > of the result sorting process and removes the cospriority again > > when the sorting is done. This way, the entries are sorted properly > > both with and without --pkey-only flag. > > > > Previous entries_sortfn member attribute of LDAPSearch class > > containing custom user sorting function was replaced just with > > a flag indicating if a sorting in LDAPSearch shall be done at all. > > This change makes it possible to sort entries in a custom > > post_callback which is much more powerful (and essential for > > sorting like in pwpolicy plugin) approach than a plain sorting > > function. > > > > https://fedorahosted.org/freeipa/ticket/2676 > > > > ACK > Pushed to master, ipa-2-2. 
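For reference, a rough sketch of the sorting approach the patch describes (simplified, hypothetical entry structures rather than the real LDAPSearch/post_callback plumbing): cospriority is used as the sort key and is then stripped again when only primary keys were requested.

# Simplified, hypothetical sketch -- not the actual pwpolicy plugin code.
def sort_policies(entries, pkey_only=False):
    # entries: list of (dn, attrs) pairs; attrs maps names to value lists.
    def priority(entry):
        dn, attrs = entry
        if 'cospriority' not in attrs:
            # e.g. the global policy; this sketch simply sorts it last.
            return float('inf')
        return int(attrs['cospriority'][0])

    entries.sort(key=priority)

    if pkey_only:
        # cospriority was only fetched to make sorting possible; drop it so
        # --pkey-only output really contains just the primary keys.
        for dn, attrs in entries:
            attrs.pop('cospriority', None)
    return entries

entries = [
    ('cn=staff,cn=EXAMPLE.COM,cn=kerberos,dc=example,dc=com',
     {'cn': ['staff'], 'cospriority': ['10']}),
    ('cn=admins,cn=EXAMPLE.COM,cn=kerberos,dc=example,dc=com',
     {'cn': ['admins'], 'cospriority': ['1']}),
    ('cn=global_policy,cn=EXAMPLE.COM,cn=kerberos,dc=example,dc=com',
     {'cn': ['global_policy']}),
]
print(sort_policies(entries, pkey_only=True))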
Martin From mkosek at redhat.com Thu Apr 26 12:46:55 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 14:46:55 +0200 Subject: [Freeipa-devel] [PATCH] 0041 Additional tests for pwpolicy In-Reply-To: <4F9911B8.4060301@redhat.com> References: <4F9911B8.4060301@redhat.com> Message-ID: <1335444415.20837.16.camel@balmora.brq.redhat.com> On Thu, 2012-04-26 at 11:13 +0200, Petr Viktorin wrote: > Adding a test I wrote as part of review for Martin's patch 254 (Sort > password policies properly with --pkey-only), and a test to check that a > pwpolicy gets removed when its group is deleted. > > Also I renamed the test module to match the other plugin tests (I use a > shell shortcut to run the plugin tests, so consistency helps). > ACK. Pushed to master, ipa-2-2. Martin From mkosek at redhat.com Thu Apr 26 12:48:14 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 14:48:14 +0200 Subject: [Freeipa-devel] [PATCH] 254 Sort password policies properly with --pkey-only In-Reply-To: <4F98F3D9.7020309@redhat.com> References: <1335364888.5633.26.camel@balmora.brq.redhat.com> <4F98F3D9.7020309@redhat.com> Message-ID: <1335444494.20837.18.camel@balmora.brq.redhat.com> On Thu, 2012-04-26 at 09:06 +0200, Petr Vobornik wrote: > On 04/25/2012 04:41 PM, Martin Kosek wrote: > > Password policy plugin sorts password policies by its COS priority. > > However, when the pwpolicy-find command is run with --pkey-only, > > the resulting entries do not contain COS priority and the sort > > function crashes. > > > > This patch makes sure that cospriority is present in the time > > of the result sorting process and removes the cospriority again > > when the sorting is done. This way, the entries are sorted properly > > both with and without --pkey-only flag. > > > > Previous entries_sortfn member attribute of LDAPSearch class > > containing custom user sorting function was replaced just with > > a flag indicating if a sorting in LDAPSearch shall be done at all. > > This change makes it possible to sort entries in a custom > > post_callback which is much more powerful (and essential for > > sorting like in pwpolicy plugin) approach than a plain sorting > > function. > > > > https://fedorahosted.org/freeipa/ticket/2676 > > > > Attaching patch which disables paging in password policy page. More > details in patch description. ACK. Proper fix with both paging and correct order should be fixed in a ticket I created: https://fedorahosted.org/freeipa/ticket/2677 Pushed to master, ipa-2-2. Martin From mkosek at redhat.com Thu Apr 26 13:18:57 2012 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Apr 2012 15:18:57 +0200 Subject: [Freeipa-devel] [PATCH] 0042-0048 AD trusts support (master) In-Reply-To: <1334903980.2985.6.camel@balmora.brq.redhat.com> References: <20120403104135.GD23171@redhat.com> <20120403145749.GD8996@localhost.localdomain> <1334242615.777.3.camel@balmora.brq.redhat.com> <20120412150803.GA24623@redhat.com> <1334243807.777.6.camel@balmora.brq.redhat.com> <1334903980.2985.6.camel@balmora.brq.redhat.com> Message-ID: <1335446337.20837.20.camel@balmora.brq.redhat.com> On Fri, 2012-04-20 at 08:39 +0200, Martin Kosek wrote: > On Thu, 2012-04-12 at 17:16 +0200, Martin Kosek wrote: > > On Thu, 2012-04-12 at 18:08 +0300, Alexander Bokovoy wrote: > > > Hi Martin! > > > > > > On Thu, 12 Apr 2012, Martin Kosek wrote: > > ... 
> > > >3) I would not try to import ipaserver.dcerpc every time the command is > > > >executed: > > > >+ try: > > > >+ import ipaserver.dcerpc > > > >+ except Exception, e: > > > >+ raise errors.NotFound(name=_('AD Trust setup'), > > > >+ reason=_('Cannot perform join operation without Samba > > > >4 python bindings installed')) > > > > > > > >I would rather do it once in the beginning and set a flag: > > > > > > > >try: > > > > import ipaserver.dcerpc > > > > _bindings_installed = True > > > >except Exception: > > > > _bindings_installed = False > > > > > > > >... > > > The idea was that this code is only executed on the server. We need to > > > differentiate between: > > > - running on client > > > - running on server, no samba4 python bindings > > > - running on server with samba4 python bindings > > > > > > By making it executed all time you are affecting the client code as > > > well while with current approach it only affects server side. > > > > Across our code base, this situation is currently solved with this > > condition: > > > > if api.env.in_server and api.env.context in ['lite', 'server']: > > # try-import block > > > > > > > > > > > >+ def execute(self, *keys, **options): > > > >+ # Join domain using full credentials and with random trustdom > > > >+ # secret (will be generated by the join method) > > > >+ trustinstance = None > > > >+ if not _bindings_installed: > > > >+ raise errors.NotFound(name=_('AD Trust setup'), > > > >+ reason=_('Cannot perform join operation without Samba > > > >4 python bindings installed')) > > > > > > > > > > > >4) Another import inside a function: > > > >+ def arcfour_encrypt(key, data): > > > >+ from Crypto.Cipher import ARC4 > > > >+ c = ARC4.new(key) > > > >+ return c.encrypt(data) > > > Same here, it is only needed on server side. > > > > > > Let us get consensus over 3) and 4) and I'll fix patches altogether (and > > > push). > > > > > > > Yeah, I would fix in the same way as 3). > > > > I am running another run of test to finish my review of your patches, > but I stumbled in 389-ds error when I was installing IPA server from > package built from your git tree: > git://fedorapeople.org/home/fedora/abbra/public_git/freeipa.git > > # rpm -q freeipa-server 389-ds-base > freeipa-server-2.99.0GITc30f375-0.fc17.x86_64 > 389-ds-base-1.2.11-0.1.a1.fc17.x86_64 > # ipa-server-install -p kokos123 -a kokos123 > ... > [16/18]: issuing RA agent certificate > [17/18]: adding RA agent as a trusted user > [18/18]: Configure HTTP to proxy connections > done configuring pki-cad. 
> Configuring directory server: Estimated time 1 minute > [1/35]: creating directory server user > [2/35]: creating directory server instance > [3/35]: adding default schema > [4/35]: enabling memberof plugin > [5/35]: enabling referential integrity plugin > [6/35]: enabling winsync plugin > [7/35]: configuring replication version plugin > [8/35]: enabling IPA enrollment plugin > [9/35]: enabling ldapi > [10/35]: configuring uniqueness plugin > [11/35]: configuring uuid plugin > [12/35]: configuring modrdn plugin > [13/35]: enabling entryUSN plugin > [14/35]: configuring lockout plugin > [15/35]: creating indices > [16/35]: configuring ssl for ds instance > [17/35]: configuring certmap.conf > [18/35]: configure autobind for root > [19/35]: configure new location for managed entries > [20/35]: restarting directory server > [21/35]: adding default layout > [22/35]: adding delegation layout > ipa : CRITICAL Failed to load delegation.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpdXcWF3 -x -D cn=Directory Manager -y /tmp/tmp8qtnOS' returned > non-zero exit status 255 > [23/35]: adding replication acis > ipa : CRITICAL Failed to load replica-acis.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmptivfJ_ -x -D cn=Directory Manager -y /tmp/tmpr_Z1lp' returned > non-zero exit status 255 > [24/35]: creating container for managed entries > ipa : CRITICAL Failed to load managed-entries.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpNkmoDk -x -D cn=Directory Manager -y /tmp/tmpXU0lbx' returned > non-zero exit status 255 > [25/35]: configuring user private groups > ipa : CRITICAL Failed to load user_private_groups.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmp7uDqaG -x -D cn=Directory Manager -y /tmp/tmp6E_uPl' returned > non-zero exit status 255 > [26/35]: configuring netgroups from hostgroups > ipa : CRITICAL Failed to load host_nis_groups.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmphxoVQf -x -D cn=Directory Manager -y /tmp/tmpsAhhwd' returned > non-zero exit status 255 > [27/35]: creating default Sudo bind user > ipa : CRITICAL Failed to load sudobind.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpCVpYqT -x -D cn=Directory Manager -y /tmp/tmp97b_6d' returned > non-zero exit status 255 > [28/35]: creating default Auto Member layout > ipa : CRITICAL Failed to load automember.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpvcFbwK -x -D cn=Directory Manager -y /tmp/tmpSUownE' returned > non-zero exit status 255 > [29/35]: creating default HBAC rule allow_all > ipa : CRITICAL Failed to load default-hbac.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpYoYkBy -x -D cn=Directory Manager -y /tmp/tmp_9le4C' returned > non-zero exit status 255 > [30/35]: initializing group membership > ipa : CRITICAL Failed to load memberof-task.ldif: Command > '/usr/bin/ldapmodify -h vm-079.idm.lab.bos.redhat.com -v > -f /tmp/tmpD9mIxC -x -D cn=Directory Manager -y /tmp/tmpeTqozO' returned > non-zero exit status 255 > Unexpected error - see ipaserver-install.log for details: > {'desc': "Can't contact LDAP server"} > > > # tail /var/log/dirsrv/slapd-IDM-LAB-BOS-REDHAT-COM/errors > [20/Apr/2012:02:19:16 -0400] - 389-Directory/1.2.11.a1 B2012.090.2135 > starting up > [20/Apr/2012:02:19:16 -0400] attrcrypt - No symmetric key 
found for > cipher AES in backend userRoot, attempting to create one... > [20/Apr/2012:02:19:16 -0400] attrcrypt - Key for cipher AES successfully > generated and stored > [20/Apr/2012:02:19:16 -0400] attrcrypt - No symmetric key found for > cipher 3DES in backend userRoot, attempting to create one... > [20/Apr/2012:02:19:16 -0400] attrcrypt - Key for cipher 3DES > successfully generated and stored > [20/Apr/2012:02:19:16 -0400] - slapd started. Listening on All > Interfaces port 389 for LDAP requests > [20/Apr/2012:02:19:16 -0400] - Listening on All Interfaces port 636 for > LDAPS requests > [20/Apr/2012:02:19:16 -0400] - Listening > on /var/run/slapd-IDM-LAB-BOS-REDHAT-COM.socket for LDAPI requests > [20/Apr/2012:02:19:17 -0400] - Skipping CoS Definition cn=Password > Policy,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com--no CoS > Templates found, which should be added before the CoS Definition. > [20/Apr/2012:02:19:17 -0400] entryrdn-index - _entryrdn_put_data: Adding > the self link (62) failed: BDB0068 DB_LOCK_DEADLOCK: Locker killed to > resolve a deadlock (-30993) > > Martin > I reproduced this issue even on another clean VM, I filed a BZ for that: https://bugzilla.redhat.com/show_bug.cgi?id=816590 Martin From pviktori at redhat.com Fri Apr 27 08:45:01 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 27 Apr 2012 10:45:01 +0200 Subject: [Freeipa-devel] [PATCH 75] log dogtag errors In-Reply-To: <201204201807.q3KI7VuR019115@int-mx11.intmail.prod.int.phx2.redhat.com> References: <201204201807.q3KI7VuR019115@int-mx11.intmail.prod.int.phx2.redhat.com> Message-ID: <4F9A5C8D.9000004@redhat.com> On 04/20/2012 08:07 PM, John Dennis wrote: > Ticket #2622 > > If we get an error from dogtag we always did raise a > CertificateOperationError exception with a message describing the > problem. Unfortuanately that error message did not go into the log, > just sent back to the caller. The fix is to format the error message > and send the same message to both the log and use it to initialize the > CertificateOperationError exception. > The patch contains five hunks with almost exactly the same code, applying the same changes in each case. Wouldn't it make sense to move the _sslget call, parsing, and error handling to a common method? -- Petr? From mkosek at redhat.com Fri Apr 27 12:36:10 2012 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 27 Apr 2012 14:36:10 +0200 Subject: [Freeipa-devel] Ticket #2293 - permission attribute check Message-ID: <1335530170.12167.11.camel@balmora.brq.redhat.com> I revisited ticket #2293 after it failed QE check. After some considerations, I think we should revert this type of check for permissions. Here is my reasoning: 1) This check fails when the target type does not have all its possible objectclasses defined in the LDAPObject, like when users or hosts miss kerberos or samba auxiliary classes as they are just classes that the object may potentially have: # ipa permission-mod "Change a user password" --attrs=userpassword,krbprincipalkey,sambalmpassword,passwordhistory ipa: ERROR: attribute(s) "sambalmpassword,passwordhistory" not allowed To fix this point, we would need to add all possible object classes to our user, host, ... objectclasses. 2) It severely limits permission flexibility for custom user objectclasses. They would need to extend our plugins to make them work. 
Observe this inconsistency: Setting custom OC+attribute works (replace "sudocmd" with some meaningful object class"): # ipa user-mod fbar --addattr=objectclass=ipasudocmd --setattr=sudocmd=fbar -------------------- Modified user "fbar" -------------------- User login: fbar First name: Foo Last name: Bar Home directory: /home/fbar Login shell: /bin/sh UID: 61400016 GID: 61400016 Account disabled: False Password: True Member of groups: ipausers Kerberos keys available: True # ipa user-show --all fbar dn: uid=fbar,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com User login: fbar First name: Foo Last name: Bar ... mepmanagedentry: cn=fbar,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com objectclass: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser, ipaSshGroupOfPubKeys, mepOriginEntry, ipasudocmd sudocmd: fbar But adding a custom permission to control this attribute fails: # ipa permission-add "Can manage user sudocmd" --type=user --permissions=write --attrs=sudocmd ipa: ERROR: attribute(s) "sudocmd" not allowed Bottom line is that I would remove this check at all and just check that the attribute is right - as we already do for permission without "--type" specified: # ipa permission-add "Can write barbar" --filter="(objectclass=posixuser)" --permissions=write --attrs=barbar ipa: ERROR: targetattr "barbar" does not exist in schema. Please add attributeTypes "barbar" to schema if necessary. ACL Syntax Error(-5):(targetattr = \22barbar\22)(targetfilter = \22(objectclass=posixuser)\22)(version 3.0;acl \22permission:foo \22;allow (write) groupdn = \22ldap:///cn=foo,cn=permissions,cn=pbac,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com\22;): Invalid syntax. Martin From pviktori at redhat.com Fri Apr 27 13:23:48 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 27 Apr 2012 15:23:48 +0200 Subject: [Freeipa-devel] [PATCHES] 0041-42 Fix internal server errors with empty options Message-ID: <4F9A9DE4.9060500@redhat.com> Empty values in reverse_member options, and attattr/setattr/delattr, caused internal server errors. We convert empty values to None and bypass normal validation, so they need special care. https://fedorahosted.org/freeipa/ticket/2680 https://fedorahosted.org/freeipa/ticket/2681 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0042-Do-not-crash-on-empty-reverse-member-options.patch Type: text/x-patch Size: 5605 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0043-Do-not-crash-on-empty-setattr-getattr-addattr.patch Type: text/x-patch Size: 2711 bytes Desc: not available URL: From ohamada at redhat.com Fri Apr 27 15:52:50 2012 From: ohamada at redhat.com (Ondrej Hamada) Date: Fri, 27 Apr 2012 17:52:50 +0200 Subject: [Freeipa-devel] Ticket #2293 - permission attribute check In-Reply-To: <1335530170.12167.11.camel@balmora.brq.redhat.com> References: <1335530170.12167.11.camel@balmora.brq.redhat.com> Message-ID: <4F9AC0D2.2040000@redhat.com> On 04/27/2012 02:36 PM, Martin Kosek wrote: > I revisited ticket #2293 after it failed QE check. After some > considerations, I think we should revert this type of check for > permissions. 
Here is my reasoning: > > 1) This check fails when the target type does not have all its possible > objectclasses defined in the LDAPObject, like when users or hosts miss > kerberos or samba auxiliary classes as they are just classes that the > object may potentially have: > > # ipa permission-mod "Change a user password" > --attrs=userpassword,krbprincipalkey,sambalmpassword,passwordhistory > ipa: ERROR: attribute(s) "sambalmpassword,passwordhistory" not allowed > > To fix this point, we would need to add all possible object classes to > our user, host, ... objectclasses. > > > 2) It severely limits permission flexibility for custom user > objectclasses. They would need to extend our plugins to make them work. > Observe this inconsistency: > > Setting custom OC+attribute works (replace "sudocmd" with some > meaningful object class"): > > # ipa user-mod fbar --addattr=objectclass=ipasudocmd --setattr=sudocmd=fbar > -------------------- > Modified user "fbar" > -------------------- > User login: fbar > First name: Foo > Last name: Bar > Home directory: /home/fbar > Login shell: /bin/sh > UID: 61400016 > GID: 61400016 > Account disabled: False > Password: True > Member of groups: ipausers > Kerberos keys available: True > > # ipa user-show --all fbar > dn: uid=fbar,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com > User login: fbar > First name: Foo > Last name: Bar > ... > mepmanagedentry: cn=fbar,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com > objectclass: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, > krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser, ipaSshGroupOfPubKeys, > mepOriginEntry, ipasudocmd > sudocmd: fbar > > > But adding a custom permission to control this attribute fails: > # ipa permission-add "Can manage user sudocmd" --type=user --permissions=write --attrs=sudocmd > ipa: ERROR: attribute(s) "sudocmd" not allowed > > > Bottom line is that I would remove this check at all and just check that > the attribute is right - as we already do for permission without > "--type" specified: > > # ipa permission-add "Can write barbar" > --filter="(objectclass=posixuser)" --permissions=write --attrs=barbar > ipa: ERROR: targetattr "barbar" does not exist in schema. Please add > attributeTypes "barbar" to schema if necessary. ACL Syntax > Error(-5):(targetattr = \22barbar\22)(targetfilter = > \22(objectclass=posixuser)\22)(version 3.0;acl \22permission:foo > \22;allow (write) groupdn = > \22ldap:///cn=foo,cn=permissions,cn=pbac,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com\22;): Invalid syntax. > > Martin > What about simply let the command succeed and print out a warning like: 'Attribute "passwordhistory" is not a default one for specified object type. The permission might not be properly evaluated.' -- Regards, Ondrej Hamada FreeIPA team jabber: ohama at jabbim.cz IRC: ohamada From dpal at redhat.com Fri Apr 27 19:00:51 2012 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 27 Apr 2012 15:00:51 -0400 Subject: [Freeipa-devel] Ticket #2293 - permission attribute check In-Reply-To: <4F9AC0D2.2040000@redhat.com> References: <1335530170.12167.11.camel@balmora.brq.redhat.com> <4F9AC0D2.2040000@redhat.com> Message-ID: <4F9AECE3.6050409@redhat.com> On 04/27/2012 11:52 AM, Ondrej Hamada wrote: > On 04/27/2012 02:36 PM, Martin Kosek wrote: >> I revisited ticket #2293 after it failed QE check. After some >> considerations, I think we should revert this type of check for >> permissions. 
Here is my reasoning: >> >> 1) This check fails when the target type does not have all its possible >> objectclasses defined in the LDAPObject, like when users or hosts miss >> kerberos or samba auxiliary classes as they are just classes that the >> object may potentially have: >> >> # ipa permission-mod "Change a user password" >> --attrs=userpassword,krbprincipalkey,sambalmpassword,passwordhistory >> ipa: ERROR: attribute(s) "sambalmpassword,passwordhistory" not allowed >> >> To fix this point, we would need to add all possible object classes to >> our user, host, ... objectclasses. >> >> >> 2) It severely limits permission flexibility for custom user >> objectclasses. They would need to extend our plugins to make them work. >> Observe this inconsistency: >> >> Setting custom OC+attribute works (replace "sudocmd" with some >> meaningful object class"): >> >> # ipa user-mod fbar --addattr=objectclass=ipasudocmd >> --setattr=sudocmd=fbar >> -------------------- >> Modified user "fbar" >> -------------------- >> User login: fbar >> First name: Foo >> Last name: Bar >> Home directory: /home/fbar >> Login shell: /bin/sh >> UID: 61400016 >> GID: 61400016 >> Account disabled: False >> Password: True >> Member of groups: ipausers >> Kerberos keys available: True >> >> # ipa user-show --all fbar >> dn: >> uid=fbar,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >> User login: fbar >> First name: Foo >> Last name: Bar >> ... >> mepmanagedentry: >> cn=fbar,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >> objectclass: top, person, organizationalperson, inetorgperson, >> inetuser, posixaccount, >> krbprincipalaux, krbticketpolicyaux, ipaobject, >> ipasshuser, ipaSshGroupOfPubKeys, >> mepOriginEntry, ipasudocmd >> sudocmd: fbar >> >> >> But adding a custom permission to control this attribute fails: >> # ipa permission-add "Can manage user sudocmd" --type=user >> --permissions=write --attrs=sudocmd >> ipa: ERROR: attribute(s) "sudocmd" not allowed >> >> >> Bottom line is that I would remove this check at all and just check that >> the attribute is right - as we already do for permission without >> "--type" specified: >> >> # ipa permission-add "Can write barbar" >> --filter="(objectclass=posixuser)" --permissions=write --attrs=barbar >> ipa: ERROR: targetattr "barbar" does not exist in schema. Please add >> attributeTypes "barbar" to schema if necessary. ACL Syntax >> Error(-5):(targetattr = \22barbar\22)(targetfilter = >> \22(objectclass=posixuser)\22)(version 3.0;acl \22permission:foo >> \22;allow (write) groupdn = >> \22ldap:///cn=foo,cn=permissions,cn=pbac,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com\22;): >> Invalid syntax. >> >> Martin >> > What about simply let the command succeed and print out a warning > like: 'Attribute "passwordhistory" is not a default one for specified > object type. The permission might not be properly evaluated.' > Makes sense. -- Thank you, Dmitri Pal Sr. Engineering Manager IPA project, Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From rcritten at redhat.com Fri Apr 27 19:28:37 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 27 Apr 2012 15:28:37 -0400 Subject: [Freeipa-devel] Ticket #2293 - permission attribute check In-Reply-To: <4F9AC0D2.2040000@redhat.com> References: <1335530170.12167.11.camel@balmora.brq.redhat.com> <4F9AC0D2.2040000@redhat.com> Message-ID: <4F9AF365.6020101@redhat.com> Ondrej Hamada wrote: > On 04/27/2012 02:36 PM, Martin Kosek wrote: >> I revisited ticket #2293 after it failed QE check. After some >> considerations, I think we should revert this type of check for >> permissions. Here is my reasoning: >> >> 1) This check fails when the target type does not have all its possible >> objectclasses defined in the LDAPObject, like when users or hosts miss >> kerberos or samba auxiliary classes as they are just classes that the >> object may potentially have: >> >> # ipa permission-mod "Change a user password" >> --attrs=userpassword,krbprincipalkey,sambalmpassword,passwordhistory >> ipa: ERROR: attribute(s) "sambalmpassword,passwordhistory" not allowed >> >> To fix this point, we would need to add all possible object classes to >> our user, host, ... objectclasses. >> >> >> 2) It severely limits permission flexibility for custom user >> objectclasses. They would need to extend our plugins to make them work. >> Observe this inconsistency: >> >> Setting custom OC+attribute works (replace "sudocmd" with some >> meaningful object class"): >> >> # ipa user-mod fbar --addattr=objectclass=ipasudocmd >> --setattr=sudocmd=fbar >> -------------------- >> Modified user "fbar" >> -------------------- >> User login: fbar >> First name: Foo >> Last name: Bar >> Home directory: /home/fbar >> Login shell: /bin/sh >> UID: 61400016 >> GID: 61400016 >> Account disabled: False >> Password: True >> Member of groups: ipausers >> Kerberos keys available: True >> >> # ipa user-show --all fbar >> dn: uid=fbar,cn=users,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >> User login: fbar >> First name: Foo >> Last name: Bar >> ... >> mepmanagedentry: >> cn=fbar,cn=groups,cn=accounts,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com >> objectclass: top, person, organizationalperson, inetorgperson, >> inetuser, posixaccount, >> krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser, >> ipaSshGroupOfPubKeys, >> mepOriginEntry, ipasudocmd >> sudocmd: fbar >> >> >> But adding a custom permission to control this attribute fails: >> # ipa permission-add "Can manage user sudocmd" --type=user >> --permissions=write --attrs=sudocmd >> ipa: ERROR: attribute(s) "sudocmd" not allowed >> >> >> Bottom line is that I would remove this check at all and just check that >> the attribute is right - as we already do for permission without >> "--type" specified: >> >> # ipa permission-add "Can write barbar" >> --filter="(objectclass=posixuser)" --permissions=write --attrs=barbar >> ipa: ERROR: targetattr "barbar" does not exist in schema. Please add >> attributeTypes "barbar" to schema if necessary. ACL Syntax >> Error(-5):(targetattr = \22barbar\22)(targetfilter = >> \22(objectclass=posixuser)\22)(version 3.0;acl \22permission:foo >> \22;allow (write) groupdn = >> \22ldap:///cn=foo,cn=permissions,cn=pbac,dc=idm,dc=lab,dc=bos,dc=redhat,dc=com\22;): >> Invalid syntax. >> >> Martin >> > What about simply let the command succeed and print out a warning like: > 'Attribute "passwordhistory" is not a default one for specified object > type. The permission might not be properly evaluated.' 
> For IPA 2.2 we are going to revert the check. For 3.0 we will re-add the check and make it non-fatal and instead return a warning to the user that some attributes may not apply to this object. rob From jdennis at redhat.com Sat Apr 28 13:50:55 2012 From: jdennis at redhat.com (John Dennis) Date: Sat, 28 Apr 2012 09:50:55 -0400 Subject: [Freeipa-devel] [PATCH 75] log dogtag errors In-Reply-To: <4F9A5C8D.9000004@redhat.com> References: <201204201807.q3KI7VuR019115@int-mx11.intmail.prod.int.phx2.redhat.com> <4F9A5C8D.9000004@redhat.com> Message-ID: <4F9BF5BF.5030302@redhat.com> On 04/27/2012 04:45 AM, Petr Viktorin wrote: > On 04/20/2012 08:07 PM, John Dennis wrote: >> Ticket #2622 >> >> If we get an error from dogtag we always did raise a >> CertificateOperationError exception with a message describing the >> problem. Unfortuanately that error message did not go into the log, >> just sent back to the caller. The fix is to format the error message >> and send the same message to both the log and use it to initialize the >> CertificateOperationError exception. >> > > The patch contains five hunks with almost exactly the same code, > applying the same changes in each case. > Wouldn't it make sense to move the _sslget call, parsing, and error > handling to a common method? > Yes it would and ordinarily I would have taken that approach. However on IRC (or phone?) with Rob we decided not to perturb the code too much for this particular issue because we intend to refactor the code later. This was one of the last patches destined for 2.2 which is why we took the more conservative approach. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From mkosek at redhat.com Mon Apr 30 12:02:35 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 30 Apr 2012 14:02:35 +0200 Subject: [Freeipa-devel] [PATCH] 255 Improve error message in zonemgr validator Message-ID: <1335787355.2895.8.camel@balmora.brq.redhat.com> This patch consolidates zonemgr function to move the most of the checks to common functions in order to provide consistent output. The error messages produced by the validator should now be more helpful when identifying the source of error. https://fedorahosted.org/freeipa/ticket/1966 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-255-improve-error-message-in-zonemgr-validator.patch Type: text/x-patch Size: 3712 bytes Desc: not available URL: From pviktori at redhat.com Mon Apr 30 12:13:27 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 30 Apr 2012 14:13:27 +0200 Subject: [Freeipa-devel] [PATCH] 0044 Validate externalhost (when added by --addattr/--setattr) Message-ID: <4F9E81E7.3030005@redhat.com> Change the externalhost attribute of hbacrule, netgroup and sudorule into a full-fledged Parameter, and attach a validator to it. RFC 1123 specifies that only [-a-z0-9] are allowed, but apparently Windows and some phones also use underscores in hostnames. So the new validator allows the underscore. https://fedorahosted.org/freeipa/ticket/2649 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0044-Validate-externalhost-when-added-by-addattr-setattr.patch Type: text/x-patch Size: 6460 bytes Desc: not available URL: From pviktori at redhat.com Mon Apr 30 12:17:54 2012 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 30 Apr 2012 14:17:54 +0200 Subject: [Freeipa-devel] [PATCHES] 0042-43 Fix internal server errors with empty options In-Reply-To: <4F9A9DE4.9060500@redhat.com> References: <4F9A9DE4.9060500@redhat.com> Message-ID: <4F9E82F2.1010205@redhat.com> On 04/27/2012 03:23 PM, Petr Viktorin wrote: > Empty values in reverse_member options, and attattr/setattr/delattr, > caused internal server errors. > > We convert empty values to None and bypass normal validation, so they > need special care. > > > https://fedorahosted.org/freeipa/ticket/2680 > https://fedorahosted.org/freeipa/ticket/2681 > Fixing the subject, sorry for the noise. -- Petr? From mkosek at redhat.com Mon Apr 30 14:29:28 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 30 Apr 2012 16:29:28 +0200 Subject: [Freeipa-devel] [PATCH] 255 Improve error message in zonemgr validator In-Reply-To: <1335787355.2895.8.camel@balmora.brq.redhat.com> References: <1335787355.2895.8.camel@balmora.brq.redhat.com> Message-ID: <1335796168.2895.31.camel@balmora.brq.redhat.com> On Mon, 2012-04-30 at 14:02 +0200, Martin Kosek wrote: > This patch consolidates zonemgr function to move the most of the > checks to common functions in order to provide consistent output. > The error messages produced by the validator should now be more > helpful when identifying the source of error. > > https://fedorahosted.org/freeipa/ticket/1966 Rob found a corner case with "foo..bar" where the error message was not as helpful as it could be. Now, the empty parts are handled better: # ipa dnszone-mod example.com --admin-email=.foo ipa: ERROR: invalid 'admin_email': missing mail account # ipa dnszone-mod example.com --admin-email=foo. ipa: ERROR: invalid 'admin_email': missing address domain # ipa dnszone-mod example.com --admin-email=foo..bar ipa: ERROR: invalid 'admin_email': empty DNS label Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-255-2-improve-error-message-in-zonemgr-validator.patch Type: text/x-patch Size: 4086 bytes Desc: not available URL: From rcritten at redhat.com Mon Apr 30 15:20:52 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 30 Apr 2012 11:20:52 -0400 Subject: [Freeipa-devel] [PATCH] reverted patches for ticket 2293 Message-ID: <4F9EADD4.1030008@redhat.com> I pushed out reverts for 2 patches in ticket https://fedorahosted.org/freeipa/ticket/2293 to master and ipa-2-2. We are going to take a different approach to this problem. Rather than trying to restrict permissions to a set of attributes, which can change over time, we are going to warn when attributes don't seem to apply to a container. 
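A minimal sketch of what that warning-based check could look like (hypothetical helper and attribute set, not the actual permission plugin code):

# Hypothetical sketch: warn about unexpected attributes instead of rejecting
# them, since custom objectclasses may legitimately add attributes.
import logging

log = logging.getLogger('ipa.permission')

def check_attrs(attrs, default_attrs, obj_type):
    unknown = [a for a in attrs if a.lower() not in default_attrs]
    if unknown:
        log.warning('attribute(s) "%s" are not in the default set for '
                    'type "%s"; the permission may not apply as expected',
                    ', '.join(unknown), obj_type)
    return attrs

user_defaults = {'uid', 'givenname', 'sn', 'userpassword', 'krbprincipalkey'}
check_attrs(['userPassword', 'sambalmpassword'], user_defaults, 'user')

The design point is that an unusual attribute only produces a log warning, so permissions written against custom objectclasses keep working.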
rob From mkosek at redhat.com Mon Apr 30 15:29:20 2012 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 30 Apr 2012 17:29:20 +0200 Subject: [Freeipa-devel] [PATCH] 255 Improve error message in zonemgr validator In-Reply-To: <1335796168.2895.31.camel@balmora.brq.redhat.com> References: <1335787355.2895.8.camel@balmora.brq.redhat.com> <1335796168.2895.31.camel@balmora.brq.redhat.com> Message-ID: <1335799760.2895.35.camel@balmora.brq.redhat.com> On Mon, 2012-04-30 at 16:29 +0200, Martin Kosek wrote: > On Mon, 2012-04-30 at 14:02 +0200, Martin Kosek wrote: > > This patch consolidates zonemgr function to move the most of the > > checks to common functions in order to provide consistent output. > > The error messages produced by the validator should now be more > > helpful when identifying the source of error. > > > > https://fedorahosted.org/freeipa/ticket/1966 > > Rob found a corner case with "foo..bar" where the error message was not > as helpful as it could be. Now, the empty parts are handled better: > > # ipa dnszone-mod example.com --admin-email=.foo > ipa: ERROR: invalid 'admin_email': missing mail account > # ipa dnszone-mod example.com --admin-email=foo. > ipa: ERROR: invalid 'admin_email': missing address domain > # ipa dnszone-mod example.com --admin-email=foo..bar > ipa: ERROR: invalid 'admin_email': empty DNS label > > Martin Rob found one more issue with the error message (this one is actually quite old). DNS label starting with a hyphen is also not allowed, but we state that we forbid just a label ending with it. The attached patch fixes the error message. # ipa dnszone-mod example.com --admin-email=foo.-baz ipa: ERROR: invalid 'admin_email': only letters, numbers, and - are allowed. DNS label may not start or end with - Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-255-3-improve-error-message-in-zonemgr-validator.patch Type: text/x-patch Size: 4272 bytes Desc: not available URL: From jcholast at redhat.com Mon Apr 30 16:03:50 2012 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 30 Apr 2012 18:03:50 +0200 Subject: [Freeipa-devel] [PATCH] Set the "KerberosAuthentication" option in sshd_config to "no" instead of "yes" Message-ID: <4F9EB7E6.4050708@redhat.com> Setting the option to "yes" causes sshd to handle kinits itself, bypassing SSSD. https://fedorahosted.org/freeipa/ticket/2689 Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-77-sshd-kerberos-fix.patch Type: text/x-patch Size: 960 bytes Desc: not available URL: From rcritten at redhat.com Mon Apr 30 16:50:44 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 30 Apr 2012 12:50:44 -0400 Subject: [Freeipa-devel] [PATCH] 255 Improve error message in zonemgr validator In-Reply-To: <1335799760.2895.35.camel@balmora.brq.redhat.com> References: <1335787355.2895.8.camel@balmora.brq.redhat.com> <1335796168.2895.31.camel@balmora.brq.redhat.com> <1335799760.2895.35.camel@balmora.brq.redhat.com> Message-ID: <4F9EC2E4.60308@redhat.com> Martin Kosek wrote: > On Mon, 2012-04-30 at 16:29 +0200, Martin Kosek wrote: >> On Mon, 2012-04-30 at 14:02 +0200, Martin Kosek wrote: >>> This patch consolidates zonemgr function to move the most of the >>> checks to common functions in order to provide consistent output. >>> The error messages produced by the validator should now be more >>> helpful when identifying the source of error. 
>>> >>> https://fedorahosted.org/freeipa/ticket/1966 >> >> Rob found a corner case with "foo..bar" where the error message was not >> as helpful as it could be. Now, the empty parts are handled better: >> >> # ipa dnszone-mod example.com --admin-email=.foo >> ipa: ERROR: invalid 'admin_email': missing mail account >> # ipa dnszone-mod example.com --admin-email=foo. >> ipa: ERROR: invalid 'admin_email': missing address domain >> # ipa dnszone-mod example.com --admin-email=foo..bar >> ipa: ERROR: invalid 'admin_email': empty DNS label >> >> Martin > > Rob found one more issue with the error message (this one is actually > quite old). DNS label starting with a hyphen is also not allowed, but we > state that we forbid just a label ending with it. The attached patch > fixes the error message. > > # ipa dnszone-mod example.com --admin-email=foo.-baz > ipa: ERROR: invalid 'admin_email': only letters, numbers, and - are > allowed. DNS label may not start or end with - > > Martin ACK, pushed to master and ipa-2-2 rob From rcritten at redhat.com Mon Apr 30 16:54:01 2012 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 30 Apr 2012 12:54:01 -0400 Subject: [Freeipa-devel] [PATCH] Set the "KerberosAuthentication" option in sshd_config to "no" instead of "yes" In-Reply-To: <4F9EB7E6.4050708@redhat.com> References: <4F9EB7E6.4050708@redhat.com> Message-ID: <4F9EC3A9.5070505@redhat.com> Jan Cholasta wrote: > Setting the option to "yes" causes sshd to handle kinits itself, > bypassing SSSD. > > https://fedorahosted.org/freeipa/ticket/2689 > > Honza > ACK, pushed to master and ipa-2-2
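For completeness, a small illustrative snippet (assumed path and logic, not the actual ipa-client-install code) showing the kind of edit this change amounts to on a client: rewrite the KerberosAuthentication directive in sshd_config to "no" so that ticket handling is left to SSSD rather than sshd.

# Illustrative only -- not the actual ipa-client-install configuration code.
import re

def disable_sshd_krb_auth(path='/etc/ssh/sshd_config'):
    with open(path) as f:
        text = f.read()
    directive = 'KerberosAuthentication no'
    if re.search(r'(?mi)^\s*KerberosAuthentication\b', text):
        # Replace any existing KerberosAuthentication line.
        text = re.sub(r'(?mi)^\s*KerberosAuthentication\b.*$', directive, text)
    else:
        # Append the directive if it is not present at all.
        text += '\n' + directive + '\n'
    with open(path, 'w') as f:
        f.write(text)

# disable_sshd_krb_auth()  # requires root; uses the usual sshd_config path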