From mkosek at redhat.com Wed Jan 2 09:51:18 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 02 Jan 2013 10:51:18 +0100 Subject: [Freeipa-devel] [PATCH] 345 Do not crash when Kerberos SRV record is not found Message-ID: <50E40316.1000309@redhat.com> ipa-client-install crashed when IPA server realm TXT record was configured, but the referred domain (lower-case realm value) did not contain any Kerberos SRV record (_kerberos._udp..) https://fedorahosted.org/freeipa/ticket/3316 ---- This is regression to FreeIPA 2.2, so I would like to get it to master, ipa-3-1 and ipa-3-0 branches. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-345-kerberos-srv-record-crash.patch Type: text/x-patch Size: 1309 bytes Desc: not available URL: From pviktori at redhat.com Wed Jan 2 12:01:53 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 02 Jan 2013 13:01:53 +0100 Subject: [Freeipa-devel] [PATCH] 345 Do not crash when Kerberos SRV record is not found In-Reply-To: <50E40316.1000309@redhat.com> References: <50E40316.1000309@redhat.com> Message-ID: <50E421B1.7050408@redhat.com> On 01/02/2013 10:51 AM, Martin Kosek wrote: > ipa-client-install crashed when IPA server realm TXT record was > configured, but the referred domain (lower-case realm value) did > not contain any Kerberos SRV record (_kerberos._udp..) > > https://fedorahosted.org/freeipa/ticket/3316 > > ---- > > This is regression to FreeIPA 2.2, so I would like to get it to master, ipa-3-1 > and ipa-3-0 branches. > ACK -- Petr? From mkosek at redhat.com Wed Jan 2 13:16:08 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 02 Jan 2013 14:16:08 +0100 Subject: [Freeipa-devel] [PATCH] 345 Do not crash when Kerberos SRV record is not found In-Reply-To: <50E421B1.7050408@redhat.com> References: <50E40316.1000309@redhat.com> <50E421B1.7050408@redhat.com> Message-ID: <50E43318.70101@redhat.com> On 01/02/2013 01:01 PM, Petr Viktorin wrote: > On 01/02/2013 10:51 AM, Martin Kosek wrote: >> ipa-client-install crashed when IPA server realm TXT record was >> configured, but the referred domain (lower-case realm value) did >> not contain any Kerberos SRV record (_kerberos._udp..) >> >> https://fedorahosted.org/freeipa/ticket/3316 >> >> ---- >> >> This is regression to FreeIPA 2.2, so I would like to get it to master, ipa-3-1 >> and ipa-3-0 branches. >> > > ACK > Pushed to master, ipa-3-1, ipa-3-0. Martin From akrivoka at redhat.com Thu Jan 3 11:56:15 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Thu, 03 Jan 2013 12:56:15 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Raise ValidationError for incorrect subtree option Message-ID: <50E571DF.5090101@redhat.com> Using incorrect input for --subtree option of ipa permission-add command now raises a ValidationError. Previously, a ValueError was raised, which resulted in a user unfriendly error message: ipa: ERROR: an internal error has occurred I have added a try-except block to catch the ValueError and raise an appropriate ValidationError. https://fedorahosted.org/freeipa/ticket/3233 -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-akrivoka-0001-Raise-ValidationError-for-incorrect-subtree-option.patch Type: text/x-patch Size: 1259 bytes Desc: not available URL: From pviktori at redhat.com Thu Jan 3 12:42:50 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 03 Jan 2013 13:42:50 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Raise ValidationError for incorrect subtree option In-Reply-To: <50E571DF.5090101@redhat.com> References: <50E571DF.5090101@redhat.com> Message-ID: <50E57CCA.1050003@redhat.com> On 01/03/2013 12:56 PM, Ana Krivokapic wrote: > Using incorrect input for --subtree option of ipa permission-add command > now raises a ValidationError. > > Previously, a ValueError was raised, which resulted in a user unfriendly > error message: > ipa: ERROR: an internal error has occurred > > I have added a try-except block to catch the ValueError and raise an > appropriate ValidationError. > > https://fedorahosted.org/freeipa/ticket/3233 > ... > --- a/ipalib/plugins/aci.py > +++ b/ipalib/plugins/aci.py > @@ -341,7 +341,10 @@ def _aci_to_kw(ldap, a, test=False, pkey_only=False): > else: > # See if the target is a group. If so we set the > # targetgroup attr, otherwise we consider it a subtree > - targetdn = DN(target.replace('ldap:///','')) > + try: > + targetdn = DN(target.replace('ldap:///','')) > + except ValueError as e: > + raise errors.ValidationError(name='subtree', error=_(e.message)) `_(e.message)` is useless. The message can only be translated if the string is grabbed by gettext, which uses static analysis. In other words, the argument to _() must be a literal string. You can do either `_("invalid DN")`, or if the error message is important, include it like this (e.message still won't be translated, but at least the users will get something in their language): _("invalid DN (%s)") % e.message -- Petr? From pviktori at redhat.com Thu Jan 3 13:00:18 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 03 Jan 2013 14:00:18 +0100 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework Message-ID: <50E580E2.8090502@redhat.com> Hello, The first patch implements logging-related changes to the admintool framework and ipa-ldap-updater (the only thing ported to it so far). The design document is at http://freeipa.org/page/V3/Logging_and_output John, I decided to go ahead and put an explicit "logger" attribute on the tool class rather than adding debug, info, warn. etc methods dynamically using log_mgr.get_logger. I believe it's the cleanest solution. We had a discussion about this in this thread: https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; I didn't get a reaction to my conclusion so I'm letting you know in case you have more to say. The second patch ports ipa-replica-prepare to the framework. (I chose ipa-replica-prepare because there was a bug filed against its error handling, something the framework should take care of.) As far as Git can tell, it's a complete rewrite, so it might be hard to do a review. I have several smaller patches that make it easier to see what gets moved where. Please say I'm wrong, but as I understand, broken commits aren't allowed in the FreeIPA repo so I can only present the squashed patch for review. To get the smaller commits, do `git fetch git://github.com/encukou/freeipa.git replica-prepare:pviktori-replica-prepare`. Part of: https://fedorahosted.org/freeipa/ticket/2652 Fixes: https://fedorahosted.org/freeipa/ticket/3285 -- Petr? 
-------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0117-Better-logging-for-AdminTool-and-ipa-ldap-updater.patch Type: text/x-patch Size: 15733 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0118-Port-ipa-replica-prepare-to-the-admintool-framework.patch Type: text/x-patch Size: 45488 bytes Desc: not available URL: From akrivoka at redhat.com Thu Jan 3 13:55:23 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Thu, 03 Jan 2013 14:55:23 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Raise ValidationError for incorrect subtree option In-Reply-To: <50E57CCA.1050003@redhat.com> References: <50E571DF.5090101@redhat.com> <50E57CCA.1050003@redhat.com> Message-ID: <50E58DCB.1020307@redhat.com> On 01/03/2013 01:42 PM, Petr Viktorin wrote: > On 01/03/2013 12:56 PM, Ana Krivokapic wrote: >> Using incorrect input for --subtree option of ipa permission-add command >> now raises a ValidationError. >> >> Previously, a ValueError was raised, which resulted in a user unfriendly >> error message: >> ipa: ERROR: an internal error has occurred >> >> I have added a try-except block to catch the ValueError and raise an >> appropriate ValidationError. >> >> https://fedorahosted.org/freeipa/ticket/3233 >> > ... > >> --- a/ipalib/plugins/aci.py >> +++ b/ipalib/plugins/aci.py >> @@ -341,7 +341,10 @@ def _aci_to_kw(ldap, a, test=False, >> pkey_only=False): >> else: >> # See if the target is a group. If so we set the >> # targetgroup attr, otherwise we consider it a subtree >> - targetdn = DN(target.replace('ldap:///','')) >> + try: >> + targetdn = DN(target.replace('ldap:///','')) >> + except ValueError as e: >> + raise errors.ValidationError(name='subtree', >> error=_(e.message)) > > `_(e.message)` is useless. The message can only be translated if the > string is grabbed by gettext, which uses static analysis. In other > words, the argument to _() must be a literal string. > > You can do either `_("invalid DN")`, or if the error message is > important, include it like this (e.message still won't be translated, > but at least the users will get something in their language): > _("invalid DN (%s)") % e.message > > Fixed. Thanks, Petr. -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-akrivoka-0001-02-Raise-ValidationError-for-incorrect-subtree-option.patch Type: text/x-patch Size: 1279 bytes Desc: not available URL: From jdennis at redhat.com Thu Jan 3 13:56:17 2013 From: jdennis at redhat.com (John Dennis) Date: Thu, 03 Jan 2013 08:56:17 -0500 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework In-Reply-To: <50E580E2.8090502@redhat.com> References: <50E580E2.8090502@redhat.com> Message-ID: <50E58E01.3060402@redhat.com> On 01/03/2013 08:00 AM, Petr Viktorin wrote: > Hello, > > The first patch implements logging-related changes to the admintool > framework and ipa-ldap-updater (the only thing ported to it so far). > The design document is at http://freeipa.org/page/V3/Logging_and_output > > John, I decided to go ahead and put an explicit "logger" attribute on > the tool class rather than adding debug, info, warn. etc methods > dynamically using log_mgr.get_logger. I believe it's the cleanest solution. 
> We had a discussion about this in this thread: > https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; I > didn't get a reaction to my conclusion so I'm letting you know in case > you have more to say. I'm fine with not directly adding the debug, info, warn etc. methods; that practice was historical, dating back to the days of Jason. However I do think it's useful to use a named logger and not the global root_logger. I'd prefer we got away from using the root_logger; its continued existence is historical as well and the idea was over time we would slowly eliminate its usage. FWIW the log_mgr.get_logger() is still useful for what you want to do. def get_logger(self, who, bind_logger_names=False) If you don't set bind_logger_names to True (and pass the class instance as who) you won't get the offensive debug, info, etc. methods added to the class instance. But it still does all the other bookkeeping. The 'who' in this instance could be either the name of the admin tool or the class instance. Also I'd prefer using the attribute 'log' rather than 'logger'. That would make it consistent with code which already uses get_logger() passing a class instance, because it adds a 'log' attribute which is the logger. Also 'log' is twice as succinct as 'logger' (shorter line lengths). Thus if you do: log_mgr.get_logger(self) I think you'll get exactly what you want: a logger named for the class, and being able to say self.log.debug() self.log.error() inside the class. In summary, just drop the True from the get_logger() call. > > > > The second patch ports ipa-replica-prepare to the framework. (I chose > ipa-replica-prepare because there was a bug filed against its error > handling, something the framework should take care of.) > As far as Git can tell, it's a complete rewrite, so it might be hard to > do a review. I have several smaller patches that make it easier to see > what gets moved where. Please say I'm wrong, but as I understand, broken > commits aren't allowed in the FreeIPA repo so I can only present the > squashed patch for review. > To get the smaller commits, do `git fetch > git://github.com/encukou/freeipa.git > replica-prepare:pviktori-replica-prepare`. > > Part of: https://fedorahosted.org/freeipa/ticket/2652 > Fixes: https://fedorahosted.org/freeipa/ticket/3285 > -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pviktori at redhat.com Fri Jan 4 09:54:17 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 04 Jan 2013 10:54:17 +0100 Subject: [Freeipa-devel] Client JSON-RPC design doc Message-ID: <50E6A6C9.4030202@redhat.com> Hello, I've put a JSON-RPC design doc at http://freeipa.org/page/V3/JSON-RPC. It's also below for easier quoting. I found that constants.py already has an env variable called "rpc_json_uri", and xmlrpc_uri has this comment: # FIXME: let's renamed xmlrpc_uri to rpc_xml_uri AFAICS rpc_json_uri is unused, undocumented, and the installer always leaves it at the default ("http://localhost:8888/ipa/json"). I don't think it's feasible to rename xmlrpc_uri any more, so I used "jsonrpc_uri" for consistency, and I plan to remove rpc_json_uri in the JSON patch. ----- __NOTOC__ [https://fedorahosted.org/freeipa/ticket/3299 #3299] Switch the client to JSON-RPC = Overview = IPA currently uses XML-RPC to communicate with the server. The Web UI uses JSON-RPC. Using JSON-RPC also in the IPA client will allow us to include additional information in errors, such as instructions or log messages.
Also, switching the protocol will allow us to assume the latest client version when the client doesn't send a version option (see [http://www.redhat.com/archives/freeipa-devel/2012-December/msg00164.html discussion on freeipa-devel]). This RFE is only for the ipa client. Other (especially non-Python) tools such as ipa-join will continue to use XML-RPC. The features that JSON-RPC will allow aren't essential for these tools. The features will simply not be available over XML-RPC. = Use Cases = N/A = Design= Two options will be added to default.conf: * rpc_protocol * jsonrpc_uri If jsonrpc_uri is not given, it will be derived from xmlrpc_uri by replacing "/xml" with "/json". If rpc_protocol is set to "jsonrpc" (the default), the client will use JSON-RPC to talk to the server. If it's set to "xmlrpc", the client will use XML-RPC. = Implementation = No additional requirements or changes discovered during the implementation phase. = Feature Managment: CLI = Developers can use the -e rpc_protocol=xmlrpc option to ipa to use the old protocol. = Major configuration options and enablement = Two new env variables, see Design. = Replication = N/A = Updates and Upgrades = N/A = Dependencies = N/A = External Impact = N/A From pviktori at redhat.com Fri Jan 4 11:26:56 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 04 Jan 2013 12:26:56 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Raise ValidationError for incorrect subtree option In-Reply-To: <50E58DCB.1020307@redhat.com> References: <50E571DF.5090101@redhat.com> <50E57CCA.1050003@redhat.com> <50E58DCB.1020307@redhat.com> Message-ID: <50E6BC80.6050308@redhat.com> On 01/03/2013 02:55 PM, Ana Krivokapic wrote: > On 01/03/2013 01:42 PM, Petr Viktorin wrote: >> On 01/03/2013 12:56 PM, Ana Krivokapic wrote: >>> Using incorrect input for --subtree option of ipa permission-add command >>> now raises a ValidationError. >>> >>> Previously, a ValueError was raised, which resulted in a user unfriendly >>> error message: >>> ipa: ERROR: an internal error has occurred >>> >>> I have added a try-except block to catch the ValueError and raise an >>> appropriate ValidationError. >>> >>> https://fedorahosted.org/freeipa/ticket/3233 >>> >> ... >> >>> --- a/ipalib/plugins/aci.py >>> +++ b/ipalib/plugins/aci.py >>> @@ -341,7 +341,10 @@ def _aci_to_kw(ldap, a, test=False, >>> pkey_only=False): >>> else: >>> # See if the target is a group. If so we set the >>> # targetgroup attr, otherwise we consider it a subtree >>> - targetdn = DN(target.replace('ldap:///','')) >>> + try: >>> + targetdn = DN(target.replace('ldap:///','')) >>> + except ValueError as e: >>> + raise errors.ValidationError(name='subtree', >>> error=_(e.message)) >> >> `_(e.message)` is useless. The message can only be translated if the >> string is grabbed by gettext, which uses static analysis. In other >> words, the argument to _() must be a literal string. >> >> You can do either `_("invalid DN")`, or if the error message is >> important, include it like this (e.message still won't be translated, >> but at least the users will get something in their language): >> _("invalid DN (%s)") % e.message >> >> > Fixed. > > Thanks, Petr. > ACK -- Petr? 
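For reference, the handling settled on in the review above would look roughly like this in ipalib/plugins/aci.py (a sketch assembled from the quoted diff and the suggested _() usage, not necessarily the literal committed hunk; it assumes the module's existing imports of DN, errors and the _() gettext helper):

    try:
        targetdn = DN(target.replace('ldap:///', ''))
    except ValueError as e:
        # e.message itself stays untranslated, but the surrounding string is a
        # literal, so gettext can pick it up for translation
        raise errors.ValidationError(name='subtree',
                                     error=_("invalid DN (%s)") % e.message)

With that in place a malformed --subtree value comes back as a validation error instead of the generic "an internal error has occurred" message.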
From jcholast at redhat.com Fri Jan 4 12:43:05 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 04 Jan 2013 13:43:05 +0100 Subject: [Freeipa-devel] Client JSON-RPC design doc In-Reply-To: <50E6A6C9.4030202@redhat.com> References: <50E6A6C9.4030202@redhat.com> Message-ID: <50E6CE59.6000208@redhat.com> Hi, On 4.1.2013 10:54, Petr Viktorin wrote: > Hello, > I've put a JSON-RPC design doc at http://freeipa.org/page/V3/JSON-RPC. > It's also below for easier quoting. > > > I found that constants.py already has an env variable called > "rpc_json_uri", and xmlrpc_uri has this comment: > # FIXME: let's renamed xmlrpc_uri to rpc_xml_uri > AFAICS rpc_json_uri is unused, undocumented, and the installer always > leaves it at the default ("http://localhost:8888/ipa/json"). > I don't think it's feasible to rename xmlrpc_uri any more, so I used > "jsonrpc_uri" for consistency, and I plan to remove rpc_json_uri in the > JSON patch. > > > > > > ----- > > __NOTOC__ > > [https://fedorahosted.org/freeipa/ticket/3299 #3299] Switch the client > to JSON-RPC > > = Overview = > > IPA currently uses XML-RPC to communicate with the server. The Web UI > uses JSON-RPC. > > Using JSON-RPC also in the IPA client will allow us to include additional > information in errors, such as instructions or log messages. > > Also, switching the protocol will allow us to assume the latest client > version when the client doesn't send a version option (see > [http://www.redhat.com/archives/freeipa-devel/2012-December/msg00164.html discussion > on freeipa-devel]). > > This RFE is only for the ipa client. Other (especially non-Python) > tools such as ipa-join will continue to use XML-RPC. > The features that JSON-RPC will allow aren't essential for these tools. The > features will simply not be available over XML-RPC. > > = Use Cases = > > N/A > > = Design= > > Two options will be added to default.conf: > > * rpc_protocol > * jsonrpc_uri > > If jsonrpc_uri is not given, it will be derived from xmlrpc_uri by > replacing > "/xml" with "/json". > > If rpc_protocol is set to "jsonrpc" (the default), the client will use > JSON-RPC > to talk to the server. If it's set to "xmlrpc", the client will use > XML-RPC. What is the benefit of supporting both protocols? > > = Implementation = > > No additional requirements or changes discovered during the > implementation phase. > > = Feature Managment: CLI = > > Developers can use the -e rpc_protocol=xmlrpc option to > ipa to use the old protocol. > > = Major configuration options and enablement = > > Two new env variables, see Design. > > = Replication = > > N/A > > = Updates and Upgrades = > > N/A > > = Dependencies = > > N/A > > = External Impact = > > N/A > Honza -- Jan Cholasta From pviktori at redhat.com Fri Jan 4 13:43:58 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 04 Jan 2013 14:43:58 +0100 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework In-Reply-To: <50E58E01.3060402@redhat.com> References: <50E580E2.8090502@redhat.com> <50E58E01.3060402@redhat.com> Message-ID: <50E6DC9E.5050702@redhat.com> On 01/03/2013 02:56 PM, John Dennis wrote: > On 01/03/2013 08:00 AM, Petr Viktorin wrote: >> Hello, >> >> The first patch implements logging-related changes to the admintool >> framework and ipa-ldap-updater (the only thing ported to it so far). 
>> The design document is at http://freeipa.org/page/V3/Logging_and_output >> >> John, I decided to go ahead and put an explicit "logger" attribute on >> the tool class rather than adding debug, info, warn. etc methods >> dynamically using log_mgr.get_logger. I believe it's the cleanest >> solution. >> We had a discussion about this in this thread: >> https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; I >> didn't get a reaction to my conclusion so I'm letting you know in case >> you have more to say. > > I'm fine with not directly adding the debug, info, warn etc. methods, > that practice was historical dating back to the days of Jason. However I > do think it's useful to use a named logger and not the global > root_logger. I'd prefer we got away from using the root_logger, it's > continued existence is historical as well and the idea was over time we > would slowly eliminate it's usage. FWIW the log_mgr.get_logger() is > still useful for what you want to do. > > def get_logger(self, who, bind_logger_names=False) > > If you don't set bind_logger_names to True (and pass the class instance > as who) you won't get the offensive debug, info, etc. methods added to > the class instance. But it still does all the other bookeeping. > > The 'who' in this instance could be either the name of the admin tool or > the class instance. > > Also I'd prefer using the attribute 'log' rather than 'logger'. That > would make it consistent with code which does already use get_logger() > passing a class instance because it's adds a 'log' attribute which is > the logger. Also 'log' is twice as succinct than 'logger' (shorter line > lengths). > > Thus if you do: > > log_mgr.get_logger(self) > > I think you'll get exactly what you want. A logger named for the class > and being able to say > > self.log.debug() > self.log.error() > > inside the class. > > In summary, just drop the True from the get_logger() call. > Thanks! Yes, this works better. Updated patches attached. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0117-02-Better-logging-for-AdminTool-and-ipa-ldap-updater.patch Type: text/x-patch Size: 15663 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0118-02-Port-ipa-replica-prepare-to-the-admintool-framework.patch Type: text/x-patch Size: 45428 bytes Desc: not available URL: From pviktori at redhat.com Fri Jan 4 14:13:09 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 04 Jan 2013 15:13:09 +0100 Subject: [Freeipa-devel] Client JSON-RPC design doc In-Reply-To: <50E6CE59.6000208@redhat.com> References: <50E6A6C9.4030202@redhat.com> <50E6CE59.6000208@redhat.com> Message-ID: <50E6E375.4010703@redhat.com> On 01/04/2013 01:43 PM, Jan Cholasta wrote: > Hi, > > On 4.1.2013 10:54, Petr Viktorin wrote: ... >> >> If rpc_protocol is set to "jsonrpc" (the default), the client will use >> JSON-RPC >> to talk to the server. If it's set to "xmlrpc", the client will use >> XML-RPC. > > What is the benefit of supporting both protocols? > Mainly developer testing. Since we need to maintain backwards compatibility with XML-RPC clients, ability to issue XML-RPC from the client, or run the tests with XML-RPC, will be useful. -- Petr? 
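To make the jsonrpc_uri fallback from the design document concrete: when jsonrpc_uri is not configured, it is derived from xmlrpc_uri by swapping the trailing "/xml" for "/json". A minimal sketch of that rule (the helper name here is invented for illustration and is not necessarily how the patch spells it):

    def derive_jsonrpc_uri(xmlrpc_uri):
        # e.g. 'https://ipa.example.com/ipa/xml' -> 'https://ipa.example.com/ipa/json'
        if xmlrpc_uri.endswith('/xml'):
            return xmlrpc_uri[:-len('/xml')] + '/json'
        # anything else is left alone; the design doc only covers the '/xml' case
        return xmlrpc_uri

Combined with the rpc_protocol option discussed above, a developer can keep both URIs in default.conf and switch between the two transports without touching the server.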
From mkosek at redhat.com Fri Jan 4 14:23:39 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 04 Jan 2013 15:23:39 +0100 Subject: [Freeipa-devel] Client JSON-RPC design doc In-Reply-To: <50E6E375.4010703@redhat.com> References: <50E6A6C9.4030202@redhat.com> <50E6CE59.6000208@redhat.com> <50E6E375.4010703@redhat.com> Message-ID: <50E6E5EB.6070405@redhat.com> On 01/04/2013 03:13 PM, Petr Viktorin wrote: > On 01/04/2013 01:43 PM, Jan Cholasta wrote: >> Hi, >> >> On 4.1.2013 10:54, Petr Viktorin wrote: > ... >>> >>> If rpc_protocol is set to "jsonrpc" (the default), the client will use >>> JSON-RPC >>> to talk to the server. If it's set to "xmlrpc", the client will use >>> XML-RPC. >> >> What is the benefit of supporting both protocols? >> > > Mainly developer testing. Since we need to maintain backwards compatibility > with XML-RPC clients, ability to issue XML-RPC from the client, or run the > tests with XML-RPC, will be useful. > +1 for this ability. Options like this one may prove very useful as a quick workaround in case something breaks in the new protocol in a production environment... Martin From pviktori at redhat.com Fri Jan 4 18:20:15 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 04 Jan 2013 19:20:15 +0100 Subject: [Freeipa-devel] [PATCHES] 0104-0106 Provide means of displaying warning and informational messages on clients In-Reply-To: <50CADD74.1090802@redhat.com> References: <50C9A709.8070806@redhat.com> <50C9F7A4.80909@redhat.com> <50CA0BDB.4000807@redhat.com> <50CADD74.1090802@redhat.com> Message-ID: <50E71D5F.4000304@redhat.com> On 12/14/2012 09:04 AM, Jan Cholasta wrote: > On 13.12.2012 18:09, Petr Viktorin wrote: >> On 12/13/2012 04:43 PM, Martin Kosek wrote: >>> On 12/13/2012 10:59 AM, Petr Viktorin wrote: >>>> It's time to give this to another set of eyes :) >>>> >>>> Design document: http://freeipa.org/page/V3/Messages >>>> Ticket: https://fedorahosted.org/freeipa/ticket/2732 >>>> >>>> More info is in commit messages. >>>> >>>> >>>> Because of https://fedorahosted.org/freeipa/ticket/3294, I needed to >>>> change the >>>> design document: when the client doesn't send the API version, it is >>>> assumed >>>> it's at a version before capabilities were introduced (i.e. 2.47). >>>> The client still gets a warning if the version is missing. Except for >>>> those >>>> commands where IPA didn't send a version -- ping, cert-show, etc. -- >>>> the >>>> warning wouldn't pass validation on old clients. (I'm assuming that >>>> our client >>>> is so far the only one that validates so strictly.) >>> >>> I did a basic test of this patch and also quickly read through the >>> patches and >>> besides nitpicks (like unused inspect module in >>> tests/test_ipalib/test_messages.py in patch 0105) I did not find any >>> obvious >>> errors in the Python code. >> >> Noted, will fix in future versions of the patch. >> >>> >>> However, this patch breaks WebUI badly, I did not even get to a log in >>> screen. >>> Cooperation with Petr Vobornik will be needed. 
In my case, I got blank >>> screen >>> and Javascript error: >>> >>> TypeError: IPA.messages.dialogs is undefined >>> https://vm-037.idm.lab.bos.redhat.com/ipa/ui/ipa.js >>> Line 1460 >>> >>> I assume this is related to the Internal Error that was returned in >>> the JSON call >>> >>> { >>> "error": null, >>> "id": null, >>> "principal": "admin at IDM.LAB.BOS.REDHAT.COM", >>> "result": { >>> "count": 5, >>> "results": [ >>> { >>> "error": "an internal error has occurred", >>> "error_code": 903, >>> "error_name": "InternalError" >>> }, >>> { >>> ... >>> >>> This can be reproduced with: >>> >>> # curl -v -H "Content-Type:application/json" -H >>> "referer:https://`hostname`/ipa" -H "Accept:applicaton/json" >>> --negotiate -u : >>> --cacert /etc/ipa/ca.crt -d >>> '{"method":"i18n_messages","params":[[],{}],"id":0}' -X POST >>> https://`hostname`/ipa/json >> >> Good catch! The i18n_messages plugin already defines a "messages" >> output. When I renamed this from "warnings" to "messages" I forgot to >> check for clashes. >> Since i18n_messages is an internal command only used by the Web UI, we >> can rename its output to "texts" without breaking compatibility. >> >> I'm attaching a preliminary fix (for both backend and UI), but hopefully >> it won't be necessary, see below. >> >>> I am also not sure I like the requirement of a specific version option >>> to be >>> always passed. I would prefer that missing version option would mean >>> "I use the >>> most recent version of API" instead - it would make the custom >>> JSONRPC/XMLRPC >>> calls easier to use. >>> >>> But since the version option was not being sent for some commands, we >>> may not >>> have a choice anyway if we do not want to break old clients in case we >>> add some >>> capabilities to these commands. >>> >> >> I see three other options, all worse: >> - Do not use capabilities for the affected commands, meaning no new >> functionality can be added to them (and by extension, no new >> functionality common to all commands can be added). >> - Treat a missing version as the current version >> - Break backwards compatibility >> >> And one possibly better (thanks to Petr? and Martin for opening my eyes >> off-list!): >> - Deprecate XML-RPC. All XML-RPC requests would be pinned to current >> version (2.47). Capabilities/messages would only apply to JSON-RPC. >> >> This would also allow us to solve the above name-clashing problem >> elegantly. Here is a reminder of what a JSON response looks like: >> >> { >> "error": null, >> "id": 0, >> "principal": "admin at IDM.LAB.BOS.REDHAT.COM", >> "result": { >> "summary": "IPA server version 3.1.0GIT2e4bd02. API version >> 2.47" >> }, >> "version": "3.1.0GIT2e4bd02" >> } >> >> A XML-RPC response only contains the "result" part of that. >> So with JSON, we can put the messages in the top level, which is much >> better design. Custom info in the "top level" seems to be a violation of the JSON-RPC spec. I'd rather not do more of those, so I'm withdrawing this idea. >> >> XML-RPC sucks in other ways too. We already have a workaround for its >> inability to attach extra info to errors (commit >> 88262a75ffe7a25640333dcc4da2100830cae821, Add instructions support to >> PublicError). >> >> I've opened a RFC here: https://fedorahosted.org/freeipa/ticket/3299. >> > > +1, XML-RPC sucks. This should have been done a long time ago. > > Honza > Here are new patches. XML-RPC requests with missing version are assumed to be old (the version before capabilities are introduced, 2.47). 
This takes care of backcompat with clients with bug 3294. JSON-RPC requests with missing version are assumed to be testing calls (e.g. curl); they get the behavior of the latest version but also a warning. I've also added this info to the design doc. It turns out that these patches don't depend on whether our client uses XML-RPC or JSON-RPC. If/when it supports both, I'll be able to add some extra unit tests. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0105-02-Add-ipalib.messages.patch Type: text/x-patch Size: 17068 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0106-02-Add-client-capabilities-enable-messages.patch Type: text/x-patch Size: 18949 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0115-02-Rename-the-messages-Output-of-the-i18n_messages-comm.patch Type: text/x-patch Size: 3691 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0104-Add-the-version-option-to-all-Commands.patch Type: text/x-patch Size: 68089 bytes Desc: not available URL: From rcritten at redhat.com Sun Jan 6 20:00:50 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Sun, 06 Jan 2013 15:00:50 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues Message-ID: <50E9D7F2.1060800@redhat.com> Each of the CA subsystem certificates would trigger a restart during renewal. This generally caused one or more of the renewals to fail due to the CA being down. We also need to fix the trust on the audit cert post-installation. It was possible that both certmonger and certutil could have the NSS database open read/write, which is almost guaranteed to result in corruption. So instead I picked the audit cert as the "lead" cert. It will handle restarting the CA. It will also wait until all the other CA subsystem certs are in a MONITORING state before trying to update the trust. This should prevent the multiple read/write problem. The CA wasn't actually working post-renewal anyway because the user it uses to bind to DS wasn't being updated properly. certmap.conf is configured to compare the cert provided by the client with that stored in LDAP and since we weren't updating it, dogtag couldn't properly bind to its own DS instance. We also update an ou=People entry for the RA agent cert so I pulled that updating code into cainstance.py for easier sharing. Finally, the wrong service name was being used for tomcat to do the restart. This is fixed. I've tested this with 3.1/dogtag 10 but it should work with dogtag 9 as well (which uses a different service naming convention). This is how I test: - ipa-server-install ... - getcert list | grep expires - examine the first four certs, pick an expiration date ~28 days prior - date MMDDhhmmCCYY - getcert list|grep status Wait until all but one is in MONITORING. That last one should be the audit cert. I usually at this point switch to watching a tail of /var/log/messages until the CA restarts. Confirm that things are working with: - ipa cert-show 1 To really be sure, use the ipa cert-request command to issue a new cert. Ideally you'll verify that things are working, then trigger another renewal event. Do the getcert list|grep expires to renew the HTTP/DS server certs, then do this again for the CA subsystem certs. It should come up again.
rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1079-renewal.patch Type: text/x-patch Size: 13929 bytes Desc: not available URL: From pviktori at redhat.com Mon Jan 7 11:25:58 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 07 Jan 2013 12:25:58 +0100 Subject: [Freeipa-devel] [Freeipa-users] ipa admin tool error "ipa: ERROR: Client is not configured. Run ipa-client-install." In-Reply-To: References: Message-ID: <50EAB0C6.1030508@redhat.com> On 01/07/2013 11:00 AM, Natxo Asenjo wrote: > hi, > > on a workstation *not* joined to the IPA domain but with the the ipa > admin tools installed I get this error when trying to modify dns > settings and I have a kerberos ticket of an admin user: > > $ kinit user.admin at UNIX.DOMAIN.TLD > Password for user.admin at UNIX.DOMAIN.TLD > $ klist > Ticket cache: FILE:/tmp/krb5cc_500 > Default principal: user.admin at UNIX.DOMAIN.TLD > > Valid starting Expires Service principal > 01/07/13 10:47:09 01/08/13 10:47:06 krbtgt/UNIX.DOMAIN.TLD at UNIX.DOMAIN.TLD > renew until 01/14/13 10:47:06 > > $ ipa dnsrecord-mod unix.domain.tld ipaclient01 --ttl=300 > ipa: ERROR: Client is not configured. Run ipa-client-install. > > Is this 'by design'? This limitation on the cli tool does not apply to > the web interface, by the way, that is, I can login the web interface > without being joined to the domain and modify all kind of stuff there > ;-). > > To be more specific: this is not a problem, I can run this command on > a joined host, but I was just curious. > I think the check we're making here (at least one directive has to be read from a config file) is rather limiting. I'd expect the following to work: ipa -e xmlrpc_uri=https://ipa.example.com/ipa/xml dnsrecord-mod example.com ipa --ttl=300 -- Petr? From pvoborni at redhat.com Mon Jan 7 11:53:10 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 07 Jan 2013 12:53:10 +0100 Subject: [Freeipa-devel] [PATCHES] 228-237 Confirmation of dialogs by keyboard, better password dialogs In-Reply-To: <50B4E163.5080707@redhat.com> References: <50AA65E7.4070301@redhat.com> <50AA6926.6000804@redhat.com> <50AB7D76.6010809@redhat.com> <50B4E163.5080707@redhat.com> Message-ID: <50EAB726.7080902@redhat.com> On 11/27/2012 04:50 PM, Endi Sukma Dewata wrote: > On 11/20/2012 6:54 AM, Petr Vobornik wrote: >> New design page: >> http://www.freeipa.org/page/V3/WebUI_keyboard_confirmation >> >> Link to design page was added to tickets #3200 and #2910. >> >> In the ticket list of previous mail is a mistake. This effort is related >> to tickets #3200, #2910 and #2884. Probably commit messages should be >> amended. > > ACK. Pushed to master. > > I had to do some reading on mixin, but the patches look good. > > Just some comments and related issues: > > 1. In patch #230 the inheritance is inverted. Previously the > confirmation dialog inherits from message dialog. Now it's the other way > around. Usually the higher the class in the hierarchy it will become > more abstract and simpler. In this patch the confirmation dialog has an > OK and a Cancel buttons, but then the message dialog simplifies it by > removing the Cancel button. > > The code itself works just fine, but if it were me I'd try to keep the > original hierarchy. On the other hand, OO discussion often times comes > down to preference, so this is not really an issue. When we look at it I added some kind of confirmation to every dialog. So maybe the mixin can be incorporated to the dialog base class itself. 
On the other hand, if we considered the dialog class as a part of separated framework (which is long-term target), the implementation might become too specific. I will leave it as is for the time being. > > 2. It might be useful to show which button is the default button. On > some OS's the default button is shown with bolder text or thicker border. > > How about highlighting the default button as if it were in focus (i.e. > shown with black background)? If you change focus to another button then > the default button will go back to normal (grey), but the point is > there's always a highlighted button which will be activated if you hit > Enter anywhere in the dialog. https://fedorahosted.org/freeipa/ticket/3325 > > 3. The drop-down list cannot be operated with a keyboard. This is a > separate issue. > > 4. The focus area (surrounded by dotted lines) of the down arrow in the > drop-down list is incorrect. Try clicking inside a drop-down list, then > hit Tab. Right now the focus area appears on the right side of the arrow > and very tiny. Ideally the focus area should be a box around the arrow. > This is also a separate issue. > #3, #4: https://fedorahosted.org/freeipa/ticket/3324 -- Petr Vobornik From pviktori at redhat.com Mon Jan 7 12:35:01 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 07 Jan 2013 13:35:01 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50E9D7F2.1060800@redhat.com> References: <50E9D7F2.1060800@redhat.com> Message-ID: <50EAC0F5.8050607@redhat.com> On 01/06/2013 09:00 PM, Rob Crittenden wrote: > Each of the CA subsystem certificates would trigger a restart during > renewal. This generally caused one or more of the renewals to fail due > to the CA being down. > > We also need to fix the trust on the audit cert post-installation. It > was possible that both certmonger and certutil could have the NSS > database open read/write which is almost guaranteed to result in > corruption. > > So intead I picked the audit cert as the "lead" cert. It will handle > restarting the CA. > > It will also wait until all the other CA subsystem certs are in a > MONITORING state before trying to update the trust. This should prevent > the multiple read/write problem. > > The CA wasn't actually working post-renewal anyway because the user it > uses to bind to DS wasn't being updated properly. certmap.conf is > confiugred to compare the cert provided by the client with that stored > in LDAP and since we weren't updating it, dogtag couldn't properly bind > to its own DS instance. > > We also update a ou=People entry for the RA agent cert so I pulled that > updating code into cainstance.py for easier sharing. > > Finally, the wrong service name was being used for tomcat to do the > restart. This is fixed. I've tested this with 3.1/dogtag 10 but it > should work with dogtag 9 as well (which uses a different service naming > convention). > > This is how I test: > > - ipa-server-install ... > - getcert list | grep expires > - examine the first four certs, pick an expiration date ~28 days prior > - date MMDDhhmmCCYY > - getcert list|grep status > > Wait until all but one is in MONITORING. That last one should be the > audit cert. > > I usually at this point switch to watching a tail of /var/log/messages > until the CA restarts. > > Confirm that things are working with: > > - ipa cert-show 1 > > To really be sure, use the ipa cert-request command to issue a new cert. 
> > Ideally you'll verify that things are working, then trigger another > renewal event. Do the getcert list|grep expires to renew the HTTP/DS > server certs, then do this again for the CA subsystem certs. > > It should come up again. > > rob > Works for me, but I have some questions (this is an area I know little about). Can we be 100% sure these certs are always renewed together? Is certmonger the only possible mechanism to update them? Can we be sure certmonger always does the updates in parallel? If it managed to update the audit cert before starting on the others, we'd get no CA restart for the others. And anyway, why does certmonger do renewals in parallel? It seems that if it did one at a time, always waiting until the post-renew script is done, this patch wouldn't be necessary. -- Petr? From rcritten at redhat.com Mon Jan 7 14:09:17 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 07 Jan 2013 09:09:17 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EAC0F5.8050607@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> Message-ID: <50EAD70D.8010000@redhat.com> Petr Viktorin wrote: > On 01/06/2013 09:00 PM, Rob Crittenden wrote: >> Each of the CA subsystem certificates would trigger a restart during >> renewal. This generally caused one or more of the renewals to fail due >> to the CA being down. >> >> We also need to fix the trust on the audit cert post-installation. It >> was possible that both certmonger and certutil could have the NSS >> database open read/write which is almost guaranteed to result in >> corruption. >> >> So intead I picked the audit cert as the "lead" cert. It will handle >> restarting the CA. >> >> It will also wait until all the other CA subsystem certs are in a >> MONITORING state before trying to update the trust. This should prevent >> the multiple read/write problem. >> >> The CA wasn't actually working post-renewal anyway because the user it >> uses to bind to DS wasn't being updated properly. certmap.conf is >> confiugred to compare the cert provided by the client with that stored >> in LDAP and since we weren't updating it, dogtag couldn't properly bind >> to its own DS instance. >> >> We also update a ou=People entry for the RA agent cert so I pulled that >> updating code into cainstance.py for easier sharing. >> >> Finally, the wrong service name was being used for tomcat to do the >> restart. This is fixed. I've tested this with 3.1/dogtag 10 but it >> should work with dogtag 9 as well (which uses a different service naming >> convention). >> >> This is how I test: >> >> - ipa-server-install ... >> - getcert list | grep expires >> - examine the first four certs, pick an expiration date ~28 days prior >> - date MMDDhhmmCCYY >> - getcert list|grep status >> >> Wait until all but one is in MONITORING. That last one should be the >> audit cert. >> >> I usually at this point switch to watching a tail of /var/log/messages >> until the CA restarts. >> >> Confirm that things are working with: >> >> - ipa cert-show 1 >> >> To really be sure, use the ipa cert-request command to issue a new cert. >> >> Ideally you'll verify that things are working, then trigger another >> renewal event. Do the getcert list|grep expires to renew the HTTP/DS >> server certs, then do this again for the CA subsystem certs. >> >> It should come up again. >> >> rob >> > > Works for me, but I have some questions (this is an area I know little > about). 
> > Can we be 100% sure these certs are always renewed together? Is > certmonger the only possible mechanism to update them? You raise a good point. If though some mechanism someone replaces one of these certs it will cause the script to fail. Some notification of this failure will be logged though, and of course, the certs won't be renewed. One could conceivably manually renew one of these certificates. It is probably a very remote possibility but it is non-zero. > Can we be sure certmonger always does the updates in parallel? If it > managed to update the audit cert before starting on the others, we'd get > no CA restart for the others. These all get issued at the same time so should expire at the same time as well (see problem above). The script will hang around for 10 minutes waiting for the renewal to complete, then give up. The state the system would be in is this: - audit cert trust not updated, so next restart of CA will fail - CA is not restarted so will not use updated certificates > And anyway, why does certmonger do renewals in parallel? It seems that > if it did one at a time, always waiting until the post-renew script is > done, this patch wouldn't be necessary. > From what Nalin told me certmonger has some coarse locking such that renewals in a the same NSS database are serialized. As you point out, it would be nice to extend this locking to the post renewal scripts. We can ask Nalin about it. That would fix the potential corruption issue. It is still much nicer to not have to restart dogtag 4 times. rob From pviktori at redhat.com Mon Jan 7 15:14:35 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 07 Jan 2013 16:14:35 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EAD70D.8010000@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> Message-ID: <50EAE65B.60409@redhat.com> On 01/07/2013 03:09 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/06/2013 09:00 PM, Rob Crittenden wrote: >>> Each of the CA subsystem certificates would trigger a restart during >>> renewal. This generally caused one or more of the renewals to fail due >>> to the CA being down. >>> >>> We also need to fix the trust on the audit cert post-installation. It >>> was possible that both certmonger and certutil could have the NSS >>> database open read/write which is almost guaranteed to result in >>> corruption. >>> >>> So intead I picked the audit cert as the "lead" cert. It will handle >>> restarting the CA. >>> >>> It will also wait until all the other CA subsystem certs are in a >>> MONITORING state before trying to update the trust. This should prevent >>> the multiple read/write problem. >>> >>> The CA wasn't actually working post-renewal anyway because the user it >>> uses to bind to DS wasn't being updated properly. certmap.conf is >>> confiugred to compare the cert provided by the client with that stored >>> in LDAP and since we weren't updating it, dogtag couldn't properly bind >>> to its own DS instance. >>> >>> We also update a ou=People entry for the RA agent cert so I pulled that >>> updating code into cainstance.py for easier sharing. >>> >>> Finally, the wrong service name was being used for tomcat to do the >>> restart. This is fixed. I've tested this with 3.1/dogtag 10 but it >>> should work with dogtag 9 as well (which uses a different service naming >>> convention). >>> >>> This is how I test: >>> >>> - ipa-server-install ... 
>>> - getcert list | grep expires >>> - examine the first four certs, pick an expiration date ~28 days prior >>> - date MMDDhhmmCCYY >>> - getcert list|grep status >>> >>> Wait until all but one is in MONITORING. That last one should be the >>> audit cert. >>> >>> I usually at this point switch to watching a tail of /var/log/messages >>> until the CA restarts. >>> >>> Confirm that things are working with: >>> >>> - ipa cert-show 1 >>> >>> To really be sure, use the ipa cert-request command to issue a new cert. >>> >>> Ideally you'll verify that things are working, then trigger another >>> renewal event. Do the getcert list|grep expires to renew the HTTP/DS >>> server certs, then do this again for the CA subsystem certs. >>> >>> It should come up again. >>> >>> rob >>> >> >> Works for me, but I have some questions (this is an area I know little >> about). >> >> Can we be 100% sure these certs are always renewed together? Is >> certmonger the only possible mechanism to update them? > > You raise a good point. If though some mechanism someone replaces one of > these certs it will cause the script to fail. Some notification of this > failure will be logged though, and of course, the certs won't be renewed. > > One could conceivably manually renew one of these certificates. It is > probably a very remote possibility but it is non-zero. > >> Can we be sure certmonger always does the updates in parallel? If it >> managed to update the audit cert before starting on the others, we'd get >> no CA restart for the others. > > These all get issued at the same time so should expire at the same time > as well (see problem above). The script will hang around for 10 minutes > waiting for the renewal to complete, then give up. The certs might take different amounts of time to update, right? Eventually, the expirations could go out of sync enough for it to matter. AFAICS, without proper locking we still get a race condition when the other certs start being renewed some time (much less than 10 min) after the audit one: (time axis goes down) audit cert other cert ---------- ---------- certmonger does renew . post-renew script starts . check state of other certs: OK . . certmonger starts renew certutil modifies NSS DB + certmonger modifies NSS DB == boom! > The state the system would be in is this: > > - audit cert trust not updated, so next restart of CA will fail > - CA is not restarted so will not use updated certificates > >> And anyway, why does certmonger do renewals in parallel? It seems that >> if it did one at a time, always waiting until the post-renew script is >> done, this patch wouldn't be necessary. >> > > From what Nalin told me certmonger has some coarse locking such that > renewals in a the same NSS database are serialized. As you point out, it > would be nice to extend this locking to the post renewal scripts. We can > ask Nalin about it. That would fix the potential corruption issue. It is > still much nicer to not have to restart dogtag 4 times. > Well, three extra restarts every few years seems like a small price to pay for robustness. -- Petr? 
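The check being debated here boils down to the renewal helper polling certmonger until the other CA subsystem tracking requests report MONITORING again, giving up after roughly ten minutes. A rough sketch of that shape (illustrative only; the function is invented for this illustration and is not the code shipped in the patch):

    import subprocess
    import time

    def wait_for_monitoring(expected_count, timeout=600, interval=10):
        # Return True once expected_count tracking requests show MONITORING.
        deadline = time.time() + timeout
        while time.time() < deadline:
            # `getcert list` prints one "status: <STATE>" line per tracking request
            output = subprocess.check_output(['getcert', 'list'])
            if output.count('status: MONITORING') >= expected_count:
                return True
            time.sleep(interval)
        return False

As the exchange above points out, a count like this can also be satisfied before the other requests have even left MONITORING, which is exactly the window where certutil and certmonger could both have the NSS database open.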
From rcritten at redhat.com Mon Jan 7 16:42:38 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 07 Jan 2013 11:42:38 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EAE65B.60409@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> Message-ID: <50EAFAFE.1040200@redhat.com> Petr Viktorin wrote: > On 01/07/2013 03:09 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/06/2013 09:00 PM, Rob Crittenden wrote: >>>> Each of the CA subsystem certificates would trigger a restart during >>>> renewal. This generally caused one or more of the renewals to fail due >>>> to the CA being down. >>>> >>>> We also need to fix the trust on the audit cert post-installation. It >>>> was possible that both certmonger and certutil could have the NSS >>>> database open read/write which is almost guaranteed to result in >>>> corruption. >>>> >>>> So intead I picked the audit cert as the "lead" cert. It will handle >>>> restarting the CA. >>>> >>>> It will also wait until all the other CA subsystem certs are in a >>>> MONITORING state before trying to update the trust. This should prevent >>>> the multiple read/write problem. >>>> >>>> The CA wasn't actually working post-renewal anyway because the user it >>>> uses to bind to DS wasn't being updated properly. certmap.conf is >>>> confiugred to compare the cert provided by the client with that stored >>>> in LDAP and since we weren't updating it, dogtag couldn't properly bind >>>> to its own DS instance. >>>> >>>> We also update a ou=People entry for the RA agent cert so I pulled that >>>> updating code into cainstance.py for easier sharing. >>>> >>>> Finally, the wrong service name was being used for tomcat to do the >>>> restart. This is fixed. I've tested this with 3.1/dogtag 10 but it >>>> should work with dogtag 9 as well (which uses a different service >>>> naming >>>> convention). >>>> >>>> This is how I test: >>>> >>>> - ipa-server-install ... >>>> - getcert list | grep expires >>>> - examine the first four certs, pick an expiration date ~28 days prior >>>> - date MMDDhhmmCCYY >>>> - getcert list|grep status >>>> >>>> Wait until all but one is in MONITORING. That last one should be the >>>> audit cert. >>>> >>>> I usually at this point switch to watching a tail of /var/log/messages >>>> until the CA restarts. >>>> >>>> Confirm that things are working with: >>>> >>>> - ipa cert-show 1 >>>> >>>> To really be sure, use the ipa cert-request command to issue a new >>>> cert. >>>> >>>> Ideally you'll verify that things are working, then trigger another >>>> renewal event. Do the getcert list|grep expires to renew the HTTP/DS >>>> server certs, then do this again for the CA subsystem certs. >>>> >>>> It should come up again. >>>> >>>> rob >>>> >>> >>> Works for me, but I have some questions (this is an area I know little >>> about). >>> >>> Can we be 100% sure these certs are always renewed together? Is >>> certmonger the only possible mechanism to update them? >> >> You raise a good point. If though some mechanism someone replaces one of >> these certs it will cause the script to fail. Some notification of this >> failure will be logged though, and of course, the certs won't be renewed. >> >> One could conceivably manually renew one of these certificates. It is >> probably a very remote possibility but it is non-zero. >> >>> Can we be sure certmonger always does the updates in parallel? 
If it >>> managed to update the audit cert before starting on the others, we'd get >>> no CA restart for the others. >> >> These all get issued at the same time so should expire at the same time >> as well (see problem above). The script will hang around for 10 minutes >> waiting for the renewal to complete, then give up. > > The certs might take different amounts of time to update, right? > Eventually, the expirations could go out of sync enough for it to matter. > AFAICS, without proper locking we still get a race condition when the > other certs start being renewed some time (much less than 10 min) after > the audit one: > > (time axis goes down) > > audit cert other cert > ---------- ---------- > certmonger does renew . > post-renew script starts . > check state of other certs: OK . > . certmonger starts renew > certutil modifies NSS DB + certmonger modifies NSS DB == boom! This can't happen because we count the # of expected certs and wait until all are in MONITORING before continuing. The worse that would happen is the trust wouldn't be set on the audit cert and dogtag wouldn't be restarted. > > >> The state the system would be in is this: >> >> - audit cert trust not updated, so next restart of CA will fail >> - CA is not restarted so will not use updated certificates >> >>> And anyway, why does certmonger do renewals in parallel? It seems that >>> if it did one at a time, always waiting until the post-renew script is >>> done, this patch wouldn't be necessary. >>> >> >> From what Nalin told me certmonger has some coarse locking such that >> renewals in a the same NSS database are serialized. As you point out, it >> would be nice to extend this locking to the post renewal scripts. We can >> ask Nalin about it. That would fix the potential corruption issue. It is >> still much nicer to not have to restart dogtag 4 times. >> > > Well, three extra restarts every few years seems like a small price to > pay for robustness. It is a bit of a problem though because the certs all renew within seconds so end up fighting over who is restarting dogtag. This can cause some renewals go into a failure state to be retried later. This is fine functionally but makes QE a bit of a pain. You then have to make sure that renewal is basically done, then restart certmonger and check everything again, over and over until all the certs are renewed. This is difficult to automate. rob From rcritten at redhat.com Mon Jan 7 16:47:19 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 07 Jan 2013 11:47:19 -0500 Subject: [Freeipa-devel] [Freeipa-users] ipa admin tool error "ipa: ERROR: Client is not configured. Run ipa-client-install." In-Reply-To: <50EAB0C6.1030508@redhat.com> References: <50EAB0C6.1030508@redhat.com> Message-ID: <50EAFC17.70907@redhat.com> Petr Viktorin wrote: > On 01/07/2013 11:00 AM, Natxo Asenjo wrote: >> hi, >> >> on a workstation *not* joined to the IPA domain but with the the ipa >> admin tools installed I get this error when trying to modify dns >> settings and I have a kerberos ticket of an admin user: >> >> $ kinit user.admin at UNIX.DOMAIN.TLD >> Password for user.admin at UNIX.DOMAIN.TLD >> $ klist >> Ticket cache: FILE:/tmp/krb5cc_500 >> Default principal: user.admin at UNIX.DOMAIN.TLD >> >> Valid starting Expires Service principal >> 01/07/13 10:47:09 01/08/13 10:47:06 >> krbtgt/UNIX.DOMAIN.TLD at UNIX.DOMAIN.TLD >> renew until 01/14/13 10:47:06 >> >> $ ipa dnsrecord-mod unix.domain.tld ipaclient01 --ttl=300 >> ipa: ERROR: Client is not configured. Run ipa-client-install. 
>> >> Is this 'by design'? This limitation on the cli tool does not apply to >> the web interface, by the way, that is, I can login the web interface >> without being joined to the domain and modify all kind of stuff there >> ;-). >> >> To be more specific: this is not a problem, I can run this command on >> a joined host, but I was just curious. >> > > > I think the check we're making here (at least one directive has to be > read from a config file) is rather limiting. I'd expect the following to > work: > > ipa -e xmlrpc_uri=https://ipa.example.com/ipa/xml dnsrecord-mod > example.com ipa --ttl=300 > The reason is you get a really crappy error if you try to run the tool on an unconfigured machine without cleverly passing in the URI via -e. rob From pviktori at redhat.com Mon Jan 7 16:57:12 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 07 Jan 2013 17:57:12 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EAFAFE.1040200@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> Message-ID: <50EAFE68.3030109@redhat.com> On 01/07/2013 05:42 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: [...] >>>> >>>> Works for me, but I have some questions (this is an area I know little >>>> about). >>>> >>>> Can we be 100% sure these certs are always renewed together? Is >>>> certmonger the only possible mechanism to update them? >>> >>> You raise a good point. If though some mechanism someone replaces one of >>> these certs it will cause the script to fail. Some notification of this >>> failure will be logged though, and of course, the certs won't be >>> renewed. >>> >>> One could conceivably manually renew one of these certificates. It is >>> probably a very remote possibility but it is non-zero. >>> >>>> Can we be sure certmonger always does the updates in parallel? If it >>>> managed to update the audit cert before starting on the others, we'd >>>> get >>>> no CA restart for the others. >>> >>> These all get issued at the same time so should expire at the same time >>> as well (see problem above). The script will hang around for 10 minutes >>> waiting for the renewal to complete, then give up. >> >> The certs might take different amounts of time to update, right? >> Eventually, the expirations could go out of sync enough for it to matter. >> AFAICS, without proper locking we still get a race condition when the >> other certs start being renewed some time (much less than 10 min) after >> the audit one: >> >> (time axis goes down) >> >> audit cert other cert >> ---------- ---------- >> certmonger does renew . >> post-renew script starts . >> check state of other certs: OK . >> . certmonger starts renew >> certutil modifies NSS DB + certmonger modifies NSS DB == boom! > > This can't happen because we count the # of expected certs and wait > until all are in MONITORING before continuing. The problem is that they're also in MONITORING before the whole renewal starts. If the script happens to check just before the state changes from MONITORING to GENERATING_CSR or whatever, we can get corruption. > The worse that would > happen is the trust wouldn't be set on the audit cert and dogtag > wouldn't be restarted. 
> >> >> >>> The state the system would be in is this: >>> >>> - audit cert trust not updated, so next restart of CA will fail >>> - CA is not restarted so will not use updated certificates >>> >>>> And anyway, why does certmonger do renewals in parallel? It seems that >>>> if it did one at a time, always waiting until the post-renew script is >>>> done, this patch wouldn't be necessary. >>>> >>> >>> From what Nalin told me certmonger has some coarse locking such that >>> renewals in a the same NSS database are serialized. As you point out, it >>> would be nice to extend this locking to the post renewal scripts. We can >>> ask Nalin about it. That would fix the potential corruption issue. It is >>> still much nicer to not have to restart dogtag 4 times. >>> >> >> Well, three extra restarts every few years seems like a small price to >> pay for robustness. > > It is a bit of a problem though because the certs all renew within > seconds so end up fighting over who is restarting dogtag. This can cause > some renewals go into a failure state to be retried later. This is fine > functionally but makes QE a bit of a pain. You then have to make sure > that renewal is basically done, then restart certmonger and check > everything again, over and over until all the certs are renewed. This is > difficult to automate. So we need to extend the certmonger lock, and wait until Dogtag is back up before exiting the script. That way it'd still take longer than 1 restart, but all the renews should succeed. -- Petr? From rcritten at redhat.com Mon Jan 7 19:14:13 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 07 Jan 2013 14:14:13 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EAFE68.3030109@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> Message-ID: <50EB1E85.2070109@redhat.com> Petr Viktorin wrote: > On 01/07/2013 05:42 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: > [...] >>>>> >>>>> Works for me, but I have some questions (this is an area I know little >>>>> about). >>>>> >>>>> Can we be 100% sure these certs are always renewed together? Is >>>>> certmonger the only possible mechanism to update them? >>>> >>>> You raise a good point. If though some mechanism someone replaces >>>> one of >>>> these certs it will cause the script to fail. Some notification of this >>>> failure will be logged though, and of course, the certs won't be >>>> renewed. >>>> >>>> One could conceivably manually renew one of these certificates. It is >>>> probably a very remote possibility but it is non-zero. >>>> >>>>> Can we be sure certmonger always does the updates in parallel? If it >>>>> managed to update the audit cert before starting on the others, we'd >>>>> get >>>>> no CA restart for the others. >>>> >>>> These all get issued at the same time so should expire at the same time >>>> as well (see problem above). The script will hang around for 10 minutes >>>> waiting for the renewal to complete, then give up. >>> >>> The certs might take different amounts of time to update, right? >>> Eventually, the expirations could go out of sync enough for it to >>> matter. 
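(As an aside, the "count the expected certs and wait until all of them are back in MONITORING" check being debated here can be pictured with a small sketch. The NSS database path, the expected certificate count and the helper name are assumptions for illustration only; this is not the renewal helper that ships with IPA.)

    import subprocess
    import time

    def wait_until_monitoring(nss_db, expected_certs, timeout=600, poll=10):
        # Ask certmonger for the state of every tracked request in the
        # given NSS database and succeed once all of them report
        # "status: MONITORING".  Purely illustrative.
        deadline = time.time() + timeout
        while time.time() < deadline:
            output = subprocess.check_output(['getcert', 'list', '-d', nss_db])
            statuses = [line.split(':', 1)[1].strip()
                        for line in output.splitlines()
                        if line.strip().startswith('status:')]
            if len(statuses) >= expected_certs and \
                    all(status == 'MONITORING' for status in statuses):
                return True
            time.sleep(poll)
        return False

The race described below is exactly that such a check passes both before and after a renewal, which is why serializing the post-renew scripts inside certmonger is the more robust fix.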
>>> AFAICS, without proper locking we still get a race condition when the >>> other certs start being renewed some time (much less than 10 min) after >>> the audit one: >>> >>> (time axis goes down) >>> >>> audit cert other cert >>> ---------- ---------- >>> certmonger does renew . >>> post-renew script starts . >>> check state of other certs: OK . >>> . certmonger starts renew >>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >> >> This can't happen because we count the # of expected certs and wait >> until all are in MONITORING before continuing. > > The problem is that they're also in MONITORING before the whole renewal > starts. If the script happens to check just before the state changes > from MONITORING to GENERATING_CSR or whatever, we can get corruption. > >> The worse that would >> happen is the trust wouldn't be set on the audit cert and dogtag >> wouldn't be restarted. >> >>> >>> >>>> The state the system would be in is this: >>>> >>>> - audit cert trust not updated, so next restart of CA will fail >>>> - CA is not restarted so will not use updated certificates >>>> >>>>> And anyway, why does certmonger do renewals in parallel? It seems that >>>>> if it did one at a time, always waiting until the post-renew script is >>>>> done, this patch wouldn't be necessary. >>>>> >>>> >>>> From what Nalin told me certmonger has some coarse locking such that >>>> renewals in a the same NSS database are serialized. As you point >>>> out, it >>>> would be nice to extend this locking to the post renewal scripts. We >>>> can >>>> ask Nalin about it. That would fix the potential corruption issue. >>>> It is >>>> still much nicer to not have to restart dogtag 4 times. >>>> >>> >>> Well, three extra restarts every few years seems like a small price to >>> pay for robustness. >> >> It is a bit of a problem though because the certs all renew within >> seconds so end up fighting over who is restarting dogtag. This can cause >> some renewals go into a failure state to be retried later. This is fine >> functionally but makes QE a bit of a pain. You then have to make sure >> that renewal is basically done, then restart certmonger and check >> everything again, over and over until all the certs are renewed. This is >> difficult to automate. > > So we need to extend the certmonger lock, and wait until Dogtag is back > up before exiting the script. That way it'd still take longer than 1 > restart, but all the renews should succeed. > Right, but older dogtag versions don't have the handy servlet to tell that the service is actually up and responding. So it is difficult to tell from tomcat alone whether the CA is actually up and handling requests. rob From pspacek at redhat.com Tue Jan 8 07:44:43 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 08 Jan 2013 08:44:43 +0100 Subject: [Freeipa-devel] [Freeipa-users] ipa admin tool error "ipa: ERROR: Client is not configured. Run ipa-client-install." 
In-Reply-To: <50EAFC17.70907@redhat.com> References: <50EAB0C6.1030508@redhat.com> <50EAFC17.70907@redhat.com> Message-ID: <50EBCE6B.80005@redhat.com> On 7.1.2013 17:47, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/07/2013 11:00 AM, Natxo Asenjo wrote: >>> hi, >>> >>> on a workstation *not* joined to the IPA domain but with the the ipa >>> admin tools installed I get this error when trying to modify dns >>> settings and I have a kerberos ticket of an admin user: >>> >>> $ kinit user.admin at UNIX.DOMAIN.TLD >>> Password for user.admin at UNIX.DOMAIN.TLD >>> $ klist >>> Ticket cache: FILE:/tmp/krb5cc_500 >>> Default principal: user.admin at UNIX.DOMAIN.TLD >>> >>> Valid starting Expires Service principal >>> 01/07/13 10:47:09 01/08/13 10:47:06 >>> krbtgt/UNIX.DOMAIN.TLD at UNIX.DOMAIN.TLD >>> renew until 01/14/13 10:47:06 >>> >>> $ ipa dnsrecord-mod unix.domain.tld ipaclient01 --ttl=300 >>> ipa: ERROR: Client is not configured. Run ipa-client-install. >>> >>> Is this 'by design'? This limitation on the cli tool does not apply to >>> the web interface, by the way, that is, I can login the web interface >>> without being joined to the domain and modify all kind of stuff there >>> ;-). >>> >>> To be more specific: this is not a problem, I can run this command on >>> a joined host, but I was just curious. >>> >> >> >> I think the check we're making here (at least one directive has to be >> read from a config file) is rather limiting. I'd expect the following to >> work: >> >> ipa -e xmlrpc_uri=https://ipa.example.com/ipa/xml dnsrecord-mod >> example.com ipa --ttl=300 >> > > The reason is you get a really crappy error if you try to run the tool on an > unconfigured machine without cleverly passing in the URI via -e. IMHO the error message could be much clearer: IPA client is not configured on this machine. Configure xmlrpc_uri in ~/.ipa/default.conf or add "-e xmlrpc_uri=" parameter before using IPA admin tools. Something like that ... -- Petr^2 Spacek From rcritten at redhat.com Tue Jan 8 14:49:41 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 08 Jan 2013 09:49:41 -0500 Subject: [Freeipa-devel] [Freeipa-users] ipa admin tool error "ipa: ERROR: Client is not configured. Run ipa-client-install." In-Reply-To: <50EBCE6B.80005@redhat.com> References: <50EAB0C6.1030508@redhat.com> <50EAFC17.70907@redhat.com> <50EBCE6B.80005@redhat.com> Message-ID: <50EC3205.40602@redhat.com> Petr Spacek wrote: > On 7.1.2013 17:47, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/07/2013 11:00 AM, Natxo Asenjo wrote: >>>> hi, >>>> >>>> on a workstation *not* joined to the IPA domain but with the the ipa >>>> admin tools installed I get this error when trying to modify dns >>>> settings and I have a kerberos ticket of an admin user: >>>> >>>> $ kinit user.admin at UNIX.DOMAIN.TLD >>>> Password for user.admin at UNIX.DOMAIN.TLD >>>> $ klist >>>> Ticket cache: FILE:/tmp/krb5cc_500 >>>> Default principal: user.admin at UNIX.DOMAIN.TLD >>>> >>>> Valid starting Expires Service principal >>>> 01/07/13 10:47:09 01/08/13 10:47:06 >>>> krbtgt/UNIX.DOMAIN.TLD at UNIX.DOMAIN.TLD >>>> renew until 01/14/13 10:47:06 >>>> >>>> $ ipa dnsrecord-mod unix.domain.tld ipaclient01 --ttl=300 >>>> ipa: ERROR: Client is not configured. Run ipa-client-install. >>>> >>>> Is this 'by design'? This limitation on the cli tool does not apply to >>>> the web interface, by the way, that is, I can login the web interface >>>> without being joined to the domain and modify all kind of stuff there >>>> ;-). 
>>>> >>>> To be more specific: this is not a problem, I can run this command on >>>> a joined host, but I was just curious. >>>> >>> >>> >>> I think the check we're making here (at least one directive has to be >>> read from a config file) is rather limiting. I'd expect the following to >>> work: >>> >>> ipa -e xmlrpc_uri=https://ipa.example.com/ipa/xml dnsrecord-mod >>> example.com ipa --ttl=300 >>> >> >> The reason is you get a really crappy error if you try to run the tool >> on an >> unconfigured machine without cleverly passing in the URI via -e. > > IMHO the error message could be much clearer: > IPA client is not configured on this machine. Configure xmlrpc_uri in > ~/.ipa/default.conf or add "-e xmlrpc_uri=" parameter before using IPA > admin tools. > > Something like that ... > I think I'd prefer to write a note on the wiki on how to manually minimally configure a host to use the ipa tool. This is the first time this has come up on the list, so it isn't a particularly hot issue. rob From rcritten at redhat.com Tue Jan 8 15:56:42 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 08 Jan 2013 10:56:42 -0500 Subject: [Freeipa-devel] [PATCH] 0043 Allow-PKI-CA-Replica-Installs-when-CRL-exceeds-default In-Reply-To: <50D34191.3010209@redhat.com> References: <1148EF9A-1016-4852-BD78-7B4389EEC7A3@citrixonline.com> <1355956353.2894.38.camel@willson.li.ssimo.org> <09B4CD8E-23A9-46F5-B5F5-7245B3B937B5@citrixonline.com> <1355970935.2894.40.camel@willson.li.ssimo.org> <1356014377.15924.50.camel@aleeredhat.laptop> <1356014955.2894.65.camel@willson.li.ssimo.org> <50D34191.3010209@redhat.com> Message-ID: <50EC41BA.3070104@redhat.com> Andrew Wnuk wrote: > On 12/20/2012 06:49 AM, Simo Sorce wrote: >> On Thu, 2012-12-20 at 09:39 -0500, Ade Lee wrote: >>> On Wed, 2012-12-19 at 21:35 -0500, Simo Sorce wrote: >>>> On Wed, 2012-12-19 at 22:41 +0000, JR Aquino wrote: >>>>> On Dec 19, 2012, at 2:32 PM, Simo Sorce wrote: >>>>> >>>>>> On Wed, 2012-12-19 at 20:52 +0000, JR Aquino wrote: >>>>>>> Due to a limitation with 389 DS, the nsslapd-maxbersize cannot be >>>>>>> set dynamically. >>>>>>> This causes an issue during IPA PKI-CA Replica installs, when the >>>>>>> master has a CRL that exceeds the default limit. >>>>>>> The cainstance.py code attempts to set this value prior to >>>>>>> performing the initial PKI-CA replication, however, since the >>>>>>> value cannot be set dynamically, the installation fails. >>>>>>> >>>>>>> This patch works around the issue by adding the ldif to the >>>>>>> original initialization values bootstrapped by the call to >>>>>>> setup-ds.pl >>>>>> Why are we not simply restarting the instance after setting the >>>>>> value ? >>>>>> >>>>>> What's in database.ldif ? What produces it ? >>>>> /usr/share/pki/ca/conf/database.ldif is part of the dogtag >>>>> installation and it contains the following entry: >>>>> dn: cn=config >>>>> changetype: modify >>>>> replace: nsslapd-maxbersize >>>>> nsslapd-maxbersize: 209715200 >>>>> >>>>> It's purpose is to increase the limit for maxbersize from 2097152 >>>>> to 209715200. > If your CA is relatively recent, 209715200 should give you enough room > to generate CRLs v1 with up to 9.4 millions entries. > If you plan on having bigger CRLs, consider further increase of > nsslapd-maxbersize. >>>>> >>>>> The ldif is inserted via the jars that are wrapped by pkisilent... 
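(For reference, the "manually minimally configure a host" note mentioned earlier in this thread boils down to giving the ipa tool the few directives it normally reads from /etc/ipa/default.conf. The ~/.ipa/default.conf below is only a sketch with placeholder values; the exact set of directives required can vary between versions.)

    [global]
    realm = EXAMPLE.COM
    domain = example.com
    server = ipa.example.com
    basedn = dc=example,dc=com
    xmlrpc_uri = https://ipa.example.com/ipa/xml

With something like that in place, a user who has a valid Kerberos ticket should no longer need the -e xmlrpc_uri=... override on the command line.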
>>>>> So this leaves 3 options: >>>>> >>>>> #1 Add code to perform the ldap insert followed by a dirsrv restart >>>>> into the cainstance.py code >>>>> #2 Add recode the jar files from DogTag to require a dirsrv restart >>>>> after the insert, but prior to the replication >>>>> #3 Just initialize the dirsrv database with the correct value to >>>>> begin with. <1 line fix> >>>>> #4 Ask 389 to allow maxbersize to be a dynamically initialized >>>>> variable >>>>> >>>>> #3 Seemed the path of least resistance. >>>>> I did take the time to code #1 and verify that it worked as well. >>>>> I have a ticket open for #4 >>>>> Alee hinted that the jar modifications for #2 might not be trivial... >>>> Method #3 is ok, but for master, where we have unified ds instances, >>>> you >>>> should look at doing ti as we do change other similar attributes in >>>> install/updates/10-config.update so that older installations are >>>> updated >>>> as well. >>>> If you do it only at install and the CRL grows later you'd get older >>>> server start choking because they have not been updated. >>>> >>> Are you referring to masters which have been converted from non-unified >>> DS to a single DS using an as-yet-to-be-written script? >> I was thinking of a current 3.1 setup with multiple replicas installed >> before this patch lands in Fedora. >> >> Old master (3.0) with split instances, new replicas (3.1) with unified >> instances. >> >> After a while CRL in master grows past limit. >> All replicas break because no update fixed them. >> >>> The ldif change mentioned above is already performed as part of the >>> dogtag install. For a freshly installed master, there is no large CRL >>> to break the installation. >>> >>> In the replica scenario, this change is needed before we attempt >>> replication because the large CRL breaks replication. In fact, if that >>> value had not been set on the master, there would be no large CRL to >>> cause replication problems. >> Understood, I am not asking for a huge change, just that the change is >> done in an update file and not just on a fresh install. >> >> Simo. >> After a fair bit of discussion I went ahead and pushed this to master, ipa-3-1 and ipa-3-0. This will will new replica creation. We are still a little unclear about whether existing masters will need any changes, but this patch is clearly a step forward. rob From pvoborni at redhat.com Tue Jan 8 16:46:19 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 08 Jan 2013 17:46:19 +0100 Subject: [Freeipa-devel] [PATCH] 240-252 AMD modules and Web UI build Message-ID: <50EC4D5B.5080501@redhat.com> This patchset does following things: * integrates Dojo library into FreeIPA Web UI * encapsulates UI parts by AMD definition so they can be used in build process and by AMD loader * introduces Dojo builder for building Web UI and UglifyJS for minimizing JS * contains help and build scripts for developers and a make process Overall it makes final application smaller, faster loadable and it's a foundation for additional refactoring. More information in design page: http://www.freeipa.org/page/V3/WebUI_build https://fedorahosted.org/freeipa/ticket/112 These patches introduce one regression: extension.js file isn't used. I want to fix it in #3235 ([RFE] Make WebUI extensions functional just by adding files). Which I will implement after I finish #3236 ([RFE] more extensible navigation) on which I'm working now. -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pvoborni-0240-Use-Uglify.js-for-JS-optimization.patch Type: text/x-patch Size: 192697 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0241-Dojo-Builder.patch Type: text/x-patch Size: 197915 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0242-Config-files-for-builder-of-FreeIPA-UI-layer.patch Type: text/x-patch Size: 7036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0243-Minimal-Dojo-layer.patch Type: text/x-patch Size: 82269 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0244-Web-UI-development-environment-directory-structure-a.patch Type: text/x-patch Size: 5676 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0245-Web-UI-Sync-development-utility.patch Type: text/x-patch Size: 9254 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0246-Move-of-Web-UI-non-AMD-dep.-libs-to-libs-subdirector.patch Type: text/x-patch Size: 5803 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0247-Move-of-core-Web-UI-files-to-AMD-directory.patch Type: text/x-patch Size: 9867 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0248-Update-JavaScript-Lint-configuration-file.patch Type: text/x-patch Size: 3733 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0249-AMD-config-file.patch Type: text/x-patch Size: 5647 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0250-Change-Web-UI-sources-to-simple-AMD-modules.patch Type: text/x-patch Size: 41161 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0251-Updated-makefiles-to-build-FreeIPA-Web-UI-layer.patch Type: text/x-patch Size: 7622 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0252-Change-tests-to-use-AMD-loader.patch Type: text/x-patch Size: 34527 bytes Desc: not available URL: From pvoborni at redhat.com Tue Jan 8 16:48:32 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 08 Jan 2013 17:48:32 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate Message-ID: <50EC4DE0.1070100@redhat.com> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression Enabled mod_deflate for: * text/html (HTML files) * text/plain (for future use) * text/css (CSS files) * text/xml (XML RPC) * application/javascript (JavaScript files) * application/json (JSON RPC) * application/x-font-woff (woff fonts) Added proper mime type for woff fonts. Disabled etag header because it doesn't work with mod_deflate. https://fedorahosted.org/freeipa/ticket/3326 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pvoborni-0253-Enable-mod_deflate.patch Type: text/x-patch Size: 1822 bytes Desc: not available URL: From rcritten at redhat.com Tue Jan 8 16:58:14 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 08 Jan 2013 11:58:14 -0500 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50EC4DE0.1070100@redhat.com> References: <50EC4DE0.1070100@redhat.com> Message-ID: <50EC5026.7050109@redhat.com> Petr Vobornik wrote: > Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression > > Enabled mod_deflate for: > * text/html (HTML files) > * text/plain (for future use) > * text/css (CSS files) > * text/xml (XML RPC) > * application/javascript (JavaScript files) > * application/json (JSON RPC) > * application/x-font-woff (woff fonts) > > Added proper mime type for woff fonts. > Disabled etag header because it doesn't work with mod_deflate. > > https://fedorahosted.org/freeipa/ticket/3326 Should this be enabled on upgrades as well? rob From pvoborni at redhat.com Tue Jan 8 17:05:28 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 08 Jan 2013 18:05:28 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50EC5026.7050109@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> Message-ID: <50EC51D8.7060609@redhat.com> On 01/08/2013 05:58 PM, Rob Crittenden wrote: > Petr Vobornik wrote: >> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >> >> Enabled mod_deflate for: >> * text/html (HTML files) >> * text/plain (for future use) >> * text/css (CSS files) >> * text/xml (XML RPC) >> * application/javascript (JavaScript files) >> * application/json (JSON RPC) >> * application/x-font-woff (woff fonts) >> >> Added proper mime type for woff fonts. >> Disabled etag header because it doesn't work with mod_deflate. >> >> https://fedorahosted.org/freeipa/ticket/3326 > > Should this be enabled on upgrades as well? Yes, I don't see a reason not to. > > rob > -- Petr Vobornik From rcritten at redhat.com Tue Jan 8 22:10:42 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 08 Jan 2013 17:10:42 -0500 Subject: [Freeipa-devel] Announcing FreeIPA v3.1.1 Release Message-ID: <50EC9962.9060309@redhat.com> The FreeIPA team is proud to announce version FreeIPA v3.1.1. It can be downloaded from http://www.freeipa.org/page/Downloads. == Highlights in 3.1.1 == * Increase default maxbersize so new replica installs won't fail if the CRL becomes very large. * Do not crash in ipa-client-install in DNS discovery when a TXT record was configured, but the referred domain did not contain any Kerberos SRV record. * Cookie Expires date should be locale insensitive * Enable SSSD on installation in client installes. We had relied on authconfig to enable it but it no longer does so when passed the --enablesssd flag. It just configures SSSD in the PAM stack. == Upgrading == An IPA server can be upgraded simply by installing updated rpms. The server does not need to be shut down in advance. Please note, that the referential integrity extension requires an extended set of indexes to be configured. RPM update for an IPA server with a excessive number of hosts, SUDO or HBAC entries may require several minutes to finish. If you have multiple servers you may upgrade them one at a time. It is expected that all servers will be upgraded in a relatively short period (days or weeks not months). 
They should be able to co-exist peacefully but new features will not be available on old servers and enrolling a new client against an old server will result in the SSH keys not being uploaded. Downgrading a server once upgraded is not supported. Upgrading from 2.2.0 is supported. Upgrading from previous versions is not supported and has not been tested. An enrolled client does not need the new packages installed unless you want to re-enroll it. SSH keys for already installed clients are not uploaded, you will have to re-enroll the client or manually upload the keys. == Feedback == Please provide comments, bugs and other feedback via the freeipa-devel mailing list: http://www.redhat.com/mailman/listinfo/freeipa-devel == Detailed Changelog since 3.1.0 == JR Aquino (1): * Allow PKI-CA Replica Installs when CRL exceeds default maxber value John Dennis (1): * Cookie Expires date should be locale insensitive Lynn Root (2): * Fixed the catch of the hostname option during ipa-server-install * Raise ValidationError when CSR does not have a subject hostname Martin Kosek (4): * Add Lynn Root to Contributors.txt * Enable SSSD on client install * Fix delegation-find command --group handling * Do not crash when Kerberos SRV record is not found Rob Crittenden (1): * Become IPA 3.1.1 Simo Sorce (1): * Log info on failure to connect Tomas Babej (2): * Relax restriction for leading/trailing whitespaces in *-find commands * Forbid overlapping rid ranges for the same id range From mkosek at redhat.com Wed Jan 9 07:41:59 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 09 Jan 2013 08:41:59 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50EC51D8.7060609@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> <50EC51D8.7060609@redhat.com> Message-ID: <50ED1F47.2050405@redhat.com> On 01/08/2013 06:05 PM, Petr Vobornik wrote: > On 01/08/2013 05:58 PM, Rob Crittenden wrote: >> Petr Vobornik wrote: >>> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >>> >>> Enabled mod_deflate for: >>> * text/html (HTML files) >>> * text/plain (for future use) >>> * text/css (CSS files) >>> * text/xml (XML RPC) >>> * application/javascript (JavaScript files) >>> * application/json (JSON RPC) >>> * application/x-font-woff (woff fonts) >>> >>> Added proper mime type for woff fonts. >>> Disabled etag header because it doesn't work with mod_deflate. >>> >>> https://fedorahosted.org/freeipa/ticket/3326 >> >> Should this be enabled on upgrades as well? > > Yes, I don't see a reason not to. This should be enabled on upgrades as is, since Petr bumped VERSION in install/conf/ipa.conf. We should carefully check that enabling it also for xmlrpc/json does not cause any grief. 
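(One quick way to check that on a running server is to request a page with gzip allowed and look at the response headers; the UI path below is an assumption, and no credentials are needed just to see whether Content-Encoding: gzip comes back.)

    $ curl -sk -o /dev/null -D - -H 'Accept-Encoding: gzip' \
          https://ipa.example.com/ipa/ui/ | grep -i '^content-encoding'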
Martin From pviktori at redhat.com Wed Jan 9 10:59:01 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 09 Jan 2013 11:59:01 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50ED1F47.2050405@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> <50EC51D8.7060609@redhat.com> <50ED1F47.2050405@redhat.com> Message-ID: <50ED4D75.9040103@redhat.com> On 01/09/2013 08:41 AM, Martin Kosek wrote: > On 01/08/2013 06:05 PM, Petr Vobornik wrote: >> On 01/08/2013 05:58 PM, Rob Crittenden wrote: >>> Petr Vobornik wrote: >>>> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >>>> >>>> Enabled mod_deflate for: >>>> * text/html (HTML files) >>>> * text/plain (for future use) >>>> * text/css (CSS files) >>>> * text/xml (XML RPC) >>>> * application/javascript (JavaScript files) >>>> * application/json (JSON RPC) >>>> * application/x-font-woff (woff fonts) >>>> >>>> Added proper mime type for woff fonts. >>>> Disabled etag header because it doesn't work with mod_deflate. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3326 >>> >>> Should this be enabled on upgrades as well? >> >> Yes, I don't see a reason not to. > > This should be enabled on upgrades as is, since Petr bumped VERSION in > install/conf/ipa.conf. > > We should carefully check that enabling it also for xmlrpc/json does not cause > any grief. > > Martin > HTTP libraries won't ask for gzip if they can't handle it, so there shouldn't be any grief. I tested the UI, installing client & replica, and the CLI tool. All work fine. Just one thing: WOFF is already compressed so we shouldn't gzip it again. -- Petr? From pvoborni at redhat.com Wed Jan 9 13:06:51 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Wed, 09 Jan 2013 14:06:51 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50ED4D75.9040103@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> <50EC51D8.7060609@redhat.com> <50ED1F47.2050405@redhat.com> <50ED4D75.9040103@redhat.com> Message-ID: <50ED6B6B.5010406@redhat.com> On 01/09/2013 11:59 AM, Petr Viktorin wrote: > On 01/09/2013 08:41 AM, Martin Kosek wrote: >> On 01/08/2013 06:05 PM, Petr Vobornik wrote: >>> On 01/08/2013 05:58 PM, Rob Crittenden wrote: >>>> Petr Vobornik wrote: >>>>> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >>>>> >>>>> Enabled mod_deflate for: >>>>> * text/html (HTML files) >>>>> * text/plain (for future use) >>>>> * text/css (CSS files) >>>>> * text/xml (XML RPC) >>>>> * application/javascript (JavaScript files) >>>>> * application/json (JSON RPC) >>>>> * application/x-font-woff (woff fonts) >>>>> >>>>> Added proper mime type for woff fonts. >>>>> Disabled etag header because it doesn't work with mod_deflate. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/3326 >>>> >>>> Should this be enabled on upgrades as well? >>> >>> Yes, I don't see a reason not to. >> >> This should be enabled on upgrades as is, since Petr bumped VERSION in >> install/conf/ipa.conf. >> >> We should carefully check that enabling it also for xmlrpc/json does >> not cause >> any grief. >> >> Martin >> > > HTTP libraries won't ask for gzip if they can't handle it, so there > shouldn't be any grief. > I tested the UI, installing client & replica, and the CLI tool. All work > fine. > > Just one thing: WOFF is already compressed so we shouldn't gzip it again. > Thanks. Compression for application/x-font-woff removed. Updated patch attached. 
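(For readers without the patch at hand, the directives being discussed amount to roughly the following in the Apache configuration shipped by IPA. This is a sketch of the idea, not a copy of the patch; the WOFF type is registered but deliberately left out of the DEFLATE filter.)

    AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml
    AddOutputFilterByType DEFLATE application/javascript application/json
    AddType application/x-font-woff .woff
    # ETag does not interact well with mod_deflate, so it is disabled
    Header unset ETag
    FileETag None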
-- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0253-1-Enable-mod_deflate.patch Type: text/x-patch Size: 1793 bytes Desc: not available URL: From pviktori at redhat.com Wed Jan 9 16:18:07 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 09 Jan 2013 17:18:07 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50ED6B6B.5010406@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> <50EC51D8.7060609@redhat.com> <50ED1F47.2050405@redhat.com> <50ED4D75.9040103@redhat.com> <50ED6B6B.5010406@redhat.com> Message-ID: <50ED983F.6040705@redhat.com> On 01/09/2013 02:06 PM, Petr Vobornik wrote: > On 01/09/2013 11:59 AM, Petr Viktorin wrote: >> On 01/09/2013 08:41 AM, Martin Kosek wrote: >>> On 01/08/2013 06:05 PM, Petr Vobornik wrote: >>>> On 01/08/2013 05:58 PM, Rob Crittenden wrote: >>>>> Petr Vobornik wrote: >>>>>> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >>>>>> >>>>>> Enabled mod_deflate for: >>>>>> * text/html (HTML files) >>>>>> * text/plain (for future use) >>>>>> * text/css (CSS files) >>>>>> * text/xml (XML RPC) >>>>>> * application/javascript (JavaScript files) >>>>>> * application/json (JSON RPC) >>>>>> * application/x-font-woff (woff fonts) >>>>>> >>>>>> Added proper mime type for woff fonts. >>>>>> Disabled etag header because it doesn't work with mod_deflate. >>>>>> >>>>>> https://fedorahosted.org/freeipa/ticket/3326 >>>>> >>>>> Should this be enabled on upgrades as well? >>>> >>>> Yes, I don't see a reason not to. >>> >>> This should be enabled on upgrades as is, since Petr bumped VERSION in >>> install/conf/ipa.conf. >>> >>> We should carefully check that enabling it also for xmlrpc/json does >>> not cause >>> any grief. >>> >>> Martin >>> >> >> HTTP libraries won't ask for gzip if they can't handle it, so there >> shouldn't be any grief. >> I tested the UI, installing client & replica, and the CLI tool. All work >> fine. >> >> Just one thing: WOFF is already compressed so we shouldn't gzip it again. >> > > Thanks. Compression for application/x-font-woff removed. Updated patch > attached. > ACK -- Petr? From jcholast at redhat.com Wed Jan 9 16:21:47 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 09 Jan 2013 17:21:47 +0100 Subject: [Freeipa-devel] RFC 6594 DNS SSHFP records design doc Message-ID: <50ED991B.4010305@redhat.com> Hi, you can find the design doc at . It's also inlined below. Honza = Overview = IPA supports automatic update of SSHFP DNS records for managed hosts in the ipa-client-install script and in host-* commands. The support is currently limited to the original SSHFP specification from RFC 4255; SSHFP records generated by IPA contain SHA-1 fingerprints of RSA and DSS host keys. Recently, RFC 6594 was released. It extends the original SSHFP specification with support for SHA-256 fingerprints and ECDSA host keys. Add support for RFC 6594 SSHFP records to IPA, generate both SHA-1 and SHA-256 fingerprints for RSA, DSS and ECDSA host keys. = Use Cases = Automatic generation of SSHFP DNS records on IPA client install: # ipa-client-install Discovery was successful! Hostname: host1.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: ipa.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? [no]: yes User authorized to enroll computers: admin Synchronizing time with KDC... 
Password for admin at EXAMPLE.COM: Enrolled in IPA realm EXAMPLE.COM Created /etc/ipa/default.conf New SSSD config will be created Configured /etc/sssd/sssd.conf Configured /etc/krb5.conf for IPA realm EXAMPLE.COM trying https://ipa.example.com/ipa/xml Hostname (host1.example.com) not found in DNS DNS server record set to: host1.example.com -> 192.168.1.1 Adding SSH public key from /etc/ssh/ssh_host_rsa_key.pub Adding SSH public key from /etc/ssh/ssh_host_dsa_key.pub Forwarding 'host_mod' to server u'https://ipa.example.com/ipa/xml' SSSD enabled Configured /etc/openldap/ldap.conf NTP enabled Configured /etc/ssh/ssh_config Configured /etc/ssh/sshd_config Client configuration complete. $ dig host1.example.com SSHFP +short 2 2 0E04A7E09D037934492108ED5590612416BE736AD1BCAEAE1EA4148E 80C956E2 2 1 F2A1353FF919AD785B6BD42B588F6236D1F67459 1 2 3E475EEAF17975C36EE1413DDD659275FDD19C97C2C74A3651BA12F7 52E12A18 1 1 A308B1B02A8B43CB5192E26FA50280F752BB3A14 Automatic generation of SSHFP DNS records when modifying a host: $ ipa host-mod host2.example.com --updatedns --sshpubkey='ssh-rsa ' --sshpubkey='ssh-dss ' --sshpubkey='ecdsa-sha2-nistp256 ' --------------------------------------------- Modified host "host2.example.com" --------------------------------------------- Host name: host2.example.com Principal name: host/host2.example.com at EXAMPLE.COM MAC address: 00:11:22:33:44:55 SSH public key: ecdsa-sha2-nistp256 , ssh-dss , ssh-rsa Keytab: True Managed by: host2.example.com SSH public key fingerprint: 6C:9F:07:51:63:36:32:8B:ED:CF:8C:4C:5F:F2:BF:AE (ecdsa-sha2-nistp256), 07:5D:0D:55:64:62:A3:FE:02:AE:FC:CD:F6:ED:E1:D9 (ssh-dss), 8C:C3:27:A8:40:9F:80:01:61:99:D2:25:55:A3:52:30 (ssh-rsa) $ dig host2.example.com SSHFP +short 2 2 43FFD792089442F08892CA753059FD8B7FA939E990CE4687A3D1FB75 E0B8F6DE 2 1 4C2C50EDEAE6BC6107A37EAE7A05694C15CFEC53 3 1 B1D733A262E29B44A4D8A9FAF4B3B9E78302D1DB 1 2 E5382308CFD60DE4F0ACF3BCB0366314EECFC71030A28AAF75280041 5FDF81A8 3 2 545055E921E94128AF6BFE68E6E2804333628F7808B8EAE10E297B11 3270862F 1 1 DA7A6687AE4B2C242E12A67DACDC67D26E374AD5 = Design= Implement support for SHA-256 fingerprints and ECDSA keys in SSHFP records in the ipapython.ssh module (add new method fingerprint_dns_sha256). Extend ipa-client-install and the host plugin to add all types of SSHFP records to DNS. = Implementation = N/A = Feature Managment = N/A = Major configuration options and enablement = N/A = Replication = N/A = Updates and Upgrades = N/A = Dependencies = N/A = External Impact = N/A -- Jan Cholasta From jcholast at redhat.com Wed Jan 9 17:11:42 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 09 Jan 2013 18:11:42 +0100 Subject: [Freeipa-devel] [PATCH] 89 Raise ValidationError on invalid CSV values Message-ID: <50EDA4CE.1060200@redhat.com> Hi, this patch fixes . Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-89-Raise-ValidationError-on-invalid-CSV-values.patch Type: text/x-patch Size: 1393 bytes Desc: not available URL: From jcholast at redhat.com Wed Jan 9 17:13:09 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 09 Jan 2013 18:13:09 +0100 Subject: [Freeipa-devel] [PATCH] 90 Run interactive_prompt callbacks after CSV values are split Message-ID: <50EDA525.1090104@redhat.com> Hi, this patch fixes . Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-90-Run-interactive_prompt-callbacks-after-CSV-values-ar.patch Type: text/x-patch Size: 1467 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 10 04:56:12 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 10 Jan 2013 05:56:12 +0100 Subject: [Freeipa-devel] [PATCHES] 91-92 Add support for RFC 6594 SSHFP DNS records Message-ID: <50EE49EC.7000500@redhat.com> Hi, Patch 91 removes module ipapython.compat. The code that uses it doesn't work with ancient Python versions anyway, so there's no need to keep it around. Patch 92 adds support for automatic generation of RFC 6594 SSHFP DNS records to ipa-client-install and host plugin, as described in . Note that still applies. https://fedorahosted.org/freeipa/ticket/2642 Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-91-Drop-ipapython.compat.patch Type: text/x-patch Size: 3640 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-92-Add-support-for-RFC-6594-SSHFP-DNS-records.patch Type: text/x-patch Size: 3095 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 10 11:03:39 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 10 Jan 2013 12:03:39 +0100 Subject: [Freeipa-devel] [PATCHES] 91-92 Add support for RFC 6594 SSHFP DNS records In-Reply-To: <50EE49EC.7000500@redhat.com> References: <50EE49EC.7000500@redhat.com> Message-ID: <50EEA00B.8090505@redhat.com> On 10.1.2013 05:56, Jan Cholasta wrote: > Hi, > > Patch 91 removes module ipapython.compat. The code that uses it doesn't > work with ancient Python versions anyway, so there's no need to keep it > around. > > Patch 92 adds support for automatic generation of RFC 6594 SSHFP DNS > records to ipa-client-install and host plugin, as described in > . Note that > still applies. > > https://fedorahosted.org/freeipa/ticket/2642 > > Honza > Self-NACK, forgot to actually remove ipapython/compat.py in the first patch. Also removed an unnecessary try block from the second patch. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-91.1-Drop-ipapython.compat.patch Type: text/x-patch Size: 6600 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-92.1-Add-support-for-RFC-6594-SSHFP-DNS-records.patch Type: text/x-patch Size: 3017 bytes Desc: not available URL: From mkosek at redhat.com Thu Jan 10 11:19:54 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 10 Jan 2013 12:19:54 +0100 Subject: [Freeipa-devel] [PATCH] 346 permission-find no longer crashes with --targetgroup Message-ID: <50EEA3DA.7000105@redhat.com> Target Group parameter was not processed correctly which caused permission-find to always crash when this search parameter was used. Fix the crash and create a unit test case to avoid future regression. https://fedorahosted.org/freeipa/ticket/3335 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-346-permission-find-no-longer-crashes-with-targetgroup.patch Type: text/x-patch Size: 2992 bytes Desc: not available URL: From pspacek at redhat.com Thu Jan 10 13:14:06 2013 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 10 Jan 2013 14:14:06 +0100 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? 
Message-ID: <50EEBE9E.5080609@redhat.com> Hello, is there any user of CSV support built-in to IPA administration tools ("ipa" command)? Do you consider it sane or even useful? Please reply. I wanted to add single TXT record with double quotation marks (") inside the TXT data. I spent some time figuring out how it is supposed to work ... and with help of Petr^3 I managed to write the command. The resulting command (for BASH) is absolutely crazy: ipa dnsrecord-add example.test. newrec --txt-rec='"""created on 13:01:23"""' Do we really need support for this piece of insanity? Shells can do the same thing with much less pain :-) IPA with CSV support can add multiple attributes at once, e.g. ipa dnsrecord-add example.test. newrec --txt-rec=1,2,3,4,5,6,7,8,9 will add TXT records with value 1, 2, 3 etc. BASH can do the same thing (without the escaping hell): ipa dnsrecord-add example.test. newrec --txt-rec={1,2,3,4,5,6,7,8,9} and ipa dnsrecord-add example.test. newrec --txt-rec={1..9} BASH would expand to ipa dnsrecord-add example.test. newrec --txt-rec=1 --txt-rec=2 --txt-rec=3 --txt-rec=4 --txt-rec=5 --txt-rec=6 --txt-rec=7 --txt-rec=8 --txt-rec=9 -- Petr^2 Spacek From mkosek at redhat.com Thu Jan 10 13:37:17 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 10 Jan 2013 14:37:17 +0100 Subject: [Freeipa-devel] [PATCH] 347 Avoid CRL migration error message Message-ID: <50EEC40D.6080701@redhat.com> When CRL files are being migrated to a new directory, the upgrade log may contain an error message raised during MasterCRL.bin symlink migration. This is actually being caused by `chown' operation which tried to chown a symlinked file that was not migrated yet. Sort migrated files before the migration process and put symlinks at the end of the list. Also do not run chown on the symlinks as it is a redundant operation since the symlinked file will be chown'ed on its own. https://fedorahosted.org/freeipa/ticket/3336 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-347-avoid-crl-migration-error-message.patch Type: text/x-patch Size: 2210 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 10 15:11:08 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 10 Jan 2013 16:11:08 +0100 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? In-Reply-To: <50EEBE9E.5080609@redhat.com> References: <50EEBE9E.5080609@redhat.com> Message-ID: <50EEDA0C.6010308@redhat.com> Hi, On 10.1.2013 14:14, Petr Spacek wrote: > Hello, > > is there any user of CSV support built-in to IPA administration tools > ("ipa" command)? Do you consider it sane or even useful? Please reply. > No and no. We should disable CSV in new installs. As you pointed out, shell does a better job than what we have in IPA. Honza -- Jan Cholasta From rcritten at redhat.com Thu Jan 10 15:17:42 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 10 Jan 2013 10:17:42 -0500 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? In-Reply-To: <50EEDA0C.6010308@redhat.com> References: <50EEBE9E.5080609@redhat.com> <50EEDA0C.6010308@redhat.com> Message-ID: <50EEDB96.9010203@redhat.com> Jan Cholasta wrote: > Hi, > > On 10.1.2013 14:14, Petr Spacek wrote: >> Hello, >> >> is there any user of CSV support built-in to IPA administration tools >> ("ipa" command)? Do you consider it sane or even useful? Please reply. >> > > No and no. We should disable CSV in new installs. 
As you pointed out, > shell does a better job than what we have in IPA. > > Honza > We would need some sort of policy on how long to support deprecated API. That is probably the most important piece. rob From mkosek at redhat.com Thu Jan 10 15:23:48 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 10 Jan 2013 16:23:48 +0100 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? In-Reply-To: <50EEDB96.9010203@redhat.com> References: <50EEBE9E.5080609@redhat.com> <50EEDA0C.6010308@redhat.com> <50EEDB96.9010203@redhat.com> Message-ID: <50EEDD04.9080001@redhat.com> On 01/10/2013 04:17 PM, Rob Crittenden wrote: > Jan Cholasta wrote: >> Hi, >> >> On 10.1.2013 14:14, Petr Spacek wrote: >>> Hello, >>> >>> is there any user of CSV support built-in to IPA administration tools >>> ("ipa" command)? Do you consider it sane or even useful? Please reply. >>> >> >> No and no. We should disable CSV in new installs. As you pointed out, >> shell does a better job than what we have in IPA. >> >> Honza >> > > We would need some sort of policy on how long to support deprecated API. That > is probably the most important piece. > > rob AFAIU, the API will not change as we do the CSV processing only on client side and send processed entries to the server. CSV processing on old clients should still work fine. Martin From jcholast at redhat.com Thu Jan 10 15:30:49 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 10 Jan 2013 16:30:49 +0100 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? In-Reply-To: <50EEDD04.9080001@redhat.com> References: <50EEBE9E.5080609@redhat.com> <50EEDA0C.6010308@redhat.com> <50EEDB96.9010203@redhat.com> <50EEDD04.9080001@redhat.com> Message-ID: <50EEDEA9.5000209@redhat.com> On 10.1.2013 16:23, Martin Kosek wrote: > On 01/10/2013 04:17 PM, Rob Crittenden wrote: >> Jan Cholasta wrote: >>> Hi, >>> >>> On 10.1.2013 14:14, Petr Spacek wrote: >>>> Hello, >>>> >>>> is there any user of CSV support built-in to IPA administration tools >>>> ("ipa" command)? Do you consider it sane or even useful? Please reply. >>>> >>> >>> No and no. We should disable CSV in new installs. As you pointed out, >>> shell does a better job than what we have in IPA. >>> >>> Honza >>> >> >> We would need some sort of policy on how long to support deprecated API. That >> is probably the most important piece. >> >> rob > > AFAIU, the API will not change as we do the CSV processing only on client side > and send processed entries to the server. CSV processing on old clients should > still work fine. > > Martin > Correct. -- Jan Cholasta From akrivoka at redhat.com Thu Jan 10 15:43:00 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Thu, 10 Jan 2013 16:43:00 +0100 Subject: [Freeipa-devel] [PATCH] 0002 Add missing error message when adding duplicate external member to group Message-ID: <50EEE184.6080804@redhat.com> When adding a duplicate member to a group, an error message is issued, informing the user that the entry is already a member of the group. This message was missing in case of an external member. Ticket: https://fedorahosted.org/freeipa/ticket/3254 -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-akrivoka-0002-Add-missing-error-message-when-adding-duplicate-exte.patch Type: text/x-patch Size: 1331 bytes Desc: not available URL: From jdennis at redhat.com Thu Jan 10 16:11:59 2013 From: jdennis at redhat.com (John Dennis) Date: Thu, 10 Jan 2013 11:11:59 -0500 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? In-Reply-To: <50EEDD04.9080001@redhat.com> References: <50EEBE9E.5080609@redhat.com> <50EEDA0C.6010308@redhat.com> <50EEDB96.9010203@redhat.com> <50EEDD04.9080001@redhat.com> Message-ID: <50EEE84F.3080106@redhat.com> On 01/10/2013 10:23 AM, Martin Kosek wrote: > AFAIU, the API will not change as we do the CSV processing only on client side > and send processed entries to the server. CSV processing on old clients should > still work fine. Is that really true? I thought I remembered CSV parsing logic inside the plugins. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Thu Jan 10 20:34:14 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 10 Jan 2013 15:34:14 -0500 Subject: [Freeipa-devel] [PATCH] 1080 fix migration of uniqueMember Message-ID: <50EF25C6.1080400@redhat.com> We were asserting that the uniqueMember contain DN objects but weren't actually making them DN objects. A sample entry looks like: dn: cn=Group1,ou=Groups,dc=example,dc=com gidNumber: 1001 objectClass: top objectClass: groupOfUniqueNames objectClass: posixGroup cn: Group1 uniqueMember: uid=puser2,ou=People,dc=example,dc=com rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1080-migrate.patch Type: text/x-patch Size: 2122 bytes Desc: not available URL: From rcritten at redhat.com Thu Jan 10 21:53:35 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 10 Jan 2013 16:53:35 -0500 Subject: [Freeipa-devel] [PATCH] 346 permission-find no longer crashes with --targetgroup In-Reply-To: <50EEA3DA.7000105@redhat.com> References: <50EEA3DA.7000105@redhat.com> Message-ID: <50EF385F.1070703@redhat.com> Martin Kosek wrote: > Target Group parameter was not processed correctly which caused > permission-find to always crash when this search parameter was used. > Fix the crash and create a unit test case to avoid future regression. > > https://fedorahosted.org/freeipa/ticket/3335 ACK From rcritten at redhat.com Thu Jan 10 23:10:17 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 10 Jan 2013 18:10:17 -0500 Subject: [Freeipa-devel] [PATCH] 347 Avoid CRL migration error message In-Reply-To: <50EEC40D.6080701@redhat.com> References: <50EEC40D.6080701@redhat.com> Message-ID: <50EF4A59.1030905@redhat.com> Martin Kosek wrote: > When CRL files are being migrated to a new directory, the upgrade > log may contain an error message raised during MasterCRL.bin symlink > migration. This is actually being caused by `chown' operation which > tried to chown a symlinked file that was not migrated yet. > > Sort migrated files before the migration process and put symlinks > at the end of the list. Also do not run chown on the symlinks as > it is a redundant operation since the symlinked file will be > chown'ed on its own. > > https://fedorahosted.org/freeipa/ticket/3336 ACK From pviktori at redhat.com Fri Jan 11 08:59:37 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 11 Jan 2013 09:59:37 +0100 Subject: [Freeipa-devel] CSV support in IPA administration tools - to be, or not to be? 
In-Reply-To: <50EEE84F.3080106@redhat.com> References: <50EEBE9E.5080609@redhat.com> <50EEDA0C.6010308@redhat.com> <50EEDB96.9010203@redhat.com> <50EEDD04.9080001@redhat.com> <50EEE84F.3080106@redhat.com> Message-ID: <50EFD479.7010309@redhat.com> On 01/10/2013 05:11 PM, John Dennis wrote: > On 01/10/2013 10:23 AM, Martin Kosek wrote: >> AFAIU, the API will not change as we do the CSV processing only on >> client side >> and send processed entries to the server. CSV processing on old >> clients should >> still work fine. > > Is that really true? I thought I remembered CSV parsing logic inside the > plugins. > No. CSV parsing now only happens on the client. For example, when using the Web UI no CSV is involved at all. This would be a CLI change, not an API one. But I guess we need a policy for CLI changes, too. -- Petr? From mkosek at redhat.com Fri Jan 11 09:44:23 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 11 Jan 2013 10:44:23 +0100 Subject: [Freeipa-devel] [PATCH] 1080 fix migration of uniqueMember In-Reply-To: <50EF25C6.1080400@redhat.com> References: <50EF25C6.1080400@redhat.com> Message-ID: <50EFDEF7.90708@redhat.com> On 01/10/2013 09:34 PM, Rob Crittenden wrote: > We were asserting that the uniqueMember contain DN objects but weren't actually > making them DN objects. > > A sample entry looks like: > > dn: cn=Group1,ou=Groups,dc=example,dc=com > gidNumber: 1001 > objectClass: top > objectClass: groupOfUniqueNames > objectClass: posixGroup > cn: Group1 > uniqueMember: uid=puser2,ou=People,dc=example,dc=com > > rob > Works fine. ACK. Pushed to master, ipa-3-1, ipa-3-0. Martin From mkosek at redhat.com Fri Jan 11 09:53:31 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 11 Jan 2013 10:53:31 +0100 Subject: [Freeipa-devel] [PATCH] 346 permission-find no longer crashes with --targetgroup In-Reply-To: <50EF385F.1070703@redhat.com> References: <50EEA3DA.7000105@redhat.com> <50EF385F.1070703@redhat.com> Message-ID: <50EFE11B.3090506@redhat.com> On 01/10/2013 10:53 PM, Rob Crittenden wrote: > Martin Kosek wrote: >> Target Group parameter was not processed correctly which caused >> permission-find to always crash when this search parameter was used. >> Fix the crash and create a unit test case to avoid future regression. >> >> https://fedorahosted.org/freeipa/ticket/3335 > > ACK > Pushed to master, ipa-3-1, ipa-3-0. Martin From mkosek at redhat.com Fri Jan 11 09:56:17 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 11 Jan 2013 10:56:17 +0100 Subject: [Freeipa-devel] [PATCH] 347 Avoid CRL migration error message In-Reply-To: <50EF4A59.1030905@redhat.com> References: <50EEC40D.6080701@redhat.com> <50EF4A59.1030905@redhat.com> Message-ID: <50EFE1C1.5000708@redhat.com> On 01/11/2013 12:10 AM, Rob Crittenden wrote: > Martin Kosek wrote: >> When CRL files are being migrated to a new directory, the upgrade >> log may contain an error message raised during MasterCRL.bin symlink >> migration. This is actually being caused by `chown' operation which >> tried to chown a symlinked file that was not migrated yet. >> >> Sort migrated files before the migration process and put symlinks >> at the end of the list. Also do not run chown on the symlinks as >> it is a redundant operation since the symlinked file will be >> chown'ed on its own. >> >> https://fedorahosted.org/freeipa/ticket/3336 > > ACK > Pushed to master, ipa-3-1, ipa-3-0. 
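(The ordering fix quoted above can be illustrated with a short sketch. The helper name and arguments are made up and the real upgrade code differs in detail, but the two points are the same: move symlinks only after regular files, and never chown a symlink itself.)

    import os
    import shutil

    def migrate_crl_files(paths, new_dir, uid, gid):
        # Regular files first, symlinks last, so a link's target exists
        # in the new directory before the link is moved.
        for path in sorted(paths, key=os.path.islink):
            is_link = os.path.islink(path)
            dest = os.path.join(new_dir, os.path.basename(path))
            shutil.move(path, dest)
            if not is_link:
                # chown only real files; the symlinked file is chowned
                # on its own when it is migrated.
                os.chown(dest, uid, gid)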
Martin

From pviktori at redhat.com Fri Jan 11 11:59:04 2013
From: pviktori at redhat.com (Petr Viktorin)
Date: Fri, 11 Jan 2013 12:59:04 +0100
Subject: [Freeipa-devel] Redesigning LDAP code
Message-ID: <50EFFE88.3010404@redhat.com>

We had a small discussion off-list about how we want IPA's LDAP handling to look in the future. To continue the discussion publicly I've summarized the results and added some of my own ideas to a page. John gets credit for the overview (the mistakes & WTFs are mine).

The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.

-- 
Petr³

__NOTOC__

= Overview =

Ticket [https://fedorahosted.org/freeipa/ticket/2660 #2660] installer code should use ldap2

This is important to do. We really should have just one API and set of classes for dealing with LDAP. For the DN work we had to refactor a fair amount of code in order to force most things to funnel through one common code location. Because LDAP handling is so decentralized and we had so many different APIs, classes, etc., it was a large chunk of work and is only partially completed; it got a lot better but it wasn't finished.

The primary thing which needs to be resolved is our use of the Entity and Entry classes. There never should have been two almost identical classes. One or both of Entity/Entry needs to be removed.

As it stands now we have two basic ways we access LDAP results. In the installer code it's mostly via Entity/Entry objects. But in the server code (ldap2) it's done by accessing the data as returned by the python-ldap module (e.g. a list of (DN, attr_dict) tuples). We need to decide which of the two basic interfaces we're going to use and converge on it. Each approach has merits. But 3 different APIs for interacting with LDAP is 2 too many.

= Use Cases =

N/A

= Design =

== Entry representation ==

LDAP entries will be encapsulated in objects. These will perform type checking and validation (ticket #2357). They should grow from ldap2.LDAPEntry (which is currently just a "dn, data" namedtuple). These objects will behave like a dict of lists:

    entry[attrname] = [value]
    attrname in entry
    del entry[attrname]
    entry.keys(), .values(), .items()
    # but NOT `for key in entry`, see below

The keys are case-insensitive but case-preserving. We'll use lists for all attributes, even single-valued ones, because "single-valuedness" can change.

QUESTION: Would having entry.dn as an alias for entry['dn'] be useful enough to break the rules?

The object should also "remember" its original set of attributes, so we don't have to retrieve them from LDAP again when it's updated.

== The connection/backend class ==

We'll use the ldap2 class. The class has some overly specific "helper" methods like remove_principal_key or modify_password. We shouldn't add new ones, and the existing ones should be moved away eventually.

== Backwards compatibility, porting ==

For compatibility with existing plugins, the LDAPEntry object will unpack to a tuple:

    dn, entry_attrs = entry

(the entry_attrs can be entry itself, so we keep the object's validation powers). I'd rather not keep indexing (dn = entry[0]; entry_attrs = entry[1]).

We'll also temporarily add the legacy Entry interface (toTupleList, toDict, setValues, data, origDataDict, ...) to LDAPEntry.

The IPAdmin class will be subclassed from ldap2. All its uses will be gradually converted to only use the ldap2 interface, at which point IPAdmin can be removed. The backwards compatibility stuff should be removed as soon as it's unused.
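To make the entry interface sketched above more concrete, a case-insensitive, case-preserving dict of lists that also unpacks as (dn, entry_attrs) could look roughly like the following. This is only an illustration of the proposed behaviour, with made-up internals, not the actual ipaldap/ldap2 code:

    class LDAPEntry(object):
        """Sketch: case-insensitive, case-preserving dict of attribute lists."""

        def __init__(self, dn, attrs=None):
            self.dn = dn
            self._data = {}    # lower-cased attribute name -> list of values
            self._names = {}   # lower-cased attribute name -> original spelling
            for name, values in (attrs or {}).items():
                self[name] = values

        def __setitem__(self, name, values):
            if not isinstance(values, list):
                raise TypeError('attribute values must be given as a list')
            self._data[name.lower()] = values
            self._names[name.lower()] = name

        def __getitem__(self, name):
            return self._data[name.lower()]

        def __delitem__(self, name):
            del self._data[name.lower()]
            del self._names[name.lower()]

        def __contains__(self, name):
            return name.lower() in self._data

        def keys(self):
            return list(self._names.values())

        def values(self):
            return list(self._data.values())

        def items(self):
            return [(self._names[key], values)
                    for key, values in self._data.items()]

        # Backwards compatibility: `dn, entry_attrs = entry` keeps working
        # (this is also why plain `for key in entry` is not supported).
        def __iter__(self):
            yield self.dn
            yield self

With something along these lines, entry['CN'] and entry['cn'] address the same attribute, and existing `dn, entry_attrs = entry` call sites keep working while entry_attrs retains the object's validation behaviour.
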
Of course code using the raw python-ldap API will also be converted to ldap2.

= Implementation =

No additional requirements or changes discovered during the implementation phase.

= Feature Management =

N/A

= Major configuration options and enablement =

N/A

= Replication =

N/A

= Updates and Upgrades =

N/A

= Dependencies =

N/A

= External Impact =

N/A

= Design page authors =

~~~, jdennis

From mkosek at redhat.com Fri Jan 11 12:51:46 2013
From: mkosek at redhat.com (Martin Kosek)
Date: Fri, 11 Jan 2013 13:51:46 +0100
Subject: [Freeipa-devel] [PATCH] 348 Sort LDAP updates properly
Message-ID: <50F00AE2.1080907@redhat.com>

LDAP updates were sorted by number of RDNs in DN. This, however, sometimes caused updates to be executed before cn=schema updates. If the update required an objectClass or attributeType added during the cn=schema update, the update operation failed.

Fix the sorting so that the cn=schema updates are always run first and then the other updates sorted by RDN count.

https://fedorahosted.org/freeipa/ticket/3342

-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mkosek-348-sort-ldap-updates-properly.patch
Type: text/x-patch
Size: 2660 bytes
Desc: not available
URL:

From rcritten at redhat.com Fri Jan 11 14:10:24 2013
From: rcritten at redhat.com (Rob Crittenden)
Date: Fri, 11 Jan 2013 09:10:24 -0500
Subject: [Freeipa-devel] Redesigning LDAP code
In-Reply-To: <50EFFE88.3010404@redhat.com>
References: <50EFFE88.3010404@redhat.com>
Message-ID: <50F01D50.9090201@redhat.com>

Petr Viktorin wrote:
> We had a small discussion off-list about how we want IPA's LDAP handling
> to look in the future.
> To continue the discussion publicly I've summarized the results and
> added some of my own ideas to a page.
> John gets credit for the overview (the mistakes & WTFs are mine).
>
> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.
>

IIRC some of the python-ldap code is used b/c ldap2 may require a configuration to be set up prior to working. That is one of the nice things about the IPAdmin interface, it is much easier to create connections to other hosts.

rob

From jdennis at redhat.com Fri Jan 11 14:48:39 2013
From: jdennis at redhat.com (John Dennis)
Date: Fri, 11 Jan 2013 09:48:39 -0500
Subject: [Freeipa-devel] Redesigning LDAP code
In-Reply-To: <50F01D50.9090201@redhat.com>
References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com>
Message-ID: <50F02647.6040805@redhat.com>

On 01/11/2013 09:10 AM, Rob Crittenden wrote:
> Petr Viktorin wrote:
>> We had a small discussion off-list about how we want IPA's LDAP handling
>> to look in the future.
>> To continue the discussion publicly I've summarized the results and
>> added some of my own ideas to a page.
>> John gets credit for the overview (the mistakes & WTFs are mine).
>>
>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below.
>>
>
> IIRC some of the python-ldap code is used b/c ldap2 may require a
> configuration to be set up prior to working. That is one of the nice
> things about the IPAdmin interface, it is much easier to create
> connections to other hosts.

Good point. But I don't believe that issue affects having a common API or a single point where LDAP data flows through. It might mean having more than one initialization method or subclassing.

-- 
John Dennis

Looking to carve out IT costs?
www.redhat.com/carveoutcosts/ From rcritten at redhat.com Fri Jan 11 14:55:01 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 11 Jan 2013 09:55:01 -0500 Subject: [Freeipa-devel] Redesigning LDAP code In-Reply-To: <50F02647.6040805@redhat.com> References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com> <50F02647.6040805@redhat.com> Message-ID: <50F027C5.3000607@redhat.com> John Dennis wrote: > On 01/11/2013 09:10 AM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> We had a small discussion off-list about how we want IPA's LDAP handling >>> to look in the future. >>> To continue the discussion publicly I've summarized the results and >>> added some of my own ideas to a page. >>> John gets credit for the overview (the mistakes & WTFs are mine). >>> >>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below. >>> >>> >> >> IIRC some of the the python-ldap code is used b/c ldap2 may require a >> configuration to be set up prior to working. That is one of the nice >> things about the IPAdmin interface, it is much easier to create >> connections to other hosts. > > Good point. But I don't believe that issue affects having a common API > or a single point where LDAP data flows through. It might mean having > more than one initialization method or subclassing. > > Right. We may need to decouple from api a bit. I haven't looked at this for a while but one of the problems is that api locks its values after finalization which can make things a bit inflexible. We use some nasty override code in some place but it isn't something I'd want to see spread further. rob From pviktori at redhat.com Fri Jan 11 15:04:29 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 11 Jan 2013 16:04:29 +0100 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: <50EEEF49.4040909@redhat.com> References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> Message-ID: <50F029FD.8030409@redhat.com> Hello list, This discussion was started in private; I'll continue it here. On 01/10/2013 05:41 PM, John Dennis wrote: > On 01/10/2013 04:27 AM, Petr Viktorin wrote: >> On 01/09/2013 03:55 PM, John Dennis wrote: > >>>> And I could work on improving the i18n/translations infrastructure, >>>> starting by writing up a RFE+design. > >>> Could you elaborate as to what you perceive as the current problems and >>> what this work would address. > >> Here are my notes: > >> - Use fake translations for tests > > We already do (but perhaps not sufficiently). I mean use it in *all* tests, to ensure all the right things are translated and weird characters are handled well. See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html >> - Split up huge strings so the entire text doesn't have to be >> retranslated each time something changes/is added > > Good idea. But one question I have is should we be optimizing for our > programmers time or the translators time? The Transifex tool should make > available to translators similar existing translations (in fact it > might, I seem to recall some functionality in this area). Wouldn't it be > better to address this issue in Transifex where all projects would benefit? > > Also the exact same functionality is needed to support release versions. > The strings between releases are often close but not identical. 
The > Transifex tool should make available a close match from a previous > version to the translator working on a new version (or visa versa). See > your issue below concerning versions. > > IMHO this is a Transifex issue which needs to be solved there, not > something we should be investing precious IPA programmers time on. Plus > if it's solved in Transifex it's a *huge* win for *everyone*, not just IPA. Huh? Splitting the strings provides additional information (paragraph/context boundaries) that Transifex can't get otherwise. From what I hear it's a pretty standard technique when working with gettext. For typos, gettext has the "fuzzy" functionality that we explicitly turn off. I think we're on our own here. >> - Keep a history/repo of the translations, since Transifex only stores >> the latest version > > We already do keep a history, it's in git. It's not updated often enough. If I mess something up before a release and Transifex gets wiped, or if a rogue translator deletes some translations, the work is gone. >> - Update the source strings on Transifex more often (ideally as soon as >> patches are pushed) > > Yes, great idea, this would be really useful and is necessary. > >> - Break Git dependencies: make it possible generate the POT in an >> unpacked tarball > > Are you talking about the fact our scripts invoke git to determine what > files to process? If so then yes, this would be a good dependency to get > rid of. However it does mean we somehow have to maintain a manifest list > of some sort somewhere. A directory listing is fine IMO. We use it for more critical things, like loading plugins, without any trouble. Also, when run in a Git repo the Makefile can compare the file list with what Git says and warn accordingly. >> - Figure out how to best share messages across versions (2.x vs. 3.x) so >> they only have to be translated once > > There is a crying need for this, but isn't this a Transifex issue? Why > would we solving this in IPA? What about SSSD and every other project, > they all have identical issues. As far as I can tell Transifex has never > addressed this issue sufficiently (see above) and the onus is on them to > do so. I don't think waiting for Transifex will solve the problem. >> - Clean up checked-in PO files even more, for nicer diffs > > A nice feature, but I'm wondering to extent we're currently suffering > because of this. It's rare that we have to compare PO files. Plus diff > is not well suited for comparing PO's because PO files with equivalent > data can be formatted differently. That's why I wrote some tools to read > PO files, normalize the contents and then do a comparison. Anyway my top > level question is is this something we really need at this point? You're right that files have to be normalized to diff well.That's actually the point here :) Anyway I'm just thinking of sorting the PO alphabetically - an extra option to msgattrib should do it. >> - Automate & document the process so any dev can do it > > Excellent goal, we're not too far from it now, but of all the things on > the list this is the most important. -- Petr? 
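As an illustration of the "fake translations for tests" idea above: the test suite could install a gettext stand-in that wraps every msgid in non-ASCII markers, so anything that reaches the output unwrapped was never routed through translation. A minimal sketch with assumed helper names, not the actual test code:

    # -*- coding: utf-8 -*-

    FAKE_PREFIX = u'\u25b6'   # BLACK RIGHT-POINTING TRIANGLE
    FAKE_SUFFIX = u'\u25c0'   # BLACK LEFT-POINTING TRIANGLE

    def fake_gettext(msgid):
        """Stand-in for _(): wrap the msgid in non-ASCII markers."""
        # The markers make it obvious which strings went through translation,
        # and exercise unicode handling at the same time.
        return u'%s%s%s' % (FAKE_PREFIX, msgid, FAKE_SUFFIX)

    def assert_translated(text):
        """Fail the test if the text did not come from the fake catalog."""
        if not (text.startswith(FAKE_PREFIX) and text.endswith(FAKE_SUFFIX)):
            raise AssertionError('not routed through translation: %r' % text)

    # Example:
    #   _ = fake_gettext
    #   assert_translated(_(u'Added user "%(value)s"'))   # passes
    #   assert_translated(u'Added user "%(value)s"')      # raises

Wiring such a catalog into the test fixtures would catch both strings that were never marked for translation and places where non-ASCII output breaks.
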
From jdennis at redhat.com Fri Jan 11 15:11:02 2013 From: jdennis at redhat.com (John Dennis) Date: Fri, 11 Jan 2013 10:11:02 -0500 Subject: [Freeipa-devel] Redesigning LDAP code In-Reply-To: <50F027C5.3000607@redhat.com> References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com> <50F02647.6040805@redhat.com> <50F027C5.3000607@redhat.com> Message-ID: <50F02B86.1040000@redhat.com> On 01/11/2013 09:55 AM, Rob Crittenden wrote: > John Dennis wrote: >> On 01/11/2013 09:10 AM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> We had a small discussion off-list about how we want IPA's LDAP handling >>>> to look in the future. >>>> To continue the discussion publicly I've summarized the results and >>>> added some of my own ideas to a page. >>>> John gets credit for the overview (the mistakes & WTFs are mine). >>>> >>>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below. >>>> >>>> >>> >>> IIRC some of the the python-ldap code is used b/c ldap2 may require a >>> configuration to be set up prior to working. That is one of the nice >>> things about the IPAdmin interface, it is much easier to create >>> connections to other hosts. >> >> Good point. But I don't believe that issue affects having a common API >> or a single point where LDAP data flows through. It might mean having >> more than one initialization method or subclassing. >> >> > > Right. We may need to decouple from api a bit. I haven't looked at this > for a while but one of the problems is that api locks its values after > finalization which can make things a bit inflexible. We use some nasty > override code in some place but it isn't something I'd want to see > spread further. Ah, object locking, yes I've been bitten by that too. I'm not sure I recall having problems with locked ldap objects but I've certainly have been frustrated with trying to modify other objects which were locked at api creation time. I wonder if the object locking Jason introduced at the early stages of our development is an example of a good idea that's not wonderful in practice. You either have to find the exact moment where an object gets created and update it then which is sometimes awkward or worse impossible or you have to resort to overriding it with the setattr() big hammer. Judging by our use of setattr it's obvious there are numerous places we need to modify a locked object. It's not clear to me if the issues with modifying a locked object are indicative of problems with the locking concept or bad code structure which forces us to violate the locking concept. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pviktori at redhat.com Fri Jan 11 15:20:54 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 11 Jan 2013 16:20:54 +0100 Subject: [Freeipa-devel] Redesigning LDAP code In-Reply-To: <50F027C5.3000607@redhat.com> References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com> <50F02647.6040805@redhat.com> <50F027C5.3000607@redhat.com> Message-ID: <50F02DD6.8040402@redhat.com> On 01/11/2013 03:55 PM, Rob Crittenden wrote: > John Dennis wrote: >> On 01/11/2013 09:10 AM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> We had a small discussion off-list about how we want IPA's LDAP >>>> handling >>>> to look in the future. >>>> To continue the discussion publicly I've summarized the results and >>>> added some of my own ideas to a page. >>>> John gets credit for the overview (the mistakes & WTFs are mine). 
>>>> >>>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below. >>>> >>>> >>> >>> IIRC some of the the python-ldap code is used b/c ldap2 may require a >>> configuration to be set up prior to working. That is one of the nice >>> things about the IPAdmin interface, it is much easier to create >>> connections to other hosts. >> >> Good point. But I don't believe that issue affects having a common API >> or a single point where LDAP data flows through. It might mean having >> more than one initialization method or subclassing. Yes. I looked at the code again and saw the same thing. Fortunately, there's not too much that needs the api object: creating the connection, `get_ipa_config` which shouldn't really be at this level, CrudBackend-specific things, and `normalize_dn` (which I'd really like to remove but it's probably not worth the effort). My working plan now is to have a ipaldap.LDAPBackend base class (please give me a better name), and subclass ldap2 & IPAdmin from that. IPAdmin would just add the legacy API which we'll try to move away from; ldap2 would add the api-specific setup and CrudBackend bits (plus its own legacy methods). So, the ticket shouldn't really be named "installer code should use ldap2" :) > Right. We may need to decouple from api a bit. I haven't looked at this > for a while but one of the problems is that api locks its values after > finalization which can make things a bit inflexible. We use some nasty > override code in some place but it isn't something I'd want to see > spread further. -- Petr? From rcritten at redhat.com Fri Jan 11 15:23:02 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 11 Jan 2013 10:23:02 -0500 Subject: [Freeipa-devel] Redesigning LDAP code In-Reply-To: <50F02B86.1040000@redhat.com> References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com> <50F02647.6040805@redhat.com> <50F027C5.3000607@redhat.com> <50F02B86.1040000@redhat.com> Message-ID: <50F02E56.8000500@redhat.com> John Dennis wrote: > On 01/11/2013 09:55 AM, Rob Crittenden wrote: >> John Dennis wrote: >>> On 01/11/2013 09:10 AM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> We had a small discussion off-list about how we want IPA's LDAP >>>>> handling >>>>> to look in the future. >>>>> To continue the discussion publicly I've summarized the results and >>>>> added some of my own ideas to a page. >>>>> John gets credit for the overview (the mistakes & WTFs are mine). >>>>> >>>>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below. >>>>> >>>>> >>>> >>>> IIRC some of the the python-ldap code is used b/c ldap2 may require a >>>> configuration to be set up prior to working. That is one of the nice >>>> things about the IPAdmin interface, it is much easier to create >>>> connections to other hosts. >>> >>> Good point. But I don't believe that issue affects having a common API >>> or a single point where LDAP data flows through. It might mean having >>> more than one initialization method or subclassing. >>> >>> >> >> Right. We may need to decouple from api a bit. I haven't looked at this >> for a while but one of the problems is that api locks its values after >> finalization which can make things a bit inflexible. We use some nasty >> override code in some place but it isn't something I'd want to see >> spread further. > > Ah, object locking, yes I've been bitten by that too. 
I'm not sure I > recall having problems with locked ldap objects but I've certainly have > been frustrated with trying to modify other objects which were locked at > api creation time. > > I wonder if the object locking Jason introduced at the early stages of > our development is an example of a good idea that's not wonderful in > practice. You either have to find the exact moment where an object gets > created and update it then which is sometimes awkward or worse > impossible or you have to resort to overriding it with the setattr() big > hammer. Judging by our use of setattr it's obvious there are numerous > places we need to modify a locked object. > > It's not clear to me if the issues with modifying a locked object are > indicative of problems with the locking concept or bad code structure > which forces us to violate the locking concept. > > The reasoning IIRC was we didn't want a plugin developer mucking with a lot of this for their one plugin as it would affect the entire server. rob From mkosek at redhat.com Fri Jan 11 15:39:15 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 11 Jan 2013 16:39:15 +0100 Subject: [Freeipa-devel] [PATCH] 349 Test NetBIOS name clash before creating a trust Message-ID: <50F03223.8070802@redhat.com> Give a clear message about what is wrong with current Trust settings before letting AD to return a confusing error message. https://fedorahosted.org/freeipa/ticket/3193 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-349-test-netbios-name-clash-before-creating-a-trust.patch Type: text/x-patch Size: 1299 bytes Desc: not available URL: From jdennis at redhat.com Fri Jan 11 16:23:03 2013 From: jdennis at redhat.com (John Dennis) Date: Fri, 11 Jan 2013 11:23:03 -0500 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: <50F029FD.8030409@redhat.com> References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> Message-ID: <50F03C67.5050304@redhat.com> On 01/11/2013 10:04 AM, Petr Viktorin wrote: > Hello list, > This discussion was started in private; I'll continue it here. > > On 01/10/2013 05:41 PM, John Dennis wrote: >> On 01/10/2013 04:27 AM, Petr Viktorin wrote: >>> On 01/09/2013 03:55 PM, John Dennis wrote: >> >>>>> And I could work on improving the i18n/translations infrastructure, >>>>> starting by writing up a RFE+design. >> >>>> Could you elaborate as to what you perceive as the current problems and >>>> what this work would address. >> >>> Here are my notes: >> >>> - Use fake translations for tests >> >> We already do (but perhaps not sufficiently). > > I mean use it in *all* tests, to ensure all the right things are > translated and weird characters are handled well. > See https://www.redhat.com/archives/freeipa-devel/2012-October/msg00278.html Ah yes, I like the idea of a test domain for strings, this is a good idea. Not only would it exercise our i18n code more but it could insulate the tests from string changes (the test would look for a canonical string in the test domain) > >>> - Split up huge strings so the entire text doesn't have to be >>> retranslated each time something changes/is added >> >> Good idea. But one question I have is should we be optimizing for our >> programmers time or the translators time? 
The Transifex tool should make >> available to translators similar existing translations (in fact it >> might, I seem to recall some functionality in this area). Wouldn't it be >> better to address this issue in Transifex where all projects would benefit? >> >> Also the exact same functionality is needed to support release versions. >> The strings between releases are often close but not identical. The >> Transifex tool should make available a close match from a previous >> version to the translator working on a new version (or visa versa). See >> your issue below concerning versions. >> >> IMHO this is a Transifex issue which needs to be solved there, not >> something we should be investing precious IPA programmers time on. Plus >> if it's solved in Transifex it's a *huge* win for *everyone*, not just IPA. > > Huh? Splitting the strings provides additional information > (paragraph/context boundaries) that Transifex can't get otherwise. From > what I hear it's a pretty standard technique when working with gettext. I'm not sure how splitting text into smaller units gives more context but I can see the argument for each msgid being a logical paragraph. We don't have too many multi-paragraph strings now so it shouldn't be too involved. > > For typos, gettext has the "fuzzy" functionality that we explicitly turn > off. I think we're on our own here. Be very afraid of turning on fuzzy matching. Before we moved to TX we used the entire gnu tool chain. I discovered a number of our PO files were horribly corrupted. With a lot of work I traced this down to fuzzy matches. If memory serves me right here is what happened. When a msgstr was absent a fuzzy match was performed and inserted as a candidate msgstr. Somehow the fuzzy candidates got accepted as actual msgstr's. I'm not sure if we ever figured out how this happened. The two most likely explanations were 1) a known bug in TX that stripped the fuzzy flag off the msgstr or 2) a translator who blindly accepted all "TX suggestions". (A suggestion in TX comes from a fuzzy match). But the real problem is the fuzzy matching is horribly bad. Most of the fuzzy suggestions (primarily on short strings) were wildly incorrect. I had to go back to a number of PO files and manually locate all fuzzy suggestions that had been promoted to legitimate msgstr's. A tedious process I hope to never repeat. BTW, if memory serves me correctly the fuzzy suggestions got into the PO files in the first place because we were running the full gnu tool chain (sorry off the top of my head I don't recall exactly which component inserts the fuzzy suggestion), but I think we've since turned that off, for a very good reason. > >>> - Keep a history/repo of the translations, since Transifex only stores >>> the latest version >> >> We already do keep a history, it's in git. > > It's not updated often enough. If I mess something up before a release > and Transifex gets wiped, or if a rogue translator deletes some > translations, the work is gone. Yes, updating more frequently is an excellent goal. > >>> - Update the source strings on Transifex more often (ideally as soon as >>> patches are pushed) >> >> Yes, great idea, this would be really useful and is necessary. >> >>> - Break Git dependencies: make it possible generate the POT in an >>> unpacked tarball >> >> Are you talking about the fact our scripts invoke git to determine what >> files to process? If so then yes, this would be a good dependency to get >> rid of. 
However it does mean we somehow have to maintain a manifest list >> of some sort somewhere. > > A directory listing is fine IMO. We use it for more critical things, > like loading plugins, without any trouble. > Also, when run in a Git repo the Makefile can compare the file list with > what Git says and warn accordingly. How do you which files in a directory should be scanned for translations? Perhaps it doesn't hurt to scan every file, we never tried scanning inappropriate files so I don't know what the consequence would be. A little history: Originally there was a manifest of every file to be scanned. Simo made some fixes to the i18n infrastructure and didn't like the manifest idea (it has an obvious downside, it has to be updated when source files are added/deleted/moved). Simo used git to get a list of source files. But that mechanism depended on identifying a source files via it's filename extension. Our scripts don't have an extension so those were hardcoded just like the original manifest. It didn't eliminate the maintenance problem, it just made it smaller. To my mind that was the worst of both worlds, it introduced a git dependency but didn't solve the manual manifest problem. One could do something else, if a file doesn't have an extension you can read the beginning of the file and look for a shebang (#!) interpreter line, that would identify the file as a script and hence a translation candidate. But perhaps your idea of scan everything and throw away (or ignore) anything which won't scan correctly because because it's not valid input is the best. As I said I don't think we ever explored that, but it might also be because in some instances we have to tell the scanner what type of file the input is (a chicken-n-egg problem). But it's an interesting idea and we should see how it works in practice. > >>> - Figure out how to best share messages across versions (2.x vs. 3.x) so >>> they only have to be translated once >> >> There is a crying need for this, but isn't this a Transifex issue? Why >> would we solving this in IPA? What about SSSD and every other project, >> they all have identical issues. As far as I can tell Transifex has never >> addressed this issue sufficiently (see above) and the onus is on them to >> do so. > > I don't think waiting for Transifex will solve the problem. Then what is your suggestion? Pull every msgid from every version, put it into one massive unified pot file and then split the resulting unified PO files back into version specific PO files? Well I suppose we wouldn't have to split the PO files, they could just contain translations that are never referenced, it would make them larger but wouldn't hurt anything. Of course merging the strings from every version into one unified POT would play havoc with the msgid references (where is the string located) unless the filename was modified to include it's git branch. Just thinking off the top of my head. > >>> - Clean up checked-in PO files even more, for nicer diffs >> >> A nice feature, but I'm wondering to extent we're currently suffering >> because of this. It's rare that we have to compare PO files. Plus diff >> is not well suited for comparing PO's because PO files with equivalent >> data can be formatted differently. That's why I wrote some tools to read >> PO files, normalize the contents and then do a comparison. Anyway my top >> level question is is this something we really need at this point? 
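The normalize-and-compare approach mentioned in the quoted paragraph could be sketched along these lines, assuming the polib library; this is only an illustration, not the actual tooling referred to above:

    import polib

    def normalized_messages(path):
        """Load a PO file and key its translations by (msgid, msgctxt)."""
        return {
            (entry.msgid, entry.msgctxt): entry.msgstr
            for entry in polib.pofile(path)
            if not entry.obsolete
        }

    def changed_msgids(path_a, path_b):
        """Yield msgids whose translations differ between two PO files."""
        a = normalized_messages(path_a)
        b = normalized_messages(path_b)
        for key in sorted(set(a) | set(b), key=lambda k: (k[0], k[1] or u'')):
            if a.get(key) != b.get(key):
                yield key[0]

On the gettext side, msgattrib's --sort-output option gives a similar normalization for the checked-in files themselves.
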
> > You're right that files have to be normalized to diff well.That's > actually the point here :) > Anyway I'm just thinking of sorting the PO alphabetically - an extra > option to msgattrib should do it. > >>> - Automate & document the process so any dev can do it >> >> Excellent goal, we're not too far from it now, but of all the things on >> the list this is the most important. > -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From lroot at redhat.com Fri Jan 11 17:47:25 2013 From: lroot at redhat.com (Lynn Root) Date: Fri, 11 Jan 2013 09:47:25 -0800 Subject: [Freeipa-devel] [PATCH 0005] Clarified error message with ipa-client-automount In-Reply-To: <50BCA720.50305@redhat.com> References: <1129149924.5633962.1353923519895.JavaMail.root@redhat.com> <50B926A9.5030107@redhat.com> <50BCA720.50305@redhat.com> Message-ID: <50F0502D.6010907@redhat.com> On Mon 03 Dec 2012 05:20:32 AM PST, Lynn Root wrote: > On 11/30/2012 10:35 PM, Rob Crittenden wrote: >> Lynn Root wrote: >>> Returns a clearer hint when user is running ipa-client-automount with >>> possible firewall up and blocking need ports. >>> >>> Not sure if this patch is worded correctly in order to address the >>> potential firewall block when running ipa-client-automount. Perhaps a >>> different error should be thrown, rather than NOT_IPA_SERVER. >>> >>> Ticket: https://fedorahosted.org/freeipa/ticket/3080 >> >> Tomas made a similar change recently in ipa-client-install which >> includes more information on the ports we need. You may want to take >> a look at that. It was for ticket >> https://fedorahosted.org/freeipa/ticket/2816 >> >> rob > Thank you Rob - I adapted the same approach in this updated patch. Let > me know if it addresses the blocked port issue better. > > Thanks! Just bumping this thread - I think this might have fallen on the way-side; certainly lost track of it myself after returning home/holidays. However I noticed that this ticket (https://fedorahosted.org/freeipa/ticket/3080) now has an RFE tag - don't _believe_ that was there when I started working on it in late November. I believe the whole design doc conversation was going on around then. I assume I'll need to start one for this? Thanks! -- Lynn Root @roguelynn Associate Software Engineer Red Hat, Inc From pspacek at redhat.com Fri Jan 11 17:47:52 2013 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 11 Jan 2013 18:47:52 +0100 Subject: [Freeipa-devel] [PATCH 0107] Don't fail if idnsSOAserial attribute is missing in LDAP Message-ID: <50F05048.80402@redhat.com> Hello, Don't fail if idnsSOAserial attribute is missing in LDAP. DNS zones created on remote IPA 3.0 server don't have idnsSOAserial attribute present in LDAP. https://bugzilla.redhat.com/show_bug.cgi?id=894131 Attached patch contains the minimal set of changes need for resurrecting BIND. In configurations with serial auto-increment: - enabled (IPA 3.0+ default) - some new serial is written back to LDAP nearly immediately - disabled - the attribute will be missing forever -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... 
Name: bind-dyndb-ldap-pspacek-0107-Don-t-fail-if-idnsSOAserial-attribute-is-missing-in-.patch Type: text/x-patch Size: 2023 bytes Desc: not available URL: From jfenal at gmail.com Fri Jan 11 19:44:27 2013 From: jfenal at gmail.com (=?UTF-8?B?SsOpcsO0bWUgRmVuYWw=?=) Date: Fri, 11 Jan 2013 20:44:27 +0100 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: <50F03C67.5050304@redhat.com> References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> <50F03C67.5050304@redhat.com> Message-ID: 2013/1/11 John Dennis > On 01/11/2013 10:04 AM, Petr Viktorin wrote: > >> Hello list, >> This discussion was started in private; I'll continue it here. >> >> On 01/10/2013 05:41 PM, John Dennis wrote: >> >>> On 01/10/2013 04:27 AM, Petr Viktorin wrote: >>> >>>> On 01/09/2013 03:55 PM, John Dennis wrote: >>>> >>> >>> And I could work on improving the i18n/translations infrastructure, >>>>>> starting by writing up a RFE+design. >>>>>> >>>>> >>> Could you elaborate as to what you perceive as the current problems and >>>>> what this work would address. >>>>> >>>> >>> Here are my notes: >>>> >>> >>> - Use fake translations for tests >>>> >>> >>> We already do (but perhaps not sufficiently). >>> >> >> I mean use it in *all* tests, to ensure all the right things are >> translated and weird characters are handled well. >> See https://www.redhat.com/**archives/freeipa-devel/2012-** >> October/msg00278.html >> > > Ah yes, I like the idea of a test domain for strings, this is a good idea. > Not only would it exercise our i18n code more but it could insulate the > tests from string changes (the test would look for a canonical string in > the test domain) > FWIW, KDE also uses an empty .po (e.g. empty translated messages) in order to easier spot strings not marked for translations. > - Split up huge strings so the entire text doesn't have to be >>>> retranslated each time something changes/is added >>>> >>> >>> Good idea. But one question I have is should we be optimizing for our >>> programmers time or the translators time? The Transifex tool should make >>> available to translators similar existing translations (in fact it >>> might, I seem to recall some functionality in this area). Wouldn't it be >>> better to address this issue in Transifex where all projects would >>> benefit? >>> >>> Also the exact same functionality is needed to support release versions. >>> The strings between releases are often close but not identical. The >>> Transifex tool should make available a close match from a previous >>> version to the translator working on a new version (or visa versa). See >>> your issue below concerning versions. >>> >>> IMHO this is a Transifex issue which needs to be solved there, not >>> something we should be investing precious IPA programmers time on. Plus >>> if it's solved in Transifex it's a *huge* win for *everyone*, not just >>> IPA. >>> >> >> Huh? Splitting the strings provides additional information >> (paragraph/context boundaries) that Transifex can't get otherwise. From >> what I hear it's a pretty standard technique when working with gettext. >> > > I'm not sure how splitting text into smaller units gives more context but > I can see the argument for each msgid being a logical paragraph. We don't > have too many multi-paragraph strings now so it shouldn't be too involved. 
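To illustrate the splitting under discussion: instead of one msgid covering a whole multi-paragraph __doc__ block, each logical paragraph can be marked for translation separately and joined afterwards, so a one-line change only invalidates the msgstr for that paragraph. A simplified, hypothetical example using the stdlib gettext _() as a stand-in (the variable names and the exact FreeIPA mechanism differ):

    from gettext import gettext as _   # stand-in for the project's _()

    # One huge msgid: any edit anywhere forces the whole block to be
    # retranslated (or fuzzy-matched).
    DOC_AS_ONE_STRING = _("""Manage user entries.

    EXAMPLES:

     Add a new user:
       ipa user-add jdoe --first=John --last=Doe
    """)

    # Split per logical paragraph: an edit only touches the paragraph
    # that actually changed.
    DOC_AS_PARAGRAPHS = u'\n\n'.join([
        _("Manage user entries."),
        _("EXAMPLES:"),
        _(" Add a new user:\n   ipa user-add jdoe --first=John --last=Doe"),
    ])

Because the split version still passes literal strings to _(), xgettext-style extraction keeps working unchanged.
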
> One issue also discussed on this list is the problem of 100+ lines strings in man pages generated from ___doc___ tags in scripts. Those are a _real_ pain for translators to maintain when only one line is changed. Didn't have the time yet to explore splitting those strings, I need to take some to do so. > >> For typos, gettext has the "fuzzy" functionality that we explicitly turn >> off. I think we're on our own here. >> > > Be very afraid of turning on fuzzy matching. Before we moved to TX we used > the entire gnu tool chain. I discovered a number of our PO files were > horribly corrupted. With a lot of work I traced this down to fuzzy matches. > If memory serves me right here is what happened. > > When a msgstr was absent a fuzzy match was performed and inserted as a > candidate msgstr. Somehow the fuzzy candidates got accepted as actual > msgstr's. I'm not sure if we ever figured out how this happened. The two > most likely explanations were 1) a known bug in TX that stripped the fuzzy > flag off the msgstr or 2) a translator who blindly accepted all "TX > suggestions". (A suggestion in TX comes from a fuzzy match). > > But the real problem is the fuzzy matching is horribly bad. Most of the > fuzzy suggestions (primarily on short strings) were wildly incorrect. > > I had to go back to a number of PO files and manually locate all fuzzy > suggestions that had been promoted to legitimate msgstr's. A tedious > process I hope to never repeat. > > BTW, if memory serves me correctly the fuzzy suggestions got into the PO > files in the first place because we were running the full gnu tool chain > (sorry off the top of my head I don't recall exactly which component > inserts the fuzzy suggestion), but I think we've since turned that off, for > a very good reason. > > > >> - Keep a history/repo of the translations, since Transifex only stores >>>> the latest version >>>> >>> >>> We already do keep a history, it's in git. >>> >> >> It's not updated often enough. If I mess something up before a release >> and Transifex gets wiped, or if a rogue translator deletes some >> translations, the work is gone. >> > > Yes, updating more frequently is an excellent goal. > Yes, please! Having nothing to translate for months on Transifex is not fun. Having a mass of new strings to translate every once in a while when a new set of strings is eventually pushed to Transifex when someone remembers to so is certainly not. - Update the source strings on Transifex more often (ideally as soon as >>>> patches are pushed) >>>> >>> >>> Yes, great idea, this would be really useful and is necessary. >>> >>> - Break Git dependencies: make it possible generate the POT in an >>>> unpacked tarball >>>> >>> >>> Are you talking about the fact our scripts invoke git to determine what >>> files to process? If so then yes, this would be a good dependency to get >>> rid of. However it does mean we somehow have to maintain a manifest list >>> of some sort somewhere. >>> >> >> A directory listing is fine IMO. We use it for more critical things, >> like loading plugins, without any trouble. >> Also, when run in a Git repo the Makefile can compare the file list with >> what Git says and warn accordingly. >> > > How do you which files in a directory should be scanned for translations? > Perhaps it doesn't hurt to scan every file, we never tried scanning > inappropriate files so I don't know what the consequence would be. > > A little history: Originally there was a manifest of every file to be > scanned. 
Simo made some fixes to the i18n infrastructure and didn't like > the manifest idea (it has an obvious downside, it has to be updated when > source files are added/deleted/moved). Simo used git to get a list of > source files. But that mechanism depended on identifying a source files via > it's filename extension. Our scripts don't have an extension so those were > hardcoded just like the original manifest. It didn't eliminate the > maintenance problem, it just made it smaller. To my mind that was the worst > of both worlds, it introduced a git dependency but didn't solve the manual > manifest problem. > > One could do something else, if a file doesn't have an extension you can > read the beginning of the file and look for a shebang (#!) interpreter > line, that would identify the file as a script and hence a translation > candidate. > At least this is scriptable easily using file(1). > But perhaps your idea of scan everything and throw away (or ignore) > anything which won't scan correctly because because it's not valid input is > the best. As I said I don't think we ever explored that, but it might also > be because in some instances we have to tell the scanner what type of file > the input is (a chicken-n-egg problem). But it's an interesting idea and we > should see how it works in practice. > > >> - Figure out how to best share messages across versions (2.x vs. 3.x) so >>>> they only have to be translated once >>>> >>> >>> There is a crying need for this, but isn't this a Transifex issue? Why >>> would we solving this in IPA? What about SSSD and every other project, >>> they all have identical issues. As far as I can tell Transifex has never >>> addressed this issue sufficiently (see above) and the onus is on them to >>> do so. >>> >> >> I don't think waiting for Transifex will solve the problem. >> > > Then what is your suggestion? > > Pull every msgid from every version, put it into one massive unified pot > file and then split the resulting unified PO files back into version > specific PO files? > > Well I suppose we wouldn't have to split the PO files, they could just > contain translations that are never referenced, it would make them larger > but wouldn't hurt anything. > > Of course merging the strings from every version into one unified POT > would play havoc with the msgid references (where is the string located) > unless the filename was modified to include it's git branch. > I'd see a few remarks here: - this massive .po file would grow wildly, especially when a typo is corrected in huge strings (__doc___), when additional sentences are added to those, etc. - breaking down bigger strings in smaller ones will certainly help here in avoiding duplicated content, - in Transifex, it is easy to upload a .po onto another branch, and only untranslated matching strings would be updated. I used it on ananconda where there are multiple branches between Fedora & RHEL5/6 & master, that worked easily without breaking anything. > Just thinking off the top of my head. > > >> - Clean up checked-in PO files even more, for nicer diffs >>>> >>> >>> A nice feature, but I'm wondering to extent we're currently suffering >>> because of this. It's rare that we have to compare PO files. Plus diff >>> is not well suited for comparing PO's because PO files with equivalent >>> data can be formatted differently. That's why I wrote some tools to read >>> PO files, normalize the contents and then do a comparison. Anyway my top >>> level question is is this something we really need at this point? 
>>> >> >> You're right that files have to be normalized to diff well.That's >> actually the point here :) >> > That would be nice before getting translated content from Transifex, or any other sources of translation, since every single po editor seems to use its own format, being one massive line including \n, or multi line by ending and reopening quotes to keep everything in 72/80 cols. NB: That one is usually not a problem for translators (except those using a mail based review process, where having 72 cols may help, or not). > Anyway I'm just thinking of sorting the PO alphabetically - an extra >> option to msgattrib should do it. >> >> - Automate & document the process so any dev can do it >>>> >>> >>> Excellent goal, we're not too far from it now, but of all the things on >>> the list this is the most important. >>> >> Indeed. Please setup automation on pushing strings to Transifex, ideally in a "master" branch. Then each time a new significant release is prepared: - advertise the strings freeze, - possibly create a specific branch in Transifex, using available translations from the master branch, - advertise the new branch for translators to work on it. My 2 cents, J. -- J?r?me Fenal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdennis at redhat.com Fri Jan 11 20:16:13 2013 From: jdennis at redhat.com (John Dennis) Date: Fri, 11 Jan 2013 15:16:13 -0500 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> <50F03C67.5050304@redhat.com> Message-ID: <50F0730D.4050508@redhat.com> On 01/11/2013 02:44 PM, J?r?me Fenal wrote: > 2013/1/11 John Dennis > Thank you J?r?me for your insights as a translator. We have a lop-sided perspective mostly from the developer point of view. We need to better understand the translator's perspective. > I'm not sure how splitting text into smaller units gives more > context but I can see the argument for each msgid being a logical > paragraph. We don't have too many multi-paragraph strings now so it > shouldn't be too involved. > > > One issue also discussed on this list is the problem of 100+ lines > strings in man pages generated from ___doc___ tags in scripts. > Those are a _real_ pain for translators to maintain when only one line > is changed. I still think TX should attempt to match the msgid from a previous pot with an updated pot and show the *word* differences between the strings along with an edit window for the original translation. That would be so useful to translators I can't believe TX does not have that feature. All you would have to do is make a few trivial edits in the translation and save it. But heck, I'm not a translator and I haven't used the translator's part of the TX tool much other than to explore how it works (and that was a while ago). > I'd see a few remarks here: > - this massive .po file would grow wildly, especially when a typo is > corrected in huge strings (__doc___), when additional sentences are > added to those, etc. > - breaking down bigger strings in smaller ones will certainly help here > in avoiding duplicated content, > - in Transifex, it is easy to upload a .po onto another branch, and only > untranslated matching strings would be updated. 
I used it on ananconda > where there are multiple branches between Fedora & RHEL5/6 & master, > that worked easily without breaking anything. When you say easy to upload a .po onto another branch I assume you don't mean branch (TX has no such concept) but rather another TX resource. Anyway this is good to know, perhaps the way TX handles versions is not half as bad as it would appear. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From dpal at redhat.com Fri Jan 11 20:43:01 2013 From: dpal at redhat.com (Dmitri Pal) Date: Fri, 11 Jan 2013 15:43:01 -0500 Subject: [Freeipa-devel] [PATCH 0005] Clarified error message with ipa-client-automount In-Reply-To: <50F0502D.6010907@redhat.com> References: <1129149924.5633962.1353923519895.JavaMail.root@redhat.com> <50B926A9.5030107@redhat.com> <50BCA720.50305@redhat.com> <50F0502D.6010907@redhat.com> Message-ID: <50F07955.8070704@redhat.com> On 01/11/2013 12:47 PM, Lynn Root wrote: > On Mon 03 Dec 2012 05:20:32 AM PST, Lynn Root wrote: >> On 11/30/2012 10:35 PM, Rob Crittenden wrote: >>> Lynn Root wrote: >>>> Returns a clearer hint when user is running ipa-client-automount with >>>> possible firewall up and blocking need ports. >>>> >>>> Not sure if this patch is worded correctly in order to address the >>>> potential firewall block when running ipa-client-automount. Perhaps a >>>> different error should be thrown, rather than NOT_IPA_SERVER. >>>> >>>> Ticket: https://fedorahosted.org/freeipa/ticket/3080 >>> >>> Tomas made a similar change recently in ipa-client-install which >>> includes more information on the ports we need. You may want to take >>> a look at that. It was for ticket >>> https://fedorahosted.org/freeipa/ticket/2816 >>> >>> rob >> Thank you Rob - I adapted the same approach in this updated patch. Let >> me know if it addresses the blocked port issue better. >> >> Thanks! > > Just bumping this thread - I think this might have fallen on the > way-side; certainly lost track of it myself after returning > home/holidays. > > However I noticed that this ticket > (https://fedorahosted.org/freeipa/ticket/3080) now has an RFE tag - > don't _believe_ that was there when I started working on it in late > November. I believe the whole design doc conversation was going on > around then. I assume I'll need to start one for this? > > Thanks! > It is an RFE, just was not marked as such. Good catch. Yes, since it is an RFE design page will be required. > -- > Lynn Root > > @roguelynn > Associate Software Engineer > Red Hat, Inc > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jfenal at gmail.com Fri Jan 11 21:00:16 2013 From: jfenal at gmail.com (=?UTF-8?B?SsOpcsO0bWUgRmVuYWw=?=) Date: Fri, 11 Jan 2013 22:00:16 +0100 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: <50F0730D.4050508@redhat.com> References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> <50F03C67.5050304@redhat.com> <50F0730D.4050508@redhat.com> Message-ID: 2013/1/11 John Dennis > On 01/11/2013 02:44 PM, J?r?me Fenal wrote: > >> 2013/1/11 John Dennis > >> > > Thank you J?r?me for your insights as a translator. 
We have a lop-sided > perspective mostly from the developer point of view. We need to better > understand the translator's perspective. You're welcome. I'm not an expert at Transifex though. I've yet to schedule a lunch with Kevin Raymond (he works a few kms away from the French Red Hat office) who is coordinating the whole Fedora translation effort, but customers first, yada yada... :) I'm not sure how splitting text into smaller units gives more > context but I can see the argument for each msgid being a logical > paragraph. We don't have too many multi-paragraph strings now so it > shouldn't be too involved. > > > One issue also discussed on this list is the problem of 100+ lines > strings in man pages generated from ___doc___ tags in scripts. > Those are a _real_ pain for translators to maintain when only one line > is changed. > I still think TX should attempt to match the msgid from a previous pot with > an updated pot and show the *word* differences between the strings along > with an edit window for the original translation. That would be so useful > to translators I can't believe TX does not have that feature. All you would > have to do is make a few trivial edits in the translation and save it. > I agree with you. But transifex developers seem to be overloaded at the moment. I can check with Kevin (and internally) if Zanata would provide a better home to host the translation effort. > But heck, I'm not a translator and I haven't used the translator's part of > the TX tool much other than to explore how it works (and that was a while > ago). I can understand that... :) Hopefully, the IPA dev team is mutllingual ;) I'd see a few remarks here: > - this massive .po file would grow wildly, especially when a typo is > corrected in huge strings (__doc___), when additional sentences are > added to those, etc. > - breaking down bigger strings in smaller ones will certainly help here > in avoiding duplicated content, > - in Transifex, it is easy to upload a .po onto another branch, and only > untranslated matching strings would be updated. I used it on ananconda > where there are multiple branches between Fedora & RHEL5/6 & master, > that worked easily without breaking anything. > When you say easy to upload a .po onto another branch I assume you don't > mean branch (TX has no such concept) but rather another TX resource. Anyway > this is good to know, perhaps the way TX handles versions is not half as > bad as it would appear. You're right. See how anaconda is organized, for instance: https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059 Regards, J. -- J?r?me Fenal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdennis at redhat.com Fri Jan 11 21:19:56 2013 From: jdennis at redhat.com (John Dennis) Date: Fri, 11 Jan 2013 16:19:56 -0500 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> <50F03C67.5050304@redhat.com> <50F0730D.4050508@redhat.com> Message-ID: <50F081FC.2050400@redhat.com> On 01/11/2013 04:00 PM, J?r?me Fenal wrote: > When you say easy to upload a .po onto another branch I assume you > don't mean branch (TX has no such concept) but rather another TX > resource. Anyway this is good to know, perhaps the way TX handles > versions is not half as bad as it would appear. > > > You're right. 
See how anaconda is organized, for instance: > https://fedora.transifex.com/projects/p/fedora/language/en/?project=2059 We follow the same model as anaconda, a new TX resource per version. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jfenal at gmail.com Fri Jan 11 21:39:14 2013 From: jfenal at gmail.com (=?UTF-8?B?SsOpcsO0bWUgRmVuYWw=?=) Date: Fri, 11 Jan 2013 22:39:14 +0100 Subject: [Freeipa-devel] i18n infrastructure improvements In-Reply-To: <50F081FC.2050400@redhat.com> References: <50ECAEB5.6090905@redhat.com> <50ED70FD.9010306@redhat.com> <50ED84F1.70003@redhat.com> <50EE899D.5080304@redhat.com> <50EEEF49.4040909@redhat.com> <50F029FD.8030409@redhat.com> <50F03C67.5050304@redhat.com> <50F0730D.4050508@redhat.com> <50F081FC.2050400@redhat.com> Message-ID: 2013/1/11 John Dennis > On 01/11/2013 04:00 PM, J?r?me Fenal wrote: > >> When you say easy to upload a .po onto another branch I assume you >> don't mean branch (TX has no such concept) but rather another TX >> resource. Anyway this is good to know, perhaps the way TX handles >> versions is not half as bad as it would appear. >> >> >> You're right. See how anaconda is organized, for instance: >> https://fedora.transifex.com/**projects/p/fedora/language/en/** >> ?project=2059 >> > > We follow the same model as anaconda, a new TX resource per version. Yup. Minus the frequent updates on master/head resource ipa (and no IPA 3.x resource, but that is not a problem for IPA, given its fast pace and no long maintenance on older branches). -- J?r?me Fenal -------------- next part -------------- An HTML attachment was scrubbed... URL: From rcritten at redhat.com Fri Jan 11 23:49:08 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 11 Jan 2013 18:49:08 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50EB1E85.2070109@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> Message-ID: <50F0A4F4.5030304@redhat.com> Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>> Petr Viktorin wrote: >> [...] >>>>>> >>>>>> Works for me, but I have some questions (this is an area I know >>>>>> little >>>>>> about). >>>>>> >>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>> certmonger the only possible mechanism to update them? >>>>> >>>>> You raise a good point. If though some mechanism someone replaces >>>>> one of >>>>> these certs it will cause the script to fail. Some notification of >>>>> this >>>>> failure will be logged though, and of course, the certs won't be >>>>> renewed. >>>>> >>>>> One could conceivably manually renew one of these certificates. It is >>>>> probably a very remote possibility but it is non-zero. >>>>> >>>>>> Can we be sure certmonger always does the updates in parallel? If it >>>>>> managed to update the audit cert before starting on the others, we'd >>>>>> get >>>>>> no CA restart for the others. >>>>> >>>>> These all get issued at the same time so should expire at the same >>>>> time >>>>> as well (see problem above). The script will hang around for 10 >>>>> minutes >>>>> waiting for the renewal to complete, then give up. >>>> >>>> The certs might take different amounts of time to update, right? 
>>>> Eventually, the expirations could go out of sync enough for it to >>>> matter. >>>> AFAICS, without proper locking we still get a race condition when the >>>> other certs start being renewed some time (much less than 10 min) after >>>> the audit one: >>>> >>>> (time axis goes down) >>>> >>>> audit cert other cert >>>> ---------- ---------- >>>> certmonger does renew . >>>> post-renew script starts . >>>> check state of other certs: OK . >>>> . certmonger starts renew >>>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >>> >>> This can't happen because we count the # of expected certs and wait >>> until all are in MONITORING before continuing. >> >> The problem is that they're also in MONITORING before the whole renewal >> starts. If the script happens to check just before the state changes >> from MONITORING to GENERATING_CSR or whatever, we can get corruption. >> >>> The worse that would >>> happen is the trust wouldn't be set on the audit cert and dogtag >>> wouldn't be restarted. >>> >>>> >>>> >>>>> The state the system would be in is this: >>>>> >>>>> - audit cert trust not updated, so next restart of CA will fail >>>>> - CA is not restarted so will not use updated certificates >>>>> >>>>>> And anyway, why does certmonger do renewals in parallel? It seems >>>>>> that >>>>>> if it did one at a time, always waiting until the post-renew >>>>>> script is >>>>>> done, this patch wouldn't be necessary. >>>>>> >>>>> >>>>> From what Nalin told me certmonger has some coarse locking such that >>>>> renewals in a the same NSS database are serialized. As you point >>>>> out, it >>>>> would be nice to extend this locking to the post renewal scripts. We >>>>> can >>>>> ask Nalin about it. That would fix the potential corruption issue. >>>>> It is >>>>> still much nicer to not have to restart dogtag 4 times. >>>>> >>>> >>>> Well, three extra restarts every few years seems like a small price to >>>> pay for robustness. >>> >>> It is a bit of a problem though because the certs all renew within >>> seconds so end up fighting over who is restarting dogtag. This can cause >>> some renewals go into a failure state to be retried later. This is fine >>> functionally but makes QE a bit of a pain. You then have to make sure >>> that renewal is basically done, then restart certmonger and check >>> everything again, over and over until all the certs are renewed. This is >>> difficult to automate. >> >> So we need to extend the certmonger lock, and wait until Dogtag is back >> up before exiting the script. That way it'd still take longer than 1 >> restart, but all the renews should succeed. >> > > Right, but older dogtag versions don't have the handy servlet to tell > that the service is actually up and responding. So it is difficult to > tell from tomcat alone whether the CA is actually up and handling requests. > Revised patch that takes advantage of new version of certmonger. certmonger-0.65 adds locking from the time renewal begins to the end of the post_save_command. This lets us be sure that no other certmonger renewals will have the NSS database open in read-write mode. We need to be sure that tomcat is shut down before we let certmonger save the certificate to the NSS database because dogtag opens its database read/write and two writers can cause corruption. rob -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-rcrit-1079-2-renewal.patch Type: text/x-patch Size: 21679 bytes Desc: not available URL: From mkosek at redhat.com Mon Jan 14 08:26:05 2013 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 14 Jan 2013 09:26:05 +0100 Subject: [Freeipa-devel] [PATCH] 348 Sort LDAP updates properly In-Reply-To: <50F00AE2.1080907@redhat.com> References: <50F00AE2.1080907@redhat.com> Message-ID: <50F3C11D.3050507@redhat.com> On 01/11/2013 01:51 PM, Martin Kosek wrote: > LDAP updates were sorted by number of RDNs in DN. This, however, > sometimes caused updates to be executed before cn=schema updates. > If the update required an objectClass or attributeType added during > the cn=schema update, the update operation failed. > > Fix the sorting so that the cn=schema updates are always run first > and then the other updates sorted by RDN count. > > https://fedorahosted.org/freeipa/ticket/3342 Just for the record: this patch was ACKed by Rob and pushed to master, ipa-3-1 and ipa-3-0. Martin From jcholast at redhat.com Mon Jan 14 10:29:41 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 14 Jan 2013 11:29:41 +0100 Subject: [Freeipa-devel] Redesigning LDAP code In-Reply-To: <50F02DD6.8040402@redhat.com> References: <50EFFE88.3010404@redhat.com> <50F01D50.9090201@redhat.com> <50F02647.6040805@redhat.com> <50F027C5.3000607@redhat.com> <50F02DD6.8040402@redhat.com> Message-ID: <50F3DE15.9060100@redhat.com> On 11.1.2013 16:20, Petr Viktorin wrote: > On 01/11/2013 03:55 PM, Rob Crittenden wrote: >> John Dennis wrote: >>> On 01/11/2013 09:10 AM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> We had a small discussion off-list about how we want IPA's LDAP >>>>> handling >>>>> to look in the future. >>>>> To continue the discussion publicly I've summarized the results and >>>>> added some of my own ideas to a page. >>>>> John gets credit for the overview (the mistakes & WTFs are mine). >>>>> >>>>> The text is on http://freeipa.org/page/V3/LDAP_code, and echoed below. >>>>> >>>>> >>>> >>>> IIRC some of the the python-ldap code is used b/c ldap2 may require a >>>> configuration to be set up prior to working. That is one of the nice >>>> things about the IPAdmin interface, it is much easier to create >>>> connections to other hosts. >>> >>> Good point. But I don't believe that issue affects having a common API >>> or a single point where LDAP data flows through. It might mean having >>> more than one initialization method or subclassing. > > Yes. I looked at the code again and saw the same thing. Fortunately, > there's not too much that needs the api object: creating the connection, > `get_ipa_config` which shouldn't really be at this level, > CrudBackend-specific things, and `normalize_dn` (which I'd really like > to remove but it's probably not worth the effort). > > > My working plan now is to have a ipaldap.LDAPBackend base class (please > give me a better name), and subclass ldap2 & IPAdmin from that. > IPAdmin would just add the legacy API which we'll try to move away from; > ldap2 would add the api-specific setup and CrudBackend bits (plus its > own legacy methods). I would prefer if ldap2 was the base class, but I guess that's just an implementation detail. "ipaldap.LDAPClient" sounds better? > > So, the ticket shouldn't really be named "installer code should use > ldap2" :) > > >> Right. We may need to decouple from api a bit. I haven't looked at this >> for a while but one of the problems is that api locks its values after >> finalization which can make things a bit inflexible. 
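To make the layering concrete, a very rough sketch of the split being discussed -- the stubs and method names below are placeholders for illustration only, not code from the tree, and the real ldap2 is of course a plugin that gets wired up through the Backend machinery rather than a plain __init__:

# Illustration only: invented stubs, not the real ipapython/ipaserver code.

class LDAPClient(object):
    """Common LDAP code: connection handling, entry and DN helpers.
    It knows nothing about the api object, so installer code can use it
    directly by passing an explicit URI."""

    def __init__(self, ldap_uri):
        self.ldap_uri = ldap_uri
        self._conn = None

    def connect(self, bind_dn=None, bind_pw=None):
        raise NotImplementedError   # python-ldap bind/connect would go here

    def get_entry(self, dn, attrs_list=None):
        raise NotImplementedError


class ldap2(LDAPClient):
    """api-aware backend: adds the CrudBackend bits and takes its
    settings from the api object instead of explicit arguments."""

    def __init__(self, api):
        # assuming the server URI is available from api.env
        super(ldap2, self).__init__(api.env.ldap_uri)
        self.api = api


class IPAdmin(LDAPClient):
    """Keeps the legacy method names as thin wrappers until their
    callers are ported, after which it can be dropped."""

    def getEntry(self, dn, attrs_list=None):    # legacy spelling
        return self.get_entry(dn, attrs_list)

Whether the base ends up being called LDAPBackend or LDAPClient, the point is that both subclasses only add their own conveniences on top of one shared implementation.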
We use some nasty >> override code in some place but it isn't something I'd want to see >> spread further. > Honza -- Jan Cholasta From pviktori at redhat.com Mon Jan 14 11:56:34 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 12:56:34 +0100 Subject: [Freeipa-devel] [PATCH] 89 Raise ValidationError on invalid CSV values In-Reply-To: <50EDA4CE.1060200@redhat.com> References: <50EDA4CE.1060200@redhat.com> Message-ID: <50F3F272.2000501@redhat.com> On 01/09/2013 06:11 PM, Jan Cholasta wrote: > Hi, > > this patch fixes . > > Honza > The patch works well, but could you also add a test to ensure we don't regress in the future? -- Petr? From mkosek at redhat.com Mon Jan 14 13:13:53 2013 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 14 Jan 2013 14:13:53 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Raise ValidationError for incorrect subtree option In-Reply-To: <50E6BC80.6050308@redhat.com> References: <50E571DF.5090101@redhat.com> <50E57CCA.1050003@redhat.com> <50E58DCB.1020307@redhat.com> <50E6BC80.6050308@redhat.com> Message-ID: <50F40491.2080301@redhat.com> On 01/04/2013 12:26 PM, Petr Viktorin wrote: > On 01/03/2013 02:55 PM, Ana Krivokapic wrote: >> On 01/03/2013 01:42 PM, Petr Viktorin wrote: >>> On 01/03/2013 12:56 PM, Ana Krivokapic wrote: >>>> Using incorrect input for --subtree option of ipa permission-add command >>>> now raises a ValidationError. >>>> >>>> Previously, a ValueError was raised, which resulted in a user unfriendly >>>> error message: >>>> ipa: ERROR: an internal error has occurred >>>> >>>> I have added a try-except block to catch the ValueError and raise an >>>> appropriate ValidationError. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3233 >>>> >>> ... >>> >>>> --- a/ipalib/plugins/aci.py >>>> +++ b/ipalib/plugins/aci.py >>>> @@ -341,7 +341,10 @@ def _aci_to_kw(ldap, a, test=False, >>>> pkey_only=False): >>>> else: >>>> # See if the target is a group. If so we set the >>>> # targetgroup attr, otherwise we consider it a subtree >>>> - targetdn = DN(target.replace('ldap:///','')) >>>> + try: >>>> + targetdn = DN(target.replace('ldap:///','')) >>>> + except ValueError as e: >>>> + raise errors.ValidationError(name='subtree', >>>> error=_(e.message)) >>> >>> `_(e.message)` is useless. The message can only be translated if the >>> string is grabbed by gettext, which uses static analysis. In other >>> words, the argument to _() must be a literal string. >>> >>> You can do either `_("invalid DN")`, or if the error message is >>> important, include it like this (e.message still won't be translated, >>> but at least the users will get something in their language): >>> _("invalid DN (%s)") % e.message >>> >>> >> Fixed. >> >> Thanks, Petr. >> > > ACK > Pushed to master, ipa-3-1. 
Martin From mkosek at redhat.com Mon Jan 14 13:40:25 2013 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 14 Jan 2013 14:40:25 +0100 Subject: [Freeipa-devel] [PATCH] convert the base platform modules into packages In-Reply-To: <50C879FB.4040302@redhat.com> References: <505C804B.9010506@ubuntu.com> <507EB5F1.1040903@redhat.com> <50BF459A.8080306@ubuntu.com> <50BF62EB.5040006@ubuntu.com> <50C0A275.9090508@redhat.com> <50C879FB.4040302@redhat.com> Message-ID: <50F40AC9.2070107@redhat.com> On 12/12/2012 01:35 PM, Petr Viktorin wrote: > On 12/06/2012 02:49 PM, Petr Viktorin wrote: >> On 12/05/2012 04:06 PM, Timo Aaltonen wrote: >>> On 05.12.2012 15:01, Timo Aaltonen wrote: >>>> On 17.10.2012 16:43, Petr Viktorin wrote: >>>>> On 09/21/2012 04:57 PM, Timo Aaltonen wrote: >>>>>> Ok, so this is the first step before we can start to rewrite bits from >>>>>> ipaserver/install to make them support other distros. There are no >>>>>> real >>>>>> functional changes yet. >>>>>> >>>>>> had some dependency issues installing the resulting rpm's, so didn't >>>>>> test the install scripts but they should work :) >>>>>> >>>>>> >>>>> >>>>> Hello, >>>>> >>>>> I recommend giving the -M flag to git format-patch, so it's easier to >>>>> see changes in the patch. >>>>> >>>>> >>>>> Your split of the fedora16 code into two modules is unfortunate: each >>>>> tries to import the other one, and one is the other's parent. This >>>>> would >>>>> need special care to get working correctly. >>>>> >>>>> The best option here would probably be to put restore_context & >>>>> check_selinux_status into a separate submodule, so you don't need to >>>>> import fedora16 from services. >>>>> >>>>> Furthermore, in fedora16/__init__.py, you have: >>>>> from ipapython.platform.fedora16.service import * >>>>> This imports everything from that module, including e.g. "redhat" or >>>>> "os". >>>>> Please avoid star imports. List all the imported names explicitly, or >>>>> import the module and then use qualified names. >>>>> >>>>> >>>>> Other than that, after a trivial rebase the patch seems to work fine on >>>>> Fedora. Thanks! >>>> >>>> And finally, here is version 2. >>>> >>>> fixed all the above, I think.. make-lint passes, make rpms too. >>> >>> Here's v3, thanks to your rebase to an even more current master :) >>> >> >> Thank you! This works fine on f17 and f18. ACK. >> >> We're stabilizing for a 3.1 release right now, so we might hold pushing >> this to master until work on 3.2 starts. >> > > Another rebase is needed for the time services (chrony) change. > Attached. Pushed to master. Martin From pviktori at redhat.com Mon Jan 14 13:47:10 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 14:47:10 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F0A4F4.5030304@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> Message-ID: <50F40C5E.9010302@redhat.com> On 01/12/2013 12:49 AM, Rob Crittenden wrote: > Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>> Petr Viktorin wrote: >>> [...] >>>>>>> >>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>> little >>>>>>> about). 
>>>>>>> >>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>> certmonger the only possible mechanism to update them? >>>>>> >>>>>> You raise a good point. If though some mechanism someone replaces >>>>>> one of >>>>>> these certs it will cause the script to fail. Some notification of >>>>>> this >>>>>> failure will be logged though, and of course, the certs won't be >>>>>> renewed. >>>>>> >>>>>> One could conceivably manually renew one of these certificates. It is >>>>>> probably a very remote possibility but it is non-zero. >>>>>> >>>>>>> Can we be sure certmonger always does the updates in parallel? If it >>>>>>> managed to update the audit cert before starting on the others, we'd >>>>>>> get >>>>>>> no CA restart for the others. >>>>>> >>>>>> These all get issued at the same time so should expire at the same >>>>>> time >>>>>> as well (see problem above). The script will hang around for 10 >>>>>> minutes >>>>>> waiting for the renewal to complete, then give up. >>>>> >>>>> The certs might take different amounts of time to update, right? >>>>> Eventually, the expirations could go out of sync enough for it to >>>>> matter. >>>>> AFAICS, without proper locking we still get a race condition when the >>>>> other certs start being renewed some time (much less than 10 min) >>>>> after >>>>> the audit one: >>>>> >>>>> (time axis goes down) >>>>> >>>>> audit cert other cert >>>>> ---------- ---------- >>>>> certmonger does renew . >>>>> post-renew script starts . >>>>> check state of other certs: OK . >>>>> . certmonger starts renew >>>>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >>>> >>>> This can't happen because we count the # of expected certs and wait >>>> until all are in MONITORING before continuing. >>> >>> The problem is that they're also in MONITORING before the whole renewal >>> starts. If the script happens to check just before the state changes >>> from MONITORING to GENERATING_CSR or whatever, we can get corruption. >>> >>>> The worse that would >>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>> wouldn't be restarted. >>>> >>>>> >>>>> >>>>>> The state the system would be in is this: >>>>>> >>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>> - CA is not restarted so will not use updated certificates >>>>>> >>>>>>> And anyway, why does certmonger do renewals in parallel? It seems >>>>>>> that >>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>> script is >>>>>>> done, this patch wouldn't be necessary. >>>>>>> >>>>>> >>>>>> From what Nalin told me certmonger has some coarse locking such that >>>>>> renewals in a the same NSS database are serialized. As you point >>>>>> out, it >>>>>> would be nice to extend this locking to the post renewal scripts. We >>>>>> can >>>>>> ask Nalin about it. That would fix the potential corruption issue. >>>>>> It is >>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>> >>>>> >>>>> Well, three extra restarts every few years seems like a small price to >>>>> pay for robustness. >>>> >>>> It is a bit of a problem though because the certs all renew within >>>> seconds so end up fighting over who is restarting dogtag. This can >>>> cause >>>> some renewals go into a failure state to be retried later. This is fine >>>> functionally but makes QE a bit of a pain. 
You then have to make sure >>>> that renewal is basically done, then restart certmonger and check >>>> everything again, over and over until all the certs are renewed. >>>> This is >>>> difficult to automate. >>> >>> So we need to extend the certmonger lock, and wait until Dogtag is back >>> up before exiting the script. That way it'd still take longer than 1 >>> restart, but all the renews should succeed. >>> >> >> Right, but older dogtag versions don't have the handy servlet to tell >> that the service is actually up and responding. So it is difficult to >> tell from tomcat alone whether the CA is actually up and handling >> requests. >> > > Revised patch that takes advantage of new version of certmonger. > certmonger-0.65 adds locking from the time renewal begins to the end of > the post_save_command. This lets us be sure that no other certmonger > renewals will have the NSS database open in read-write mode. > > We need to be sure that tomcat is shut down before we let certmonger > save the certificate to the NSS database because dogtag opens its > database read/write and two writers can cause corruption. > > rob > stop_pkicad and start_pkicad need the Dogtag version check to select pki_cad/pki_tomcatd. A more serious issue is that stop_pkicad needs to be installed on upgrades. Currently the whole enable_certificate_renewal step in ipa-upgradeconfig is skipped if it was done before. In stop_pkicad can you change the first log message to "certmonger stopping %sd"? It's before the action so we don't want past tense. -- Petr? From atkac at redhat.com Mon Jan 14 14:17:35 2013 From: atkac at redhat.com (Adam Tkac) Date: Mon, 14 Jan 2013 15:17:35 +0100 Subject: [Freeipa-devel] [PATCH 0107] Don't fail if idnsSOAserial attribute is missing in LDAP In-Reply-To: <50F05048.80402@redhat.com> References: <50F05048.80402@redhat.com> Message-ID: <20130114141734.GA17399@redhat.com> On Fri, Jan 11, 2013 at 06:47:52PM +0100, Petr Spacek wrote: > Hello, > > Don't fail if idnsSOAserial attribute is missing in LDAP. > > DNS zones created on remote IPA 3.0 server don't have > idnsSOAserial attribute present in LDAP. > > https://bugzilla.redhat.com/show_bug.cgi?id=894131 > > > Attached patch contains the minimal set of changes need for resurrecting BIND. > > In configurations with serial auto-increment: > - enabled (IPA 3.0+ default) - some new serial is written back to > LDAP nearly immediately > - disabled - the attribute will be missing forever Ack > From 958f46a5ceee336e2466686bafbb203082e2ccc1 Mon Sep 17 00:00:00 2001 > From: Petr Spacek > Date: Fri, 11 Jan 2013 17:30:03 +0100 > Subject: [PATCH] Don't fail if idnsSOAserial attribute is missing in LDAP. > > DNS zones created on remote IPA 3.0 server don't have > idnsSOAserial attribute present in LDAP. 
> > https://bugzilla.redhat.com/show_bug.cgi?id=894131 > > Signed-off-by: Petr Spacek > --- > src/ldap_entry.c | 18 ++++++++++++++++-- > 1 file changed, 16 insertions(+), 2 deletions(-) > > diff --git a/src/ldap_entry.c b/src/ldap_entry.c > index 1e165ca696ccafa177f17b97bda08ed9cc344c7d..52b927d410300eb6df98ea058c3a08b426d66a70 100644 > --- a/src/ldap_entry.c > +++ b/src/ldap_entry.c > @@ -350,8 +350,9 @@ ldap_entry_getfakesoa(ldap_entry_t *entry, const ld_string_t *fake_mname, > ldap_valuelist_t values; > int i = 0; > > + const char *soa_serial_attr = "idnsSOAserial"; > const char *soa_attrs[] = { > - "idnsSOAmName", "idnsSOArName", "idnsSOAserial", > + "idnsSOAmName", "idnsSOArName", soa_serial_attr, > "idnsSOArefresh", "idnsSOAretry", "idnsSOAexpire", > "idnsSOAminimum", NULL > }; > @@ -366,12 +367,25 @@ ldap_entry_getfakesoa(ldap_entry_t *entry, const ld_string_t *fake_mname, > CHECK(str_cat_char(target, " ")); > } > for (; soa_attrs[i] != NULL; i++) { > - CHECK(ldap_entry_getvalues(entry, soa_attrs[i], &values)); > + result = ldap_entry_getvalues(entry, soa_attrs[i], &values); > + /** Workaround for > + * https://bugzilla.redhat.com/show_bug.cgi?id=894131 > + * DNS zones created on remote IPA 3.0 server don't have > + * idnsSOAserial attribute present in LDAP. */ > + if (result == ISC_R_NOTFOUND > + && soa_attrs[i] == soa_serial_attr) { > + /* idnsSOAserial is missing! Read it as 1. */ > + CHECK(str_cat_char(target, "1 ")); > + continue; > + } else if (result != ISC_R_SUCCESS) > + goto cleanup; > + > CHECK(str_cat_char(target, HEAD(values)->value)); > CHECK(str_cat_char(target, " ")); > } > > cleanup: > + /* TODO: check for memory leaks */ > return result; > } > > -- > 1.7.11.7 > -- Adam Tkac, Red Hat, Inc. From tbabej at redhat.com Mon Jan 14 15:46:33 2013 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 14 Jan 2013 16:46:33 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration Message-ID: <50F42859.1070807@redhat.com> Hi, Since in Kerberos V5 are used 32-bit unix timestamps, setting maxlife in pwpolicy to values such as 9999 days would cause integer overflow in krbPasswordExpiration attribute. This would result into unpredictable behaviour such as users not being able to log in after password expiration if password policy was changed (#3114) or new users not being able to log in at all (#3312). https://fedorahosted.org/freeipa/ticket/3312 https://fedorahosted.org/freeipa/ticket/3114 Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0026-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 3701 bytes Desc: not available URL: From pviktori at redhat.com Mon Jan 14 16:06:17 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 17:06:17 +0100 Subject: [Freeipa-devel] Command instantiation Message-ID: <50F42CF9.2000108@redhat.com> IPA Command objects sometimes need to pass some data between their various methods. Currently that's done using the thread-local context. For an example see dnsrecord_del, which sets a "del_all" flag in pre_callback and then checks it in execute. While that works for now, it's far from best practice. For example, if some Command can call another Command, we need to carefully check that the global data isn't overwritten. The other way data is passed around is arguments. The callback methods take a bunch of arguments that are the same for a particular Command invocation (ldap, entry_attrs, *keys, **options). 
By now, there's no hope of adding a new one, since all the callbacks would need to be rewritten. (Well, one could add an artificial option, but that's clearly not a good solution.) In OOP, this problem is usually solved by putting the data in an object and passing that around. Or better, putting it in the object the methods are called on. This got me thinking -- why do we not make a new Command instance for every command invocation? Currently Command objects are only created once and added to the api, so they can't be used to store per-command data. It seems that having `api.Command.user_add` create a new instance would work better for us. (Of course it's not the best syntax for creating a new object, but having to change all the calling code would be too disruptive). What do you think? -- Petr? From pviktori at redhat.com Mon Jan 14 16:09:35 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 17:09:35 +0100 Subject: [Freeipa-devel] Refactorings wiki page Message-ID: <50F42DBF.5050403@redhat.com> Hello, I've created a Wiki page to list our infrastructure improvement efforts/proposals: http://freeipa.org/page/V3/Refactorings Please add your own. -- Petr? From jdennis at redhat.com Mon Jan 14 16:48:08 2013 From: jdennis at redhat.com (John Dennis) Date: Mon, 14 Jan 2013 11:48:08 -0500 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F42CF9.2000108@redhat.com> References: <50F42CF9.2000108@redhat.com> Message-ID: <50F436C8.903@redhat.com> On 01/14/2013 11:06 AM, Petr Viktorin wrote: > > IPA Command objects sometimes need to pass some data between their > various methods. Currently that's done using the thread-local context. > For an example see dnsrecord_del, which sets a "del_all" flag in > pre_callback and then checks it in execute. > > While that works for now, it's far from best practice. For example, if > some Command can call another Command, we need to carefully check that > the global data isn't overwritten. > > > The other way data is passed around is arguments. The callback methods > take a bunch of arguments that are the same for a particular Command > invocation (ldap, entry_attrs, *keys, **options). By now, there's no > hope of adding a new one, since all the callbacks would need to be > rewritten. (Well, one could add an artificial option, but that's clearly > not a good solution.) > In OOP, this problem is usually solved by putting the data in an object > and passing that around. Or better, putting it in the object the methods > are called on. > > This got me thinking -- why do we not make a new Command instance for > every command invocation? Currently Command objects are only created > once and added to the api, so they can't be used to store per-command data. > It seems that having `api.Command.user_add` create a new instance would > work better for us. (Of course it's not the best syntax for creating a > new object, but having to change all the calling code would be too > disruptive). > What do you think? > Just a few thoughts, no answers ... :-) I agree with you that using thread local context blocks to pass cooperating data is indicative of a design flaw elsewhere. See the discussion from a couple of days ago concerning api locking and making our system robust against foreign plugins. As I understand it that design prohibits modifying the singleton command objects. Under your proposal how would we maintain that protection? 
The fact the commands are singletons is less of an issue than the fact they lock themselves when the api is finalized. If the commands instead acted as a "factory" and produced new instances wouldn't they also have to observe the same locking rules thus defeating the value of instantiating copies of the command? I think the entire design of the plugin system has at it's heart non-modifiable singleton command objects which does not carry state. FWIW, I was never convinced the trade-off between protecting our API and being able to make smart coding choices (such as your suggestion) struck the right balance. Going back to one of your suggestions of passing a "context block" as a parameter. Our method signatures do not currently support that as you observe. But given the fact by conscious design a thread only executes one *top-level* command at a time and then clears it state the thread local context block is effectively playing the almost exactly same role as if it were passed as a parameter. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jcholast at redhat.com Mon Jan 14 17:23:28 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 14 Jan 2013 18:23:28 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F42CF9.2000108@redhat.com> References: <50F42CF9.2000108@redhat.com> Message-ID: <50F43F10.1070206@redhat.com> On 14.1.2013 17:06, Petr Viktorin wrote: > > IPA Command objects sometimes need to pass some data between their > various methods. Currently that's done using the thread-local context. > For an example see dnsrecord_del, which sets a "del_all" flag in > pre_callback and then checks it in execute. > > While that works for now, it's far from best practice. For example, if > some Command can call another Command, we need to carefully check that > the global data isn't overwritten. > > > The other way data is passed around is arguments. The callback methods > take a bunch of arguments that are the same for a particular Command > invocation (ldap, entry_attrs, *keys, **options). By now, there's no > hope of adding a new one, since all the callbacks would need to be > rewritten. (Well, one could add an artificial option, but that's clearly > not a good solution.) > In OOP, this problem is usually solved by putting the data in an object > and passing that around. Or better, putting it in the object the methods > are called on. > > This got me thinking -- why do we not make a new Command instance for > every command invocation? Currently Command objects are only created > once and added to the api, so they can't be used to store per-command data. > It seems that having `api.Command.user_add` create a new instance would > work better for us. (Of course it's not the best syntax for creating a > new object, but having to change all the calling code would be too > disruptive). > What do you think? > You could extend that to other plugin types as well (e.g. having Object instances with access to a single object's params instead of passing data around in a dict would be superb), but I'm afraid this kind of change won't be easy to do now. The framework was designed around singleton plugin objects right from the beginning. 
I personally think this design sucks, as every kind of entity in IPA is described by such an object (object classes are singleton objects, methods are singleton objects, etc.), instead of using appropriate Python primitives for the job (object classes should be Python classes, methods should be methods of these classes, etc.). I really would like to see this improve, but I'm not sure if it's possible without rewriting the whole framework. Honza -- Jan Cholasta From pviktori at redhat.com Mon Jan 14 17:26:22 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 18:26:22 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F436C8.903@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F436C8.903@redhat.com> Message-ID: <50F43FBE.80401@redhat.com> On 01/14/2013 05:48 PM, John Dennis wrote: > On 01/14/2013 11:06 AM, Petr Viktorin wrote: >> >> IPA Command objects sometimes need to pass some data between their >> various methods. Currently that's done using the thread-local context. >> For an example see dnsrecord_del, which sets a "del_all" flag in >> pre_callback and then checks it in execute. >> >> While that works for now, it's far from best practice. For example, if >> some Command can call another Command, we need to carefully check that >> the global data isn't overwritten. >> >> >> The other way data is passed around is arguments. The callback methods >> take a bunch of arguments that are the same for a particular Command >> invocation (ldap, entry_attrs, *keys, **options). By now, there's no >> hope of adding a new one, since all the callbacks would need to be >> rewritten. (Well, one could add an artificial option, but that's clearly >> not a good solution.) >> In OOP, this problem is usually solved by putting the data in an object >> and passing that around. Or better, putting it in the object the methods >> are called on. >> >> This got me thinking -- why do we not make a new Command instance for >> every command invocation? Currently Command objects are only created >> once and added to the api, so they can't be used to store per-command >> data. >> It seems that having `api.Command.user_add` create a new instance would >> work better for us. (Of course it's not the best syntax for creating a >> new object, but having to change all the calling code would be too >> disruptive). >> What do you think? >> > > Just a few thoughts, no answers ... :-) > > I agree with you that using thread local context blocks to pass > cooperating data is indicative of a design flaw elsewhere. > > See the discussion from a couple of days ago concerning api locking and > making our system robust against foreign plugins. As I understand it > that design prohibits modifying the singleton command objects. Under > your proposal how would we maintain that protection? The fact the > commands are singletons is less of an issue than the fact they lock > themselves when the api is finalized. If the commands instead acted as a > "factory" and produced new instances wouldn't they also have to observe > the same locking rules thus defeating the value of instantiating copies > of the command? Good point. Actually, making api a factory would eliminate the reason for locking Commands: the foreign plugin could do whatever it wanted to its instance of the Command, and all would be well unless it modifies the class itself (which is about as bad as using setattr). 
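To make that concrete, a rough sketch of the factory idea -- hypothetical names, nothing like the real ipalib registration machinery, and untested:

# Sketch only: invented names, not the actual ipalib plugin framework.

class Command(object):
    """One instance is created per invocation, so per-call state such as
    dnsrecord_del's del_all flag can simply live on self."""

    def execute(self, *args, **options):
        raise NotImplementedError

    def __call__(self, *args, **options):
        # Anything stored on self here is private to this invocation.
        self.del_all = options.pop('del_all', False)
        return self.execute(*args, **options)


class CommandNamespace(object):
    """Stand-in for api.Command: every attribute access hands back a
    fresh instance instead of a shared, locked singleton."""

    def __init__(self):
        self._registry = {}

    def register(self, cls):
        self._registry[cls.__name__] = cls
        return cls

    def __getattr__(self, name):
        try:
            cls = self._registry[name]
        except KeyError:
            raise AttributeError(name)
        return cls()        # new object for every api.Command.user_add


commands = CommandNamespace()

@commands.register
class user_add(Command):
    def execute(self, uid, **options):
        return dict(result=dict(uid=uid), cleaned_up=self.del_all)

Calling commands.user_add('admin') and commands.user_add('guest', del_all=True) gives two independent objects, so there is nothing to lock and nothing for a nested Command call to overwrite.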
> I think the entire design of the plugin system has at it's heart > non-modifiable singleton command objects which does not carry state. Yes. I guess I'm just trying to find ways to get out of that trap... > FWIW, I was never convinced the trade-off between protecting our API and > being able to make smart coding choices (such as your suggestion) struck > the right balance. > > Going back to one of your suggestions of passing a "context block" as a > parameter. Our method signatures do not currently support that as you > observe. But given the fact by conscious design a thread only executes > one *top-level* command at a time and then clears it state the thread > local context block is effectively playing the almost exactly same role > as if it were passed as a parameter. Almost. But, lots of Commands call other Commands, and the caller needs to know the internals of the callee to make sure the context attributes don't clash. Not to mention the other things, such as connection backends, that use the thread-local object. It's fragile. As for clearing the state, you already can't rely on that: the batch plugin doesn't do it. -- Petr? From abokovoy at redhat.com Mon Jan 14 17:31:51 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 14 Jan 2013 19:31:51 +0200 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F43F10.1070206@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> Message-ID: <20130114173151.GJ8651@redhat.com> On Mon, 14 Jan 2013, Jan Cholasta wrote: >On 14.1.2013 17:06, Petr Viktorin wrote: >> >>IPA Command objects sometimes need to pass some data between their >>various methods. Currently that's done using the thread-local context. >>For an example see dnsrecord_del, which sets a "del_all" flag in >>pre_callback and then checks it in execute. >> >>While that works for now, it's far from best practice. For example, if >>some Command can call another Command, we need to carefully check that >>the global data isn't overwritten. >> >> >>The other way data is passed around is arguments. The callback methods >>take a bunch of arguments that are the same for a particular Command >>invocation (ldap, entry_attrs, *keys, **options). By now, there's no >>hope of adding a new one, since all the callbacks would need to be >>rewritten. (Well, one could add an artificial option, but that's clearly >>not a good solution.) >>In OOP, this problem is usually solved by putting the data in an object >>and passing that around. Or better, putting it in the object the methods >>are called on. >> >>This got me thinking -- why do we not make a new Command instance for >>every command invocation? Currently Command objects are only created >>once and added to the api, so they can't be used to store per-command data. >>It seems that having `api.Command.user_add` create a new instance would >>work better for us. (Of course it's not the best syntax for creating a >>new object, but having to change all the calling code would be too >>disruptive). >>What do you think? >> > >You could extend that to other plugin types as well (e.g. having >Object instances with access to a single object's params instead of >passing data around in a dict would be superb), but I'm afraid this >kind of change won't be easy to do now. > >The framework was designed around singleton plugin objects right from >the beginning. 
I personally think this design sucks, as every kind of >entity in IPA is described by such an object (object classes are >singleton objects, methods are singleton objects, etc.), instead of >using appropriate Python primitives for the job (object classes >should be Python classes, methods should be methods of these classes, >etc.). One note here. Having method classes separated from the object classes allows to add new methods to existing objects without affecting those objects and without redefining object classes. Ability to write simple plugins to extend existing objects is very attractive. In case of methods being methods of Python object classes you'd lose this feature and in order to gain it back you would need to deal with some sort of method descriptors -- in many cases methods are in fact collection of actual procedures working together, for example, most of CRUD methods have pre/post modifier callbacks, spanned over elaborate Method inheritance tree and allow to amend actual execute code. Pulling this into a single Python method would require more work that would later grow into an imitation of a separate method class anyway. -- / Alexander Bokovoy From pviktori at redhat.com Mon Jan 14 17:50:52 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 18:50:52 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <20130114173151.GJ8651@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> Message-ID: <50F4457C.8020303@redhat.com> On 01/14/2013 06:31 PM, Alexander Bokovoy wrote: > On Mon, 14 Jan 2013, Jan Cholasta wrote: >> On 14.1.2013 17:06, Petr Viktorin wrote: >>> >>> IPA Command objects sometimes need to pass some data between their >>> various methods. Currently that's done using the thread-local context. >>> For an example see dnsrecord_del, which sets a "del_all" flag in >>> pre_callback and then checks it in execute. >>> >>> While that works for now, it's far from best practice. For example, if >>> some Command can call another Command, we need to carefully check that >>> the global data isn't overwritten. >>> >>> >>> The other way data is passed around is arguments. The callback methods >>> take a bunch of arguments that are the same for a particular Command >>> invocation (ldap, entry_attrs, *keys, **options). By now, there's no >>> hope of adding a new one, since all the callbacks would need to be >>> rewritten. (Well, one could add an artificial option, but that's clearly >>> not a good solution.) >>> In OOP, this problem is usually solved by putting the data in an object >>> and passing that around. Or better, putting it in the object the methods >>> are called on. >>> >>> This got me thinking -- why do we not make a new Command instance for >>> every command invocation? Currently Command objects are only created >>> once and added to the api, so they can't be used to store per-command >>> data. >>> It seems that having `api.Command.user_add` create a new instance would >>> work better for us. (Of course it's not the best syntax for creating a >>> new object, but having to change all the calling code would be too >>> disruptive). >>> What do you think? >>> >> >> You could extend that to other plugin types as well (e.g. having >> Object instances with access to a single object's params instead of >> passing data around in a dict would be superb), but I'm afraid this >> kind of change won't be easy to do now. 
Ah, yes, you've discovered my ultimate goal: rewriting the whole framefork :) But, rewriting the whole framework at once is not realistic, so I'd like to do this in little steps. Starting with Commands. That should be possible. >> The framework was designed around singleton plugin objects right from >> the beginning. I personally think this design sucks, as every kind of >> entity in IPA is described by such an object (object classes are >> singleton objects, methods are singleton objects, etc.), instead of >> using appropriate Python primitives for the job (object classes should >> be Python classes, methods should be methods of these classes, etc.). > One note here. Having method classes separated from the object classes > allows to add new methods to existing objects without affecting those > objects and without redefining object classes. > > Ability to write simple plugins to extend existing objects is very > attractive. > > In case of methods being methods of Python object classes you'd lose this > feature and in order to gain it back you would need to deal with > some sort of method descriptors -- in many cases methods are in fact > collection of actual procedures working together, for example, most > of CRUD methods have pre/post modifier callbacks, spanned over elaborate > Method inheritance tree and allow to amend actual execute code. > Pulling this into a single Python method would require more work that > would later grow into an imitation of a separate method class anyway. > The difference between functions, methods, and callable objects in Python is surprisingly small. I'd bet allowing the implementer to use the best tool out out of those for her job would even involve less magic than the current approach. (We actually parse the class names with a regex!) But as I said, this is generalizing my idea too much. I'd like to keep my feet on the ground so I'm currently proposing the change for Commands only. -- Petr? From jcholast at redhat.com Mon Jan 14 17:56:35 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 14 Jan 2013 18:56:35 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F4457C.8020303@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> <50F4457C.8020303@redhat.com> Message-ID: <50F446D3.3050804@redhat.com> On 14.1.2013 18:50, Petr Viktorin wrote: > On 01/14/2013 06:31 PM, Alexander Bokovoy wrote: >> On Mon, 14 Jan 2013, Jan Cholasta wrote: >>> On 14.1.2013 17:06, Petr Viktorin wrote: >>>> >>>> IPA Command objects sometimes need to pass some data between their >>>> various methods. Currently that's done using the thread-local context. >>>> For an example see dnsrecord_del, which sets a "del_all" flag in >>>> pre_callback and then checks it in execute. >>>> >>>> While that works for now, it's far from best practice. For example, if >>>> some Command can call another Command, we need to carefully check that >>>> the global data isn't overwritten. >>>> >>>> >>>> The other way data is passed around is arguments. The callback methods >>>> take a bunch of arguments that are the same for a particular Command >>>> invocation (ldap, entry_attrs, *keys, **options). By now, there's no >>>> hope of adding a new one, since all the callbacks would need to be >>>> rewritten. (Well, one could add an artificial option, but that's >>>> clearly >>>> not a good solution.) >>>> In OOP, this problem is usually solved by putting the data in an object >>>> and passing that around. 
Or better, putting it in the object the >>>> methods >>>> are called on. >>>> >>>> This got me thinking -- why do we not make a new Command instance for >>>> every command invocation? Currently Command objects are only created >>>> once and added to the api, so they can't be used to store per-command >>>> data. >>>> It seems that having `api.Command.user_add` create a new instance would >>>> work better for us. (Of course it's not the best syntax for creating a >>>> new object, but having to change all the calling code would be too >>>> disruptive). >>>> What do you think? >>>> >>> >>> You could extend that to other plugin types as well (e.g. having >>> Object instances with access to a single object's params instead of >>> passing data around in a dict would be superb), but I'm afraid this >>> kind of change won't be easy to do now. > > > Ah, yes, you've discovered my ultimate goal: rewriting the whole > framefork :) It would seem we share the same ultimate goal, sir! :-) > But, rewriting the whole framework at once is not realistic, so I'd like > to do this in little steps. Starting with Commands. That should be > possible. > >>> The framework was designed around singleton plugin objects right from >>> the beginning. I personally think this design sucks, as every kind of >>> entity in IPA is described by such an object (object classes are >>> singleton objects, methods are singleton objects, etc.), instead of >>> using appropriate Python primitives for the job (object classes should >>> be Python classes, methods should be methods of these classes, etc.). >> One note here. Having method classes separated from the object classes >> allows to add new methods to existing objects without affecting those >> objects and without redefining object classes. >> >> Ability to write simple plugins to extend existing objects is very >> attractive. >> >> In case of methods being methods of Python object classes you'd lose this >> feature and in order to gain it back you would need to deal with >> some sort of method descriptors -- in many cases methods are in fact >> collection of actual procedures working together, for example, most >> of CRUD methods have pre/post modifier callbacks, spanned over elaborate >> Method inheritance tree and allow to amend actual execute code. >> Pulling this into a single Python method would require more work that >> would later grow into an imitation of a separate method class anyway. >> Well, that is your idea, there are more ways around this. It would certainly feel more natural to work with such objects in Python than what we do now (feels rather C-ish to me). > > The difference between functions, methods, and callable objects in > Python is surprisingly small. I'd bet allowing the implementer to use > the best tool out out of those for her job would even involve less magic > than the current approach. (We actually parse the class names with a > regex!) +1 > > > But as I said, this is generalizing my idea too much. I'd like to keep > my feet on the ground so I'm currently proposing the change for Commands > only. 
> -- Jan Cholasta From rcritten at redhat.com Mon Jan 14 18:17:19 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 14 Jan 2013 13:17:19 -0500 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F43FBE.80401@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F436C8.903@redhat.com> <50F43FBE.80401@redhat.com> Message-ID: <50F44BAF.6040808@redhat.com> Petr Viktorin wrote: > On 01/14/2013 05:48 PM, John Dennis wrote: >> On 01/14/2013 11:06 AM, Petr Viktorin wrote: >>> >>> IPA Command objects sometimes need to pass some data between their >>> various methods. Currently that's done using the thread-local context. >>> For an example see dnsrecord_del, which sets a "del_all" flag in >>> pre_callback and then checks it in execute. >>> >>> While that works for now, it's far from best practice. For example, if >>> some Command can call another Command, we need to carefully check that >>> the global data isn't overwritten. >>> >>> >>> The other way data is passed around is arguments. The callback methods >>> take a bunch of arguments that are the same for a particular Command >>> invocation (ldap, entry_attrs, *keys, **options). By now, there's no >>> hope of adding a new one, since all the callbacks would need to be >>> rewritten. (Well, one could add an artificial option, but that's clearly >>> not a good solution.) >>> In OOP, this problem is usually solved by putting the data in an object >>> and passing that around. Or better, putting it in the object the methods >>> are called on. >>> >>> This got me thinking -- why do we not make a new Command instance for >>> every command invocation? Currently Command objects are only created >>> once and added to the api, so they can't be used to store per-command >>> data. >>> It seems that having `api.Command.user_add` create a new instance would >>> work better for us. (Of course it's not the best syntax for creating a >>> new object, but having to change all the calling code would be too >>> disruptive). >>> What do you think? >>> >> >> Just a few thoughts, no answers ... :-) >> >> I agree with you that using thread local context blocks to pass >> cooperating data is indicative of a design flaw elsewhere. >> >> See the discussion from a couple of days ago concerning api locking and >> making our system robust against foreign plugins. As I understand it >> that design prohibits modifying the singleton command objects. Under >> your proposal how would we maintain that protection? The fact the >> commands are singletons is less of an issue than the fact they lock >> themselves when the api is finalized. If the commands instead acted as a >> "factory" and produced new instances wouldn't they also have to observe >> the same locking rules thus defeating the value of instantiating copies >> of the command? > > Good point. Actually, making api a factory would eliminate the reason > for locking Commands: the foreign plugin could do whatever it wanted to > its instance of the Command, and all would be well unless it modifies > the class itself (which is about as bad as using setattr). I still think this would invite people to do even more dangerous things, like changing the API on-the-fly. > >> I think the entire design of the plugin system has at it's heart >> non-modifiable singleton command objects which does not carry state. > > Yes. I guess I'm just trying to find ways to get out of that trap... context is your state, right? 
It is sort of bolted on but it is per-request and guaranteed not to leak into anything else (except batch, as noted below). > >> FWIW, I was never convinced the trade-off between protecting our API and >> being able to make smart coding choices (such as your suggestion) struck >> the right balance. >> >> Going back to one of your suggestions of passing a "context block" as a >> parameter. Our method signatures do not currently support that as you >> observe. But given the fact by conscious design a thread only executes >> one *top-level* command at a time and then clears it state the thread >> local context block is effectively playing the almost exactly same role >> as if it were passed as a parameter. > > Almost. But, lots of Commands call other Commands, and the caller needs > to know the internals of the callee to make sure the context attributes > don't clash. Not to mention the other things, such as connection > backends, that use the thread-local object. It's fragile. No argument there. We could clean this up somewhat by imposing a namespace into variable naming. > > As for clearing the state, you already can't rely on that: the batch > plugin doesn't do it. > Yes, speaking of bolted on, that defines the batch plugin pretty well. It should be fairly straightforward to clear the state between executions though (or at least parts of it, there may be some batch-specific things we'd want to maintain). rob From pviktori at redhat.com Mon Jan 14 18:40:50 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 14 Jan 2013 19:40:50 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F44BAF.6040808@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F436C8.903@redhat.com> <50F43FBE.80401@redhat.com> <50F44BAF.6040808@redhat.com> Message-ID: <50F45132.6000007@redhat.com> On 01/14/2013 07:17 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/14/2013 05:48 PM, John Dennis wrote: >>> On 01/14/2013 11:06 AM, Petr Viktorin wrote: >>>> >>>> IPA Command objects sometimes need to pass some data between their >>>> various methods. Currently that's done using the thread-local context. >>>> For an example see dnsrecord_del, which sets a "del_all" flag in >>>> pre_callback and then checks it in execute. >>>> >>>> While that works for now, it's far from best practice. For example, if >>>> some Command can call another Command, we need to carefully check that >>>> the global data isn't overwritten. >>>> >>>> >>>> The other way data is passed around is arguments. The callback methods >>>> take a bunch of arguments that are the same for a particular Command >>>> invocation (ldap, entry_attrs, *keys, **options). By now, there's no >>>> hope of adding a new one, since all the callbacks would need to be >>>> rewritten. (Well, one could add an artificial option, but that's >>>> clearly >>>> not a good solution.) >>>> In OOP, this problem is usually solved by putting the data in an object >>>> and passing that around. Or better, putting it in the object the >>>> methods >>>> are called on. >>>> >>>> This got me thinking -- why do we not make a new Command instance for >>>> every command invocation? Currently Command objects are only created >>>> once and added to the api, so they can't be used to store per-command >>>> data. >>>> It seems that having `api.Command.user_add` create a new instance would >>>> work better for us. (Of course it's not the best syntax for creating a >>>> new object, but having to change all the calling code would be too >>>> disruptive).
>>>> What do you think? >>>> >>> >>> Just a few thoughts, no answers ... :-) >>> >>> I agree with you that using thread local context blocks to pass >>> cooperating data is indicative of a design flaw elsewhere. >>> >>> See the discussion from a couple of days ago concerning api locking and >>> making our system robust against foreign plugins. As I understand it >>> that design prohibits modifying the singleton command objects. Under >>> your proposal how would we maintain that protection? The fact the >>> commands are singletons is less of an issue than the fact they lock >>> themselves when the api is finalized. If the commands instead acted as a >>> "factory" and produced new instances wouldn't they also have to observe >>> the same locking rules thus defeating the value of instantiating copies >>> of the command? >> >> Good point. Actually, making api a factory would eliminate the reason >> for locking Commands: the foreign plugin could do whatever it wanted to >> its instance of the Command, and all would be well unless it modifies >> the class itself (which is about as bad as using setattr). > > I still think this would invite people to do even more dangerous things, > like changing the API on-the-fly. Then let them. We're all adults here. You can't stop people from writing bad plugins. You can't stop attackers from killing your machine if you let them run any code. What's the point? The current design invites *us* to do things like setattr and thread-local state, just to get out of the boundaries we've set for ourselves. And the boundaries we need will grow over time, see the discussion of bolted-on things below. Sharp knives are safer than dull ones. >>> I think the entire design of the plugin system has at it's heart >>> non-modifiable singleton command objects which does not carry state. >> >> Yes. I guess I'm just trying to find ways to get out of that trap... > > context is your state, right? It is sort of bolted on but it is > per-request and guaranteed not to leak into anything else (except batch, > as noted below). We have and support the batch plugin, so that guarantee isn't worth much. Thread-local state works for now but as more things are built on it, it'll become unmanageable. >>> FWIW, I was never convinced the trade-off between protecting our API and >>> being able to make smart coding choices (such as your suggestion) struck >>> the right balance. >>> >>> Going back to one of your suggestions of passing a "context block" as a >>> parameter. Our method signatures do not currently support that as you >>> observe. But given the fact by conscious design a thread only executes >>> one *top-level* command at a time and then clears it state the thread >>> local context block is effectively playing the almost exactly same role >>> as if it were passed as a parameter. >> >> Almost. But, lots of Commands call other Commands, and the caller needs >> to know the internals of the callee to make sure the context attributes >> don't clash. Not to mention the other things, such as connection >> backends, that use the thread-local object. It's fragile. > > No argument there. We could clean this up somewhat by imposing a > namespace into variable naming. >> >> As for clearing the state, you already can't rely on that: the batch >> plugin doesn't do it. >> > > Yes, speaking of bolted on, that defines the batch plugin pretty well. 
> It should be fairly straightforward to clear the state between > executions though (or at least parts of it, there may be some > batch-specif cthings we'd want to maintain). > There are also connection-specific things to watch out for. But I'd rather refactor than bolt on more things as we need them, because the code will only get worse with each thing that's bolted on. -- Petr? From jdennis at redhat.com Mon Jan 14 19:17:48 2013 From: jdennis at redhat.com (John Dennis) Date: Mon, 14 Jan 2013 14:17:48 -0500 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F44BAF.6040808@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F436C8.903@redhat.com> <50F43FBE.80401@redhat.com> <50F44BAF.6040808@redhat.com> Message-ID: <50F459DC.1090407@redhat.com> On 01/14/2013 01:17 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> As for clearing the state, you already can't rely on that: the batch >> plugin doesn't do it. > Yes, speaking of bolted on, that defines the batch plugin pretty well. > It should be fairly straightforward to clear the state between > executions though (or at least parts of it, there may be some > batch-specif cthings we'd want to maintain). I think the problem is there are different groups of data being maintained in the context and we don't separate them into their logical domains. There is connection information that persists across all commands in the batch. Then there is the per command information specific to each command in the batch. Each should have it's own context. But as Petr3 points out the per thread context state is kinda a junk box where we throw in a undisciplined manner all the things we can't fit into a best practices programming model. Some of this is the fault of the framework design and the priorities that drove it. But not all of this can be laid at the feet of our framework. Some of it is due to the inflexibility of the core Python modules we use and their poor design. A good example is the fact the request URL is unavailable when processing a HTTP response in one of the libraries we're using, thats just bad design and because we use that library we have to live with it, hence sticking the request URL into the thread local context. There are numerous example of things like this, most are expedient workarounds to bad code design (some under our control, some not). -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From jdennis at redhat.com Mon Jan 14 19:45:38 2013 From: jdennis at redhat.com (John Dennis) Date: Mon, 14 Jan 2013 14:45:38 -0500 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F446D3.3050804@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> <50F4457C.8020303@redhat.com> <50F446D3.3050804@redhat.com> Message-ID: <50F46062.4040502@redhat.com> On 01/14/2013 12:56 PM, Jan Cholasta wrote: > On 14.1.2013 18:50, Petr Viktorin wrote: >> Ah, yes, you've discovered my ultimate goal: rewriting the whole >> framefork :) > > It would seem we share the same ultimate goal, sir! :-) Well it's reassuring I'm not alone in my frustration with elements of the framework. I thought it was just me :-) I have one other general complaint about the framework: Too much magic! What do I mean by magic? Things which spring into existence at run time for which there is no static definition. I've spent (wasted) significant amounts of time trying to figure out how something gets instantiated and initialized. 
These things don't exist in the static code, you can't search for them because they are synthetic. You can see these things being referenced but you'll never find a class definition or __init__() method or assignment to a specific object attribute. It's all very very clever but at the same time very obscure. If you just use the framework in a cookie-cutter fashion this has probably never bothered you, but if you have modify what the framework provides it can be difficult. But I don't want to carp on the framework too much without giving credit to Jason first. His mandate was to produce a pluggable framework that was robust, extensible, and supported easy plugin authorship. Jason was dedicated, almost maniacal in his attention to detail and best practices. He also had to design most of this himself (the rest of the team was heads down on other things at the time). It has mostly stood the test of time. It's pretty hard to anticipate the pain points, that's something only experience with system can give you down the road, which is where we find ourselves now. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Mon Jan 14 21:56:42 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 14 Jan 2013 16:56:42 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F40C5E.9010302@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> Message-ID: <50F47F1A.30702@redhat.com> Petr Viktorin wrote: > On 01/12/2013 12:49 AM, Rob Crittenden wrote: >> Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>> Petr Viktorin wrote: >>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>> Petr Viktorin wrote: >>>> [...] >>>>>>>> >>>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>>> little >>>>>>>> about). >>>>>>>> >>>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>>> certmonger the only possible mechanism to update them? >>>>>>> >>>>>>> You raise a good point. If though some mechanism someone replaces >>>>>>> one of >>>>>>> these certs it will cause the script to fail. Some notification of >>>>>>> this >>>>>>> failure will be logged though, and of course, the certs won't be >>>>>>> renewed. >>>>>>> >>>>>>> One could conceivably manually renew one of these certificates. >>>>>>> It is >>>>>>> probably a very remote possibility but it is non-zero. >>>>>>> >>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>> If it >>>>>>>> managed to update the audit cert before starting on the others, >>>>>>>> we'd >>>>>>>> get >>>>>>>> no CA restart for the others. >>>>>>> >>>>>>> These all get issued at the same time so should expire at the same >>>>>>> time >>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>> minutes >>>>>>> waiting for the renewal to complete, then give up. >>>>>> >>>>>> The certs might take different amounts of time to update, right? >>>>>> Eventually, the expirations could go out of sync enough for it to >>>>>> matter. 
>>>>>> AFAICS, without proper locking we still get a race condition when the >>>>>> other certs start being renewed some time (much less than 10 min) >>>>>> after >>>>>> the audit one: >>>>>> >>>>>> (time axis goes down) >>>>>> >>>>>> audit cert other cert >>>>>> ---------- ---------- >>>>>> certmonger does renew . >>>>>> post-renew script starts . >>>>>> check state of other certs: OK . >>>>>> . certmonger starts renew >>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >>>>> >>>>> This can't happen because we count the # of expected certs and wait >>>>> until all are in MONITORING before continuing. >>>> >>>> The problem is that they're also in MONITORING before the whole renewal >>>> starts. If the script happens to check just before the state changes >>>> from MONITORING to GENERATING_CSR or whatever, we can get corruption. >>>> >>>>> The worse that would >>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>> wouldn't be restarted. >>>>> >>>>>> >>>>>> >>>>>>> The state the system would be in is this: >>>>>>> >>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>> - CA is not restarted so will not use updated certificates >>>>>>> >>>>>>>> And anyway, why does certmonger do renewals in parallel? It seems >>>>>>>> that >>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>> script is >>>>>>>> done, this patch wouldn't be necessary. >>>>>>>> >>>>>>> >>>>>>> From what Nalin told me certmonger has some coarse locking such >>>>>>> that >>>>>>> renewals in a the same NSS database are serialized. As you point >>>>>>> out, it >>>>>>> would be nice to extend this locking to the post renewal scripts. We >>>>>>> can >>>>>>> ask Nalin about it. That would fix the potential corruption issue. >>>>>>> It is >>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>> >>>>>> >>>>>> Well, three extra restarts every few years seems like a small >>>>>> price to >>>>>> pay for robustness. >>>>> >>>>> It is a bit of a problem though because the certs all renew within >>>>> seconds so end up fighting over who is restarting dogtag. This can >>>>> cause >>>>> some renewals go into a failure state to be retried later. This is >>>>> fine >>>>> functionally but makes QE a bit of a pain. You then have to make sure >>>>> that renewal is basically done, then restart certmonger and check >>>>> everything again, over and over until all the certs are renewed. >>>>> This is >>>>> difficult to automate. >>>> >>>> So we need to extend the certmonger lock, and wait until Dogtag is back >>>> up before exiting the script. That way it'd still take longer than 1 >>>> restart, but all the renews should succeed. >>>> >>> >>> Right, but older dogtag versions don't have the handy servlet to tell >>> that the service is actually up and responding. So it is difficult to >>> tell from tomcat alone whether the CA is actually up and handling >>> requests. >>> >> >> Revised patch that takes advantage of new version of certmonger. >> certmonger-0.65 adds locking from the time renewal begins to the end of >> the post_save_command. This lets us be sure that no other certmonger >> renewals will have the NSS database open in read-write mode. >> >> We need to be sure that tomcat is shut down before we let certmonger >> save the certificate to the NSS database because dogtag opens its >> database read/write and two writers can cause corruption. 
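The corruption mode described above is simply two processes holding the same NSS database open read-write at once. A rough sketch of the kind of serialization certmonger 0.65 is said to add around renewal and the post-save command (illustrative only -- the lock file path and helper are invented, this is not certmonger's actual implementation):

    import fcntl
    from contextlib import contextmanager

    @contextmanager
    def nssdb_write_lock(lockfile='/var/run/example-nssdb.lock'):
        # Hypothetical lock file; anything that opens the NSS database
        # read-write (renewal, certutil, the post-save script) takes it first.
        with open(lockfile, 'w') as f:
            fcntl.flock(f, fcntl.LOCK_EX)   # block until the other writer is done
            try:
                yield
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    # with nssdb_write_lock():
    #     stop pki-ca, let the renewed cert be written, fix trust, start pki-ca

With a lock held from the start of renewal through the end of the post-save script, the script can stop the CA, let the certificate be saved, and restart the CA without another renewal touching the database in between.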
>> >> rob >> > > stop_pkicad and start_pkicad need the Dogtag version check to select > pki_cad/pki_tomcatd. Fixed. > > A more serious issue is that stop_pkicad needs to be installed on > upgrades. Currently the whole enable_certificate_renewal step in > ipa-upgradeconfig is skipped if it was done before. I added a separate upgrade test for this. It currently won't work in SELinux enforcing mode because certmonger isn't allowed to talk to dbus in an rpm post script. It's being looked at. > In stop_pkicad can you change the first log message to "certmonger > stopping %sd"? It's before the action so we don't want past tense. Fixed. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1079-3-renewal.patch Type: text/x-patch Size: 27128 bytes Desc: not available URL: From nalin at redhat.com Mon Jan 14 22:18:28 2013 From: nalin at redhat.com (Nalin Dahyabhai) Date: Mon, 14 Jan 2013 17:18:28 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F0A4F4.5030304@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> Message-ID: <20130114221828.GK29792@redhat.com> On Fri, Jan 11, 2013 at 06:49:08PM -0500, Rob Crittenden wrote: > Revised patch that takes advantage of new version of certmonger. > certmonger-0.65 adds locking from the time renewal begins to the end > of the post_save_command. A note: the lock isn't obtained until after we've obtained a certificate from a CA, and we're ready to save it to the specified location. That's why attempting to renew multiple certificates at the same time can result in transient CA-unreachable errors being encountered for some of them: while we're attempting to obtain one certificate, we may also be restarting the CA as part of the process of saving one that we've already obtained. In these cases, the daemon will try to contact the CA again later, so it should all sort itself out in the end. HTH, Nalin From pviktori at redhat.com Tue Jan 15 09:23:16 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 10:23:16 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F46062.4040502@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> <50F4457C.8020303@redhat.com> <50F446D3.3050804@redhat.com> <50F46062.4040502@redhat.com> Message-ID: <50F52004.9020507@redhat.com> On 01/14/2013 08:45 PM, John Dennis wrote: > On 01/14/2013 12:56 PM, Jan Cholasta wrote: >> On 14.1.2013 18:50, Petr Viktorin wrote: >>> Ah, yes, you've discovered my ultimate goal: rewriting the whole >>> framefork :) >> >> It would seem we share the same ultimate goal, sir! :-) > > Well it's reassuring I'm not alone in my frustration with elements of > the framework. I thought it was just me :-) > > I have one other general complaint about the framework: > > Too much magic! > > What do I mean by magic? Things which spring into existence at run time > for which there is no static definition. I've spent (wasted) significant > amounts of time trying to figure out how something gets instantiated and > initialized. These things don't exist in the static code, you can't > search for them because they are synthetic. 
You can see these things > being referenced but you'll never find a class definition or __init__() > method or assignment to a specific object attribute. It's all very very > clever but at the same time very obscure. If you just use the framework > in a cookie-cutter fashion this has probably never bothered you, but if > you have modify what the framework provides it can be difficult. > > But I don't want to carp on the framework too much without giving credit > to Jason first. His mandate was to produce a pluggable framework that > was robust, extensible, and supported easy plugin authorship. Jason was > dedicated, almost maniacal in his attention to detail and best > practices. He also had to design most of this himself (the rest of the > team was heads down on other things at the time). It has mostly stood > the test of time. It's pretty hard to anticipate the pain points, that's > something only experience with system can give you down the road, which > is where we find ourselves now. > +1. It's easy to criticize in hindsight, and I have great respect for the framework and its author. Nevertheless, software grows over time and we need to balance bolting things on with improving the foundations, so that we don't get stuck in a maze of workarounds in a few years. -- Petr? From pspacek at redhat.com Tue Jan 15 09:57:30 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 15 Jan 2013 10:57:30 +0100 Subject: [Freeipa-devel] [PATCH 0107] Don't fail if idnsSOAserial attribute is missing in LDAP In-Reply-To: <20130114141734.GA17399@redhat.com> References: <50F05048.80402@redhat.com> <20130114141734.GA17399@redhat.com> Message-ID: <50F5280A.8030707@redhat.com> On 14.1.2013 15:17, Adam Tkac wrote: > On Fri, Jan 11, 2013 at 06:47:52PM +0100, Petr Spacek wrote: >> >Hello, >> > >> > Don't fail if idnsSOAserial attribute is missing in LDAP. >> > >> > DNS zones created on remote IPA 3.0 server don't have >> > idnsSOAserial attribute present in LDAP. >> > >> > https://bugzilla.redhat.com/show_bug.cgi?id=894131 >> > >> > >> >Attached patch contains the minimal set of changes need for resurrecting BIND. >> > >> >In configurations with serial auto-increment: >> >- enabled (IPA 3.0+ default) - some new serial is written back to >> >LDAP nearly immediately >> >- disabled - the attribute will be missing forever > Ack Pushed to master and v2: 5fcfb292ca07d0aa3a0d1a87baf2f6b35336dba2 -- Petr^2 Spacek From akrivoka at redhat.com Tue Jan 15 11:22:22 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Tue, 15 Jan 2013 12:22:22 +0100 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service Message-ID: <50F53BEE.7020306@redhat.com> crond was not included in the list of default HBAC services - it needed to be added manually. As crond is a commonly used service, it is now included as a default HBAC service. Ticket: https://fedorahosted.org/freeipa/ticket/3215 I'm not sure whether a design document is needed for this ticket. It _is_ marked as an RFE, but it is a pretty simple and specific change. After yesterday's meeting, I'm not really clear on whether a design page is needed in cases like this. Thoughts? -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-akrivoka-0003-Add-crond-as-a-default-HBAC-service.patch Type: text/x-patch Size: 967 bytes Desc: not available URL: From pviktori at redhat.com Tue Jan 15 11:36:11 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 12:36:11 +0100 Subject: [Freeipa-devel] [PATCH] 0119 Switch client to JSON-RPC Message-ID: <50F53F2B.4020406@redhat.com> I meant to hold this patch a while longer to let it mature, but from what Brian Smith asked on the user list it seems it could help him. Design: http://freeipa.org/page/V3/JSON-RPC Ticket: https://fedorahosted.org/freeipa/ticket/3299 See the design page for what the patch does. As much as I've tried to avoid them, the code includes some workarounds: It extends xmlrpclib to also support JSON. This is rather intrusive, but to not do that I'd need to write a parallel stack for JSON, without the help of a standard library. The registration of either jsonclient or xmlclient as "rpcclient" in the API also needs a bit of magic, since the framework requires the class name to match the attribute. To prevent backwards compatibility problems, we need to ensure that all official JSON clients send the API version, so this patch should be applied after my patches 0104-0106. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0119-Switch-client-to-JSON-RPC.patch Type: text/x-patch Size: 33918 bytes Desc: not available URL: From abokovoy at redhat.com Tue Jan 15 11:56:51 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 15 Jan 2013 13:56:51 +0200 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <50F53BEE.7020306@redhat.com> References: <50F53BEE.7020306@redhat.com> Message-ID: <20130115115651.GA28090@redhat.com> On Tue, 15 Jan 2013, Ana Krivokapic wrote: >crond was not included in the list of default HBAC services - it >needed to be added manually. As crond is a commonly used service, it >is now included as a default HBAC service. > >Ticket: https://fedorahosted.org/freeipa/ticket/3215 ACK. Simple and obvious. >I'm not sure whether a design document is needed for this ticket. It >_is_ marked as an RFE, but it is a pretty simple and specific change. >After yesterday's meeting, I'm not really clear on whether a design >page is needed in cases like this. Thoughts? I suggested that additions like this do not need specific design document. Since this is the whole patch: >--- a/install/updates/50-hbacservice.update >+++ b/install/updates/50-hbacservice.update >@@ -1,3 +1,10 @@ >+dn: cn=crond,cn=hbacservices,cn=hbac,$SUFFIX >+default:objectclass: ipahbacservice >+default:objectclass: ipaobject >+default:cn: crond >+default:description: crond >+default:ipauniqueid:autogenerate >+ > dn: cn=vsftpd,cn=hbacservices,cn=hbac,$SUFFIX > default:objectclass: ipahbacservice > default:objectclass: ipaobject .. I would simply add a link to the mail archive of this thread to the ticket itself. Mailing archive of freeipa-devel@ is as stable as our wiki, if not more resilient. ;) -- / Alexander Bokovoy From pspacek at redhat.com Tue Jan 15 12:35:17 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 15 Jan 2013 13:35:17 +0100 Subject: [Freeipa-devel] [PATCH 0108] Update NEWS file and bump NVR to 2.4 Message-ID: <50F54D05.7020706@redhat.com> Hello, Update NEWS file and bump NVR to 2.4. -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... 
Name: bind-dyndb-ldap-pspacek-0108-Update-NEWS-file-and-bump-NVR-to-2.4.patch Type: text/x-patch Size: 2134 bytes Desc: not available URL: From mkosek at redhat.com Tue Jan 15 12:30:12 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 15 Jan 2013 13:30:12 +0100 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart Message-ID: <50F54BD4.8020600@redhat.com> When either dirsrv or krb5kdc is down, named service restart in ipa-upgradeconfig will fail and cause a crash of the whole upgrade process. Rather only report a failure to restart the service and continue with the upgrade as it does not need the named service running. Do the same precaution for pki-ca service restart. https://fedorahosted.org/freeipa/ticket/3350 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-350-upgrade-process-should-not-crash-on-named-restart.patch Type: text/x-patch Size: 2021 bytes Desc: not available URL: From dpal at redhat.com Tue Jan 15 12:32:05 2013 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 15 Jan 2013 07:32:05 -0500 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <20130115115651.GA28090@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> Message-ID: <50F54C45.2020100@redhat.com> On 01/15/2013 06:56 AM, Alexander Bokovoy wrote: > On Tue, 15 Jan 2013, Ana Krivokapic wrote: >> crond was not included in the list of default HBAC services - it >> needed to be added manually. As crond is a commonly used service, it >> is now included as a default HBAC service. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/3215 > ACK. > > Simple and obvious. > >> I'm not sure whether a design document is needed for this ticket. It >> _is_ marked as an RFE, but it is a pretty simple and specific change. >> After yesterday's meeting, I'm not really clear on whether a design >> page is needed in cases like this. Thoughts? > I suggested that additions like this do not need specific design > document. I suggest that we have a catch all page on the wiki that would be called "Minor Enhancements". It will consist of the table with two columns. First column would be the link to the ticket. The second column would be a short comment. F this ticket it will look like this: Enhancement Comment https://fedorahosted.org/freeipa/ticket/3215 Crond was added to the list of available HBAC services Such approach would not break the reporting and would show what has been done for minor enhancements like this. > > Since this is the whole patch: >> --- a/install/updates/50-hbacservice.update >> +++ b/install/updates/50-hbacservice.update >> @@ -1,3 +1,10 @@ >> +dn: cn=crond,cn=hbacservices,cn=hbac,$SUFFIX >> +default:objectclass: ipahbacservice >> +default:objectclass: ipaobject >> +default:cn: crond >> +default:description: crond >> +default:ipauniqueid:autogenerate >> + >> dn: cn=vsftpd,cn=hbacservices,cn=hbac,$SUFFIX >> default:objectclass: ipahbacservice >> default:objectclass: ipaobject > .. I would simply add a link to the mail archive of this thread to the > ticket itself. Mailing archive of freeipa-devel@ is as stable as our > wiki, if not more resilient. ;) > -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From abokovoy at redhat.com Tue Jan 15 12:56:23 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 15 Jan 2013 14:56:23 +0200 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <50F54C45.2020100@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> <50F54C45.2020100@redhat.com> Message-ID: <20130115125623.GB28090@redhat.com> On Tue, 15 Jan 2013, Dmitri Pal wrote: >On 01/15/2013 06:56 AM, Alexander Bokovoy wrote: >> On Tue, 15 Jan 2013, Ana Krivokapic wrote: >>> crond was not included in the list of default HBAC services - it >>> needed to be added manually. As crond is a commonly used service, it >>> is now included as a default HBAC service. >>> >>> Ticket: https://fedorahosted.org/freeipa/ticket/3215 >> ACK. >> >> Simple and obvious. >> >>> I'm not sure whether a design document is needed for this ticket. It >>> _is_ marked as an RFE, but it is a pretty simple and specific change. >>> After yesterday's meeting, I'm not really clear on whether a design >>> page is needed in cases like this. Thoughts? >> I suggested that additions like this do not need specific design >> document. > > >I suggest that we have a catch all page on the wiki that would be called >"Minor Enhancements". >It will consist of the table with two columns. >First column would be the link to the ticket. >The second column would be a short comment. >F this ticket it will look like this: > >Enhancement Comment >https://fedorahosted.org/freeipa/ticket/3215 Crond was added to the list of available HBAC services > > > >Such approach would not break the reporting and would show what has been >done for minor enhancements like this. This would work as well. If we still link to the mail thread in the ticket, all ends will be connected. +1 from me. -- / Alexander Bokovoy From pspacek at redhat.com Tue Jan 15 13:01:05 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 15 Jan 2013 14:01:05 +0100 Subject: [Freeipa-devel] [PATCH 0109] Add verbose_checks option to README Message-ID: <50F55311.1060604@redhat.com> Hello, Add verbose_checks option to README. -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: bind-dyndb-ldap-pspacek-0109-Add-verbose_checks-option-to-README.patch Type: text/x-patch Size: 964 bytes Desc: not available URL: From simo at redhat.com Tue Jan 15 13:39:01 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 08:39:01 -0500 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <50F52004.9020507@redhat.com> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> <50F4457C.8020303@redhat.com> <50F446D3.3050804@redhat.com> <50F46062.4040502@redhat.com> <50F52004.9020507@redhat.com> Message-ID: <1358257141.15136.119.camel@willson.li.ssimo.org> On Tue, 2013-01-15 at 10:23 +0100, Petr Viktorin wrote: > On 01/14/2013 08:45 PM, John Dennis wrote: > > On 01/14/2013 12:56 PM, Jan Cholasta wrote: > >> On 14.1.2013 18:50, Petr Viktorin wrote: > >>> Ah, yes, you've discovered my ultimate goal: rewriting the whole > >>> framefork :) > >> > >> It would seem we share the same ultimate goal, sir! :-) > > > > Well it's reassuring I'm not alone in my frustration with elements of > > the framework. I thought it was just me :-) > > > > I have one other general complaint about the framework: > > > > Too much magic! > > > > What do I mean by magic? 
Things which spring into existence at run time > > for which there is no static definition. I've spent (wasted) significant > > amounts of time trying to figure out how something gets instantiated and > > initialized. These things don't exist in the static code, you can't > > search for them because they are synthetic. You can see these things > > being referenced but you'll never find a class definition or __init__() > > method or assignment to a specific object attribute. It's all very very > > clever but at the same time very obscure. If you just use the framework > > in a cookie-cutter fashion this has probably never bothered you, but if > > you have modify what the framework provides it can be difficult. > > > > But I don't want to carp on the framework too much without giving credit > > to Jason first. His mandate was to produce a pluggable framework that > > was robust, extensible, and supported easy plugin authorship. Jason was > > dedicated, almost maniacal in his attention to detail and best > > practices. He also had to design most of this himself (the rest of the > > team was heads down on other things at the time). It has mostly stood > > the test of time. It's pretty hard to anticipate the pain points, that's > > something only experience with system can give you down the road, which > > is where we find ourselves now. > > > > +1. > It's easy to criticize in hindsight, and I have great respect for the > framework and its author. > Nevertheless, software grows over time and we need to balance bolting > things on with improving the foundations, so that we don't get stuck in > a maze of workarounds in a few years. Can someone summarize how big a change this would be ? I do understand the general discussion, but I have not been involved deeply enough in the framework code to tell. Also how much would this conflict with the proposed LDAP change ? Do we have a way to slowly change stuff or will it require big all-or-nothing changes ? Simo. -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Tue Jan 15 13:43:15 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 08:43:15 -0500 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F54BD4.8020600@redhat.com> References: <50F54BD4.8020600@redhat.com> Message-ID: <1358257395.15136.120.camel@willson.li.ssimo.org> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: > When either dirsrv or krb5kdc is down, named service restart in > ipa-upgradeconfig will fail and cause a crash of the whole upgrade > process. > > Rather only report a failure to restart the service and continue > with the upgrade as it does not need the named service running. Do > the same precaution for pki-ca service restart. > > https://fedorahosted.org/freeipa/ticket/3350 Shouldn't we note it failed and retry later ? Is there a risk it will be down at the end of the upgrade process ? Simo. 
-- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Tue Jan 15 13:48:22 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 08:48:22 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F42859.1070807@redhat.com> References: <50F42859.1070807@redhat.com> Message-ID: <1358257702.15136.124.camel@willson.li.ssimo.org> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: > Hi, > > Since in Kerberos V5 are used 32-bit unix timestamps, setting > maxlife in pwpolicy to values such as 9999 days would cause > integer overflow in krbPasswordExpiration attribute. > > This would result into unpredictable behaviour such as users > not being able to log in after password expiration if password > policy was changed (#3114) or new users not being able to log > in at all (#3312). > > https://fedorahosted.org/freeipa/ticket/3312 > https://fedorahosted.org/freeipa/ticket/3114 Given that we control the KDC LDAP driver I think we should not limit the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. So I would like to Nack this one, sorry. Simo. -- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Tue Jan 15 14:19:30 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 15 Jan 2013 15:19:30 +0100 Subject: [Freeipa-devel] [PATCH 0108] Update NEWS file and bump NVR to 2.4 In-Reply-To: <50F54D05.7020706@redhat.com> References: <50F54D05.7020706@redhat.com> Message-ID: <50F56572.2020608@redhat.com> On 15.1.2013 13:35, Petr Spacek wrote: > Hello, > > Update NEWS file and bump NVR to 2.4. ACKed on IRC, pushed to master and v2: ca4b1602d4fa27c4117563f2ed44cc7181755c31 -- Petr^2 Spacek From pspacek at redhat.com Tue Jan 15 14:20:01 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 15 Jan 2013 15:20:01 +0100 Subject: [Freeipa-devel] [PATCH 0109] Add verbose_checks option to README In-Reply-To: <50F55311.1060604@redhat.com> References: <50F55311.1060604@redhat.com> Message-ID: <50F56591.9070001@redhat.com> On 15.1.2013 14:01, Petr Spacek wrote: > Hello, > > Add verbose_checks option to README. ACKed on IRC, pushed to master and v2: a2ce021d3d219e07b37a04dd5fec5c9abbd1ed60 -- Petr^2 Spacek From mkosek at redhat.com Tue Jan 15 14:37:23 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 15 Jan 2013 15:37:23 +0100 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <1358257395.15136.120.camel@willson.li.ssimo.org> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> Message-ID: <50F569A3.4050203@redhat.com> On 01/15/2013 02:43 PM, Simo Sorce wrote: > On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >> When either dirsrv or krb5kdc is down, named service restart in >> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >> process. >> >> Rather only report a failure to restart the service and continue >> with the upgrade as it does not need the named service running. Do >> the same precaution for pki-ca service restart. >> >> https://fedorahosted.org/freeipa/ticket/3350 > > Shouldn't we note it failed and retry later ? > Is there a risk it will be down at the end of the upgrade process ? > > Simo. > Seems like an overkill to me. It would not certainly help in this case, because the processes that named requires are down. As Rob suggested, user upgrading the IPA may be running in a lower run level for example, it that case I think we may not even try to restart the service. 
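The patch as posted boils down to "attempt the restart, report the failure, keep upgrading", which can be sketched roughly like this (the service wrapper and its methods are assumed for illustration; this is not the actual ipa-upgradeconfig code):

    import logging

    def try_restart(service, name):
        # Attempt the restart, but never let a failure abort the upgrade;
        # the upgrade itself does not need named or pki-ca to be running.
        try:
            service.restart()
        except Exception as e:
            logging.error('%s restart failed: %s', name, e)
            logging.error('Please restart %s manually once the upgrade finishes',
                          name)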
Now when I am thinking about it, maybe we should only try to restart if the service is running - because otherwise it would be started later and the changes that were done in scope of upgrade script would be applied. Martin From pviktori at redhat.com Tue Jan 15 14:41:06 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 15:41:06 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F47F1A.30702@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> Message-ID: <50F56A82.4090806@redhat.com> On 01/14/2013 10:56 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/12/2013 12:49 AM, Rob Crittenden wrote: >>> Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>>> Petr Viktorin wrote: >>>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>>> Petr Viktorin wrote: >>>>> [...] >>>>>>>>> >>>>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>>>> little >>>>>>>>> about). >>>>>>>>> >>>>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>>>> certmonger the only possible mechanism to update them? >>>>>>>> >>>>>>>> You raise a good point. If though some mechanism someone replaces >>>>>>>> one of >>>>>>>> these certs it will cause the script to fail. Some notification of >>>>>>>> this >>>>>>>> failure will be logged though, and of course, the certs won't be >>>>>>>> renewed. >>>>>>>> >>>>>>>> One could conceivably manually renew one of these certificates. >>>>>>>> It is >>>>>>>> probably a very remote possibility but it is non-zero. >>>>>>>> >>>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>>> If it >>>>>>>>> managed to update the audit cert before starting on the others, >>>>>>>>> we'd >>>>>>>>> get >>>>>>>>> no CA restart for the others. >>>>>>>> >>>>>>>> These all get issued at the same time so should expire at the same >>>>>>>> time >>>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>>> minutes >>>>>>>> waiting for the renewal to complete, then give up. >>>>>>> >>>>>>> The certs might take different amounts of time to update, right? >>>>>>> Eventually, the expirations could go out of sync enough for it to >>>>>>> matter. >>>>>>> AFAICS, without proper locking we still get a race condition when >>>>>>> the >>>>>>> other certs start being renewed some time (much less than 10 min) >>>>>>> after >>>>>>> the audit one: >>>>>>> >>>>>>> (time axis goes down) >>>>>>> >>>>>>> audit cert other cert >>>>>>> ---------- ---------- >>>>>>> certmonger does renew . >>>>>>> post-renew script starts . >>>>>>> check state of other certs: OK . >>>>>>> . certmonger starts renew >>>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >>>>>> >>>>>> This can't happen because we count the # of expected certs and wait >>>>>> until all are in MONITORING before continuing. >>>>> >>>>> The problem is that they're also in MONITORING before the whole >>>>> renewal >>>>> starts. If the script happens to check just before the state changes >>>>> from MONITORING to GENERATING_CSR or whatever, we can get corruption. 
>>>>> >>>>>> The worse that would >>>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>>> wouldn't be restarted. >>>>>> >>>>>>> >>>>>>> >>>>>>>> The state the system would be in is this: >>>>>>>> >>>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>>> - CA is not restarted so will not use updated certificates >>>>>>>> >>>>>>>>> And anyway, why does certmonger do renewals in parallel? It seems >>>>>>>>> that >>>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>>> script is >>>>>>>>> done, this patch wouldn't be necessary. >>>>>>>>> >>>>>>>> >>>>>>>> From what Nalin told me certmonger has some coarse locking such >>>>>>>> that >>>>>>>> renewals in a the same NSS database are serialized. As you point >>>>>>>> out, it >>>>>>>> would be nice to extend this locking to the post renewal >>>>>>>> scripts. We >>>>>>>> can >>>>>>>> ask Nalin about it. That would fix the potential corruption issue. >>>>>>>> It is >>>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>>> >>>>>>> >>>>>>> Well, three extra restarts every few years seems like a small >>>>>>> price to >>>>>>> pay for robustness. >>>>>> >>>>>> It is a bit of a problem though because the certs all renew within >>>>>> seconds so end up fighting over who is restarting dogtag. This can >>>>>> cause >>>>>> some renewals go into a failure state to be retried later. This is >>>>>> fine >>>>>> functionally but makes QE a bit of a pain. You then have to make sure >>>>>> that renewal is basically done, then restart certmonger and check >>>>>> everything again, over and over until all the certs are renewed. >>>>>> This is >>>>>> difficult to automate. >>>>> >>>>> So we need to extend the certmonger lock, and wait until Dogtag is >>>>> back >>>>> up before exiting the script. That way it'd still take longer than 1 >>>>> restart, but all the renews should succeed. >>>>> >>>> >>>> Right, but older dogtag versions don't have the handy servlet to tell >>>> that the service is actually up and responding. So it is difficult to >>>> tell from tomcat alone whether the CA is actually up and handling >>>> requests. >>>> >>> >>> Revised patch that takes advantage of new version of certmonger. >>> certmonger-0.65 adds locking from the time renewal begins to the end of >>> the post_save_command. This lets us be sure that no other certmonger >>> renewals will have the NSS database open in read-write mode. >>> >>> We need to be sure that tomcat is shut down before we let certmonger >>> save the certificate to the NSS database because dogtag opens its >>> database read/write and two writers can cause corruption. >>> >>> rob >>> >> >> stop_pkicad and start_pkicad need the Dogtag version check to select >> pki_cad/pki_tomcatd. > > Fixed. > >> >> A more serious issue is that stop_pkicad needs to be installed on >> upgrades. Currently the whole enable_certificate_renewal step in >> ipa-upgradeconfig is skipped if it was done before. > > I added a separate upgrade test for this. It currently won't work in > SELinux enforcing mode because certmonger isn't allowed to talk to dbus > in an rpm post script. It's being looked at. > >> In stop_pkicad can you change the first log message to "certmonger >> stopping %sd"? It's before the action so we don't want past tense. > > Fixed. 
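The check that is hard to automate here is essentially "have all tracked requests settled back into MONITORING". A small polling helper can express it; this is only a sketch, it assumes the expected request count is known and that `getcert list` prints a "status:" line per request, and it is not code from the patch:

    import subprocess
    import time

    def wait_for_monitoring(expected_count, timeout=600, poll_interval=10):
        # Poll certmonger until `expected_count` tracked requests report
        # "status: MONITORING", or give up after `timeout` seconds (the
        # thread above mentions a ten-minute wait).
        deadline = time.time() + timeout
        while time.time() < deadline:
            output = subprocess.check_output(['/usr/bin/getcert', 'list'])
            if output.count('status: MONITORING') >= expected_count:
                return True
            time.sleep(poll_interval)
        return False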
> > rob I get a bunch of errors when installing the RPM: Updating : freeipa-server-3.1.0GITfe82329-0.fc18.x86_64 4/14 certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20240902001817' returned non-zero exit status 1 certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20240902001813' returned non-zero exit status 1 certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20240902001814' returned non-zero exit status 1 certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20240902001815' returned non-zero exit status 1 certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20240902001816' returned non-zero exit status 1 certmonger failed to start tracking certificate: Command '/usr/bin/getcert start-tracking -d /etc/pki/pki-tomcat/alias -n auditSigningCert cert-pki-ca -c dogtag-ipa-renew-agent -B /usr/lib64/ipa/certmonger/stop_pkicad -C /usr/lib64/ipa/certmonger/renew_ca_cert "auditSigningCert cert-pki-ca" -P XXXXXXXX' returned non-zero exit status 1 certmonger failed to start tracking certificate: Command '/usr/bin/getcert start-tracking -d /etc/pki/pki-tomcat/alias -n ocspSigningCert cert-pki-ca -c dogtag-ipa-renew-agent -B /usr/lib64/ipa/certmonger/stop_pkicad -C /usr/lib64/ipa/certmonger/renew_ca_cert "ocspSigningCert cert-pki-ca" -P XXXXXXXX' returned non-zero exit status 1 certmonger failed to start tracking certificate: Command '/usr/bin/getcert start-tracking -d /etc/pki/pki-tomcat/alias -n subsystemCert cert-pki-ca -c dogtag-ipa-renew-agent -B /usr/lib64/ipa/certmonger/stop_pkicad -C /usr/lib64/ipa/certmonger/renew_ca_cert "subsystemCert cert-pki-ca" -P XXXXXXXX' returned non-zero exit status 1 certmonger failed to start tracking certificate: Command '/usr/bin/getcert start-tracking -d /etc/httpd/alias -n ipaCert -c dogtag-ipa-renew-agent -C /usr/lib64/ipa/certmonger/renew_ra_cert -p /etc/httpd/alias/pwdfile.txt' returned non-zero exit status 1 certmonger failed to start tracking certificate: Command '/usr/bin/getcert start-tracking -d /etc/pki/pki-tomcat/alias -n Server-Cert cert-pki-ca -c dogtag-ipa-renew-agent -P XXXXXXXX' returned non-zero exit status 1 For each stop-tracking the ipaupgrade.log says: 2030-07-20T04:07:40Z DEBUG Starting external process 2030-07-20T04:07:40Z DEBUG args=/usr/bin/getcert stop-tracking -i 20280801040707 2030-07-20T04:08:11Z DEBUG Process finished, return code=1 2030-07-20T04:08:11Z DEBUG stdout=Please verify that the certmonger service is still running. 2030-07-20T04:08:11Z DEBUG stderr= 2030-07-20T04:08:11Z ERROR certmonger failed to stop tracking certificate: Command '/usr/bin/getcert stop-tracking -i 20280801040707' returned non-zero exit status 1 If I run the same command by hand, it removes the request without problems. -- Petr? 
From rcritten at redhat.com Tue Jan 15 14:40:57 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 15 Jan 2013 09:40:57 -0500 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F569A3.4050203@redhat.com> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> Message-ID: <50F56A79.1020603@redhat.com> Martin Kosek wrote: > On 01/15/2013 02:43 PM, Simo Sorce wrote: >> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >>> When either dirsrv or krb5kdc is down, named service restart in >>> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >>> process. >>> >>> Rather only report a failure to restart the service and continue >>> with the upgrade as it does not need the named service running. Do >>> the same precaution for pki-ca service restart. >>> >>> https://fedorahosted.org/freeipa/ticket/3350 >> >> Shouldn't we note it failed and retry later ? >> Is there a risk it will be down at the end of the upgrade process ? >> >> Simo. >> > > Seems like an overkill to me. It would not certainly help in this case, > because the processes that named requires are down. As Rob suggested, > user upgrading the IPA may be running in a lower run level for example, > it that case I think we may not even try to restart the service. > > Now when I am thinking about it, maybe we should only try to restart if > the service is running - because otherwise it would be started later and > the changes that were done in scope of upgrade script would be applied. That makes sense to me. I also wonder if, since we know about the dirsrv dependency, we shouldn't check that at the same time. rob From pviktori at redhat.com Tue Jan 15 14:42:12 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 15:42:12 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F56A82.4090806@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> <50F56A82.4090806@redhat.com> Message-ID: <50F56AC4.4060508@redhat.com> On 01/15/2013 03:41 PM, Petr Viktorin wrote: > I get a bunch of errors when installing the RPM: Updating from current master, to be precise. -- Petr? From simo at redhat.com Tue Jan 15 14:44:10 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 09:44:10 -0500 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F569A3.4050203@redhat.com> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> Message-ID: <1358261050.15136.130.camel@willson.li.ssimo.org> On Tue, 2013-01-15 at 15:37 +0100, Martin Kosek wrote: > On 01/15/2013 02:43 PM, Simo Sorce wrote: > > On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: > >> When either dirsrv or krb5kdc is down, named service restart in > >> ipa-upgradeconfig will fail and cause a crash of the whole upgrade > >> process. > >> > >> Rather only report a failure to restart the service and continue > >> with the upgrade as it does not need the named service running. Do > >> the same precaution for pki-ca service restart. 
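Since the same getcert invocation succeeds when run by hand afterwards, one low-tech way to narrow this down (or paper over it in the upgrade code) is to retry the call a few times while certmonger comes up. This is purely a debugging sketch, not part of the patch, and it would not help if SELinux is denying the D-Bus access outright:

    import subprocess
    import time

    def getcert_retry(args, tries=3, delay=10):
        # Retry a getcert command in case certmonger is not yet answering
        # on D-Bus (e.g. while it is being restarted during an rpm %post).
        cmd = ['/usr/bin/getcert'] + list(args)
        out = err = ''
        for attempt in range(tries):
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()
            if proc.returncode == 0:
                return out
            time.sleep(delay)
        raise RuntimeError('%s failed %d times: %s' % (' '.join(cmd), tries, err))

    # e.g. getcert_retry(['stop-tracking', '-i', '20240902001817'])

If the failures only ever happen inside the rpm scriptlet, that points back at the SELinux/D-Bus denial mentioned earlier in the thread rather than at the getcert invocations themselves.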
> >> > >> https://fedorahosted.org/freeipa/ticket/3350 > > > > Shouldn't we note it failed and retry later ? > > Is there a risk it will be down at the end of the upgrade process ? > > > > Simo. > > > > Seems like an overkill to me. It would not certainly help in this case, because > the processes that named requires are down. As Rob suggested, user upgrading > the IPA may be running in a lower run level for example, it that case I think > we may not even try to restart the service. Oh I guess I wasn't clear, I did not mean to try to restart the service immediately or multiple times, I meant to make sure that if the service was running when the *whole* update started to make sure it is still running when the whole update finishes. The scenario is: 1. ipa runnig 2. do upgrade 3. restart fails for some reason 4. update completes now what I would like to make sure is that if the restart failed at 3 we try a restart after 4 so that we try to get things up when all the updates are done. Makes sense ? > Now when I am thinking about it, maybe we should only try to restart if the > service is running - because otherwise it would be started later and the > changes that were done in scope of upgrade script would be applied. Yes we should do a conditional restart only, and it is ok to proceeded if it fails, we want to complete the upgrade process in any case, not break out in the middle if at all possible. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Tue Jan 15 15:05:14 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 15 Jan 2013 16:05:14 +0100 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <1358261050.15136.130.camel@willson.li.ssimo.org> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> <1358261050.15136.130.camel@willson.li.ssimo.org> Message-ID: <50F5702A.1010202@redhat.com> On 01/15/2013 03:44 PM, Simo Sorce wrote: > On Tue, 2013-01-15 at 15:37 +0100, Martin Kosek wrote: >> On 01/15/2013 02:43 PM, Simo Sorce wrote: >>> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >>>> When either dirsrv or krb5kdc is down, named service restart in >>>> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >>>> process. >>>> >>>> Rather only report a failure to restart the service and continue >>>> with the upgrade as it does not need the named service running. Do >>>> the same precaution for pki-ca service restart. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3350 >>> >>> Shouldn't we note it failed and retry later ? >>> Is there a risk it will be down at the end of the upgrade process ? >>> >>> Simo. >>> >> >> Seems like an overkill to me. It would not certainly help in this case, because >> the processes that named requires are down. As Rob suggested, user upgrading >> the IPA may be running in a lower run level for example, it that case I think >> we may not even try to restart the service. > > Oh I guess I wasn't clear, I did not mean to try to restart the service > immediately or multiple times, I meant to make sure that if the service > was running when the *whole* update started to make sure it is still > running when the whole update finishes. > > The scenario is: > > 1. ipa runnig > 2. do upgrade > 3. restart fails for some reason > 4. update completes > > now what I would like to make sure is that if the restart failed at 3 we > try a restart after 4 so that we try to get things up when all the > updates are done. 
> > Makes sense ? Sort of. To be able to do this, I think we would need to at first get a list of all running services (as user may have purposefully shut down some service), then run the upgrades and check that all services in this list are still running at the end of the upgrade. If not, try to amend it. While this looks useful-ish, I would rather keep the patch 350 simple as we are close to the release and I do not want to get too wild. > >> Now when I am thinking about it, maybe we should only try to restart if the >> service is running - because otherwise it would be started later and the >> changes that were done in scope of upgrade script would be applied. > > Yes we should do a conditional restart only, and it is ok to proceeded > if it fails, we want to complete the upgrade process in any case, not > break out in the middle if at all possible. > > Simo. > Right, I will send an updated patch which restarts the named/pki-ca service only if it is running. Martin From rcritten at redhat.com Tue Jan 15 15:17:56 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 15 Jan 2013 10:17:56 -0500 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F5702A.1010202@redhat.com> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> <1358261050.15136.130.camel@willson.li.ssimo.org> <50F5702A.1010202@redhat.com> Message-ID: <50F57324.5090906@redhat.com> Martin Kosek wrote: > On 01/15/2013 03:44 PM, Simo Sorce wrote: >> On Tue, 2013-01-15 at 15:37 +0100, Martin Kosek wrote: >>> On 01/15/2013 02:43 PM, Simo Sorce wrote: >>>> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >>>>> When either dirsrv or krb5kdc is down, named service restart in >>>>> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >>>>> process. >>>>> >>>>> Rather only report a failure to restart the service and continue >>>>> with the upgrade as it does not need the named service running. Do >>>>> the same precaution for pki-ca service restart. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/3350 >>>> >>>> Shouldn't we note it failed and retry later ? >>>> Is there a risk it will be down at the end of the upgrade process ? >>>> >>>> Simo. >>>> >>> >>> Seems like an overkill to me. It would not certainly help in this >>> case, because >>> the processes that named requires are down. As Rob suggested, user >>> upgrading >>> the IPA may be running in a lower run level for example, it that case >>> I think >>> we may not even try to restart the service. >> >> Oh I guess I wasn't clear, I did not mean to try to restart the service >> immediately or multiple times, I meant to make sure that if the service >> was running when the *whole* update started to make sure it is still >> running when the whole update finishes. >> >> The scenario is: >> >> 1. ipa runnig >> 2. do upgrade >> 3. restart fails for some reason >> 4. update completes >> >> now what I would like to make sure is that if the restart failed at 3 we >> try a restart after 4 so that we try to get things up when all the >> updates are done. >> >> Makes sense ? > > Sort of. To be able to do this, I think we would need to at first get a > list of all running services (as user may have purposefully shut down > some service), then run the upgrades and check that all services in this > list are still running at the end of the upgrade. If not, try to amend it. 
> > While this looks useful-ish, I would rather keep the patch 350 simple as > we are close to the release and I do not want to get too wild. > >> >>> Now when I am thinking about it, maybe we should only try to restart >>> if the >>> service is running - because otherwise it would be started later and the >>> changes that were done in scope of upgrade script would be applied. >> >> Yes we should do a conditional restart only, and it is ok to proceeded >> if it fails, we want to complete the upgrade process in any case, not >> break out in the middle if at all possible. >> >> Simo. >> > > Right, I will send an updated patch which restarts the named/pki-ca > service only if it is running. ACK on this patch as-is. I think we have room for improvement/discussion. Can you open a RFE ticket to investigate any further work we might want to do? rob From mkosek at redhat.com Tue Jan 15 15:44:01 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 15 Jan 2013 16:44:01 +0100 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F57324.5090906@redhat.com> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> <1358261050.15136.130.camel@willson.li.ssimo.org> <50F5702A.1010202@redhat.com> <50F57324.5090906@redhat.com> Message-ID: <50F57941.6060509@redhat.com> On 01/15/2013 04:17 PM, Rob Crittenden wrote: > Martin Kosek wrote: >> On 01/15/2013 03:44 PM, Simo Sorce wrote: >>> On Tue, 2013-01-15 at 15:37 +0100, Martin Kosek wrote: >>>> On 01/15/2013 02:43 PM, Simo Sorce wrote: >>>>> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >>>>>> When either dirsrv or krb5kdc is down, named service restart in >>>>>> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >>>>>> process. >>>>>> >>>>>> Rather only report a failure to restart the service and continue >>>>>> with the upgrade as it does not need the named service running. Do >>>>>> the same precaution for pki-ca service restart. >>>>>> >>>>>> https://fedorahosted.org/freeipa/ticket/3350 >>>>> >>>>> Shouldn't we note it failed and retry later ? >>>>> Is there a risk it will be down at the end of the upgrade process ? >>>>> >>>>> Simo. >>>>> >>>> >>>> Seems like an overkill to me. It would not certainly help in this >>>> case, because >>>> the processes that named requires are down. As Rob suggested, user >>>> upgrading >>>> the IPA may be running in a lower run level for example, it that case >>>> I think >>>> we may not even try to restart the service. >>> >>> Oh I guess I wasn't clear, I did not mean to try to restart the service >>> immediately or multiple times, I meant to make sure that if the service >>> was running when the *whole* update started to make sure it is still >>> running when the whole update finishes. >>> >>> The scenario is: >>> >>> 1. ipa runnig >>> 2. do upgrade >>> 3. restart fails for some reason >>> 4. update completes >>> >>> now what I would like to make sure is that if the restart failed at 3 we >>> try a restart after 4 so that we try to get things up when all the >>> updates are done. >>> >>> Makes sense ? >> >> Sort of. To be able to do this, I think we would need to at first get a >> list of all running services (as user may have purposefully shut down >> some service), then run the upgrades and check that all services in this >> list are still running at the end of the upgrade. If not, try to amend it. 
>> >> While this looks useful-ish, I would rather keep the patch 350 simple as >> we are close to the release and I do not want to get too wild. >> >>> >>>> Now when I am thinking about it, maybe we should only try to restart >>>> if the >>>> service is running - because otherwise it would be started later and the >>>> changes that were done in scope of upgrade script would be applied. >>> >>> Yes we should do a conditional restart only, and it is ok to proceeded >>> if it fails, we want to complete the upgrade process in any case, not >>> break out in the middle if at all possible. >>> >>> Simo. >>> >> >> Right, I will send an updated patch which restarts the named/pki-ca >> service only if it is running. > > ACK on this patch as-is. I think we have room for improvement/discussion. Can > you open a RFE ticket to investigate any further work we might want to do? Sure, this is the ticket: https://fedorahosted.org/freeipa/ticket/3351 Anyway, I rebased the patch also for master and ipa-3-1 and pushed it to all three branches, i.e. master, ipa-3-1, ipa-3-0. Martin > > rob > From pviktori at redhat.com Tue Jan 15 15:53:23 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 16:53:23 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F56A82.4090806@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> <50F56A82.4090806@redhat.com> Message-ID: <50F57B73.6070403@redhat.com> On 01/15/2013 03:41 PM, Petr Viktorin wrote: > On 01/14/2013 10:56 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/12/2013 12:49 AM, Rob Crittenden wrote: >>>> Rob Crittenden wrote: >>>>> Petr Viktorin wrote: >>>>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>>>> Petr Viktorin wrote: >>>>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>>>> Petr Viktorin wrote: >>>>>> [...] >>>>>>>>>> >>>>>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>>>>> little >>>>>>>>>> about). >>>>>>>>>> >>>>>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>>>>> certmonger the only possible mechanism to update them? >>>>>>>>> >>>>>>>>> You raise a good point. If though some mechanism someone replaces >>>>>>>>> one of >>>>>>>>> these certs it will cause the script to fail. Some notification of >>>>>>>>> this >>>>>>>>> failure will be logged though, and of course, the certs won't be >>>>>>>>> renewed. >>>>>>>>> >>>>>>>>> One could conceivably manually renew one of these certificates. >>>>>>>>> It is >>>>>>>>> probably a very remote possibility but it is non-zero. >>>>>>>>> >>>>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>>>> If it >>>>>>>>>> managed to update the audit cert before starting on the others, >>>>>>>>>> we'd >>>>>>>>>> get >>>>>>>>>> no CA restart for the others. >>>>>>>>> >>>>>>>>> These all get issued at the same time so should expire at the same >>>>>>>>> time >>>>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>>>> minutes >>>>>>>>> waiting for the renewal to complete, then give up. >>>>>>>> >>>>>>>> The certs might take different amounts of time to update, right? >>>>>>>> Eventually, the expirations could go out of sync enough for it to >>>>>>>> matter. 
>>>>>>>> AFAICS, without proper locking we still get a race condition when >>>>>>>> the >>>>>>>> other certs start being renewed some time (much less than 10 min) >>>>>>>> after >>>>>>>> the audit one: >>>>>>>> >>>>>>>> (time axis goes down) >>>>>>>> >>>>>>>> audit cert other cert >>>>>>>> ---------- ---------- >>>>>>>> certmonger does renew . >>>>>>>> post-renew script starts . >>>>>>>> check state of other certs: OK . >>>>>>>> . certmonger starts renew >>>>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == boom! >>>>>>> >>>>>>> This can't happen because we count the # of expected certs and wait >>>>>>> until all are in MONITORING before continuing. >>>>>> >>>>>> The problem is that they're also in MONITORING before the whole >>>>>> renewal >>>>>> starts. If the script happens to check just before the state changes >>>>>> from MONITORING to GENERATING_CSR or whatever, we can get corruption. >>>>>> >>>>>>> The worse that would >>>>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>>>> wouldn't be restarted. >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>>> The state the system would be in is this: >>>>>>>>> >>>>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>>>> - CA is not restarted so will not use updated certificates >>>>>>>>> >>>>>>>>>> And anyway, why does certmonger do renewals in parallel? It seems >>>>>>>>>> that >>>>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>>>> script is >>>>>>>>>> done, this patch wouldn't be necessary. >>>>>>>>>> >>>>>>>>> >>>>>>>>> From what Nalin told me certmonger has some coarse locking such >>>>>>>>> that >>>>>>>>> renewals in a the same NSS database are serialized. As you point >>>>>>>>> out, it >>>>>>>>> would be nice to extend this locking to the post renewal >>>>>>>>> scripts. We >>>>>>>>> can >>>>>>>>> ask Nalin about it. That would fix the potential corruption issue. >>>>>>>>> It is >>>>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>>>> >>>>>>>> >>>>>>>> Well, three extra restarts every few years seems like a small >>>>>>>> price to >>>>>>>> pay for robustness. >>>>>>> >>>>>>> It is a bit of a problem though because the certs all renew within >>>>>>> seconds so end up fighting over who is restarting dogtag. This can >>>>>>> cause >>>>>>> some renewals go into a failure state to be retried later. This is >>>>>>> fine >>>>>>> functionally but makes QE a bit of a pain. You then have to make >>>>>>> sure >>>>>>> that renewal is basically done, then restart certmonger and check >>>>>>> everything again, over and over until all the certs are renewed. >>>>>>> This is >>>>>>> difficult to automate. >>>>>> >>>>>> So we need to extend the certmonger lock, and wait until Dogtag is >>>>>> back >>>>>> up before exiting the script. That way it'd still take longer than 1 >>>>>> restart, but all the renews should succeed. >>>>>> >>>>> >>>>> Right, but older dogtag versions don't have the handy servlet to tell >>>>> that the service is actually up and responding. So it is difficult to >>>>> tell from tomcat alone whether the CA is actually up and handling >>>>> requests. >>>>> >>>> >>>> Revised patch that takes advantage of new version of certmonger. >>>> certmonger-0.65 adds locking from the time renewal begins to the end of >>>> the post_save_command. This lets us be sure that no other certmonger >>>> renewals will have the NSS database open in read-write mode. 
>>>> >>>> We need to be sure that tomcat is shut down before we let certmonger >>>> save the certificate to the NSS database because dogtag opens its >>>> database read/write and two writers can cause corruption. >>>> >>>> rob >>>> >>> >>> stop_pkicad and start_pkicad need the Dogtag version check to select >>> pki_cad/pki_tomcatd. >> >> Fixed. >> >>> >>> A more serious issue is that stop_pkicad needs to be installed on >>> upgrades. Currently the whole enable_certificate_renewal step in >>> ipa-upgradeconfig is skipped if it was done before. >> >> I added a separate upgrade test for this. It currently won't work in >> SELinux enforcing mode because certmonger isn't allowed to talk to dbus >> in an rpm post script. It's being looked at. >> >>> In stop_pkicad can you change the first log message to "certmonger >>> stopping %sd"? It's before the action so we don't want past tense. >> >> Fixed. >> >> rob > > I get a bunch of errors when installing the RPM: > [...] > This is the SELinux issue you were talking about. Sorry for not catching that. With enforcing off, the patch looks & works well for me. I'm just concerned about this change in ipa-upgradeconfig: @@ -707,7 +754,7 @@ def main(): # configuration has changed, restart the name server root_logger.info('Changes to named.conf have been made, restart named') bindinstance.BindInstance(fstore).restart() - ca_restart = ca_restart or enable_certificate_renewal(ca) or upgrade_ipa_profile(ca, api.env.domain, fqdn) + ca_restart = ca_restart or enable_certificate_renewal(ca) or upgrade_ipa_profile(ca, api.env.domain, fqdn) or certificate_renewal_stop_ca(ca) If the enable_certificate_renewal step was done already, but upgrade_ipa_profile requests a CA restart, then the short-circuiting `or` will be satisfied and certificate_renewal_stop_ca won't be run. Since each upgrade step has its own checking, I think it would be safer to use something like: ca_restart = certificate_renewal_stop_ca(ca) or ca_restart or even: ca_restart = any([ ca_restart, enable_certificate_renewal(ca), upgrade_ipa_profile(ca, api.env.domain, fqdn), certificate_renewal_stop_ca(ca), ]) -- Petr? From rcritten at redhat.com Tue Jan 15 15:56:39 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 15 Jan 2013 10:56:39 -0500 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <20130115125623.GB28090@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> <50F54C45.2020100@redhat.com> <20130115125623.GB28090@redhat.com> Message-ID: <50F57C37.2030701@redhat.com> Alexander Bokovoy wrote: > On Tue, 15 Jan 2013, Dmitri Pal wrote: >> On 01/15/2013 06:56 AM, Alexander Bokovoy wrote: >>> On Tue, 15 Jan 2013, Ana Krivokapic wrote: >>>> crond was not included in the list of default HBAC services - it >>>> needed to be added manually. As crond is a commonly used service, it >>>> is now included as a default HBAC service. >>>> >>>> Ticket: https://fedorahosted.org/freeipa/ticket/3215 >>> ACK. >>> >>> Simple and obvious. >>> >>>> I'm not sure whether a design document is needed for this ticket. It >>>> _is_ marked as an RFE, but it is a pretty simple and specific change. >>>> After yesterday's meeting, I'm not really clear on whether a design >>>> page is needed in cases like this. Thoughts? >>> I suggested that additions like this do not need specific design >>> document. >> >> >> I suggest that we have a catch all page on the wiki that would be called >> "Minor Enhancements". 
>> It will consist of the table with two columns. >> First column would be the link to the ticket. >> The second column would be a short comment. >> F this ticket it will look like this: >> >> Enhancement Comment >> https://fedorahosted.org/freeipa/ticket/3215 Crond was added to the >> list of available HBAC services >> >> >> >> Such approach would not break the reporting and would show what has been >> done for minor enhancements like this. > This would work as well. If we still link to the mail thread in the > ticket, all ends will be connected. > > +1 from me. > We did a design for HBAC loooong ago (would be in the V2 pages) so perhaps we could add the new service there. rob From rcritten at redhat.com Tue Jan 15 16:15:35 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 15 Jan 2013 11:15:35 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F57B73.6070403@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> <50F56A82.4090806@redhat.com> <50F57B73.6070403@redhat.com> Message-ID: <50F580A7.2070004@redhat.com> Petr Viktorin wrote: > On 01/15/2013 03:41 PM, Petr Viktorin wrote: >> On 01/14/2013 10:56 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> On 01/12/2013 12:49 AM, Rob Crittenden wrote: >>>>> Rob Crittenden wrote: >>>>>> Petr Viktorin wrote: >>>>>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>>>>> Petr Viktorin wrote: >>>>>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>>>>> Petr Viktorin wrote: >>>>>>> [...] >>>>>>>>>>> >>>>>>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>>>>>> little >>>>>>>>>>> about). >>>>>>>>>>> >>>>>>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>>>>>> certmonger the only possible mechanism to update them? >>>>>>>>>> >>>>>>>>>> You raise a good point. If though some mechanism someone replaces >>>>>>>>>> one of >>>>>>>>>> these certs it will cause the script to fail. Some >>>>>>>>>> notification of >>>>>>>>>> this >>>>>>>>>> failure will be logged though, and of course, the certs won't be >>>>>>>>>> renewed. >>>>>>>>>> >>>>>>>>>> One could conceivably manually renew one of these certificates. >>>>>>>>>> It is >>>>>>>>>> probably a very remote possibility but it is non-zero. >>>>>>>>>> >>>>>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>>>>> If it >>>>>>>>>>> managed to update the audit cert before starting on the others, >>>>>>>>>>> we'd >>>>>>>>>>> get >>>>>>>>>>> no CA restart for the others. >>>>>>>>>> >>>>>>>>>> These all get issued at the same time so should expire at the >>>>>>>>>> same >>>>>>>>>> time >>>>>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>>>>> minutes >>>>>>>>>> waiting for the renewal to complete, then give up. >>>>>>>>> >>>>>>>>> The certs might take different amounts of time to update, right? >>>>>>>>> Eventually, the expirations could go out of sync enough for it to >>>>>>>>> matter. 
>>>>>>>>> AFAICS, without proper locking we still get a race condition when >>>>>>>>> the >>>>>>>>> other certs start being renewed some time (much less than 10 min) >>>>>>>>> after >>>>>>>>> the audit one: >>>>>>>>> >>>>>>>>> (time axis goes down) >>>>>>>>> >>>>>>>>> audit cert other cert >>>>>>>>> ---------- ---------- >>>>>>>>> certmonger does renew . >>>>>>>>> post-renew script starts . >>>>>>>>> check state of other certs: OK . >>>>>>>>> . certmonger starts renew >>>>>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == >>>>>>>>> boom! >>>>>>>> >>>>>>>> This can't happen because we count the # of expected certs and wait >>>>>>>> until all are in MONITORING before continuing. >>>>>>> >>>>>>> The problem is that they're also in MONITORING before the whole >>>>>>> renewal >>>>>>> starts. If the script happens to check just before the state changes >>>>>>> from MONITORING to GENERATING_CSR or whatever, we can get >>>>>>> corruption. >>>>>>> >>>>>>>> The worse that would >>>>>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>>>>> wouldn't be restarted. >>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>>> The state the system would be in is this: >>>>>>>>>> >>>>>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>>>>> - CA is not restarted so will not use updated certificates >>>>>>>>>> >>>>>>>>>>> And anyway, why does certmonger do renewals in parallel? It >>>>>>>>>>> seems >>>>>>>>>>> that >>>>>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>>>>> script is >>>>>>>>>>> done, this patch wouldn't be necessary. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> From what Nalin told me certmonger has some coarse locking such >>>>>>>>>> that >>>>>>>>>> renewals in a the same NSS database are serialized. As you point >>>>>>>>>> out, it >>>>>>>>>> would be nice to extend this locking to the post renewal >>>>>>>>>> scripts. We >>>>>>>>>> can >>>>>>>>>> ask Nalin about it. That would fix the potential corruption >>>>>>>>>> issue. >>>>>>>>>> It is >>>>>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>>>>> >>>>>>>>> >>>>>>>>> Well, three extra restarts every few years seems like a small >>>>>>>>> price to >>>>>>>>> pay for robustness. >>>>>>>> >>>>>>>> It is a bit of a problem though because the certs all renew within >>>>>>>> seconds so end up fighting over who is restarting dogtag. This can >>>>>>>> cause >>>>>>>> some renewals go into a failure state to be retried later. This is >>>>>>>> fine >>>>>>>> functionally but makes QE a bit of a pain. You then have to make >>>>>>>> sure >>>>>>>> that renewal is basically done, then restart certmonger and check >>>>>>>> everything again, over and over until all the certs are renewed. >>>>>>>> This is >>>>>>>> difficult to automate. >>>>>>> >>>>>>> So we need to extend the certmonger lock, and wait until Dogtag is >>>>>>> back >>>>>>> up before exiting the script. That way it'd still take longer than 1 >>>>>>> restart, but all the renews should succeed. >>>>>>> >>>>>> >>>>>> Right, but older dogtag versions don't have the handy servlet to tell >>>>>> that the service is actually up and responding. So it is difficult to >>>>>> tell from tomcat alone whether the CA is actually up and handling >>>>>> requests. >>>>>> >>>>> >>>>> Revised patch that takes advantage of new version of certmonger. >>>>> certmonger-0.65 adds locking from the time renewal begins to the >>>>> end of >>>>> the post_save_command. 
This lets us be sure that no other certmonger >>>>> renewals will have the NSS database open in read-write mode. >>>>> >>>>> We need to be sure that tomcat is shut down before we let certmonger >>>>> save the certificate to the NSS database because dogtag opens its >>>>> database read/write and two writers can cause corruption. >>>>> >>>>> rob >>>>> >>>> >>>> stop_pkicad and start_pkicad need the Dogtag version check to select >>>> pki_cad/pki_tomcatd. >>> >>> Fixed. >>> >>>> >>>> A more serious issue is that stop_pkicad needs to be installed on >>>> upgrades. Currently the whole enable_certificate_renewal step in >>>> ipa-upgradeconfig is skipped if it was done before. >>> >>> I added a separate upgrade test for this. It currently won't work in >>> SELinux enforcing mode because certmonger isn't allowed to talk to dbus >>> in an rpm post script. It's being looked at. >>> >>>> In stop_pkicad can you change the first log message to "certmonger >>>> stopping %sd"? It's before the action so we don't want past tense. >>> >>> Fixed. >>> >>> rob >> >> I get a bunch of errors when installing the RPM: >> > [...] >> > > This is the SELinux issue you were talking about. Sorry for not catching > that. > > With enforcing off, the patch looks & works well for me. I'm just > concerned about this change in ipa-upgradeconfig: > > @@ -707,7 +754,7 @@ def main(): > # configuration has changed, restart the name server > root_logger.info('Changes to named.conf have been made, > restart named') > bindinstance.BindInstance(fstore).restart() > - ca_restart = ca_restart or enable_certificate_renewal(ca) or > upgrade_ipa_profile(ca, api.env.domain, fqdn) > + ca_restart = ca_restart or enable_certificate_renewal(ca) or > upgrade_ipa_profile(ca, api.env.domain, fqdn) or > certificate_renewal_stop_ca(ca) > > If the enable_certificate_renewal step was done already, but > upgrade_ipa_profile requests a CA restart, then the short-circuiting > `or` will be satisfied and certificate_renewal_stop_ca won't be run. > > Since each upgrade step has its own checking, I think it would be safer > to use something like: > ca_restart = certificate_renewal_stop_ca(ca) or ca_restart > > or even: > ca_restart = any([ > ca_restart, > enable_certificate_renewal(ca), > upgrade_ipa_profile(ca, api.env.domain, fqdn), > certificate_renewal_stop_ca(ca), > ]) > I like this suggestion very much. Updated patch attached. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1079-4-renewal.patch Type: text/x-patch Size: 27177 bytes Desc: not available URL: From pviktori at redhat.com Tue Jan 15 16:32:45 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 17:32:45 +0100 Subject: [Freeipa-devel] Command instantiation In-Reply-To: <1358257141.15136.119.camel@willson.li.ssimo.org> References: <50F42CF9.2000108@redhat.com> <50F43F10.1070206@redhat.com> <20130114173151.GJ8651@redhat.com> <50F4457C.8020303@redhat.com> <50F446D3.3050804@redhat.com> <50F46062.4040502@redhat.com> <50F52004.9020507@redhat.com> <1358257141.15136.119.camel@willson.li.ssimo.org> Message-ID: <50F584AD.5000705@redhat.com> On 01/15/2013 02:39 PM, Simo Sorce wrote: > On Tue, 2013-01-15 at 10:23 +0100, Petr Viktorin wrote: >> On 01/14/2013 08:45 PM, John Dennis wrote: >>> On 01/14/2013 12:56 PM, Jan Cholasta wrote: >>>> On 14.1.2013 18:50, Petr Viktorin wrote: >>>>> Ah, yes, you've discovered my ultimate goal: rewriting the whole >>>>> framefork :) >>>> >>>> It would seem we share the same ultimate goal, sir! 
:-) >>> >>> Well it's reassuring I'm not alone in my frustration with elements of >>> the framework. I thought it was just me :-) >>> >>> I have one other general complaint about the framework: >>> >>> Too much magic! >>> >>> What do I mean by magic? Things which spring into existence at run time >>> for which there is no static definition. I've spent (wasted) significant >>> amounts of time trying to figure out how something gets instantiated and >>> initialized. These things don't exist in the static code, you can't >>> search for them because they are synthetic. You can see these things >>> being referenced but you'll never find a class definition or __init__() >>> method or assignment to a specific object attribute. It's all very very >>> clever but at the same time very obscure. If you just use the framework >>> in a cookie-cutter fashion this has probably never bothered you, but if >>> you have modify what the framework provides it can be difficult. >>> >>> But I don't want to carp on the framework too much without giving credit >>> to Jason first. His mandate was to produce a pluggable framework that >>> was robust, extensible, and supported easy plugin authorship. Jason was >>> dedicated, almost maniacal in his attention to detail and best >>> practices. He also had to design most of this himself (the rest of the >>> team was heads down on other things at the time). It has mostly stood >>> the test of time. It's pretty hard to anticipate the pain points, that's >>> something only experience with system can give you down the road, which >>> is where we find ourselves now. >>> >> >> +1. >> It's easy to criticize in hindsight, and I have great respect for the >> framework and its author. >> Nevertheless, software grows over time and we need to balance bolting >> things on with improving the foundations, so that we don't get stuck in >> a maze of workarounds in a few years. > > > Can someone summarize how big a change this would be ? > I do understand the general discussion, but I have not been involved > deeply enough in the framework code to tell. > Also how much would this conflict with the proposed LDAP change ? No, the LDAP changes won't affect the framework nor the plugin code. > Do we have a way to slowly change stuff or will it require big > all-or-nothing changes ? As far as I can tell without trying it, the change wouldn't be too disruptive. Command is now immutable, so to current code it shouldn't matter if api.Command.xyz returns a "global" instance or a freshly created one. There will be some complexity to make the change only for Commands and not the other plugin types, but it shouldn't be a big change. The next step is to not lock the Command class. Again I think the biggest issue will be to disable the locking only for Commands and not other classes in the hierarchy. When that's done, we can gradually change individual commands to store data in the Command object. I'd do one or two to make sure the approach works, after that it would be a series of small, localized, low priority patches. The main benefit is that *future* code wouldn't have to resort to thread-local storage. -- Petr? 
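A minimal sketch of the idea described above, using hypothetical class names rather than the real FreeIPA framework API: a shared, locked command has to park per-request data in thread-local storage, while a per-invocation instance can simply keep it on self.

import threading

class SharedCommand(object):
    # One global, effectively immutable instance serves every call,
    # so per-request data has to go into thread-local storage.
    _local = threading.local()

    def execute(self, **options):
        self._local.options = options
        return {'result': self._local.options}

class PerCallCommand(object):
    # If api.Command.xyz handed out a fresh object per invocation,
    # request state could live in plain instance attributes instead.
    def __init__(self, **options):
        self.options = options

    def execute(self):
        return {'result': self.options}

print SharedCommand().execute(uid=u'admin')
print PerCallCommand(uid=u'admin').execute()

Both calls return the same result here; the difference only matters once several requests run concurrently and the shared instance has to keep their state apart.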
From pviktori at redhat.com Tue Jan 15 16:52:12 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 15 Jan 2013 17:52:12 +0100 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F580A7.2070004@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> <50F56A82.4090806@redhat.com> <50F57B73.6070403@redhat.com> <50F580A7.2070004@redhat.com> Message-ID: <50F5893C.5040000@redhat.com> On 01/15/2013 05:15 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> On 01/15/2013 03:41 PM, Petr Viktorin wrote: >>> On 01/14/2013 10:56 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> On 01/12/2013 12:49 AM, Rob Crittenden wrote: >>>>>> Rob Crittenden wrote: >>>>>>> Petr Viktorin wrote: >>>>>>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>>>>>> Petr Viktorin wrote: >>>>>>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>>>>>> Petr Viktorin wrote: >>>>>>>> [...] >>>>>>>>>>>> >>>>>>>>>>>> Works for me, but I have some questions (this is an area I know >>>>>>>>>>>> little >>>>>>>>>>>> about). >>>>>>>>>>>> >>>>>>>>>>>> Can we be 100% sure these certs are always renewed together? Is >>>>>>>>>>>> certmonger the only possible mechanism to update them? >>>>>>>>>>> >>>>>>>>>>> You raise a good point. If though some mechanism someone >>>>>>>>>>> replaces >>>>>>>>>>> one of >>>>>>>>>>> these certs it will cause the script to fail. Some >>>>>>>>>>> notification of >>>>>>>>>>> this >>>>>>>>>>> failure will be logged though, and of course, the certs won't be >>>>>>>>>>> renewed. >>>>>>>>>>> >>>>>>>>>>> One could conceivably manually renew one of these certificates. >>>>>>>>>>> It is >>>>>>>>>>> probably a very remote possibility but it is non-zero. >>>>>>>>>>> >>>>>>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>>>>>> If it >>>>>>>>>>>> managed to update the audit cert before starting on the others, >>>>>>>>>>>> we'd >>>>>>>>>>>> get >>>>>>>>>>>> no CA restart for the others. >>>>>>>>>>> >>>>>>>>>>> These all get issued at the same time so should expire at the >>>>>>>>>>> same >>>>>>>>>>> time >>>>>>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>>>>>> minutes >>>>>>>>>>> waiting for the renewal to complete, then give up. >>>>>>>>>> >>>>>>>>>> The certs might take different amounts of time to update, right? >>>>>>>>>> Eventually, the expirations could go out of sync enough for it to >>>>>>>>>> matter. >>>>>>>>>> AFAICS, without proper locking we still get a race condition when >>>>>>>>>> the >>>>>>>>>> other certs start being renewed some time (much less than 10 min) >>>>>>>>>> after >>>>>>>>>> the audit one: >>>>>>>>>> >>>>>>>>>> (time axis goes down) >>>>>>>>>> >>>>>>>>>> audit cert other cert >>>>>>>>>> ---------- ---------- >>>>>>>>>> certmonger does renew . >>>>>>>>>> post-renew script starts . >>>>>>>>>> check state of other certs: OK . >>>>>>>>>> . certmonger starts renew >>>>>>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == >>>>>>>>>> boom! >>>>>>>>> >>>>>>>>> This can't happen because we count the # of expected certs and >>>>>>>>> wait >>>>>>>>> until all are in MONITORING before continuing. >>>>>>>> >>>>>>>> The problem is that they're also in MONITORING before the whole >>>>>>>> renewal >>>>>>>> starts. 
If the script happens to check just before the state >>>>>>>> changes >>>>>>>> from MONITORING to GENERATING_CSR or whatever, we can get >>>>>>>> corruption. >>>>>>>> >>>>>>>>> The worse that would >>>>>>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>>>>>> wouldn't be restarted. >>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>>> The state the system would be in is this: >>>>>>>>>>> >>>>>>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>>>>>> - CA is not restarted so will not use updated certificates >>>>>>>>>>> >>>>>>>>>>>> And anyway, why does certmonger do renewals in parallel? It >>>>>>>>>>>> seems >>>>>>>>>>>> that >>>>>>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>>>>>> script is >>>>>>>>>>>> done, this patch wouldn't be necessary. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> From what Nalin told me certmonger has some coarse locking such >>>>>>>>>>> that >>>>>>>>>>> renewals in a the same NSS database are serialized. As you point >>>>>>>>>>> out, it >>>>>>>>>>> would be nice to extend this locking to the post renewal >>>>>>>>>>> scripts. We >>>>>>>>>>> can >>>>>>>>>>> ask Nalin about it. That would fix the potential corruption >>>>>>>>>>> issue. >>>>>>>>>>> It is >>>>>>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Well, three extra restarts every few years seems like a small >>>>>>>>>> price to >>>>>>>>>> pay for robustness. >>>>>>>>> >>>>>>>>> It is a bit of a problem though because the certs all renew within >>>>>>>>> seconds so end up fighting over who is restarting dogtag. This can >>>>>>>>> cause >>>>>>>>> some renewals go into a failure state to be retried later. This is >>>>>>>>> fine >>>>>>>>> functionally but makes QE a bit of a pain. You then have to make >>>>>>>>> sure >>>>>>>>> that renewal is basically done, then restart certmonger and check >>>>>>>>> everything again, over and over until all the certs are renewed. >>>>>>>>> This is >>>>>>>>> difficult to automate. >>>>>>>> >>>>>>>> So we need to extend the certmonger lock, and wait until Dogtag is >>>>>>>> back >>>>>>>> up before exiting the script. That way it'd still take longer >>>>>>>> than 1 >>>>>>>> restart, but all the renews should succeed. >>>>>>>> >>>>>>> >>>>>>> Right, but older dogtag versions don't have the handy servlet to >>>>>>> tell >>>>>>> that the service is actually up and responding. So it is >>>>>>> difficult to >>>>>>> tell from tomcat alone whether the CA is actually up and handling >>>>>>> requests. >>>>>>> >>>>>> >>>>>> Revised patch that takes advantage of new version of certmonger. >>>>>> certmonger-0.65 adds locking from the time renewal begins to the >>>>>> end of >>>>>> the post_save_command. This lets us be sure that no other certmonger >>>>>> renewals will have the NSS database open in read-write mode. >>>>>> >>>>>> We need to be sure that tomcat is shut down before we let certmonger >>>>>> save the certificate to the NSS database because dogtag opens its >>>>>> database read/write and two writers can cause corruption. >>>>>> >>>>>> rob >>>>>> >>>>> >>>>> stop_pkicad and start_pkicad need the Dogtag version check to select >>>>> pki_cad/pki_tomcatd. >>>> >>>> Fixed. >>>> >>>>> >>>>> A more serious issue is that stop_pkicad needs to be installed on >>>>> upgrades. Currently the whole enable_certificate_renewal step in >>>>> ipa-upgradeconfig is skipped if it was done before. >>>> >>>> I added a separate upgrade test for this. 
It currently won't work in >>>> SELinux enforcing mode because certmonger isn't allowed to talk to dbus >>>> in an rpm post script. It's being looked at. >>>> >>>>> In stop_pkicad can you change the first log message to "certmonger >>>>> stopping %sd"? It's before the action so we don't want past tense. >>>> >>>> Fixed. >>>> >>>> rob >>> >>> I get a bunch of errors when installing the RPM: >>> >> [...] >>> >> >> This is the SELinux issue you were talking about. Sorry for not catching >> that. >> >> With enforcing off, the patch looks & works well for me. I'm just >> concerned about this change in ipa-upgradeconfig: >> >> @@ -707,7 +754,7 @@ def main(): >> # configuration has changed, restart the name server >> root_logger.info('Changes to named.conf have been made, >> restart named') >> bindinstance.BindInstance(fstore).restart() >> - ca_restart = ca_restart or enable_certificate_renewal(ca) or >> upgrade_ipa_profile(ca, api.env.domain, fqdn) >> + ca_restart = ca_restart or enable_certificate_renewal(ca) or >> upgrade_ipa_profile(ca, api.env.domain, fqdn) or >> certificate_renewal_stop_ca(ca) >> >> If the enable_certificate_renewal step was done already, but >> upgrade_ipa_profile requests a CA restart, then the short-circuiting >> `or` will be satisfied and certificate_renewal_stop_ca won't be run. >> >> Since each upgrade step has its own checking, I think it would be safer >> to use something like: >> ca_restart = certificate_renewal_stop_ca(ca) or ca_restart >> >> or even: >> ca_restart = any([ >> ca_restart, >> enable_certificate_renewal(ca), >> upgrade_ipa_profile(ca, api.env.domain, fqdn), >> certificate_renewal_stop_ca(ca), >> ]) >> > > I like this suggestion very much. Updated patch attached. > > rob > ACK, just remove the trailing space in the `]) ` line. We'll need to make sure the SELinux issue isn't forgotten. -- Petr? From dpal at redhat.com Tue Jan 15 20:38:16 2013 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 15 Jan 2013 15:38:16 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358257702.15136.124.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> Message-ID: <50F5BE38.1020901@redhat.com> On 01/15/2013 08:48 AM, Simo Sorce wrote: > On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >> Hi, >> >> Since in Kerberos V5 are used 32-bit unix timestamps, setting >> maxlife in pwpolicy to values such as 9999 days would cause >> integer overflow in krbPasswordExpiration attribute. >> >> This would result into unpredictable behaviour such as users >> not being able to log in after password expiration if password >> policy was changed (#3114) or new users not being able to log >> in at all (#3312). >> >> https://fedorahosted.org/freeipa/ticket/3312 >> https://fedorahosted.org/freeipa/ticket/3114 > Given that we control the KDC LDAP driver I think we should not limit > the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. Fix how? Truncate to max in the driver itself if it was entered beyond max? Shouldn't we also prevent entering the invalid value into the attribute? > > So I would like to Nack this one, sorry. > > Simo. > -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From rcritten at redhat.com Tue Jan 15 20:53:36 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 15 Jan 2013 15:53:36 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F5BE38.1020901@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> Message-ID: <50F5C1D0.6060404@redhat.com> Dmitri Pal wrote: > On 01/15/2013 08:48 AM, Simo Sorce wrote: >> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>> Hi, >>> >>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>> maxlife in pwpolicy to values such as 9999 days would cause >>> integer overflow in krbPasswordExpiration attribute. >>> >>> This would result into unpredictable behaviour such as users >>> not being able to log in after password expiration if password >>> policy was changed (#3114) or new users not being able to log >>> in at all (#3312). >>> >>> https://fedorahosted.org/freeipa/ticket/3312 >>> https://fedorahosted.org/freeipa/ticket/3114 >> Given that we control the KDC LDAP driver I think we should not limit >> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. > > Fix how? Truncate to max in the driver itself if it was entered beyond max? > Shouldn't we also prevent entering the invalid value into the attribute? > I've been mulling the same question for a while. Why would we want to let bad data get into the directory? rob From simo at redhat.com Tue Jan 15 20:59:36 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 15:59:36 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F5C1D0.6060404@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> Message-ID: <1358283576.4590.18.camel@willson.li.ssimo.org> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: > Dmitri Pal wrote: > > On 01/15/2013 08:48 AM, Simo Sorce wrote: > >> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: > >>> Hi, > >>> > >>> Since in Kerberos V5 are used 32-bit unix timestamps, setting > >>> maxlife in pwpolicy to values such as 9999 days would cause > >>> integer overflow in krbPasswordExpiration attribute. > >>> > >>> This would result into unpredictable behaviour such as users > >>> not being able to log in after password expiration if password > >>> policy was changed (#3114) or new users not being able to log > >>> in at all (#3312). > >>> > >>> https://fedorahosted.org/freeipa/ticket/3312 > >>> https://fedorahosted.org/freeipa/ticket/3114 > >> Given that we control the KDC LDAP driver I think we should not limit > >> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. > > > > Fix how? Truncate to max in the driver itself if it was entered beyond max? > > Shouldn't we also prevent entering the invalid value into the attribute? > > > > I've been mulling the same question for a while. Why would we want to > let bad data get into the directory? It is not bad data and the attribute holds a Generalize time date. The data is valid it's the MIT code that has a limitation in parsing it. Greg tells me he plans supporting additional time by using the 'negative' part of the integer to represent the years beyond 2038. 
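As a back-of-the-envelope illustration of the 32-bit limitation under discussion (a sketch, not part of any patch): with a signed 32-bit time_t, the 9999-day maximum password life from the ticket already overshoots the range when set in early 2013.

MAX_INT32 = 2 ** 31 - 1            # last value a 32-bit time_t can hold,
                                   # i.e. 2038-01-19 03:14:07 UTC

now = 1358283576                   # Unix time around when this mail was sent
maxlife_days = 9999                # the pwpolicy value from the ticket
expiration = now + maxlife_days * 24 * 60 * 60

print expiration                   # 2222197176
print expiration > MAX_INT32       # True: wraps on a 32-bit KDC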
So we should represent data in the directory correctly, which means whtever date in the future and only chop it when feeding MIT libraries until they support the additional range at which time we will change and chop further in the future (around 2067 or so). If we chopped early in the directory we'd not be able to properly represent/change rapresentation later when MIT libs gain additional range capabilities. Simo. -- Simo Sorce * Red Hat, Inc * New York From dpal at redhat.com Tue Jan 15 22:36:18 2013 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 15 Jan 2013 17:36:18 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358283576.4590.18.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> Message-ID: <50F5D9E2.7030800@redhat.com> On 01/15/2013 03:59 PM, Simo Sorce wrote: > On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >> Dmitri Pal wrote: >>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>> Hi, >>>>> >>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>> integer overflow in krbPasswordExpiration attribute. >>>>> >>>>> This would result into unpredictable behaviour such as users >>>>> not being able to log in after password expiration if password >>>>> policy was changed (#3114) or new users not being able to log >>>>> in at all (#3312). >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>> Given that we control the KDC LDAP driver I think we should not limit >>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. >>> Fix how? Truncate to max in the driver itself if it was entered beyond max? >>> Shouldn't we also prevent entering the invalid value into the attribute? >>> >> I've been mulling the same question for a while. Why would we want to >> let bad data get into the directory? > It is not bad data and the attribute holds a Generalize time date. > > The data is valid it's the MIT code that has a limitation in parsing it. > > Greg tells me he plans supporting additional time by using the > 'negative' part of the integer to represent the years beyond 2038. > > So we should represent data in the directory correctly, which means > whtever date in the future and only chop it when feeding MIT libraries > until they support the additional range at which time we will change and > chop further in the future (around 2067 or so). > > If we chopped early in the directory we'd not be able to properly > represent/change rapresentation later when MIT libs gain additional > range capabilities. > > Simo. > We would have to change our code either way and the amount of change will be similar so does it really matter? -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From simo at redhat.com Tue Jan 15 22:55:32 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 15 Jan 2013 17:55:32 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F5D9E2.7030800@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> Message-ID: <1358290532.4590.21.camel@willson.li.ssimo.org> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: > On 01/15/2013 03:59 PM, Simo Sorce wrote: > > On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: > >> Dmitri Pal wrote: > >>> On 01/15/2013 08:48 AM, Simo Sorce wrote: > >>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: > >>>>> Hi, > >>>>> > >>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting > >>>>> maxlife in pwpolicy to values such as 9999 days would cause > >>>>> integer overflow in krbPasswordExpiration attribute. > >>>>> > >>>>> This would result into unpredictable behaviour such as users > >>>>> not being able to log in after password expiration if password > >>>>> policy was changed (#3114) or new users not being able to log > >>>>> in at all (#3312). > >>>>> > >>>>> https://fedorahosted.org/freeipa/ticket/3312 > >>>>> https://fedorahosted.org/freeipa/ticket/3114 > >>>> Given that we control the KDC LDAP driver I think we should not limit > >>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. > >>> Fix how? Truncate to max in the driver itself if it was entered beyond max? > >>> Shouldn't we also prevent entering the invalid value into the attribute? > >>> > >> I've been mulling the same question for a while. Why would we want to > >> let bad data get into the directory? > > It is not bad data and the attribute holds a Generalize time date. > > > > The data is valid it's the MIT code that has a limitation in parsing it. > > > > Greg tells me he plans supporting additional time by using the > > 'negative' part of the integer to represent the years beyond 2038. > > > > So we should represent data in the directory correctly, which means > > whtever date in the future and only chop it when feeding MIT libraries > > until they support the additional range at which time we will change and > > chop further in the future (around 2067 or so). > > > > If we chopped early in the directory we'd not be able to properly > > represent/change rapresentation later when MIT libs gain additional > > range capabilities. > > > > Simo. > > > We would have to change our code either way and the amount of change > will be similar so does it really matter? Yes it really matters IMO. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Wed Jan 16 09:42:34 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 16 Jan 2013 10:42:34 +0100 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 Message-ID: <50F6760A.4090203@redhat.com> IPA installer sometimes tries to connect to the Directory Server via loopback address 127.0.0.1. However, the Directory Server on pure IPv6 systems may not be listening on this address. This address may not even be available. Rather use the FQDN of the server when connecting to the DS to fix this issue and make the connection consistent ldapmodify calls which also use FQDN instead of IP address. 
https://fedorahosted.org/freeipa/ticket/3355 --- Tested on 2 pure IPv6 systems. Test details available in the ticket. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-351-installer-should-not-connect-to-127.0.0.1.patch Type: text/x-patch Size: 2645 bytes Desc: not available URL: From pspacek at redhat.com Wed Jan 16 11:02:37 2013 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 16 Jan 2013 12:02:37 +0100 Subject: [Freeipa-devel] [PATCH] 350 Upgrade process should not crash on named restart In-Reply-To: <50F57941.6060509@redhat.com> References: <50F54BD4.8020600@redhat.com> <1358257395.15136.120.camel@willson.li.ssimo.org> <50F569A3.4050203@redhat.com> <1358261050.15136.130.camel@willson.li.ssimo.org> <50F5702A.1010202@redhat.com> <50F57324.5090906@redhat.com> <50F57941.6060509@redhat.com> Message-ID: <50F688CD.2090802@redhat.com> On 15.1.2013 16:44, Martin Kosek wrote: > On 01/15/2013 04:17 PM, Rob Crittenden wrote: >> Martin Kosek wrote: >>> On 01/15/2013 03:44 PM, Simo Sorce wrote: >>>> On Tue, 2013-01-15 at 15:37 +0100, Martin Kosek wrote: >>>>> On 01/15/2013 02:43 PM, Simo Sorce wrote: >>>>>> On Tue, 2013-01-15 at 13:30 +0100, Martin Kosek wrote: >>>>>>> When either dirsrv or krb5kdc is down, named service restart in >>>>>>> ipa-upgradeconfig will fail and cause a crash of the whole upgrade >>>>>>> process. >>>>>>> >>>>>>> Rather only report a failure to restart the service and continue >>>>>>> with the upgrade as it does not need the named service running. Do >>>>>>> the same precaution for pki-ca service restart. >>>>>>> >>>>>>> https://fedorahosted.org/freeipa/ticket/3350 >>>>>> >>>>>> Shouldn't we note it failed and retry later ? >>>>>> Is there a risk it will be down at the end of the upgrade process ? >>>>>> >>>>>> Simo. >>>>>> >>>>> >>>>> Seems like an overkill to me. It would not certainly help in this >>>>> case, because >>>>> the processes that named requires are down. As Rob suggested, user >>>>> upgrading >>>>> the IPA may be running in a lower run level for example, it that case >>>>> I think >>>>> we may not even try to restart the service. >>>> >>>> Oh I guess I wasn't clear, I did not mean to try to restart the service >>>> immediately or multiple times, I meant to make sure that if the service >>>> was running when the *whole* update started to make sure it is still >>>> running when the whole update finishes. >>>> >>>> The scenario is: >>>> >>>> 1. ipa runnig >>>> 2. do upgrade >>>> 3. restart fails for some reason >>>> 4. update completes >>>> >>>> now what I would like to make sure is that if the restart failed at 3 we >>>> try a restart after 4 so that we try to get things up when all the >>>> updates are done. >>>> >>>> Makes sense ? >>> >>> Sort of. To be able to do this, I think we would need to at first get a >>> list of all running services (as user may have purposefully shut down >>> some service), then run the upgrades and check that all services in this >>> list are still running at the end of the upgrade. If not, try to amend it. >>> >>> While this looks useful-ish, I would rather keep the patch 350 simple as >>> we are close to the release and I do not want to get too wild. >>> >>>> >>>>> Now when I am thinking about it, maybe we should only try to restart >>>>> if the >>>>> service is running - because otherwise it would be started later and the >>>>> changes that were done in scope of upgrade script would be applied. 
>>>> >>>> Yes we should do a conditional restart only, and it is ok to proceeded >>>> if it fails, we want to complete the upgrade process in any case, not >>>> break out in the middle if at all possible. >>>> >>>> Simo. >>>> >>> >>> Right, I will send an updated patch which restarts the named/pki-ca >>> service only if it is running. >> >> ACK on this patch as-is. I think we have room for improvement/discussion. Can >> you open a RFE ticket to investigate any further work we might want to do? > > Sure, this is the ticket: https://fedorahosted.org/freeipa/ticket/3351 > > Anyway, I rebased the patch also for master and ipa-3-1 and pushed it to all > three branches, i.e. master, ipa-3-1, ipa-3-0. BTW bind-dyndb-ldap has a open ticket https://fedorahosted.org/bind-dyndb-ldap/ticket/100 for handling KDC unavailability. It should be coordinated with IPA's bug triage. -- Petr^2 Spacek From akrivoka at redhat.com Wed Jan 16 11:28:50 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Wed, 16 Jan 2013 12:28:50 +0100 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <50F57C37.2030701@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> <50F54C45.2020100@redhat.com> <20130115125623.GB28090@redhat.com> <50F57C37.2030701@redhat.com> Message-ID: <50F68EF2.7000703@redhat.com> On 01/15/2013 04:56 PM, Rob Crittenden wrote: > Alexander Bokovoy wrote: >> On Tue, 15 Jan 2013, Dmitri Pal wrote: >>> On 01/15/2013 06:56 AM, Alexander Bokovoy wrote: >>>> On Tue, 15 Jan 2013, Ana Krivokapic wrote: >>>>> crond was not included in the list of default HBAC services - it >>>>> needed to be added manually. As crond is a commonly used service, it >>>>> is now included as a default HBAC service. >>>>> >>>>> Ticket: https://fedorahosted.org/freeipa/ticket/3215 >>>> ACK. >>>> >>>> Simple and obvious. >>>> >>>>> I'm not sure whether a design document is needed for this ticket. It >>>>> _is_ marked as an RFE, but it is a pretty simple and specific change. >>>>> After yesterday's meeting, I'm not really clear on whether a design >>>>> page is needed in cases like this. Thoughts? >>>> I suggested that additions like this do not need specific design >>>> document. >>> >>> >>> I suggest that we have a catch all page on the wiki that would be >>> called >>> "Minor Enhancements". >>> It will consist of the table with two columns. >>> First column would be the link to the ticket. >>> The second column would be a short comment. >>> F this ticket it will look like this: >>> >>> Enhancement Comment >>> https://fedorahosted.org/freeipa/ticket/3215 Crond was added to the >>> list of available HBAC services >>> >>> >>> >>> Such approach would not break the reporting and would show what has >>> been >>> done for minor enhancements like this. >> This would work as well. If we still link to the mail thread in the >> ticket, all ends will be connected. >> >> +1 from me. >> > > We did a design for HBAC loooong ago (would be in the V2 pages) so > perhaps we could add the new service there. I updated the list of default HBAC services on this page: http://www.freeipa.org/page/V2/PAMServices > > rob > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. 
From tbabej at redhat.com Wed Jan 16 11:52:12 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 16 Jan 2013 12:52:12 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358290532.4590.21.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> Message-ID: <50F6946C.3040905@redhat.com> On 01/15/2013 11:55 PM, Simo Sorce wrote: > On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >>>> Dmitri Pal wrote: >>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>>>> integer overflow in krbPasswordExpiration attribute. >>>>>>> >>>>>>> This would result into unpredictable behaviour such as users >>>>>>> not being able to log in after password expiration if password >>>>>>> policy was changed (#3114) or new users not being able to log >>>>>>> in at all (#3312). >>>>>>> >>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>> Given that we control the KDC LDAP driver I think we should not limit >>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. >>>>> Fix how? Truncate to max in the driver itself if it was entered beyond max? >>>>> Shouldn't we also prevent entering the invalid value into the attribute? >>>>> >>>> I've been mulling the same question for a while. Why would we want to >>>> let bad data get into the directory? >>> It is not bad data and the attribute holds a Generalize time date. >>> >>> The data is valid it's the MIT code that has a limitation in parsing it. >>> >>> Greg tells me he plans supporting additional time by using the >>> 'negative' part of the integer to represent the years beyond 2038. >>> >>> So we should represent data in the directory correctly, which means >>> whtever date in the future and only chop it when feeding MIT libraries >>> until they support the additional range at which time we will change and >>> chop further in the future (around 2067 or so). >>> >>> If we chopped early in the directory we'd not be able to properly >>> represent/change rapresentation later when MIT libs gain additional >>> range capabilities. >>> >>> Simo. >>> >> We would have to change our code either way and the amount of change >> will be similar so does it really matter? > Yes it really matters IMO. > > Simo. > Updated patch attached. Tomas -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-tbabej-0026-2-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 1991 bytes Desc: not available URL: From akrivoka at redhat.com Wed Jan 16 12:55:03 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Wed, 16 Jan 2013 13:55:03 +0100 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <50F54C45.2020100@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> <50F54C45.2020100@redhat.com> Message-ID: <50F6A327.1050300@redhat.com> On 01/15/2013 01:32 PM, Dmitri Pal wrote: > On 01/15/2013 06:56 AM, Alexander Bokovoy wrote: >> On Tue, 15 Jan 2013, Ana Krivokapic wrote: >>> crond was not included in the list of default HBAC services - it >>> needed to be added manually. As crond is a commonly used service, it >>> is now included as a default HBAC service. >>> >>> Ticket: https://fedorahosted.org/freeipa/ticket/3215 >> ACK. >> >> Simple and obvious. >> >>> I'm not sure whether a design document is needed for this ticket. It >>> _is_ marked as an RFE, but it is a pretty simple and specific change. >>> After yesterday's meeting, I'm not really clear on whether a design >>> page is needed in cases like this. Thoughts? >> I suggested that additions like this do not need specific design >> document. > > I suggest that we have a catch all page on the wiki that would be called > "Minor Enhancements". > It will consist of the table with two columns. > First column would be the link to the ticket. > The second column would be a short comment. > F this ticket it will look like this: > > Enhancement Comment > https://fedorahosted.org/freeipa/ticket/3215 Crond was added to the list of available HBAC services > > > > Such approach would not break the reporting and would show what has been > done for minor enhancements like this. I created the Minor Enhancements wiki page and added the link to it on the Documentation page. http://www.freeipa.org/page/V3_Minor_Enhancements >> Since this is the whole patch: >>> --- a/install/updates/50-hbacservice.update >>> +++ b/install/updates/50-hbacservice.update >>> @@ -1,3 +1,10 @@ >>> +dn: cn=crond,cn=hbacservices,cn=hbac,$SUFFIX >>> +default:objectclass: ipahbacservice >>> +default:objectclass: ipaobject >>> +default:cn: crond >>> +default:description: crond >>> +default:ipauniqueid:autogenerate >>> + >>> dn: cn=vsftpd,cn=hbacservices,cn=hbac,$SUFFIX >>> default:objectclass: ipahbacservice >>> default:objectclass: ipaobject >> .. I would simply add a link to the mail archive of this thread to the >> ticket itself. Mailing archive of freeipa-devel@ is as stable as our >> wiki, if not more resilient. ;) >> > -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. 
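On servers that do not yet carry this update, the manual addition the ticket mentions would look roughly like the following through the hbacsvc plugin. This is a sketch only, assuming a configured ipalib client, admin credentials already obtained via kinit, and the 3.x-era xmlclient backend name:

from ipalib import api

api.bootstrap(context='cli')
api.finalize()
api.Backend.xmlclient.connect()

# Equivalent of the cn=crond entry added by 50-hbacservice.update
result = api.Command['hbacsvc_add'](u'crond', description=u'crond')
print result['result']['cn']

New installs, and upgrades that apply the update files, get the entry automatically; the API call is only a stop-gap for deployments that predate it.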
From simo at redhat.com Wed Jan 16 13:47:01 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 08:47:01 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F6946C.3040905@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> Message-ID: <1358344021.4590.31.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: > On 01/15/2013 11:55 PM, Simo Sorce wrote: > > On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: > >> On 01/15/2013 03:59 PM, Simo Sorce wrote: > >>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: > >>>> Dmitri Pal wrote: > >>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: > >>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: > >>>>>>> Hi, > >>>>>>> > >>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting > >>>>>>> maxlife in pwpolicy to values such as 9999 days would cause > >>>>>>> integer overflow in krbPasswordExpiration attribute. > >>>>>>> > >>>>>>> This would result into unpredictable behaviour such as users > >>>>>>> not being able to log in after password expiration if password > >>>>>>> policy was changed (#3114) or new users not being able to log > >>>>>>> in at all (#3312). > >>>>>>> > >>>>>>> https://fedorahosted.org/freeipa/ticket/3312 > >>>>>>> https://fedorahosted.org/freeipa/ticket/3114 > >>>>>> Given that we control the KDC LDAP driver I think we should not limit > >>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. > >>>>> Fix how? Truncate to max in the driver itself if it was entered beyond max? > >>>>> Shouldn't we also prevent entering the invalid value into the attribute? > >>>>> > >>>> I've been mulling the same question for a while. Why would we want to > >>>> let bad data get into the directory? > >>> It is not bad data and the attribute holds a Generalize time date. > >>> > >>> The data is valid it's the MIT code that has a limitation in parsing it. > >>> > >>> Greg tells me he plans supporting additional time by using the > >>> 'negative' part of the integer to represent the years beyond 2038. > >>> > >>> So we should represent data in the directory correctly, which means > >>> whtever date in the future and only chop it when feeding MIT libraries > >>> until they support the additional range at which time we will change and > >>> chop further in the future (around 2067 or so). > >>> > >>> If we chopped early in the directory we'd not be able to properly > >>> represent/change rapresentation later when MIT libs gain additional > >>> range capabilities. > >>> > >>> Simo. > >>> > >> We would have to change our code either way and the amount of change > >> will be similar so does it really matter? > > Yes it really matters IMO. > > > > Simo. > > > Updated patch attached. This part looks ok but I think you also need to properly set krb5_db_entry-> {expiration, pw_expiration, last_success, last_failed} in ipadb_parse_ldap_entry() Perhaps the best way is to introduce a new function ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that you do all the overflow checkings once. Simo. 
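For anyone following the numbers in this thread: below is a short plain-Python illustration — not the C driver code, and assuming the 32-bit signed time_t limit being discussed — of why a 9999-day maxlife pushes krbPasswordExpiration out of range, which is exactly the overflow a clamping helper such as the proposed ipadb_ldap_attr_to_krb5_timestamp() would have to guard against.

```python
# Illustration only: models the arithmetic, assumes a 32-bit *signed* time_t.
import calendar
import datetime
import struct

INT32_MAX = 2 ** 31 - 1   # last representable second: 2038-01-19 03:14:07 UTC

start = datetime.datetime(2013, 1, 16)                # date of this thread
expiration = start + datetime.timedelta(days=9999)    # maxlife = 9999 days
seconds = calendar.timegm(expiration.timetuple())

print expiration                 # a date in 2040, past the 32-bit range
print seconds > INT32_MAX        # True -- does not fit in signed 32 bits

# What a 32-bit signed time_t would end up holding (two's-complement wrap):
wrapped = struct.unpack('<i', struct.pack('<I', seconds & 0xFFFFFFFF))[0]
print wrapped                    # large negative value, i.e. a date in the past
```

The wrapped value is the kind of large negative timestamp reported against the tickets, which is why the clamp has to happen once, in the conversion path, rather than at each caller.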
-- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Wed Jan 16 13:50:20 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 08:50:20 -0500 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 In-Reply-To: <50F6760A.4090203@redhat.com> References: <50F6760A.4090203@redhat.com> Message-ID: <1358344220.4590.32.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 10:42 +0100, Martin Kosek wrote: > IPA installer sometimes tries to connect to the Directory Server > via loopback address 127.0.0.1. However, the Directory Server on > pure IPv6 systems may not be listening on this address. This address > may not even be available. > > Rather use the FQDN of the server when connecting to the DS to fix > this issue and make the connection consistent ldapmodify calls which > also use FQDN instead of IP address. > > https://fedorahosted.org/freeipa/ticket/3355 Martin, shouldn't the installer rather always use the ldapi socket ? Simo. -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Wed Jan 16 13:52:35 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 08:52:35 -0500 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <50F68EF2.7000703@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> <50F54C45.2020100@redhat.com> <20130115125623.GB28090@redhat.com> <50F57C37.2030701@redhat.com> <50F68EF2.7000703@redhat.com> Message-ID: <1358344355.4590.33.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 12:28 +0100, Ana Krivokapic wrote: > > We did a design for HBAC loooong ago (would be in the V2 pages) so > > perhaps we could add the new service there. > > I updated the list of default HBAC services on this page: > http://www.freeipa.org/page/V2/PAMServices Love that you added the version they were added in, great work! Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Wed Jan 16 14:01:26 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 16 Jan 2013 15:01:26 +0100 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 In-Reply-To: <1358344220.4590.32.camel@willson.li.ssimo.org> References: <50F6760A.4090203@redhat.com> <1358344220.4590.32.camel@willson.li.ssimo.org> Message-ID: <50F6B2B6.1090005@redhat.com> On 01/16/2013 02:50 PM, Simo Sorce wrote: > On Wed, 2013-01-16 at 10:42 +0100, Martin Kosek wrote: >> IPA installer sometimes tries to connect to the Directory Server >> via loopback address 127.0.0.1. However, the Directory Server on >> pure IPv6 systems may not be listening on this address. This address >> may not even be available. >> >> Rather use the FQDN of the server when connecting to the DS to fix >> this issue and make the connection consistent ldapmodify calls which >> also use FQDN instead of IP address. >> >> https://fedorahosted.org/freeipa/ticket/3355 > > Martin, > shouldn't the installer rather always use the ldapi socket ? > > Simo. > Probably yes, but the fix would be much more intrusive than the current patch as we connect to ldap://$HOST:389 all over the installer code. My intention was to prepare rather a short fix for the upcoming release... 
Martin From simo at redhat.com Wed Jan 16 14:10:36 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 09:10:36 -0500 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 In-Reply-To: <50F6B2B6.1090005@redhat.com> References: <50F6760A.4090203@redhat.com> <1358344220.4590.32.camel@willson.li.ssimo.org> <50F6B2B6.1090005@redhat.com> Message-ID: <1358345436.4590.40.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 15:01 +0100, Martin Kosek wrote: > On 01/16/2013 02:50 PM, Simo Sorce wrote: > > On Wed, 2013-01-16 at 10:42 +0100, Martin Kosek wrote: > >> IPA installer sometimes tries to connect to the Directory Server > >> via loopback address 127.0.0.1. However, the Directory Server on > >> pure IPv6 systems may not be listening on this address. This address > >> may not even be available. > >> > >> Rather use the FQDN of the server when connecting to the DS to fix > >> this issue and make the connection consistent ldapmodify calls which > >> also use FQDN instead of IP address. > >> > >> https://fedorahosted.org/freeipa/ticket/3355 > > > > Martin, > > shouldn't the installer rather always use the ldapi socket ? > > > > Simo. > > > > Probably yes, but the fix would be much more intrusive than the current patch > as we connect to ldap://$HOST:389 all over the installer code. My intention was > to prepare rather a short fix for the upcoming release... Uhmm wouldn't you just need to replace ldap://$HOST:389 with ldapi://path ? However it is understandable to have a short term fix, but can you open a ticket for the longer term goal of moving away from TCP connections to LDAPI ones ? Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Wed Jan 16 14:23:19 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 16 Jan 2013 15:23:19 +0100 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 In-Reply-To: <1358345436.4590.40.camel@willson.li.ssimo.org> References: <50F6760A.4090203@redhat.com> <1358344220.4590.32.camel@willson.li.ssimo.org> <50F6B2B6.1090005@redhat.com> <1358345436.4590.40.camel@willson.li.ssimo.org> Message-ID: <50F6B7D7.7070803@redhat.com> On 01/16/2013 03:10 PM, Simo Sorce wrote: > On Wed, 2013-01-16 at 15:01 +0100, Martin Kosek wrote: >> On 01/16/2013 02:50 PM, Simo Sorce wrote: >>> On Wed, 2013-01-16 at 10:42 +0100, Martin Kosek wrote: >>>> IPA installer sometimes tries to connect to the Directory Server >>>> via loopback address 127.0.0.1. However, the Directory Server on >>>> pure IPv6 systems may not be listening on this address. This address >>>> may not even be available. >>>> >>>> Rather use the FQDN of the server when connecting to the DS to fix >>>> this issue and make the connection consistent ldapmodify calls which >>>> also use FQDN instead of IP address. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3355 >>> >>> Martin, >>> shouldn't the installer rather always use the ldapi socket ? >>> >>> Simo. >>> >> >> Probably yes, but the fix would be much more intrusive than the current patch >> as we connect to ldap://$HOST:389 all over the installer code. My intention was >> to prepare rather a short fix for the upcoming release... > > Uhmm wouldn't you just need to replace ldap://$HOST:389 with > ldapi://path ? > > However it is understandable to have a short term fix, but can you open > a ticket for the longer term goal of moving away from TCP connections to > LDAPI ones ? > > Simo. > Sure. 
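As a rough sketch of the swap being discussed — raw python-ldap rather than the installer's own ipaldap wrappers, and with an illustrative socket path (the real LDAPI socket location depends on the DS instance name and its LDAPI configuration):

```python
# Sketch only: shows the ldap://HOST:389 -> ldapi:// swap in raw python-ldap.
import ldap
import ldap.sasl
import urllib

# Current pattern: TCP to the host, which assumes a listener on that address.
conn = ldap.initialize('ldap://%s:389' % 'ipa.example.com')

# LDAPI pattern: talk to the local server over its UNIX domain socket,
# so no loopback/IPv4 listener is needed at all.  Path is illustrative.
socket_path = '/var/run/slapd-EXAMPLE-COM.socket'
conn = ldap.initialize('ldapi://' + urllib.quote(socket_path, safe=''))

# Over LDAPI, root can bind with SASL EXTERNAL (assuming autobind/identity
# mapping is enabled on the instance), so no password is sent either.
conn.sasl_interactive_bind_s('', ldap.sasl.external())
```

The catch, as noted above, is that the installer builds ldap:// URLs in many places, which is why the full move to LDAPI is tracked as follow-up work rather than folded into this fix.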
I updated ticket https://fedorahosted.org/freeipa/ticket/3272 which already plans to fix other inappropriate protocol in installer code. Martin From jcholast at redhat.com Wed Jan 16 15:45:21 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 16 Jan 2013 16:45:21 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data Message-ID: <50F6CB11.4010008@redhat.com> Hi, this patch adds initial support for custom LDAP entry objects, as described in . The LDAPEntry class implements the mapping object. The current version works like a dict, plus it has a DN property and validates and normalizes attribute names (there is no case preservation yet). The LDAPEntryCompat class provides compatibility with old code that uses (dn, attrs) tuples. The reason why this is a separate class is that our code currently has 2 contradicting requirements on the entry object: tuple unpacking must work with it (i.e. iter(entry) yields dn and attribute dict), but it also must work as an argument to dict constructor (i.e. iter(entry) yields attribute names). This class will be removed once our code is converted to use LDAPEntry. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-93-Add-custom-mapping-object-for-LDAP-entry-data.patch Type: text/x-patch Size: 6262 bytes Desc: not available URL: From tbabej at redhat.com Wed Jan 16 16:57:48 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 16 Jan 2013 17:57:48 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358344021.4590.31.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> Message-ID: <50F6DC0C.1090106@redhat.com> On 01/16/2013 02:47 PM, Simo Sorce wrote: > On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: >> On 01/15/2013 11:55 PM, Simo Sorce wrote: >>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >>>>>> Dmitri Pal wrote: >>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>>>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>>>>>> integer overflow in krbPasswordExpiration attribute. >>>>>>>>> >>>>>>>>> This would result into unpredictable behaviour such as users >>>>>>>>> not being able to log in after password expiration if password >>>>>>>>> policy was changed (#3114) or new users not being able to log >>>>>>>>> in at all (#3312). >>>>>>>>> >>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>>>> Given that we control the KDC LDAP driver I think we should not limit >>>>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. >>>>>>> Fix how? Truncate to max in the driver itself if it was entered beyond max? >>>>>>> Shouldn't we also prevent entering the invalid value into the attribute? >>>>>>> >>>>>> I've been mulling the same question for a while. 
Why would we want to >>>>>> let bad data get into the directory? >>>>> It is not bad data and the attribute holds a Generalize time date. >>>>> >>>>> The data is valid it's the MIT code that has a limitation in parsing it. >>>>> >>>>> Greg tells me he plans supporting additional time by using the >>>>> 'negative' part of the integer to represent the years beyond 2038. >>>>> >>>>> So we should represent data in the directory correctly, which means >>>>> whtever date in the future and only chop it when feeding MIT libraries >>>>> until they support the additional range at which time we will change and >>>>> chop further in the future (around 2067 or so). >>>>> >>>>> If we chopped early in the directory we'd not be able to properly >>>>> represent/change rapresentation later when MIT libs gain additional >>>>> range capabilities. >>>>> >>>>> Simo. >>>>> >>>> We would have to change our code either way and the amount of change >>>> will be similar so does it really matter? >>> Yes it really matters IMO. >>> >>> Simo. >>> >> Updated patch attached. > This part looks ok but I think you also need to properly set > krb5_db_entry-> {expiration, pw_expiration, last_success, last_failed} > in ipadb_parse_ldap_entry() > > Perhaps the best way is to introduce a new function > ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that you do > all the overflow checkings once. > > Simo. > They all use ipadb_ldap_attr_to_time_t() to get their values, so the following addition to the patch should be sufficient. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0026-3-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 2609 bytes Desc: not available URL: From simo at redhat.com Wed Jan 16 17:01:48 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 12:01:48 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F6DC0C.1090106@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> Message-ID: <1358355708.4590.47.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: > On 01/16/2013 02:47 PM, Simo Sorce wrote: > > On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: > >> On 01/15/2013 11:55 PM, Simo Sorce wrote: > >>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: > >>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: > >>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: > >>>>>> Dmitri Pal wrote: > >>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: > >>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: > >>>>>>>>> Hi, > >>>>>>>>> > >>>>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting > >>>>>>>>> maxlife in pwpolicy to values such as 9999 days would cause > >>>>>>>>> integer overflow in krbPasswordExpiration attribute. > >>>>>>>>> > >>>>>>>>> This would result into unpredictable behaviour such as users > >>>>>>>>> not being able to log in after password expiration if password > >>>>>>>>> policy was changed (#3114) or new users not being able to log > >>>>>>>>> in at all (#3312). 
> >>>>>>>>> > >>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 > >>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 > >>>>>>>> Given that we control the KDC LDAP driver I think we should not limit > >>>>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. > >>>>>>> Fix how? Truncate to max in the driver itself if it was entered beyond max? > >>>>>>> Shouldn't we also prevent entering the invalid value into the attribute? > >>>>>>> > >>>>>> I've been mulling the same question for a while. Why would we want to > >>>>>> let bad data get into the directory? > >>>>> It is not bad data and the attribute holds a Generalize time date. > >>>>> > >>>>> The data is valid it's the MIT code that has a limitation in parsing it. > >>>>> > >>>>> Greg tells me he plans supporting additional time by using the > >>>>> 'negative' part of the integer to represent the years beyond 2038. > >>>>> > >>>>> So we should represent data in the directory correctly, which means > >>>>> whtever date in the future and only chop it when feeding MIT libraries > >>>>> until they support the additional range at which time we will change and > >>>>> chop further in the future (around 2067 or so). > >>>>> > >>>>> If we chopped early in the directory we'd not be able to properly > >>>>> represent/change rapresentation later when MIT libs gain additional > >>>>> range capabilities. > >>>>> > >>>>> Simo. > >>>>> > >>>> We would have to change our code either way and the amount of change > >>>> will be similar so does it really matter? > >>> Yes it really matters IMO. > >>> > >>> Simo. > >>> > >> Updated patch attached. > > This part looks ok but I think you also need to properly set > > krb5_db_entry-> {expiration, pw_expiration, last_success, last_failed} > > in ipadb_parse_ldap_entry() > > > > Perhaps the best way is to introduce a new function > > ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that you do > > all the overflow checkings once. > > > > Simo. > > > They all use ipadb_ldap_attr_to_time_t() to get their values, > so the following addition to the patch should be sufficient. It will break dates for other users of the function that do not need to artificially limit the results. Please add a new function. Simo. 
-- Simo Sorce * Red Hat, Inc * New York From tbabej at redhat.com Wed Jan 16 17:32:02 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 16 Jan 2013 18:32:02 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358355708.4590.47.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> Message-ID: <50F6E412.7020402@redhat.com> On 01/16/2013 06:01 PM, Simo Sorce wrote: > On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: >> On 01/16/2013 02:47 PM, Simo Sorce wrote: >>> On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: >>>> On 01/15/2013 11:55 PM, Simo Sorce wrote: >>>>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >>>>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>>>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >>>>>>>> Dmitri Pal wrote: >>>>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>>>>>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>>>>>>>> integer overflow in krbPasswordExpiration attribute. >>>>>>>>>>> >>>>>>>>>>> This would result into unpredictable behaviour such as users >>>>>>>>>>> not being able to log in after password expiration if password >>>>>>>>>>> policy was changed (#3114) or new users not being able to log >>>>>>>>>>> in at all (#3312). >>>>>>>>>>> >>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>>>>>> Given that we control the KDC LDAP driver I think we should not limit >>>>>>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the DAL driver. >>>>>>>>> Fix how? Truncate to max in the driver itself if it was entered beyond max? >>>>>>>>> Shouldn't we also prevent entering the invalid value into the attribute? >>>>>>>>> >>>>>>>> I've been mulling the same question for a while. Why would we want to >>>>>>>> let bad data get into the directory? >>>>>>> It is not bad data and the attribute holds a Generalize time date. >>>>>>> >>>>>>> The data is valid it's the MIT code that has a limitation in parsing it. >>>>>>> >>>>>>> Greg tells me he plans supporting additional time by using the >>>>>>> 'negative' part of the integer to represent the years beyond 2038. >>>>>>> >>>>>>> So we should represent data in the directory correctly, which means >>>>>>> whtever date in the future and only chop it when feeding MIT libraries >>>>>>> until they support the additional range at which time we will change and >>>>>>> chop further in the future (around 2067 or so). >>>>>>> >>>>>>> If we chopped early in the directory we'd not be able to properly >>>>>>> represent/change rapresentation later when MIT libs gain additional >>>>>>> range capabilities. >>>>>>> >>>>>>> Simo. >>>>>>> >>>>>> We would have to change our code either way and the amount of change >>>>>> will be similar so does it really matter? >>>>> Yes it really matters IMO. >>>>> >>>>> Simo. >>>>> >>>> Updated patch attached. 
>>> This part looks ok but I think you also need to properly set >>> krb5_db_entry-> {expiration, pw_expiration, last_success, last_failed} >>> in ipadb_parse_ldap_entry() >>> >>> Perhaps the best way is to introduce a new function >>> ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that you do >>> all the overflow checkings once. >>> >>> Simo. >>> >> They all use ipadb_ldap_attr_to_time_t() to get their values, >> so the following addition to the patch should be sufficient. > It will break dates for other users of the function that do not need to > artificially limit the results. Please add a new function. > > Simo. > Done. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0026-4-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 2999 bytes Desc: not available URL: From simo at redhat.com Wed Jan 16 17:57:23 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 16 Jan 2013 12:57:23 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F6E412.7020402@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> Message-ID: <1358359043.4590.50.camel@willson.li.ssimo.org> On Wed, 2013-01-16 at 18:32 +0100, Tomas Babej wrote: > >> They all use ipadb_ldap_attr_to_time_t() to get their values, > >> so the following addition to the patch should be sufficient. > > It will break dates for other users of the function that do not need to > > artificially limit the results. Please add a new function. > > > > Simo. > > > Done. > > Tomas ACK Simo -- Simo Sorce * Red Hat, Inc * New York From pviktori at redhat.com Wed Jan 16 18:05:25 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 16 Jan 2013 19:05:25 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F6CB11.4010008@redhat.com> References: <50F6CB11.4010008@redhat.com> Message-ID: <50F6EBE5.9060307@redhat.com> On 01/16/2013 04:45 PM, Jan Cholasta wrote: > Hi, > > this patch adds initial support for custom LDAP entry objects, as > described in . > > The LDAPEntry class implements the mapping object. The current version > works like a dict, plus it has a DN property and validates and > normalizes attribute names (there is no case preservation yet). > > The LDAPEntryCompat class provides compatibility with old code that uses > (dn, attrs) tuples. The reason why this is a separate class is that our > code currently has 2 contradicting requirements on the entry object: > tuple unpacking must work with it (i.e. iter(entry) yields dn and > attribute dict), but it also must work as an argument to dict > constructor (i.e. iter(entry) yields attribute names). This class will > be removed once our code is converted to use LDAPEntry. With this patch, ldap2 produces LDAPEntryCompat objects. I don't see how that brings us closer to converting the codebase to LDAPEntry. I'd be happier if this patch made both the tuple unpacking and dict-style access possible. But perhaps you have a better plan than I can see from the patch? 
The dict constructor uses the `keys` method, not iteration, so these two requirements aren't incompatible -- unless we do specifically require that iter(entry) yields attribute names. For example the following will work: class NotQuiteADict(dict): def __iter__(self): return iter(('some', 'data')) a = NotQuiteADict(a=1, b=2, c=3) print dict(a) # -> {'a': 1, 'c': 3, 'b': 2} print list(a) # -> ['some', 'data'] print a.keys() # -> ['a', 'c', 'b'] While overriding a dict's __iter__ in this way is a bit evil, I think for our purposes it would be better than introducing a fourth entry class. When all our code is converted we could just have __iter__ warn or raise an exception for a few releases to make sure no one uses it. In a couple of places we do iterate over the entry to get the keys, but not in many, and they're usually well-tested. We just have to use `entry.keys()` there. Would these changes to your patch be OK? Another issue is that the DN is itself an attribute, so entry.dn and entry['dn'] should be synchronized. I guess that's material for another patch, though. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: Add-custom-mapping-object-for-LDAP-entry-data.patch Type: text/x-patch Size: 11898 bytes Desc: not available URL: From tbabej at redhat.com Thu Jan 17 00:16:20 2013 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 17 Jan 2013 01:16:20 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358359043.4590.50.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <1358359043.4590.50.camel@willson.li.ssimo.org> Message-ID: <50F742D4.6010800@redhat.com> On 01/16/2013 06:57 PM, Simo Sorce wrote: > On Wed, 2013-01-16 at 18:32 +0100, Tomas Babej wrote: > >>>> They all use ipadb_ldap_attr_to_time_t() to get their values, >>>> so the following addition to the patch should be sufficient. >>> It will break dates for other users of the function that do not need to >>> artificially limit the results. Please add a new function. >>> >>> Simo. >>> >> Done. >> >> Tomas > ACK > > Simo > Self-ACK-deny, I just realized that I forgot to change the function calls. Tomas -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-tbabej-0026-5-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 5211 bytes Desc: not available URL: From dpal at redhat.com Thu Jan 17 00:56:55 2013 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 16 Jan 2013 19:56:55 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F6E412.7020402@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> Message-ID: <50F74C57.6030604@redhat.com> On 01/16/2013 12:32 PM, Tomas Babej wrote: > On 01/16/2013 06:01 PM, Simo Sorce wrote: >> On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: >>> On 01/16/2013 02:47 PM, Simo Sorce wrote: >>>> On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: >>>>> On 01/15/2013 11:55 PM, Simo Sorce wrote: >>>>>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >>>>>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>>>>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >>>>>>>>> Dmitri Pal wrote: >>>>>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>>>>>>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>>>>>>>>> integer overflow in krbPasswordExpiration attribute. >>>>>>>>>>>> >>>>>>>>>>>> This would result into unpredictable behaviour such as users >>>>>>>>>>>> not being able to log in after password expiration if password >>>>>>>>>>>> policy was changed (#3114) or new users not being able to log >>>>>>>>>>>> in at all (#3312). >>>>>>>>>>>> >>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>>>>>>> Given that we control the KDC LDAP driver I think we should >>>>>>>>>>> not limit >>>>>>>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the >>>>>>>>>>> DAL driver. >>>>>>>>>> Fix how? Truncate to max in the driver itself if it was >>>>>>>>>> entered beyond max? >>>>>>>>>> Shouldn't we also prevent entering the invalid value into the >>>>>>>>>> attribute? >>>>>>>>>> >>>>>>>>> I've been mulling the same question for a while. Why would we >>>>>>>>> want to >>>>>>>>> let bad data get into the directory? >>>>>>>> It is not bad data and the attribute holds a Generalize time date. >>>>>>>> >>>>>>>> The data is valid it's the MIT code that has a limitation in >>>>>>>> parsing it. >>>>>>>> >>>>>>>> Greg tells me he plans supporting additional time by using the >>>>>>>> 'negative' part of the integer to represent the years beyond 2038. >>>>>>>> >>>>>>>> So we should represent data in the directory correctly, which >>>>>>>> means >>>>>>>> whtever date in the future and only chop it when feeding MIT >>>>>>>> libraries >>>>>>>> until they support the additional range at which time we will >>>>>>>> change and >>>>>>>> chop further in the future (around 2067 or so). 
>>>>>>>> >>>>>>>> If we chopped early in the directory we'd not be able to properly >>>>>>>> represent/change rapresentation later when MIT libs gain >>>>>>>> additional >>>>>>>> range capabilities. >>>>>>>> >>>>>>>> Simo. >>>>>>>> >>>>>>> We would have to change our code either way and the amount of >>>>>>> change >>>>>>> will be similar so does it really matter? >>>>>> Yes it really matters IMO. >>>>>> >>>>>> Simo. >>>>>> >>>>> Updated patch attached. >>>> This part looks ok but I think you also need to properly set >>>> krb5_db_entry-> {expiration, pw_expiration, last_success, last_failed} >>>> in ipadb_parse_ldap_entry() >>>> >>>> Perhaps the best way is to introduce a new function >>>> ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that you do >>>> all the overflow checkings once. >>>> >>>> Simo. >>>> >>> They all use ipadb_ldap_attr_to_time_t() to get their values, >>> so the following addition to the patch should be sufficient. >> It will break dates for other users of the function that do not need to >> artificially limit the results. Please add a new function. >> >> Simo. >> > Done. > > Tomas > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel Nack from me, sorry. +int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, + char *attrname, time_t *result) +{ + int ret = ipadb_ldab_attr_to_time_t(lcontext, le, + attrname, result); This function converts the time from the LDAP time by reading the string into the struct elements and then constructing time by using timegm() which is really a wrapper around mktime(). According to mktime() man page: If the specified broken-down time cannot be represented as calendar time (seconds since the Epoch), mktime() returns a value of (time_t) -1 and does not alter the members of the broken-down time structure. However it might not be the case so it would be nice to check if this function actually returns some negative value other than -1 if the time is overflown. Regardles of that part if it always returns -1 or can return other negative values like -2 or -3 the check below would produce positive result thus causing the function to return whatever is currently in the *result. IMO the whole fix should be implemented inside the function above when the conversion to time_t happens so that it would never produce a negative result. My suggestion would be before calling timegm() to test if the year, month, and day returned by the break-down structure contain the day after the last day of epoch and then just use use last day of epoc without even calling timegm(). + + /* in the case of integer owerflow, set result to IPAPWD_END_OF_TIME */ + if ((*result+86400) < 0) { + *result = IPAPWD_END_OF_TIME; // 1 Jan 2038, 00:00 GMT + } + + return ret; +} + -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From edewata at redhat.com Thu Jan 17 03:24:55 2013 From: edewata at redhat.com (Endi Sukma Dewata) Date: Thu, 17 Jan 2013 10:24:55 +0700 Subject: [Freeipa-devel] [PATCH] 240-252 AMD modules and Web UI build In-Reply-To: <50EC4D5B.5080501@redhat.com> References: <50EC4D5B.5080501@redhat.com> Message-ID: <50F76F07.6030405@redhat.com> On 1/8/2013 11:46 PM, Petr Vobornik wrote: > This patchset does following things: > * integrates Dojo library into FreeIPA Web UI > * encapsulates UI parts by AMD definition so they can be used in build > process and by AMD loader > * introduces Dojo builder for building Web UI and UglifyJS for > minimizing JS > * contains help and build scripts for developers and a make process > > Overall it makes final application smaller, faster loadable and it's a > foundation for additional refactoring. > > More information in design page: http://www.freeipa.org/page/V3/WebUI_build > > https://fedorahosted.org/freeipa/ticket/112 > > These patches introduce one regression: extension.js file isn't used. I > want to fix it in #3235 ([RFE] Make WebUI extensions functional just by > adding files). Which I will implement after I finish #3236 ([RFE] more > extensible navigation) on which I'm working now. Nice work! They seem to be working fine so it's ACKed. I have some questions though: 1. Patch #241 includes another patch (001-dojo-build-pvoborni-01-Make-dojo-builder-buildable-by-itself.patch). Is there an instruction how and where to use the patch? 2. Is there a way to disable uglify in case we need to debug with Firebug? 3. Is it possible to set breakpoints in AMD modules in Firebug, for example line 44 in widget.js? 4. Calling change-profile.sh allsource modifies the install/ui/js/dojo. Should they be included in .gitignore? Or is there a way to select the profile without modifying any files included in git (e.g. using parameter)? The concern is that the changes could accidentally get checked in and affect the official build. -- Endi S. Dewata From jcholast at redhat.com Thu Jan 17 08:07:16 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 17 Jan 2013 09:07:16 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F6EBE5.9060307@redhat.com> References: <50F6CB11.4010008@redhat.com> <50F6EBE5.9060307@redhat.com> Message-ID: <50F7B134.4000201@redhat.com> On 16.1.2013 19:05, Petr Viktorin wrote: > On 01/16/2013 04:45 PM, Jan Cholasta wrote: >> Hi, >> >> this patch adds initial support for custom LDAP entry objects, as >> described in . >> >> The LDAPEntry class implements the mapping object. The current version >> works like a dict, plus it has a DN property and validates and >> normalizes attribute names (there is no case preservation yet). >> >> The LDAPEntryCompat class provides compatibility with old code that uses >> (dn, attrs) tuples. The reason why this is a separate class is that our >> code currently has 2 contradicting requirements on the entry object: >> tuple unpacking must work with it (i.e. iter(entry) yields dn and >> attribute dict), but it also must work as an argument to dict >> constructor (i.e. iter(entry) yields attribute names). This class will >> be removed once our code is converted to use LDAPEntry. > > With this patch, ldap2 produces LDAPEntryCompat objects. I don't see how > that brings us closer to converting the codebase to LDAPEntry. > I'd be happier if this patch made both the tuple unpacking and > dict-style access possible. 
But perhaps you have a better plan than I > can see from the patch? My plan was to gradually transform these classes into a single LDAPEntry class in a series of patches. > > > The dict constructor uses the `keys` method, not iteration, so these two > requirements aren't incompatible -- unless we do specifically require > that iter(entry) yields attribute names. For example the following will > work: > > class NotQuiteADict(dict): > def __iter__(self): > return iter(('some', 'data')) > > a = NotQuiteADict(a=1, b=2, c=3) > print dict(a) # -> {'a': 1, 'c': 3, 'b': 2} > print list(a) # -> ['some', 'data'] > print a.keys() # -> ['a', 'c', 'b'] > While this works for dict, I'm not sure if it applies to *all* dict-like classes that we use. > > While overriding a dict's __iter__ in this way is a bit evil, I think > for our purposes it would be better than introducing a fourth entry class. Well, both are evil :-) > When all our code is converted we could just have __iter__ warn or raise > an exception for a few releases to make sure no one uses it. Once we completely get rid of entry tuples, we can reintroduce a dict-like __iter__. IMO new code should not be punished (i.e. forced to use .keys()) for the crimes (i.e. tuples) of the old code. > > In a couple of places we do iterate over the entry to get the keys, but > not in many, and they're usually well-tested. We just have to use > `entry.keys()` there. > > Would these changes to your patch be OK? Well, I wanted to keep code changes in this patch to LDAPEntry itself, but it's OK I guess... > > Another issue is that the DN is itself an attribute, so entry.dn and > entry['dn'] should be synchronized. I guess that's material for another > patch, though. > Is there any code that actually requires entry['dn']? We should decided whether we want to use entry['dn'] or entry.dn and use only that in new code. I like entry.dn more, as it better corresponds to the special meaning of dn in entries. Honza -- Jan Cholasta From pviktori at redhat.com Thu Jan 17 11:46:45 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 17 Jan 2013 12:46:45 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F7B134.4000201@redhat.com> References: <50F6CB11.4010008@redhat.com> <50F6EBE5.9060307@redhat.com> <50F7B134.4000201@redhat.com> Message-ID: <50F7E4A5.5000400@redhat.com> On 01/17/2013 09:07 AM, Jan Cholasta wrote: > On 16.1.2013 19:05, Petr Viktorin wrote: >> On 01/16/2013 04:45 PM, Jan Cholasta wrote: >>> Hi, >>> >>> this patch adds initial support for custom LDAP entry objects, as >>> described in . >>> >>> The LDAPEntry class implements the mapping object. The current version >>> works like a dict, plus it has a DN property and validates and >>> normalizes attribute names (there is no case preservation yet). >>> >>> The LDAPEntryCompat class provides compatibility with old code that uses >>> (dn, attrs) tuples. The reason why this is a separate class is that our >>> code currently has 2 contradicting requirements on the entry object: >>> tuple unpacking must work with it (i.e. iter(entry) yields dn and >>> attribute dict), but it also must work as an argument to dict >>> constructor (i.e. iter(entry) yields attribute names). This class will >>> be removed once our code is converted to use LDAPEntry. >> >> With this patch, ldap2 produces LDAPEntryCompat objects. I don't see how >> that brings us closer to converting the codebase to LDAPEntry. 
>> I'd be happier if this patch made both the tuple unpacking and >> dict-style access possible. But perhaps you have a better plan than I >> can see from the patch? > > My plan was to gradually transform these classes into a single LDAPEntry > class in a series of patches. > >> >> >> The dict constructor uses the `keys` method, not iteration, so these two >> requirements aren't incompatible -- unless we do specifically require >> that iter(entry) yields attribute names. For example the following will >> work: >> >> class NotQuiteADict(dict): >> def __iter__(self): >> return iter(('some', 'data')) >> >> a = NotQuiteADict(a=1, b=2, c=3) >> print dict(a) # -> {'a': 1, 'c': 3, 'b': 2} >> print list(a) # -> ['some', 'data'] >> print a.keys() # -> ['a', 'c', 'b'] >> > > While this works for dict, I'm not sure if it applies to *all* dict-like > classes that we use. I don't think we have any classes where it doesn't apply. >> While overriding a dict's __iter__ in this way is a bit evil, I think >> for our purposes it would be better than introducing a fourth entry >> class. > > Well, both are evil :-) > >> When all our code is converted we could just have __iter__ warn or raise >> an exception for a few releases to make sure no one uses it. > > Once we completely get rid of entry tuples, we can reintroduce a > dict-like __iter__. IMO new code should not be punished (i.e. forced to > use .keys()) for the crimes (i.e. tuples) of the old code. Yes, eventually. Though I'd like it to raise an exception for some time, so any external plugins that rely on it fail rather than get bad data. >> In a couple of places we do iterate over the entry to get the keys, but >> not in many, and they're usually well-tested. We just have to use >> `entry.keys()` there. >> >> Would these changes to your patch be OK? > > Well, I wanted to keep code changes in this patch to LDAPEntry itself, > but it's OK I guess... > >> >> Another issue is that the DN is itself an attribute, so entry.dn and >> entry['dn'] should be synchronized. I guess that's material for another >> patch, though. >> > > Is there any code that actually requires entry['dn']? Nothing really requires it, of course, but quite a lot of code currently uses it. > We should decided whether we want to use entry['dn'] or entry.dn and use > only that in new code. I like entry.dn more, as it better corresponds to > the special meaning of dn in entries. I also like entry.dn, at least in most cases, but if we don't synchronize them then we need to remove entry['dn'] completely. Having misleading/stale data in the object isn't good. There are some cases where entry['dn'] makes sense, such as iterating over all attributes. If we're going to have validation for every attribute anyway, I don't think synchronizing the two would be a major problem. Especially since entry.dn is already a property. -- Petr? 
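To make the synchronization idea concrete: a toy sketch — not the LDAPEntry class from the patch, just a plain dict subclass with illustrative names — showing that once .dn is a property over the same slot the ['dn'] key uses, the two spellings cannot drift apart.

```python
class EntrySketch(dict):
    """Toy sketch (not the patch's LDAPEntry): keep .dn and ['dn'] in step
    by making the property read and write the same underlying key."""

    def __init__(self, dn, **attrs):
        super(EntrySketch, self).__init__(**attrs)
        self['dn'] = dn

    @property
    def dn(self):
        return self['dn']

    @dn.setter
    def dn(self, value):
        self['dn'] = value


entry = EntrySketch('uid=admin,cn=users,cn=accounts,dc=example,dc=com',
                    uid=['admin'])
print entry.dn           # same value entry['dn'] sees
entry.dn = 'uid=other,cn=users,cn=accounts,dc=example,dc=com'
print entry['dn']        # updated too -- no stale copy to get out of sync
```

The real class would presumably hold a DN object and apply the attribute-name validation from the patch, but the sketch shows why keeping the two views consistent is cheap once .dn is already a property.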
From pvoborni at redhat.com Thu Jan 17 13:01:41 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 17 Jan 2013 14:01:41 +0100 Subject: [Freeipa-devel] [PATCH] 240-252 AMD modules and Web UI build In-Reply-To: <50F76F07.6030405@redhat.com> References: <50EC4D5B.5080501@redhat.com> <50F76F07.6030405@redhat.com> Message-ID: <50F7F635.6020609@redhat.com> On 01/17/2013 04:24 AM, Endi Sukma Dewata wrote: > On 1/8/2013 11:46 PM, Petr Vobornik wrote: >> This patchset does following things: >> * integrates Dojo library into FreeIPA Web UI >> * encapsulates UI parts by AMD definition so they can be used in build >> process and by AMD loader >> * introduces Dojo builder for building Web UI and UglifyJS for >> minimizing JS >> * contains help and build scripts for developers and a make process >> >> Overall it makes final application smaller, faster loadable and it's a >> foundation for additional refactoring. >> >> More information in design page: >> http://www.freeipa.org/page/V3/WebUI_build >> >> https://fedorahosted.org/freeipa/ticket/112 >> >> These patches introduce one regression: extension.js file isn't used. I >> want to fix it in #3235 ([RFE] Make WebUI extensions functional just by >> adding files). Which I will implement after I finish #3236 ([RFE] more >> extensible navigation) on which I'm working now. > > Nice work! They seem to be working fine so it's ACKed. I found a little error - there is a jsl problem in dojo.profile:86 - comma at the end of a list. Updated patch 243 attached. > I have some questions though: > > 1. Patch #241 includes another patch > (001-dojo-build-pvoborni-01-Make-dojo-builder-buildable-by-itself.patch). Is > there an instruction how and where to use the patch? The patch is automatically applied by dojo-prep.js (with --all option) when one wants to build the builder. We might consider to send this patch to dojo upstream (requires CLA). The use case is described here: http://www.freeipa.org/page/V3/WebUI_build#Make_new_Builder_build > 2. Is there a way to disable uglify in case we need to debug with Firebug? Currently no. I found myself that I have needed it only when I was trying to figure out what is the output of the builder or some build debugging (what modules are actually used). What's your use case with Firebug? If I want to debug, I would use plain source codes and send it to the test server [1] or I would use the local file modes [2]. The output of the builder is quite ugly to debug. If it's really useful we might add some option to make-ui.sh, should be easy. [1] http://www.freeipa.org/page/V3/WebUI_build#Copy_source_codes_of_FreeIPA_layer_on_test_server [2] http://www.freeipa.org/page/V3/WebUI_build#Set_environment_to_debug_source_codes_of_FreeIPA_layer_using_offline_version > > 3. Is it possible to set breakpoints in AMD modules in Firebug, for > example line 44 in widget.js? Yes. > > 4. Calling change-profile.sh allsource modifies the install/ui/js/dojo. > Should they be included in .gitignore? Or is there a way to select the > profile without modifying any files included in git (e.g. using > parameter)? The concern is that the changes could accidentally get > checked in and affect the official build. > change-profile.sh has option --git-ignore which marks those symbolic links with 'git update-index --assume-unchanged ' which should prevent this issue. Option --git-undo removes this mark. It might be little uncomfortable, but I didn't find better method. 
Possible option is to remove those links from git repository and add them to .gitignore, but by using it the 'offline version' wouldn't be functional out of the box (checkout). -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0243-1-Minimal-Dojo-layer.patch Type: text/x-patch Size: 82268 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 17 13:40:57 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 17 Jan 2013 14:40:57 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F7E4A5.5000400@redhat.com> References: <50F6CB11.4010008@redhat.com> <50F6EBE5.9060307@redhat.com> <50F7B134.4000201@redhat.com> <50F7E4A5.5000400@redhat.com> Message-ID: <50F7FF69.8010509@redhat.com> On 17.1.2013 12:46, Petr Viktorin wrote: > On 01/17/2013 09:07 AM, Jan Cholasta wrote: >> >> While this works for dict, I'm not sure if it applies to *all* dict-like >> classes that we use. > > I don't think we have any classes where it doesn't apply. > >> Once we completely get rid of entry tuples, we can reintroduce a >> dict-like __iter__. IMO new code should not be punished (i.e. forced to >> use .keys()) for the crimes (i.e. tuples) of the old code. > > Yes, eventually. Though I'd like it to raise an exception for some time, > so any external plugins that rely on it fail rather than get bad data. OK. But I think external plugins will break either-way, as we don't really maintain backward compatibility in our internal APIs. > >> We should decided whether we want to use entry['dn'] or entry.dn and use >> only that in new code. I like entry.dn more, as it better corresponds to >> the special meaning of dn in entries. > > I also like entry.dn, at least in most cases, but if we don't > synchronize them then we need to remove entry['dn'] completely. Having > misleading/stale data in the object isn't good. > There are some cases where entry['dn'] makes sense, such as iterating > over all attributes. > If we're going to have validation for every attribute anyway, I don't > think synchronizing the two would be a major problem. Especially since > entry.dn is already a property. > OK, we will deal with that in further patches. Updated patch attached. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-93.1-Add-custom-mapping-object-for-LDAP-entry-data.patch Type: text/x-patch Size: 18842 bytes Desc: not available URL: From simo at redhat.com Thu Jan 17 13:54:02 2013 From: simo at redhat.com (Simo Sorce) Date: Thu, 17 Jan 2013 08:54:02 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F742D4.6010800@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <1358359043.4590.50.camel@willson.li.ssimo.org> <50F742D4.6010800@redhat.com> Message-ID: <1358430842.4590.74.camel@willson.li.ssimo.org> On Thu, 2013-01-17 at 01:16 +0100, Tomas Babej wrote: > On 01/16/2013 06:57 PM, Simo Sorce wrote: > > On Wed, 2013-01-16 at 18:32 +0100, Tomas Babej wrote: > > > >>>> They all use ipadb_ldap_attr_to_time_t() to get their values, > >>>> so the following addition to the patch should be sufficient. > >>> It will break dates for other users of the function that do not need to > >>> artificially limit the results. Please add a new function. > >>> > >>> Simo. > >>> > >> Done. > >> > >> Tomas > > ACK > > > > Simo > > > Self-ACK-deny, I just realized that I forgot to change the function calls. Doh! Ack to the new one. Simo. -- Simo Sorce * Red Hat, Inc * New York From tbabej at redhat.com Thu Jan 17 14:29:37 2013 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 17 Jan 2013 15:29:37 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F74C57.6030604@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> Message-ID: <50F80AD1.20408@redhat.com> On 01/17/2013 01:56 AM, Dmitri Pal wrote: > On 01/16/2013 12:32 PM, Tomas Babej wrote: >> On 01/16/2013 06:01 PM, Simo Sorce wrote: >>> On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: >>>> On 01/16/2013 02:47 PM, Simo Sorce wrote: >>>>> On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: >>>>>> On 01/15/2013 11:55 PM, Simo Sorce wrote: >>>>>>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >>>>>>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>>>>>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden wrote: >>>>>>>>>> Dmitri Pal wrote: >>>>>>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej wrote: >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> Since in Kerberos V5 are used 32-bit unix timestamps, setting >>>>>>>>>>>>> maxlife in pwpolicy to values such as 9999 days would cause >>>>>>>>>>>>> integer overflow in krbPasswordExpiration attribute. 
>>>>>>>>>>>>> >>>>>>>>>>>>> This would result into unpredictable behaviour such as users >>>>>>>>>>>>> not being able to log in after password expiration if >>>>>>>>>>>>> password >>>>>>>>>>>>> policy was changed (#3114) or new users not being able to log >>>>>>>>>>>>> in at all (#3312). >>>>>>>>>>>>> >>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>>>>>>>> Given that we control the KDC LDAP driver I think we should >>>>>>>>>>>> not limit >>>>>>>>>>>> the time in LDAP but rather 'fix-it-up' for the KDC in the >>>>>>>>>>>> DAL driver. >>>>>>>>>>> Fix how? Truncate to max in the driver itself if it was >>>>>>>>>>> entered beyond max? >>>>>>>>>>> Shouldn't we also prevent entering the invalid value into >>>>>>>>>>> the attribute? >>>>>>>>>>> >>>>>>>>>> I've been mulling the same question for a while. Why would we >>>>>>>>>> want to >>>>>>>>>> let bad data get into the directory? >>>>>>>>> It is not bad data and the attribute holds a Generalize time >>>>>>>>> date. >>>>>>>>> >>>>>>>>> The data is valid it's the MIT code that has a limitation in >>>>>>>>> parsing it. >>>>>>>>> >>>>>>>>> Greg tells me he plans supporting additional time by using the >>>>>>>>> 'negative' part of the integer to represent the years beyond >>>>>>>>> 2038. >>>>>>>>> >>>>>>>>> So we should represent data in the directory correctly, which >>>>>>>>> means >>>>>>>>> whtever date in the future and only chop it when feeding MIT >>>>>>>>> libraries >>>>>>>>> until they support the additional range at which time we will >>>>>>>>> change and >>>>>>>>> chop further in the future (around 2067 or so). >>>>>>>>> >>>>>>>>> If we chopped early in the directory we'd not be able to properly >>>>>>>>> represent/change rapresentation later when MIT libs gain >>>>>>>>> additional >>>>>>>>> range capabilities. >>>>>>>>> >>>>>>>>> Simo. >>>>>>>>> >>>>>>>> We would have to change our code either way and the amount of >>>>>>>> change >>>>>>>> will be similar so does it really matter? >>>>>>> Yes it really matters IMO. >>>>>>> >>>>>>> Simo. >>>>>>> >>>>>> Updated patch attached. >>>>> This part looks ok but I think you also need to properly set >>>>> krb5_db_entry-> {expiration, pw_expiration, last_success, >>>>> last_failed} >>>>> in ipadb_parse_ldap_entry() >>>>> >>>>> Perhaps the best way is to introduce a new function >>>>> ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so that >>>>> you do >>>>> all the overflow checkings once. >>>>> >>>>> Simo. >>>>> >>>> They all use ipadb_ldap_attr_to_time_t() to get their values, >>>> so the following addition to the patch should be sufficient. >>> It will break dates for other users of the function that do not need to >>> artificially limit the results. Please add a new function. >>> >>> Simo. >>> >> Done. >> >> Tomas >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel > Nack from me, sorry. > > +int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, > + char *attrname, time_t *result) > +{ > + int ret = ipadb_ldab_attr_to_time_t(lcontext, le, > + attrname, result); > > This function converts the time from the LDAP time by reading the string into the struct elements and then constructing time by using timegm() which is really a wrapper around mktime(). 
> According to mktime() man page: > > If the specified broken-down time cannot be represented as calendar time (seconds since > the Epoch), mktime() returns a value of (time_t) -1 and does not alter the members of the > broken-down time structure. > > However it might not be the case so it would be nice to check if this function actually returns some negative value other than -1 if the time is overflown. > Regardles of that part if it always returns -1 or can return other negative values like -2 or -3 the check below would produce positive result thus causing the function to return whatever is currently in the *result. I double checked the behaviour. It holds that mktime() and timegm() behave the same way, up to time-zone difference. I don't know whether the man page is not correct, or the implementation in the standard library is not compliant, however, mktime() never returns -1 as return value (if it was not given tm struct which refers to 31 Dec 1969 29:59:59). I guess the implementation was changed as there would be no way how to distinguish between correct output of 31 Dec 1969 29:59:59 and incorrect output. I tested both incorrect calendar days (like 14th month) and dates behind year 2038. As expected, dates after the end of unix epoch overflow big time (values such as -2143152000). Incorrect dates just get converted into correct ones - 14th month was interpreted as +1 year +3 months. > IMO the whole fix should be implemented inside the function above when the conversion to time_t happens so that it would never produce a negative result. Simo had objections to this approach since this would limit the other uses of ipadb_ldap_attr_to_time_t() function. > My suggestion would be before calling timegm() to test if the year, month, and day returned by the break-down structure contain the day after the last day of epoch and then just use use last day of epoc without even calling timegm(). Indeed it would be simpler approach. However, for the current solution to malfunction (overflow back to the positive values), the date would have to be set to something beyond year ~2100. That would correspond to maxlife of ~32 000 days with present dates. I guess there might be admins setting crazy values like that, I'll send updated patch. > + > + /* in the case of integer owerflow, set result to IPAPWD_END_OF_TIME */ > + if ((*result+86400) < 0) { > > > + *result = IPAPWD_END_OF_TIME; // 1 Jan 2038, 00:00 GMT > + } > + > + return ret; > +} > + Tomas > > -- > Thank you, > Dmitri Pal > > Sr. Engineering Manager for IdM portfolio > Red Hat Inc. > > > ------------------------------- > Looking to carve out IT costs? > www.redhat.com/carveoutcosts/ > > > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From rcritten at redhat.com Thu Jan 17 15:12:03 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 17 Jan 2013 10:12:03 -0500 Subject: [Freeipa-devel] [PATCH] 0003 Add crond as a default HBAC service In-Reply-To: <20130115115651.GA28090@redhat.com> References: <50F53BEE.7020306@redhat.com> <20130115115651.GA28090@redhat.com> Message-ID: <50F814C3.3090502@redhat.com> Alexander Bokovoy wrote: > On Tue, 15 Jan 2013, Ana Krivokapic wrote: >> crond was not included in the list of default HBAC services - it >> needed to be added manually. 
As crond is a commonly used service, it >> is now included as a default HBAC service. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/3215 > ACK. > > Simple and obvious. Pushed to master rob > >> I'm not sure whether a design document is needed for this ticket. It >> _is_ marked as an RFE, but it is a pretty simple and specific change. >> After yesterday's meeting, I'm not really clear on whether a design >> page is needed in cases like this. Thoughts? > I suggested that additions like this do not need specific design > document. > > Since this is the whole patch: >> --- a/install/updates/50-hbacservice.update >> +++ b/install/updates/50-hbacservice.update >> @@ -1,3 +1,10 @@ >> +dn: cn=crond,cn=hbacservices,cn=hbac,$SUFFIX >> +default:objectclass: ipahbacservice >> +default:objectclass: ipaobject >> +default:cn: crond >> +default:description: crond >> +default:ipauniqueid:autogenerate >> + >> dn: cn=vsftpd,cn=hbacservices,cn=hbac,$SUFFIX >> default:objectclass: ipahbacservice >> default:objectclass: ipaobject > .. I would simply add a link to the mail archive of this thread to the > ticket itself. Mailing archive of freeipa-devel@ is as stable as our > wiki, if not more resilient. ;) > From rcritten at redhat.com Thu Jan 17 15:15:57 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 17 Jan 2013 10:15:57 -0500 Subject: [Freeipa-devel] [PATCH] 349 Test NetBIOS name clash before creating a trust In-Reply-To: <50F03223.8070802@redhat.com> References: <50F03223.8070802@redhat.com> Message-ID: <50F815AD.2030908@redhat.com> Martin Kosek wrote: > Give a clear message about what is wrong with current Trust settings > before letting AD to return a confusing error message. > > https://fedorahosted.org/freeipa/ticket/3193 I hate being picky over wording but... I think it would read better if you replaced 'this' with 'The IPA server' or 'IPA' or something like that. rob From pviktori at redhat.com Thu Jan 17 15:17:59 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 17 Jan 2013 16:17:59 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F7FF69.8010509@redhat.com> References: <50F6CB11.4010008@redhat.com> <50F6EBE5.9060307@redhat.com> <50F7B134.4000201@redhat.com> <50F7E4A5.5000400@redhat.com> <50F7FF69.8010509@redhat.com> Message-ID: <50F81627.3020703@redhat.com> On 01/17/2013 02:40 PM, Jan Cholasta wrote: > On 17.1.2013 12:46, Petr Viktorin wrote: >> On 01/17/2013 09:07 AM, Jan Cholasta wrote: >>> >>> While this works for dict, I'm not sure if it applies to *all* dict-like >>> classes that we use. >> >> I don't think we have any classes where it doesn't apply. >> >>> Once we completely get rid of entry tuples, we can reintroduce a >>> dict-like __iter__. IMO new code should not be punished (i.e. forced to >>> use .keys()) for the crimes (i.e. tuples) of the old code. >> >> Yes, eventually. Though I'd like it to raise an exception for some time, >> so any external plugins that rely on it fail rather than get bad data. > > OK. But I think external plugins will break either-way, as we don't > really maintain backward compatibility in our internal APIs. > >> >>> We should decided whether we want to use entry['dn'] or entry.dn and use >>> only that in new code. I like entry.dn more, as it better corresponds to >>> the special meaning of dn in entries. >> >> I also like entry.dn, at least in most cases, but if we don't >> synchronize them then we need to remove entry['dn'] completely. 
Having >> misleading/stale data in the object isn't good. >> There are some cases where entry['dn'] makes sense, such as iterating >> over all attributes. >> If we're going to have validation for every attribute anyway, I don't >> think synchronizing the two would be a major problem. Especially since >> entry.dn is already a property. >> > > OK, we will deal with that in further patches. > > Updated patch attached. > > Honza > ACK. I'll start working on top of this. -- Petr? From simo at redhat.com Thu Jan 17 16:18:39 2013 From: simo at redhat.com (Simo Sorce) Date: Thu, 17 Jan 2013 11:18:39 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50F80AD1.20408@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> Message-ID: <1358439519.20683.14.camel@willson.li.ssimo.org> On Thu, 2013-01-17 at 15:29 +0100, Tomas Babej wrote: > On 01/17/2013 01:56 AM, Dmitri Pal wrote: > > > On 01/16/2013 12:32 PM, Tomas Babej wrote: > > > On 01/16/2013 06:01 PM, Simo Sorce wrote: > > > > On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: > > > > > On 01/16/2013 02:47 PM, Simo Sorce wrote: > > > > > > On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: > > > > > > > On 01/15/2013 11:55 PM, Simo Sorce wrote: > > > > > > > > On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: > > > > > > > > > On 01/15/2013 03:59 PM, Simo Sorce wrote: > > > > > > > > > > On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden > > > > > > > > > > wrote: > > > > > > > > > > > Dmitri Pal wrote: > > > > > > > > > > > > On 01/15/2013 08:48 AM, Simo Sorce wrote: > > > > > > > > > > > > > On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej > > > > > > > > > > > > > wrote: > > > > > > > > > > > > > > Hi, > > > > > > > > > > > > > > > > > > > > > > > > > > > > Since in Kerberos V5 are used 32-bit unix > > > > > > > > > > > > > > timestamps, setting > > > > > > > > > > > > > > maxlife in pwpolicy to values such as 9999 > > > > > > > > > > > > > > days would cause > > > > > > > > > > > > > > integer overflow in krbPasswordExpiration > > > > > > > > > > > > > > attribute. > > > > > > > > > > > > > > > > > > > > > > > > > > > > This would result into unpredictable > > > > > > > > > > > > > > behaviour such as users > > > > > > > > > > > > > > not being able to log in after password > > > > > > > > > > > > > > expiration if password > > > > > > > > > > > > > > policy was changed (#3114) or new users not > > > > > > > > > > > > > > being able to log > > > > > > > > > > > > > > in at all (#3312). > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://fedorahosted.org/freeipa/ticket/3312 > > > > > > > > > > > > > > https://fedorahosted.org/freeipa/ticket/3114 > > > > > > > > > > > > > Given that we control the KDC LDAP driver I > > > > > > > > > > > > > think we should not limit > > > > > > > > > > > > > the time in LDAP but rather 'fix-it-up' for > > > > > > > > > > > > > the KDC in the DAL driver. > > > > > > > > > > > > Fix how? 
Truncate to max in the driver itself if > > > > > > > > > > > > it was entered beyond max? > > > > > > > > > > > > Shouldn't we also prevent entering the invalid > > > > > > > > > > > > value into the attribute? > > > > > > > > > > > > > > > > > > > > > > > I've been mulling the same question for a while. > > > > > > > > > > > Why would we want to > > > > > > > > > > > let bad data get into the directory? > > > > > > > > > > It is not bad data and the attribute holds a > > > > > > > > > > Generalize time date. > > > > > > > > > > > > > > > > > > > > The data is valid it's the MIT code that has a > > > > > > > > > > limitation in parsing it. > > > > > > > > > > > > > > > > > > > > Greg tells me he plans supporting additional time by > > > > > > > > > > using the > > > > > > > > > > 'negative' part of the integer to represent the > > > > > > > > > > years beyond 2038. > > > > > > > > > > > > > > > > > > > > So we should represent data in the directory > > > > > > > > > > correctly, which means > > > > > > > > > > whtever date in the future and only chop it when > > > > > > > > > > feeding MIT libraries > > > > > > > > > > until they support the additional range at which > > > > > > > > > > time we will change and > > > > > > > > > > chop further in the future (around 2067 or so). > > > > > > > > > > > > > > > > > > > > If we chopped early in the directory we'd not be > > > > > > > > > > able to properly > > > > > > > > > > represent/change rapresentation later when MIT libs > > > > > > > > > > gain additional > > > > > > > > > > range capabilities. > > > > > > > > > > > > > > > > > > > > Simo. > > > > > > > > > > > > > > > > > > > We would have to change our code either way and the > > > > > > > > > amount of change > > > > > > > > > will be similar so does it really matter? > > > > > > > > Yes it really matters IMO. > > > > > > > > > > > > > > > > Simo. > > > > > > > > > > > > > > > Updated patch attached. > > > > > > This part looks ok but I think you also need to properly > > > > > > set > > > > > > krb5_db_entry-> {expiration, pw_expiration, last_success, > > > > > > last_failed} > > > > > > in ipadb_parse_ldap_entry() > > > > > > > > > > > > Perhaps the best way is to introduce a new function > > > > > > ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so > > > > > > that you do > > > > > > all the overflow checkings once. > > > > > > > > > > > > Simo. > > > > > > > > > > > They all use ipadb_ldap_attr_to_time_t() to get their values, > > > > > so the following addition to the patch should be sufficient. > > > > It will break dates for other users of the function that do not > > > > need to > > > > artificially limit the results. Please add a new function. > > > > > > > > Simo. > > > > > > > Done. > > > > > > Tomas > > > > > > > > > _______________________________________________ > > > Freeipa-devel mailing list > > > Freeipa-devel at redhat.com > > > https://www.redhat.com/mailman/listinfo/freeipa-devel > > Nack from me, sorry. > > > > +int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, > > + char *attrname, time_t *result) > > +{ > > + int ret = ipadb_ldab_attr_to_time_t(lcontext, le, > > + attrname, result); > > > > This function converts the time from the LDAP time by reading the string into the struct elements and then constructing time by using timegm() which is really a wrapper around mktime(). 
> > According to mktime() man page: > > > > If the specified broken-down time cannot be represented as calendar time (seconds since > > the Epoch), mktime() returns a value of (time_t) -1 and does not alter the members of the > > broken-down time structure. > > > > However it might not be the case so it would be nice to check if this function actually returns some negative value other than -1 if the time is overflown. > > Regardles of that part if it always returns -1 or can return other negative values like -2 or -3 the check below would produce positive result thus causing the function to return whatever is currently in the *result. > I double checked the behaviour. It holds that mktime() and timegm() > behave the same way, up to time-zone difference. I don't know whether > the man page is not correct, > or the implementation in the standard library is not compliant, > however, mktime() never returns -1 as return value (if it was not > given tm struct which refers to 31 Dec 1969 29:59:59). > > I guess the implementation was changed as there would be no way how to > distinguish between correct output of 31 Dec 1969 29:59:59 and > incorrect output. > > I tested both incorrect calendar days (like 14th month) and dates > behind year 2038. As expected, dates after the end of unix epoch > overflow big time (values such as -2143152000). > Incorrect dates just get converted into correct ones - 14th month was > interpreted as +1 year +3 months. > > > IMO the whole fix should be implemented inside the function above when the conversion to time_t happens so that it would never produce a negative result. > Simo had objections to this approach since this would limit the other > uses of ipadb_ldap_attr_to_time_t() function. > > My suggestion would be before calling timegm() to test if the year, month, and day returned by the break-down structure contain the day after the last day of epoch and then just use use last day of epoc without even calling timegm(). > Indeed it would be simpler approach. However, for the current solution > to malfunction (overflow back to the positive values), the date would > have to be set to something beyond year ~2100. > That would correspond to maxlife of ~32 000 days with present dates. > > I guess there might be admins setting crazy values like that, I'll > send updated patch. > > > + > > + /* in the case of integer owerflow, set result to IPAPWD_END_OF_TIME */ > > + if ((*result+86400) < 0) { > > > > > > + *result = IPAPWD_END_OF_TIME; // 1 Jan 2038, 00:00 GMT > > + } > > + > > + return ret; > > +} > > + Ok this mail made me look again in the issue. I see 2 problems here. 1) platform dependent issues. On i686 time_t is a signed 32bit integer (int) On x86_64 time_t is a signed 64bit integer (long long) So when you test you need to be aware on what platform you are testing in order to know what to expect. 2) The current patch returns time_t *result for the new wrapper function which is wrong, it shoul return krb5_timestamp as the type. The actual test done in the code looks ok but only if you think time_t is a 32bit signed integer. In that case it will overflow. But on x86_64 it will not. Sorry for not catching it earlier. 
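To make the 32-bit overflow concrete, a quick arithmetic check (Python used purely for illustration; the same truncation is what happens when the value is squeezed into a signed 32-bit time_t or krb5_timestamp, and the numbers are easy to verify by hand):

    import calendar

    # A far-future krbPasswordExpiration, e.g. 2100-01-01 00:00:00 UTC,
    # i.e. the value timegm() produces from the broken-down GeneralizedTime.
    t = calendar.timegm((2100, 1, 1, 0, 0, 0, 0, 0, 0))
    print(t)                              # 4102444800 -- fits fine in a 64-bit time_t

    # Truncating it into a signed 32-bit slot does not fail,
    # it silently wraps around to a negative value:
    print((t + 2**31) % 2**32 - 2**31)    # -192522496
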
So the way to handle this is to actually check this is to change the wrapper function to look like this: int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, char *attrname, krb5_timestamp *result) { time_t restime; long long reslong; int ret = ipadb_ldab_attr_to_time_t(lcontext, le, attrname, restime); if (ret) return ret; reslong = restime; // <- this will cast correctly maintaing sign to a 64bit variable if (reslong < 0 || reslong > IPAPWD_END_OF_TIME) { *result = IPAPWD_END_OF_TIME; } else { *result = (krb5_timestamp)reslong; } return 0; } All calls in ipadb_parse_ldap_entry() that expects a singed 32 bit time as output should be hanged to use ipadb_ldap_attr_to_krb5_timestamp() This includes 2 additional calls Tomas pointed to me on a IRC conversation. So Nack again :-/ Simo. -- Simo Sorce * Red Hat, Inc * New York From pvoborni at redhat.com Thu Jan 17 16:23:59 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 17 Jan 2013 17:23:59 +0100 Subject: [Freeipa-devel] [PATCH] 253 Enable mod_deflate In-Reply-To: <50ED983F.6040705@redhat.com> References: <50EC4DE0.1070100@redhat.com> <50EC5026.7050109@redhat.com> <50EC51D8.7060609@redhat.com> <50ED1F47.2050405@redhat.com> <50ED4D75.9040103@redhat.com> <50ED6B6B.5010406@redhat.com> <50ED983F.6040705@redhat.com> Message-ID: <50F8259F.2020308@redhat.com> On 01/09/2013 05:18 PM, Petr Viktorin wrote: > On 01/09/2013 02:06 PM, Petr Vobornik wrote: >> On 01/09/2013 11:59 AM, Petr Viktorin wrote: >>> On 01/09/2013 08:41 AM, Martin Kosek wrote: >>>> On 01/08/2013 06:05 PM, Petr Vobornik wrote: >>>>> On 01/08/2013 05:58 PM, Rob Crittenden wrote: >>>>>> Petr Vobornik wrote: >>>>>>> Design page: http://www.freeipa.org/page/V3/WebUI_gzip_compression >>>>>>> >>>>>>> Enabled mod_deflate for: >>>>>>> * text/html (HTML files) >>>>>>> * text/plain (for future use) >>>>>>> * text/css (CSS files) >>>>>>> * text/xml (XML RPC) >>>>>>> * application/javascript (JavaScript files) >>>>>>> * application/json (JSON RPC) >>>>>>> * application/x-font-woff (woff fonts) >>>>>>> >>>>>>> Added proper mime type for woff fonts. >>>>>>> Disabled etag header because it doesn't work with mod_deflate. >>>>>>> >>>>>>> https://fedorahosted.org/freeipa/ticket/3326 >>>>>> >>>>>> Should this be enabled on upgrades as well? >>>>> >>>>> Yes, I don't see a reason not to. >>>> >>>> This should be enabled on upgrades as is, since Petr bumped VERSION in >>>> install/conf/ipa.conf. >>>> >>>> We should carefully check that enabling it also for xmlrpc/json does >>>> not cause >>>> any grief. >>>> >>>> Martin >>>> >>> >>> HTTP libraries won't ask for gzip if they can't handle it, so there >>> shouldn't be any grief. >>> I tested the UI, installing client & replica, and the CLI tool. All work >>> fine. >>> >>> Just one thing: WOFF is already compressed so we shouldn't gzip it >>> again. >>> >> >> Thanks. Compression for application/x-font-woff removed. Updated patch >> attached. >> > > ACK > > Pushed to master -- Petr Vobornik From pviktori at redhat.com Thu Jan 17 17:27:52 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 17 Jan 2013 18:27:52 +0100 Subject: [Freeipa-devel] [PATCHES] 120-126 LDAP code refactoring (part 1) Message-ID: <50F83498.9050104@redhat.com> Hello, This is the first batch of changes aimed to consolidate our LDAP code. Each should be a self-contained change that doesn't break anything. 
These patches do some general cleanup (some of the changes might seem trivial but help a lot when grepping through the code); merge the common parts LDAPEntry, Entry and Entity classes; and move stuff that depends on an installed server out of IPASimpleLDAPObject and SchemaCache. I'm posting them early so you can see where I'm going, and so you can find out if your work will conflict with mine. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0120-Remove-some-unused-imports.patch Type: text/x-patch Size: 11281 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0121-Remove-unused-methods-from-Entry-Entity-and-IPAdmin.patch Type: text/x-patch Size: 9346 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0122-Derive-Entity-class-from-Entry-and-move-it-to-ldapup.patch Type: text/x-patch Size: 10172 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0123-Use-explicit-loggers-in-ldap2-code.patch Type: text/x-patch Size: 10234 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0124-Move-LDAPEntry-to-ipaserver.ipaldap-and-derive-Entry.patch Type: text/x-patch Size: 15970 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0125-Remove-connection-creating-code-from-ShemaCache.patch Type: text/x-patch Size: 3993 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0126-Move-the-decision-to-force-schema-updates-out-of-IPA.patch Type: text/x-patch Size: 4820 bytes Desc: not available URL: From edewata at redhat.com Fri Jan 18 02:11:46 2013 From: edewata at redhat.com (Endi Sukma Dewata) Date: Fri, 18 Jan 2013 09:11:46 +0700 Subject: [Freeipa-devel] [PATCH] 240-252 AMD modules and Web UI build In-Reply-To: <50F7F635.6020609@redhat.com> References: <50EC4D5B.5080501@redhat.com> <50F76F07.6030405@redhat.com> <50F7F635.6020609@redhat.com> Message-ID: <50F8AF62.7030809@redhat.com> On 1/17/2013 8:01 PM, Petr Vobornik wrote: > On 01/17/2013 04:24 AM, Endi Sukma Dewata wrote: >> Nice work! They seem to be working fine so it's ACKed. > > I found a little error - there is a jsl problem in dojo.profile:86 - > comma at the end of a list. Updated patch 243 attached. ACK. >> I have some questions though: >> >> 1. Patch #241 includes another patch >> (001-dojo-build-pvoborni-01-Make-dojo-builder-buildable-by-itself.patch). >> Is >> there an instruction how and where to use the patch? > > The patch is automatically applied by dojo-prep.js (with --all option) > when one wants to build the builder. We might consider to send this > patch to dojo upstream (requires CLA). > > The use case is described here: > http://www.freeipa.org/page/V3/WebUI_build#Make_new_Builder_build OK, got it. >> 2. Is there a way to disable uglify in case we need to debug with >> Firebug? > > Currently no. I found myself that I have needed it only when I was > trying to figure out what is the output of the builder or some build > debugging (what modules are actually used). > > What's your use case with Firebug? 
If I want to debug, I would use plain > source codes and send it to the test server [1] or I would use the local > file modes [2]. The output of the builder is quite ugly to debug. > > If it's really useful we might add some option to make-ui.sh, should be > easy. > > [1] > http://www.freeipa.org/page/V3/WebUI_build#Copy_source_codes_of_FreeIPA_layer_on_test_server > > [2] > http://www.freeipa.org/page/V3/WebUI_build#Set_environment_to_debug_source_codes_of_FreeIPA_layer_using_offline_version I guess what I'm looking for is a way to troubleshoot using Firebug at a customer's environment who's using the compiled code on a live server. I suppose we can ask the customer to install the source code, then run sync.sh to install the sources. But is there a way to clean up the machine and switch back to the compiled code after we're done troubleshooting? The sync.sh --compiled or --clean doesn't seem to do it. >> 3. Is it possible to set breakpoints in AMD modules in Firebug, for >> example line 44 in widget.js? > > Yes. OK, after doing sync.sh --freeipa I was able to see the sources in Firebug and set breakpoints. We might want to include this in a troubleshooting guide (if it's not already there). >> 4. Calling change-profile.sh allsource modifies the install/ui/js/dojo. >> Should they be included in .gitignore? Or is there a way to select the >> profile without modifying any files included in git (e.g. using >> parameter)? The concern is that the changes could accidentally get >> checked in and affect the official build. > > change-profile.sh has option --git-ignore which marks those symbolic > links with 'git update-index --assume-unchanged ' which should prevent > this issue. Option --git-undo removes this mark. > > It might be little uncomfortable, but I didn't find better method. > Possible option is to remove those links from git repository and add > them to .gitignore, but by using it the 'offline version' wouldn't be > functional out of the box (checkout). Are the links used by the offline version only, or would it also affect live server deployed from the RPM that includes modified links? -- Endi S. Dewata From alee at redhat.com Fri Jan 18 03:24:32 2013 From: alee at redhat.com (Ade Lee) Date: Thu, 17 Jan 2013 22:24:32 -0500 Subject: [Freeipa-devel] [Fwd: [Pki-devel] Announcing the release of Dogtag 10] Message-ID: <1358479473.4296.79.camel@aleeredhat.laptop> -------------- next part -------------- An embedded message was scrubbed... From: Ade Lee Subject: [Pki-devel] Announcing the release of Dogtag 10 Date: Thu, 17 Jan 2013 22:22:45 -0500 Size: 10751 URL: From alee at redhat.com Fri Jan 18 03:24:58 2013 From: alee at redhat.com (Ade Lee) Date: Thu, 17 Jan 2013 22:24:58 -0500 Subject: [Freeipa-devel] [Fwd: [Pki-devel] Announcing Dogtag 10.0.1 for pki-core and dogtag-pki] Message-ID: <1358479499.4296.80.camel@aleeredhat.laptop> -------------- next part -------------- An embedded message was scrubbed... 
From: Ade Lee Subject: [Pki-devel] Announcing Dogtag 10.0.1 for pki-core and dogtag-pki Date: Thu, 17 Jan 2013 22:23:12 -0500 Size: 5420 URL: From pvoborni at redhat.com Fri Jan 18 14:16:04 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 18 Jan 2013 15:16:04 +0100 Subject: [Freeipa-devel] [PATCH] 240-252 AMD modules and Web UI build In-Reply-To: <50F8AF62.7030809@redhat.com> References: <50EC4D5B.5080501@redhat.com> <50F76F07.6030405@redhat.com> <50F7F635.6020609@redhat.com> <50F8AF62.7030809@redhat.com> Message-ID: <50F95924.5070800@redhat.com> On 01/18/2013 03:11 AM, Endi Sukma Dewata wrote: > On 1/17/2013 8:01 PM, Petr Vobornik wrote: >> On 01/17/2013 04:24 AM, Endi Sukma Dewata wrote: >>> Nice work! They seem to be working fine so it's ACKed. >> >> I found a little error - there is a jsl problem in dojo.profile:86 - >> comma at the end of a list. Updated patch 243 attached. > > ACK. Pushed to master. We can polish the dev tools later. > >>> 2. Is there a way to disable uglify in case we need to debug with >>> Firebug? >> >> Currently no. I found myself that I have needed it only when I was >> trying to figure out what is the output of the builder or some build >> debugging (what modules are actually used). >> >> What's your use case with Firebug? If I want to debug, I would use plain >> source codes and send it to the test server [1] or I would use the local >> file modes [2]. The output of the builder is quite ugly to debug. >> >> If it's really useful we might add some option to make-ui.sh, should be >> easy. >> >> [1] >> http://www.freeipa.org/page/V3/WebUI_build#Copy_source_codes_of_FreeIPA_layer_on_test_server >> >> >> [2] >> http://www.freeipa.org/page/V3/WebUI_build#Set_environment_to_debug_source_codes_of_FreeIPA_layer_using_offline_version >> > > I guess what I'm looking for is a way to troubleshoot using Firebug at a > customer's environment who's using the compiled code on a live server. Chrome has a 'pretty print' feature which makes the compiled code somehow readable - much better than single line. Example: }, r.reset = function() { delete r.selected_values, r.external_radio && r.external_radio.prop("checked", !1), r.external_text && r.external_text.val("") }, r.save = function(e) { if (r.selected_values && r.selected_values.length) { var t = r.selected_values[0]; r.external_radio && t === r.external_radio.val() ? e[r.name] = r.external_text.val() : e[r.name] = t } Using just a built, not compiled version might be even more readable, but in that case we can use sources codes as well. > I suppose we can ask the customer to install the source code, then run > sync.sh to install the sources. But is there a way to clean up the > machine and switch back to the compiled code after we're done > troubleshooting? The sync.sh --compiled or --clean doesn't seem to do it. You have to build the UI first. --clean only deletes files in certain directory --compiled switches from sources to compiled versions, can be also used with --dojo So the correct command to replace sources with just app.js is: $ util/make-ui.sh && util/sync.sh --host root at test.example.com --freeipa --compiled --clean or shorter: $ util/make-ui.sh && util/sync.sh -fcC -h root at test.example.com We might polish the arguments if you think they are not easy to understand. I assumed that it's better to create more general synch util and then let developers make their own aliases for most often used commands+args - mostly to avoid writing hostname. I'm not against exending the synch util though. > >>> 3. 
Is it possible to set breakpoints in AMD modules in Firebug, for >>> example line 44 in widget.js? >> >> Yes. > > OK, after doing sync.sh --freeipa I was able to see the sources in > Firebug and set breakpoints. We might want to include this in a > troubleshooting guide (if it's not already there). This guide: http://www.freeipa.org/page/TroubleshootingGuide ? We might polish #2 and then I can write something down. > >>> 4. Calling change-profile.sh allsource modifies the install/ui/js/dojo. >>> Should they be included in .gitignore? Or is there a way to select the >>> profile without modifying any files included in git (e.g. using >>> parameter)? The concern is that the changes could accidentally get >>> checked in and affect the official build. >> >> change-profile.sh has option --git-ignore which marks those symbolic >> links with 'git update-index --assume-unchanged ' which should prevent >> this issue. Option --git-undo removes this mark. >> >> It might be little uncomfortable, but I didn't find better method. >> Possible option is to remove those links from git repository and add >> them to .gitignore, but by using it the 'offline version' wouldn't be >> functional out of the box (checkout). > > Are the links used by the offline version only, or would it also affect > live server deployed from the RPM that includes modified links? > Symbolic links are just for offline version. Deployed RPM has all code directly in subdirs of js directory. Basically the sym links are there just to solve the switching problem - we don't want to copy files around in dev tree. For testing on a machine, mapping from 'build' or 'src' directory solves the sync util. For deployment it's done by makefiles. -- Petr Vobornik From mkosek at redhat.com Fri Jan 18 17:24:49 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 18 Jan 2013 18:24:49 +0100 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command Message-ID: <50F98561.1040503@redhat.com> How this works: 1. When a trusted domain user is tested, AD GC is searched for the user entry Distinguished Name 2. The user entry is then read from AD GC and its SID and SIDs of all its assigned groups (tokenGroups attribute) are retrieved 3. The SIDs are then used to search IPA LDAP database to find all external groups which have any of these SIDs as external members 4. All these groups having these groups as direct or indirect members are added to hbactest allowing it to perform the search LIMITATIONS: - user SID in hbactest --user parameter is not supported - only Trusted Admins group members can use this function as it uses secret for IPA-Trusted domain link - List of group SIDs does not contain group memberships outside of the trusted domain https://fedorahosted.org/freeipa/ticket/2997 ------------------------ There are also 2 patches changing current dcerpc.py code to make it usable both for group-add-member trusted domain user resolution and also for purposes of the trusted domain user hbactest. 
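Schematically, step 3 above boils down to a single LDAP search of the IPA tree for external groups that carry any of the collected SIDs as external members. An illustrative filter (not copied from the patch; ipaExternalGroup/ipaExternalMember are the schema names used for external group members) could look like:

    (&(objectClass=ipaExternalGroup)
      (|(ipaExternalMember=S-1-5-21-3035198329-144811719-1378114514-512)
        (ipaExternalMember=<SID of each additional group from tokenGroups>)))
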
Example of the new hbactest ability: # ipa hbacrule-show can_login Rule name: can_login Host category: all Source host category: all Enabled: TRUE User Groups: admins, ad_test_admins Services: login, sshd # ipa group-show ad_test_admins Group name: ad_test_admins Description: AD.TEST admins GID: 179000011 Member groups: ext_all_admins Member of HBAC rule: can_login # ipa group-show ext_all_admins Group name: ext_all_admins Description: All AD.TEST admins Member of groups: ad_test_admins Indirect Member of HBAC rule: can_login External member: S-1-5-21-3035198329-144811719-1378114514-512 # ipa hbactest --user='AD\Administrator' --host=`hostname` --service=sshd -------------------- Access granted: True -------------------- Matched rules: can_login There may still be dragons, I am sending what I have now, still need to run more tests. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-352-generalize-ad-gc-search.patch Type: text/x-patch Size: 10514 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-353-do-not-hide-sid-resolver-error-in-group-add-member.patch Type: text/x-patch Size: 1378 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-354-add-support-for-ad-users-to-hbactest-command.patch Type: text/x-patch Size: 8041 bytes Desc: not available URL: From mkosek at redhat.com Fri Jan 18 17:27:48 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 18 Jan 2013 18:27:48 +0100 Subject: [Freeipa-devel] [PATCH] 349 Test NetBIOS name clash before creating a trust In-Reply-To: <50F815AD.2030908@redhat.com> References: <50F03223.8070802@redhat.com> <50F815AD.2030908@redhat.com> Message-ID: <50F98614.3040505@redhat.com> On 01/17/2013 04:15 PM, Rob Crittenden wrote: > Martin Kosek wrote: >> Give a clear message about what is wrong with current Trust settings >> before letting AD to return a confusing error message. >> >> https://fedorahosted.org/freeipa/ticket/3193 > > I hate being picky over wording but... > > I think it would read better if you replaced 'this' with 'The IPA server' or > 'IPA' or something like that. > > rob > No worries, attaching a better worded version. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-349-2-test-netbios-name-clash-before-creating-a-trust.patch Type: text/x-patch Size: 1309 bytes Desc: not available URL: From simo at redhat.com Sat Jan 19 18:35:28 2013 From: simo at redhat.com (Simo Sorce) Date: Sat, 19 Jan 2013 13:35:28 -0500 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command In-Reply-To: <50F98561.1040503@redhat.com> References: <50F98561.1040503@redhat.com> Message-ID: <1358620528.20683.98.camel@willson.li.ssimo.org> On Fri, 2013-01-18 at 18:24 +0100, Martin Kosek wrote: > How this works: > 1. When a trusted domain user is tested, AD GC is searched > for the user entry Distinguished Name My head is not clear today but it looks to me you are doing 2 searches. One to go from samAccountName -> DNa dn then a second for DN -> SID. Why are you doing 2 searches ? The first one can return you the ObjectSid already. Simo. > 2. The user entry is then read from AD GC and its SID and SIDs > of all its assigned groups (tokenGroups attribute) are retrieved > 3. 
The SIDs are then used to search IPA LDAP database to find > all external groups which have any of these SIDs as external > members > 4. All these groups having these groups as direct or indirect > members are added to hbactest allowing it to perform the search > > LIMITATIONS: > - user SID in hbactest --user parameter is not supported > - only Trusted Admins group members can use this function as it > uses secret for IPA-Trusted domain link > - List of group SIDs does not contain group memberships outside > of the trusted domain > > https://fedorahosted.org/freeipa/ticket/2997 > > ------------------------ > > There are also 2 patches changing current dcerpc.py code to make it usable both > for group-add-member trusted domain user resolution and also for purposes of > the trusted domain user hbactest. > > Example of the new hbactest ability: > # ipa hbacrule-show can_login > Rule name: can_login > Host category: all > Source host category: all > Enabled: TRUE > User Groups: admins, ad_test_admins > Services: login, sshd > # ipa group-show ad_test_admins > Group name: ad_test_admins > Description: AD.TEST admins > GID: 179000011 > Member groups: ext_all_admins > Member of HBAC rule: can_login > # ipa group-show ext_all_admins > Group name: ext_all_admins > Description: All AD.TEST admins > Member of groups: ad_test_admins > Indirect Member of HBAC rule: can_login > External member: S-1-5-21-3035198329-144811719-1378114514-512 > > # ipa hbactest --user='AD\Administrator' --host=`hostname` --service=sshd > -------------------- > Access granted: True > -------------------- > Matched rules: can_login > > There may still be dragons, I am sending what I have now, still need to run > more tests. > > Martin > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -- Simo Sorce * Red Hat, Inc * New York From jdennis at redhat.com Mon Jan 21 15:48:21 2013 From: jdennis at redhat.com (John Dennis) Date: Mon, 21 Jan 2013 10:48:21 -0500 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50F6CB11.4010008@redhat.com> References: <50F6CB11.4010008@redhat.com> Message-ID: <50FD6345.6060105@redhat.com> On 01/16/2013 10:45 AM, Jan Cholasta wrote: > Hi, > > this patch adds initial support for custom LDAP entry objects, as > described in . > Just in case you missed it, I added some requirements to the above design page about making LDAP attributes and their values be "smarter". An LDAP attribute has a syntax defining how comparisons are to be performed. Python code using standard Python operators, sorting functions, etc. should "just work" because underneath the object is aware of it's LDAP syntax. The same holds true for attribute names, it should "just work" correctly any place we touch an attribute name because it's an object implementing the desired comparison and hashing behavior. Thus the keys in an Entry dict would need to be a new class and the values would need to be a new class as well. Simple strings do not give rich enough semantic behavior (we shouldn't be providing this semantic behavior every place in the code where we touch an attribute name or value, rather it should just automatically work using standard Python operators. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From pviktori at redhat.com Mon Jan 21 16:46:59 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 21 Jan 2013 17:46:59 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50FD6345.6060105@redhat.com> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> Message-ID: <50FD7103.9070904@redhat.com> On 01/21/2013 04:48 PM, John Dennis wrote: > On 01/16/2013 10:45 AM, Jan Cholasta wrote: >> Hi, >> >> this patch adds initial support for custom LDAP entry objects, as >> described in . >> > > Just in case you missed it, I added some requirements to the above > design page about making LDAP attributes and their values be "smarter". It would be nice to discuss these changes on the list, since the implementation is already underway... > An LDAP attribute has a syntax defining how comparisons are to be > performed. Python code using standard Python operators, sorting > functions, etc. should "just work" because underneath the object is > aware of it's LDAP syntax. > > The same holds true for attribute names, it should "just work" correctly > any place we touch an attribute name because it's an object implementing > the desired comparison and hashing behavior. > > Thus the keys in an Entry dict would need to be a new class and the > values would need to be a new class as well. Simple strings do not give > rich enough semantic behavior (we shouldn't be providing this semantic > behavior every place in the code where we touch an attribute name or > value, rather it should just automatically work using standard Python > operators. I think plain strings are fine for attribute names, as long as the entry class handles them correctly. We don't really need to hash or compare them outside of an entry. Or at least not enough to warrant a special class, IMO. Of course Entry.keys() and friends should return normalized names that would sort/hash correctly. As for attribute values, you're right that LDAP specifies how they should be compared, but that's only in the context of a single attribute type. What happens when you try comparing a case-sensitive string to a case-insensitive one in Python? -- Petr? From rcritten at redhat.com Mon Jan 21 17:14:28 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 21 Jan 2013 12:14:28 -0500 Subject: [Freeipa-devel] [PATCH] 351 Installer should not connect to 127.0.0.1 In-Reply-To: <50F6B7D7.7070803@redhat.com> References: <50F6760A.4090203@redhat.com> <1358344220.4590.32.camel@willson.li.ssimo.org> <50F6B2B6.1090005@redhat.com> <1358345436.4590.40.camel@willson.li.ssimo.org> <50F6B7D7.7070803@redhat.com> Message-ID: <50FD7774.6000000@redhat.com> Martin Kosek wrote: > On 01/16/2013 03:10 PM, Simo Sorce wrote: >> On Wed, 2013-01-16 at 15:01 +0100, Martin Kosek wrote: >>> On 01/16/2013 02:50 PM, Simo Sorce wrote: >>>> On Wed, 2013-01-16 at 10:42 +0100, Martin Kosek wrote: >>>>> IPA installer sometimes tries to connect to the Directory Server >>>>> via loopback address 127.0.0.1. However, the Directory Server on >>>>> pure IPv6 systems may not be listening on this address. This address >>>>> may not even be available. >>>>> >>>>> Rather use the FQDN of the server when connecting to the DS to fix >>>>> this issue and make the connection consistent ldapmodify calls which >>>>> also use FQDN instead of IP address. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/3355 >>>> >>>> Martin, >>>> shouldn't the installer rather always use the ldapi socket ? >>>> >>>> Simo. 
>>>> >>> >>> Probably yes, but the fix would be much more intrusive than the current patch >>> as we connect to ldap://$HOST:389 all over the installer code. My intention was >>> to prepare rather a short fix for the upcoming release... >> >> Uhmm wouldn't you just need to replace ldap://$HOST:389 with >> ldapi://path ? >> >> However it is understandable to have a short term fix, but can you open >> a ticket for the longer term goal of moving away from TCP connections to >> LDAPI ones ? >> >> Simo. >> > > Sure. I updated ticket https://fedorahosted.org/freeipa/ticket/3272 which > already plans to fix other inappropriate protocol in installer code. ACK, pushed to master, ipa-3-1 and ipa-3-0 rob From pviktori at redhat.com Mon Jan 21 17:38:00 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 21 Jan 2013 18:38:00 +0100 Subject: [Freeipa-devel] [PATCHES] 127-136 LDAP code refactoring (Part 2) In-Reply-To: <50F83498.9050104@redhat.com> References: <50F83498.9050104@redhat.com> Message-ID: <50FD7CF8.2070401@redhat.com> On 01/17/2013 06:27 PM, Petr Viktorin wrote: > Hello, > This is the first batch of changes aimed to consolidate our LDAP code. > Each should be a self-contained change that doesn't break anything. > > These patches do some general cleanup (some of the changes might seem > trivial but help a lot when grepping through the code); merge the common > parts LDAPEntry, Entry and Entity classes; and move stuff that depends > on an installed server out of IPASimpleLDAPObject and SchemaCache. > > I'm posting them early so you can see where I'm going, and so you can > find out if your work will conflict with mine. Here is a second batch of patches. I'm keeping them small so they're easier to digest (and rebase, should that be necessary). If you prefer big patches, just squash them. Obviously, they apply on top of the previous batch. Here we introduce a base class, LDAPConnection, from which IPAdmin and ldap2 inherit. This makes IPAdmin support both interfaces, its own old one and the new common one. Both IPAdmin and ldap2 work with the common Entry class, LDAPEntry, which also has both the old and new interface for the moment. To use a hyperbole, the rest of the work is simply rewriting the codebase to use the new interfaces. And testing. https://fedorahosted.org/freeipa/ticket/2660 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0127-Move-SchemaCache-and-IPASimpleLDAPObject-to-ipaserve.patch Type: text/x-patch Size: 45336 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0128-Start-LDAPConnection-a-common-base-for-ldap2-and-IPA.patch Type: text/x-patch Size: 12345 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0129-Make-IPAdmin-not-inherit-from-IPASimpleLDAPObject.patch Type: text/x-patch Size: 6272 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0130-Move-schema-related-methods-to-LDAPConnection.patch Type: text/x-patch Size: 6636 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0131-Move-DN-handling-methods-to-LDAPConnection.patch Type: text/x-patch Size: 4151 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0132-Move-filter-making-methods-to-LDAPConnection.patch Type: text/x-patch Size: 13204 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0133-Move-entry-finding-methods-to-LDAPConnection.patch Type: text/x-patch Size: 21693 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0134-Remove-unused-proxydn-functionality-from-IPAdmin.patch Type: text/x-patch Size: 5761 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0135-Move-entry-add-update-remove-rename-to-LDAPConnectio.patch Type: text/x-patch Size: 12385 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0136-Implement-some-of-IPAdmin-s-legacy-methods-in-terms-.patch Type: text/x-patch Size: 5034 bytes Desc: not available URL: From simo at redhat.com Mon Jan 21 18:48:53 2013 From: simo at redhat.com (Simo Sorce) Date: Mon, 21 Jan 2013 13:48:53 -0500 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50FD7103.9070904@redhat.com> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> <50FD7103.9070904@redhat.com> Message-ID: <1358794133.20683.199.camel@willson.li.ssimo.org> On Mon, 2013-01-21 at 17:46 +0100, Petr Viktorin wrote: > As for attribute values, you're right that LDAP specifies how they > should be compared, but that's only in the context of a single > attribute > type. What happens when you try comparing a case-sensitive string to > a > case-insensitive one in Python? > There is also the little issue that we may treat a string in a more restrictive way than the LDAP schema allow or we may not have schema loaded yet for example in the installer case, so if you want to make the objects 'smart' make sure you do not cause them to be 'too' smart to be usable :) Simo. -- Simo Sorce * Red Hat, Inc * New York From jdennis at redhat.com Mon Jan 21 19:21:29 2013 From: jdennis at redhat.com (John Dennis) Date: Mon, 21 Jan 2013 14:21:29 -0500 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <1358794133.20683.199.camel@willson.li.ssimo.org> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> <50FD7103.9070904@redhat.com> <1358794133.20683.199.camel@willson.li.ssimo.org> Message-ID: <50FD9539.7050402@redhat.com> On 01/21/2013 01:48 PM, Simo Sorce wrote: > There is also the little issue that we may treat a string in a more > restrictive way than the LDAP schema allow In an object orientated language restricted behaviors are modeled by subclassing. > or we may not have schema loaded yet for example in the installer case In what circumstance do we not know the schema? Yes, during install the schema may not already be present on the server for the attribute in question, but that doesn't mean we don't know what the syntax is, rather we just have to look for it in a a different place (because by definition we have to have the schema available to install it). We should never have a situation where we don't know the schema for an attribute. The only issues I've ever seen are attributes whose syntax was incorrectly defined (mostly attributes that are logically DN's but were defined with string syntax). 
Fortunately those seem to be rare and are currently handled via an "exceptions" table in ldap2. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/

From simo at redhat.com Tue Jan 22 00:59:02 2013 From: simo at redhat.com (Simo Sorce) Date: Mon, 21 Jan 2013 19:59:02 -0500 Subject: [Freeipa-devel] A new proposal for Location Based Discovery Message-ID: <1358816342.20683.227.camel@willson.li.ssimo.org>

Hello FreeIPA developers and other followers, we have thought for quite a while about how to best implement location based discovery for our clients, so that we can easily redirect groups of clients to specific servers in order to better use resources and keep traffic local to clients. However, although we made some proposals, we have not implemented anything so far; one reason is that all the solutions we came up with until now were complex and involved substantial client changes.

I recently came up with a new take on how to resolve the problem, and I've written it up after some minimal discussion with Petr Spacek and others. It is available here: http://www.freeipa.org/page/V3/DNS_Location_Mechanism

I think this proposal stands an actual chance at being implemented and getting enough client support, mostly because the necessary changes to existing clients vary from none to very minimal. This proposal is not yet definitive and is open to adjustments. I've also inlined a copy below for easier commenting. Please trim out unnecessary text if you choose to reply and comment, and keep only the relevant sections of text if you comment inline.

Have a good read, Simo.

========================================================================

A Mechanism to allow for location based discovery
by Simo Sorce, with help from Petr Spacek and others

Forewords

This is a new proposal (Jan 2013) to support Location Based discovery in FreeIPA. It was inspired by this earlier proposal made a while ago. The main difference is that it simplifies the whole management by eliminating IP subnets and special client code while still maintaining a great deal of flexibility. The key insight is that different locations can configure the network to use different FreeIPA DNS servers that are local to the location being considered.

Introduction

Service Discovery is a process whereby a client uses information stored in the network to find servers that offer a specific service. It is useful for two reasons. First, it places the information required for the location of servers in the network, lessening the burden of client configuration. Second, it gives system and network administrators a central point of control that can be used to define an optimized policy that determines which clients should use which server and in what order.

A standard for service discovery is defined in RFC 2782. This standard defines a DNS RR with the mnemonic SRV and usage rules around it. It allows a client to discover servers that offer a given service over a given protocol. A drawback of SRV records is that the standard assumes clients know the correct DNS domain to use as the query target. Without any further information, the client's options include using its own DNS domain and the name of the security domain in which the client operates (such as the domain name associated with the Kerberos REALM the client refers to). Neither option is likely to yield optimal results, however. One key service discovery requirement, especially in large distributed enterprises, is to choose a server that is close to a client.
However, in many situations binding the client name to a specific zone that is tied to a physical location is not possible. And even if it were, it would not address the needs of a roaming client that moves across networks. We cannot have the name of a client change when it moves across networks, as this breaks the Kerberos protocol, which associates keys with a fixed host name for Service Principals.

The incompleteness of RFC 2782 is acknowledged by systems such as Active Directory that contain extra functionality to augment SRV lookups to make them site aware. The basic idea is to use a target in SRV service discovery that is specific to a location, or "site" in AD parlance. Unfortunately AD clients rely on additional configuration or side protocols to determine the client "site", and it is quite specific to Microsoft technologies, so the method they use is not easily portable or reusable in other contexts, nor documented in standards or informative IETF RFCs.

This document specifies a protocol for location discovery based exclusively on DNS. While the protocol itself can be fully implemented with a normal DNS server, the FreeIPA mechanism is augmented by additional 'location' awareness and will depend on additional computation done by the FreeIPA DNS plugins.

The Discovery Protocol

The main point about this protocol is the recognition that all we really need is to find a specific (set of) server(s) that a specific client needs to find. The second point is that the client has one piece of information that is unequivocal: its own fully qualified name. Whether this is fixed or assigned by the network (via DHCP) is not important; what matters is that the name uniquely identifies this specific host in the network.

Based on these two points, the idea is to make the client query for a special set of SRV records keyed on the client's own DNS name. From the client perspective this is the simplest protocol possible: it requires no knowledge or hard decisions about what DNS domain name to query or how to discover it. At the same time it allows the Domain Administrators a lot of flexibility in how to configure these records per client.

The failure mode for this protocol is to simply keep using the previous heuristics; we will not define these heuristics here, as they are not standardized and are implementation and deployment specific to some extent. Suffice it to say that this new protocol should not impact in any way on previous heuristics and DNS setups, and can be safely implemented in clients with no ill effects save for an additional initial query. Local negative caching may help in avoiding excessive queries if the administrator chooses not to configure the servers to support per-client SRV records, and otherwise adds little overhead.

Client Implementation

Because the SRV records currently in use are multiple, and to allow for the case where a host may actually be using a domain name that is also already used as a zone name (i.e. the name X.example.com identifies both an actual host and a subdomain where clients such as Y.X.example.com normally search for SRV records), we group all per-client location SRV records under the _location. sub name.

So for example, a client named X.example.com would search for its own per-client records for the LDAP service over the TCP protocol by using the name:

_ldap._tcp._location.X.example.com

With current practices a client normally looks for _ldap._tcp.example.com instead.
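For example, the new per-client lookup and the classic one for a client named X.example.com can be compared with a standard DNS tool:

    $ dig +short _ldap._tcp._location.X.example.com. SRV
    $ dig +short _ldap._tcp.example.com. SRV
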
It is as simple as that: the only difference between a client supporting this new mechanism and a generic client is what name is used as the 'base domain name'. Everything else is identical. Many clients can probably already be configured to use this new base domain. And clients that do not support it (either because the base domain is always derived in some way and not directly configurable, or because they refuse to use _location as a valid base DNS name component due to the leading '_' character) can be easily changed. Those that can't be changed will simply fall back to using the classic SRV records on the base domain and will simply not be location-aware.

An additional advantage of this scheme is that clients can use per-client SRV searches by default if they so choose, because there is no risk of ending up using unrelated servers due to unfortunate host naming. If the administrator took the pain to configure per-client SRV records, there is an overwhelming chance those are indeed the records the client is supposed to use. By using this as the default it is possible to make clients configuration-free by default, which is a real boon on networks with many hosts.

Changing defaults requires careful consideration of security implications; please read the #Security Considerations section for more information.

Server side implementation

Basic solution
The simplest way to implement this scheme on the server side is to just create a set of records for each client. However, this is a very heavyweight and error-prone process, as it requires the creation of many records for each client.

A more rational solution
A simple but more manageable solution is to use DNAME records as defined by RFC 6672. The administrator in this case can set up a single set of SRV records per location and then use a DNAME record to glue each client to this subtree.

This solution is much more lightweight and less error-prone, as each client needs only one single additional record that points to a well-maintained subtree.

So a client X.example.com could have a DNAME record like this: _location.X.example.com. DNAME Y._locations.example.com.

When client X searches for its own per-client records for the LDAP service over the TCP protocol using the name _ldap._tcp._location.X.example.com, it is automatically redirected to the record _ldap._tcp.Y._locations.example.com

Advanced FreeIPA solution
Although the above implementation works fine for most cases, it has two major drawbacks. The first is poor support for roaming clients, as they would permanently refer to a specific location even when they travel across potentially very geographically dispersed locations. The other big drawback is that admins would have to create the DNAME records for each client, which is a lot of work. In FreeIPA we can have more smarts, given that we can influence the bind-dyndb-ldap plugin behavior.

A first very simple yet very effective simplification would be to change the bind-dyndb-ldap plugin to create a phantom per-client location DNAME record that points to a 'default' location. This means DNAME records wouldn't be directly stored in LDAP but would be synthesized by the driver, if not present, using a default configuration. However, to make this more useful the plugin shouldn't just use one single default, but should have a default 'per server'.
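For illustration only (the location name, service set and server names are hypothetical), the per-location subtree plus the per-client DNAME glue described above might look like this in zone-file form:

    ; SRV records maintained once per location
    _ldap._tcp.boston._locations.example.com.     IN SRV 0 100 389 ipa1.boston.example.com.
    _kerberos._udp.boston._locations.example.com. IN SRV 0 100 88  ipa1.boston.example.com.

    ; one DNAME record per client glues it to its location
    _location.X.example.com. IN DNAME boston._locations.example.com.

A query for _ldap._tcp._location.X.example.com is then rewritten by the DNAME to _ldap._tcp.boston._locations.example.com, so only the per-location record set has to be kept up to date; in the FreeIPA variant just described, the DNAME line would not even need to be stored for clients that are happy with the server's default location, since the plugin would synthesize it.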
Roaming/Remote clients
Roaming or remote clients have one big problem: although they may have a default preferred location, they move across networks, and the definition of 'location' and 'closest' server changes as they move. Yet their name is still fixed. With a classic BIND setup this problem can somewhat be handled by using views and changing the DNAME returned, or directly the SRV records, depending on the client IP address. However, the source IP address is not always a good indicator. Clients may be behind a NAT, IP addressing may be shared between multiple logical locations within a physical network, the client may be getting its IP address over a VPN tunnel, and so on. In general, relying on IP address information may or may not work. (There is also the minor issue that we do not yet support views in the bind-dyndb-ldap plugin.)

Addressing the multiple locations problem
The reason to define multiple locations is that we want to redirect clients to different servers depending on the location they belong to. This only really makes sense if each location has its own (set of) FreeIPA server(s).

Also, a location usually corresponds to a different network, so if at least one of the FreeIPA servers in each location is a DNS-enabled server and the local network configuration (DHCP) server hands out this DNS server as the primary server for the client, then we can make the reasonable assumption that a client in a specific location will generally query a FreeIPA server in that same location for location-specific information.

If this holds true, then changing the 'default' location based on the server's own location would effectively make clients stick to the local servers (assuming the location's SRV records are properly configured to contain only local servers, which we can ensure through appropriate checks in the framework).

This is another simple optimization and works for a lot of cases, but not necessarily all. However, this optimization leads to another problem. What if the client needs to belong to a specific location independently of which server it asks, or what if we really only have a few FreeIPA DNS servers but want to use more locations?

One way of course is to create a fixed DNAME record for these clients, so the defaults do not kick in. However, this is rather final. Maybe the client needs a preference, but that preference can be overridden in some circumstances.

Choosing the right location
So the right location for a client may be a combination of a preference and a set of requirements. One example of a requirement that can trump any preference is a bandwidth-constrained location.

Assume we have a client that normally resides in a large location. This location has been segmented into small sublocations to better distribute load, so the client has a preferred location. If we use a fixed DNAME to represent this preference, then when this client roams to a bandwidth-constrained network it will try to use the slow link to call 'home' to its usual location. This may be a serious problem.

However, if we generate the default location dynamically, we can easily have rules on the bandwidth-constrained location's DNS servers so that, no matter what the preference is, any client asking for location-based SRV records will always be redirected to the local location, whose SRV records include only local servers.

This is quite powerful and would neatly solve many issues connected with roaming clients.
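To make the idea concrete, here is a hypothetical sketch (location and host names are invented) of how two FreeIPA DNS servers could answer the very same query differently by synthesizing different default DNAME records:

    ; synthesized by a DNS server in the client's preferred location
    _location.X.example.com. IN DNAME boston._locations.example.com.

    ; synthesized by a DNS server in a bandwidth-constrained branch office,
    ; overriding the preference and keeping the client on local servers
    _location.X.example.com. IN DNAME branch1._locations.example.com.

Which of the two answers the client sees depends only on which DNS server its resolver reaches, so no client-side change is involved.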
DNS Slave server problem
Dynamically choosing locations may cause issues with DNS slave servers, as they wouldn't be able to implement this dynamic mechanism.

One way to handle this problem is to operate in a 'degraded' mode where DNAME records are effectively created and the location is not dynamic per client anymore. We can still have 'different' defaults per server if we decide to filter DNAME records from replication. However, filtering DNAME records is also a problem, because we would not be able to filter only the location-based ones; it would be an all-or-nothing filter, which would render DNAME records unusable for any other purpose. This restriction is a bit extreme.

Another way might be to always compute all zone DNAME records based on the available host records on the fly at DNS server startup, and then keep them cached (and updated) by the bind-dyndb-ldap plugin, which would include these records in AXFR transfers but would not write them back to the LDAP server, keeping them local. This solution might be the golden egg, as it might allow all the advantages of dynamic generation as well as good response performance, and it would solve the slave server issue and perhaps even DNSSEC-related issues. It has a major drawback: it would make the code a lot more complicated and critical.

Overall implementation proposal
Given that the basic solution is relatively simple and requires minimal if any client changes, we should consider implementing at least part of this proposal as soon as possible. Implementing DNAME record support in bind-dyndb-ldap seems a prerequisite, and adding client support in the SSSD IPA provider would allow testing at least with the basic setup. This basic support should be implemented sooner rather than later, so that full dynamic support can later be easily added to bind-dyndb-ldap, along with the necessary additional schema and UI in the FreeIPA framework to mark and group clients and locations.

Security Considerations

Client Implementation
As always, DNS replies can be spoofed relatively easily. We recommend that SRV record resolution be used only for those clients that normally use an additional security protocol to talk to network resources and can use additional mechanisms to authenticate these resources. For example, a client that uses an LDAP server for security-related information such as user identity information should only trust SRV record discovery for the LDAP service if LDAPS or STARTTLS over LDAP is mandatory and certificate verification is fully turned on, or if SASL/GSSAPI is used with mutual authentication, integrity and confidentiality options required. Use of DNSSEC and full DNS signature verification may be considered an additional requirement in some cases.

Server Implementation
TODO: interaction with DNSSEC

References
SRV Records: RFC 2782
DNAME Records: RFC 6672

--
Simo Sorce * Red Hat, Inc * New York

From jcholast at redhat.com Tue Jan 22 07:31:36 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 22 Jan 2013 08:31:36 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50FD7103.9070904@redhat.com> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> <50FD7103.9070904@redhat.com> Message-ID: <50FE4058.2080408@redhat.com>

On 21.1.2013 17:46, Petr Viktorin wrote: > On 01/21/2013 04:48 PM, John Dennis wrote: >> On 01/16/2013 10:45 AM, Jan Cholasta wrote: >>> Hi, >>> >>> this patch adds initial support for custom LDAP entry objects, as >>> described in .
>>> >> >> Just in case you missed it, I added some requirements to the above >> design page about making LDAP attributes and their values be "smarter". > > It would be nice to discuss these changes on the list, since the > implementation is already underway... > >> An LDAP attribute has a syntax defining how comparisons are to be >> performed. Python code using standard Python operators, sorting >> functions, etc. should "just work" because underneath the object is >> aware of it's LDAP syntax. >> >> The same holds true for attribute names, it should "just work" correctly >> any place we touch an attribute name because it's an object implementing >> the desired comparison and hashing behavior. >> >> Thus the keys in an Entry dict would need to be a new class and the >> values would need to be a new class as well. Simple strings do not give >> rich enough semantic behavior (we shouldn't be providing this semantic >> behavior every place in the code where we touch an attribute name or >> value, rather it should just automatically work using standard Python >> operators. > > I think plain strings are fine for attribute names, as long as the entry > class handles them correctly. We don't really need to hash or compare > them outside of an entry. Or at least not enough to warrant a special > class, IMO. > Of course Entry.keys() and friends should return normalized names that > would sort/hash correctly. +1 > > > As for attribute values, you're right that LDAP specifies how they > should be compared, but that's only in the context of a single attribute > type. What happens when you try comparing a case-sensitive string to a > case-insensitive one in Python? > I would say TypeError. But I don't think we should be too strict in following the schema, it's too much work for questionable benefit. As you pointed out, it might bring problems for which there is no right solution. Honza -- Jan Cholasta From pviktori at redhat.com Tue Jan 22 08:48:05 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 22 Jan 2013 09:48:05 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50FE4058.2080408@redhat.com> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> <50FD7103.9070904@redhat.com> <50FE4058.2080408@redhat.com> Message-ID: <50FE5245.50000@redhat.com> On 01/22/2013 08:31 AM, Jan Cholasta wrote: > On 21.1.2013 17:46, Petr Viktorin wrote: >> On 01/21/2013 04:48 PM, John Dennis wrote: >>> On 01/16/2013 10:45 AM, Jan Cholasta wrote: >>>> Hi, >>>> >>>> this patch adds initial support for custom LDAP entry objects, as >>>> described in . >>>> >>> >>> Just in case you missed it, I added some requirements to the above >>> design page about making LDAP attributes and their values be "smarter". >> >> It would be nice to discuss these changes on the list, since the >> implementation is already underway... >> >>> An LDAP attribute has a syntax defining how comparisons are to be >>> performed. Python code using standard Python operators, sorting >>> functions, etc. should "just work" because underneath the object is >>> aware of it's LDAP syntax. >>> >>> The same holds true for attribute names, it should "just work" correctly >>> any place we touch an attribute name because it's an object implementing >>> the desired comparison and hashing behavior. >>> >>> Thus the keys in an Entry dict would need to be a new class and the >>> values would need to be a new class as well. 
Simple strings do not give >>> rich enough semantic behavior (we shouldn't be providing this semantic >>> behavior every place in the code where we touch an attribute name or >>> value, rather it should just automatically work using standard Python >>> operators. >> >> I think plain strings are fine for attribute names, as long as the entry >> class handles them correctly. We don't really need to hash or compare >> them outside of an entry. Or at least not enough to warrant a special >> class, IMO. >> Of course Entry.keys() and friends should return normalized names that >> would sort/hash correctly. > > +1 > >> >> >> As for attribute values, you're right that LDAP specifies how they >> should be compared, but that's only in the context of a single attribute >> type. What happens when you try comparing a case-sensitive string to a >> case-insensitive one in Python? >> > > I would say TypeError. But I don't think we should be too strict in > following the schema, it's too much work for questionable benefit. As > you pointed out, it might bring problems for which there is no right > solution. > Well, instead of "compares equal" you can say "have same hash": If you make "abc" == Insensitive("abc") == "ABC", then "abc" and "ABC" must hash the same, which they don't. If you make "abc" != Insensitive("abc") != "ABC", then I can't say entry["abc"]. -- Petr? From pviktori at redhat.com Tue Jan 22 08:48:54 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 22 Jan 2013 09:48:54 +0100 Subject: [Freeipa-devel] [PATCH] 93 Add custom mapping object for LDAP entry data In-Reply-To: <50FE5245.50000@redhat.com> References: <50F6CB11.4010008@redhat.com> <50FD6345.6060105@redhat.com> <50FD7103.9070904@redhat.com> <50FE4058.2080408@redhat.com> <50FE5245.50000@redhat.com> Message-ID: <50FE5276.8040305@redhat.com> Please ignore the last mail, I've clicked Send instead of Delete. -- Petr? From pvoborni at redhat.com Tue Jan 22 11:00:47 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 22 Jan 2013 12:00:47 +0100 Subject: [Freeipa-devel] [PATCH] 254 Fix BuildRequires: rhino replaced with java-1.7.0-openjdk Message-ID: <50FE715F.6080007@redhat.com> Rhino is needed for Web UI build. Rhino needs java, but from package perspective java-1.7.0-openjdk requires rhino. So the correct BuildRequires is java-1.7.0-openjdk. -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0254-Fix-BuildRequires-rhino-replaced-with-java-1.7.0-ope.patch Type: text/x-patch Size: 1539 bytes Desc: not available URL: From pspacek at redhat.com Tue Jan 22 14:23:19 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 22 Jan 2013 15:23:19 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358816342.20683.227.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> Message-ID: <50FEA0D7.60507@redhat.com> On 22.1.2013 01:59, Simo Sorce wrote: > Hello FreeIPA developers and other followers, > Roaming/Remote clients > Roaming clients or Remote clients have one big problem, although they > may have a default preferred location they move across networks and the > definition of 'location' and 'closest' server changes as they move. Yet > their name is still fixed. With a classic Bind setup this problem can > somewhat be handled by using views and changing the DNAME returned or > directly the SRV records depending on the client IP address. 
However > using source IP address is not always a good indicator. Clients may be > behind a NAT or maybe IP addressing is shared between multiple logical > locations within a physical network. or the client may be getting the IP > address over a VPN tunnel and so on. In general relying on IP address > information may or may not work. (There is also the minor issue that we > do not yet support views in the bind-dyndb-ldap plugin.) > > > Addressing the multiple locations problem > The reason to define multiple locations is that we want to redirect > clients to different servers depending on the location they belong to. > This only really makes sense if each location has its own (set of) > FreeIPA server(s). > > Also usually a location corresponds to a different network so it can be > assumed the if at least one of the FreeIPA servers in each location is a > DNS enabled server and the local network configuration (DHCP) server > serves this DNS server as the primary server for the client then we can > make the reasonable assumption that a client in a specific location will > generally query a FreeIPA server in that same location for > location-specific information. > > If this holds true then changing the 'default' location base on the > server's own location would effectively make clients stick to the local > servers (Assuming the location's SRV records are properly configured to > contain only local server, which we can insure through appropriate > checks in the framework) > > This is another simple optimization and works for a lot of cases but not > necessarily all. However this optimization leads to another problem. > What if the client needs to belong to a specific location indipendetly > from what server they ask to, or what if we really only have a few > FreeIPA DNS servers but want to use more locations ? > > One way of course is to create a fixed DNAME record for these clients, > so the defaults do not kick in. However this is rather final. Maybe the > clients needs a preference but that preference can be overridden in some > circumstances. > > > Choosing the right location > So the right location for a client may be a combination of a preference > and a set of requirements. One example of a requirement that can trump > any preference is a bandwidth constrained location. > > Assume we have a client that normally resides in a large location. This > location has been segmented in small sublocations to better distribute > load so it has a preferred location. If we use a fixed DNAME to > represent this preference when this client roams to a bandwidth > constrained network it will try to use the slow link to call 'home' to > his usual location. This may be a serious problem. > > However if we generate the default location dynamically we can easily > have rules on the bandwidth constrained location DNS servers that no > matter what is the preference any client asking for location based SRV > records will always be redirected to the local location which includes > only local servers in their SRV records. > > This is quite powerful and would neatly solve many issues connected with > roaming clients. I see two ways how to achieve server controlled "location override" as described above. First way is about dynamically changing the _location.client DNAME record, as Simo proposed above. Second way is about making DNS sub-tree "_locations.domain" per-server specific. 
In that case the client's _location DNAME record points to the same ("preferred") location all the time, but the real records under "preferred._locations.domain" can point to local or remote servers, depending on configuration.

Creating a per-server _locations sub-tree is very easy with the current code: simply copy&paste a new bind-dyndb-ldap section into /etc/named.conf and point its base DN to some server-specific part of the LDAP tree:

dynamic-db "ipa-local" {
    //
    arg "base cn=srv2.example.com, cn=dns-local, dc=example,dc=com";
};

Server-specific _locations records live in this sub-tree and each server has its own view of _locations, i.e. each server could specify the mapping between locations in its own way. DNS clients will see the merged DNS tree, so no change on the client side is required.

E.g. a client has the preferred location "brno" but is connected to the network in "nyc", i.e. DNS queries are sent to servers in NYC. The NYC server has its own "_locations" sub-tree with the trivial mapping "brno DNAME nyc". How to read the result: location "Brno" is too far from "NYC", use "NYC" anyway! Also, the "default" location could prefer local servers over remote ones, i.e. local clients without any configuration will prefer local servers.

There is another nice feature: the "old" _ntp._udp.domain SRV records could contain aliases pointing to SRV records in some location, e.g. "default". In that case even old clients will prefer local servers over remote ones - at almost no cost and with no client reconfiguration.

No new concepts, no new code :-)

There is still a _location DNAME record under the client's name; that stays unchanged. Personally, I don't like any on-the-fly record generation. Is it really necessary? In the case described above I don't think so. Roaming between locations doesn't require changing any record, so the configuration is static.

Old clients would see the "default" location, and the _location record for new clients could be created during ipa host-add or something similar. IMHO the client fall-back "try own hostname and then try the configured domain if the own hostname doesn't contain _location" is reasonable and doesn't require new records at all. The admin could always create _location records if he has a legacy system and the default configuration doesn't fit him.

> DNS Slave server problem

Without dynamic record generation it would be possible to do zone transfers without any change to the current code. Only one new zone (i.e. the _locations part of the DNS sub-tree) has to be set up on each slave and we are done.

> Server Implementation
> TODO: interaction with DNSSEC

That is a *very* important part. I'm afraid of having so many dynamic things inside.

-- Petr^2 Spacek

From jcholast at redhat.com Tue Jan 22 14:32:39 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 22 Jan 2013 15:32:39 +0100 Subject: [Freeipa-devel] [PATCHES] 94-96 Remove Entry and Entity classes Message-ID: <50FEA307.7080502@redhat.com>

Hi,

these patches remove the Entry and Entity classes and move instantiation of LDAPEntry objects to the LDAPConnection.make_entry factory method. Apply on top of Petr Viktorin's LDAP code refactoring (part 1 & 2) patches.

Honza

-- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-94-Add-make_entry-factory-method-to-LDAPConnection.patch Type: text/x-patch Size: 15020 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: freeipa-jcholast-95-Remove-the-Entity-class.patch Type: text/x-patch Size: 6174 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-96-Remove-the-Entry-class.patch Type: text/x-patch Size: 3678 bytes Desc: not available URL: From tbabej at redhat.com Tue Jan 22 14:50:50 2013 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 22 Jan 2013 15:50:50 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358439519.20683.14.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> <1358439519.20683.14.camel@willson.li.ssimo.org> Message-ID: <50FEA74A.5060908@redhat.com> On 01/17/2013 05:18 PM, Simo Sorce wrote: > On Thu, 2013-01-17 at 15:29 +0100, Tomas Babej wrote: >> On 01/17/2013 01:56 AM, Dmitri Pal wrote: >> >>> On 01/16/2013 12:32 PM, Tomas Babej wrote: >>>> On 01/16/2013 06:01 PM, Simo Sorce wrote: >>>>> On Wed, 2013-01-16 at 17:57 +0100, Tomas Babej wrote: >>>>>> On 01/16/2013 02:47 PM, Simo Sorce wrote: >>>>>>> On Wed, 2013-01-16 at 12:52 +0100, Tomas Babej wrote: >>>>>>>> On 01/15/2013 11:55 PM, Simo Sorce wrote: >>>>>>>>> On Tue, 2013-01-15 at 17:36 -0500, Dmitri Pal wrote: >>>>>>>>>> On 01/15/2013 03:59 PM, Simo Sorce wrote: >>>>>>>>>>> On Tue, 2013-01-15 at 15:53 -0500, Rob Crittenden >>>>>>>>>>> wrote: >>>>>>>>>>>> Dmitri Pal wrote: >>>>>>>>>>>>> On 01/15/2013 08:48 AM, Simo Sorce wrote: >>>>>>>>>>>>>> On Mon, 2013-01-14 at 16:46 +0100, Tomas Babej >>>>>>>>>>>>>> wrote: >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Since in Kerberos V5 are used 32-bit unix >>>>>>>>>>>>>>> timestamps, setting >>>>>>>>>>>>>>> maxlife in pwpolicy to values such as 9999 >>>>>>>>>>>>>>> days would cause >>>>>>>>>>>>>>> integer overflow in krbPasswordExpiration >>>>>>>>>>>>>>> attribute. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This would result into unpredictable >>>>>>>>>>>>>>> behaviour such as users >>>>>>>>>>>>>>> not being able to log in after password >>>>>>>>>>>>>>> expiration if password >>>>>>>>>>>>>>> policy was changed (#3114) or new users not >>>>>>>>>>>>>>> being able to log >>>>>>>>>>>>>>> in at all (#3312). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3312 >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3114 >>>>>>>>>>>>>> Given that we control the KDC LDAP driver I >>>>>>>>>>>>>> think we should not limit >>>>>>>>>>>>>> the time in LDAP but rather 'fix-it-up' for >>>>>>>>>>>>>> the KDC in the DAL driver. >>>>>>>>>>>>> Fix how? Truncate to max in the driver itself if >>>>>>>>>>>>> it was entered beyond max? >>>>>>>>>>>>> Shouldn't we also prevent entering the invalid >>>>>>>>>>>>> value into the attribute? >>>>>>>>>>>>> >>>>>>>>>>>> I've been mulling the same question for a while. >>>>>>>>>>>> Why would we want to >>>>>>>>>>>> let bad data get into the directory? >>>>>>>>>>> It is not bad data and the attribute holds a >>>>>>>>>>> Generalize time date. 
>>>>>>>>>>> >>>>>>>>>>> The data is valid it's the MIT code that has a >>>>>>>>>>> limitation in parsing it. >>>>>>>>>>> >>>>>>>>>>> Greg tells me he plans supporting additional time by >>>>>>>>>>> using the >>>>>>>>>>> 'negative' part of the integer to represent the >>>>>>>>>>> years beyond 2038. >>>>>>>>>>> >>>>>>>>>>> So we should represent data in the directory >>>>>>>>>>> correctly, which means >>>>>>>>>>> whtever date in the future and only chop it when >>>>>>>>>>> feeding MIT libraries >>>>>>>>>>> until they support the additional range at which >>>>>>>>>>> time we will change and >>>>>>>>>>> chop further in the future (around 2067 or so). >>>>>>>>>>> >>>>>>>>>>> If we chopped early in the directory we'd not be >>>>>>>>>>> able to properly >>>>>>>>>>> represent/change rapresentation later when MIT libs >>>>>>>>>>> gain additional >>>>>>>>>>> range capabilities. >>>>>>>>>>> >>>>>>>>>>> Simo. >>>>>>>>>>> >>>>>>>>>> We would have to change our code either way and the >>>>>>>>>> amount of change >>>>>>>>>> will be similar so does it really matter? >>>>>>>>> Yes it really matters IMO. >>>>>>>>> >>>>>>>>> Simo. >>>>>>>>> >>>>>>>> Updated patch attached. >>>>>>> This part looks ok but I think you also need to properly >>>>>>> set >>>>>>> krb5_db_entry-> {expiration, pw_expiration, last_success, >>>>>>> last_failed} >>>>>>> in ipadb_parse_ldap_entry() >>>>>>> >>>>>>> Perhaps the best way is to introduce a new function >>>>>>> ipadb_ldap_attr_to_krb5_timestamp() in ipa_kdb_common.c so >>>>>>> that you do >>>>>>> all the overflow checkings once. >>>>>>> >>>>>>> Simo. >>>>>>> >>>>>> They all use ipadb_ldap_attr_to_time_t() to get their values, >>>>>> so the following addition to the patch should be sufficient. >>>>> It will break dates for other users of the function that do not >>>>> need to >>>>> artificially limit the results. Please add a new function. >>>>> >>>>> Simo. >>>>> >>>> Done. >>>> >>>> Tomas >>>> >>>> >>>> _______________________________________________ >>>> Freeipa-devel mailing list >>>> Freeipa-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>> Nack from me, sorry. >>> >>> +int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, >>> + char *attrname, time_t *result) >>> +{ >>> + int ret = ipadb_ldab_attr_to_time_t(lcontext, le, >>> + attrname, result); >>> >>> This function converts the time from the LDAP time by reading the string into the struct elements and then constructing time by using timegm() which is really a wrapper around mktime(). >>> According to mktime() man page: >>> >>> If the specified broken-down time cannot be represented as calendar time (seconds since >>> the Epoch), mktime() returns a value of (time_t) -1 and does not alter the members of the >>> broken-down time structure. >>> >>> However it might not be the case so it would be nice to check if this function actually returns some negative value other than -1 if the time is overflown. >>> Regardles of that part if it always returns -1 or can return other negative values like -2 or -3 the check below would produce positive result thus causing the function to return whatever is currently in the *result. >> I double checked the behaviour. It holds that mktime() and timegm() >> behave the same way, up to time-zone difference. 
I don't know whether >> the man page is not correct, >> or the implementation in the standard library is not compliant, >> however, mktime() never returns -1 as return value (if it was not >> given tm struct which refers to 31 Dec 1969 29:59:59). >> >> I guess the implementation was changed as there would be no way how to >> distinguish between correct output of 31 Dec 1969 29:59:59 and >> incorrect output. >> >> I tested both incorrect calendar days (like 14th month) and dates >> behind year 2038. As expected, dates after the end of unix epoch >> overflow big time (values such as -2143152000). >> Incorrect dates just get converted into correct ones - 14th month was >> interpreted as +1 year +3 months. >> >>> IMO the whole fix should be implemented inside the function above when the conversion to time_t happens so that it would never produce a negative result. >> Simo had objections to this approach since this would limit the other >> uses of ipadb_ldap_attr_to_time_t() function. >>> My suggestion would be before calling timegm() to test if the year, month, and day returned by the break-down structure contain the day after the last day of epoch and then just use use last day of epoc without even calling timegm(). >> Indeed it would be simpler approach. However, for the current solution >> to malfunction (overflow back to the positive values), the date would >> have to be set to something beyond year ~2100. >> That would correspond to maxlife of ~32 000 days with present dates. >> >> I guess there might be admins setting crazy values like that, I'll >> send updated patch. >> >>> + >>> + /* in the case of integer owerflow, set result to IPAPWD_END_OF_TIME */ >>> + if ((*result+86400) < 0) { >>> >>> >>> + *result = IPAPWD_END_OF_TIME; // 1 Jan 2038, 00:00 GMT >>> + } >>> + >>> + return ret; >>> +} >>> + > Ok this mail made me look again in the issue. > > I see 2 problems here. > > 1) platform dependent issues. > > On i686 time_t is a signed 32bit integer (int) > On x86_64 time_t is a signed 64bit integer (long long) > > So when you test you need to be aware on what platform you are testing > in order to know what to expect. > > 2) The current patch returns time_t *result for the new wrapper function > which is wrong, it shoul return krb5_timestamp as the type. > > > The actual test done in the code looks ok but only if you think time_t > is a 32bit signed integer. > In that case it will overflow. But on x86_64 it will not. > Sorry for not catching it earlier. > > So the way to handle this is to actually check this is to change the > wrapper function to look like this: > > int ipadb_ldap_attr_to_krb5_timestamp(LDAP *lcontext, LDAPMessage *le, > char *attrname, krb5_timestamp *result) > { > time_t restime; > long long reslong; > > int ret = ipadb_ldab_attr_to_time_t(lcontext, le, > attrname, restime); > if (ret) return ret; > > reslong = restime; // <- this will cast correctly maintaing sign to a 64bit variable > if (reslong < 0 || reslong > IPAPWD_END_OF_TIME) { > *result = IPAPWD_END_OF_TIME; > } else { > *result = (krb5_timestamp)reslong; > } > return 0; > } > > All calls in ipadb_parse_ldap_entry() that expects a singed 32 bit time > as output should be hanged to use ipadb_ldap_attr_to_krb5_timestamp() > > This includes 2 additional calls Tomas pointed to me on a IRC > conversation. > > So Nack again :-/ > > Simo. > Here I bring the updated version of the patch. 
Please note, that I *added* a flag attribute to ipadb_ldap_attr_to_krb5_timestamp function, that controls whether the timestamp will be checked for overflow or not. The reasoning behind this is that some attributes will not be set to future dates, due to their inherent nature - such as krbLastSuccessfulAuth or krbLastAdminUnlock. These are all related to past dates, and it would make no sense to set them to future dates, even manually. Therefore I'd rather represent negative values in these attributes as past dates. They would have to be set manually anyway, because they would represent timestamps before the beginning of the unix epoch, however, I find this approach better than pushing them up to year 2038 in case such things happens. Any objections to this approach? Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0026-6-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 8675 bytes Desc: not available URL: From jcholast at redhat.com Tue Jan 22 14:59:48 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 22 Jan 2013 15:59:48 +0100 Subject: [Freeipa-devel] [PATCHES] 127-136 LDAP code refactoring (Part 2) In-Reply-To: <50FD7CF8.2070401@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> Message-ID: <50FEA964.5090400@redhat.com> Hi, On 21.1.2013 18:38, Petr Viktorin wrote: > Here is a second batch of patches. I'm keeping them small so they're > easier to digest (and rebase, should that be necessary). If you prefer > big patches, just squash them. > Obviously, they apply on top of the previous batch. > > Here we introduce a base class, LDAPConnection, from which IPAdmin and > ldap2 inherit. This makes IPAdmin support both interfaces, its own old > one and the new common one. > Both IPAdmin and ldap2 work with the common Entry class, LDAPEntry, > which also has both the old and new interface for the moment. > To use a hyperbole, the rest of the work is simply rewriting the > codebase to use the new interfaces. > And testing. > So far I have just one small nitpick: could you please rename LDAPConnection to LDAPClient? It does more than just connecting, so it should have more suitable name IMHO. Also, the "conn" attribute of LDAPConnection might be confusing to someone (connection in connection?). Honza -- Jan Cholasta From simo at redhat.com Tue Jan 22 15:01:06 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 10:01:06 -0500 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <50FEA0D7.60507@redhat.com> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <50FEA0D7.60507@redhat.com> Message-ID: <1358866866.20683.264.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 15:23 +0100, Petr Spacek wrote: > Creating per-server _locations sub-tree is very easy with current code: Simply > copy&paste new bind-dyndb-ldap section to /etc/named.conf and point base DN to > some server-specific part of LDAP tree: > > dynamic-db "ipa-local" { > // > arg "base cn=srv2.example.com, cn=dns-local, dc=example,dc=com"; > } Unless you have a way to mange it via LDAP this is unworkable. Locations should be managed via the Web UI. 
So you need to be able to create new locations on the fly and change server's locations dynamically, possibly w/o requiring a server restart, but certainly w/o requiring the DNS admin to have direct SSH access to all boxes to go and manually change named.conf > Server specific _locations records live in this sub-tree and each server has > have own view of _locations, i.e. each server could specify mapping between > locations in own way. DNS clients will see merged DNS tree, no change on > client side is required. But this would require to manually change multiple records for multiple servers in the same location, which could go wrong quite easily. Each location configuration should be in a single place so that it is consistent for all servers of that location and not a burden for administration. Also your methods puts location information out of the actual DNS, so you can't lookup location data via DNS except for the 'default'. But that would not be correct, we want to allow a client to lookup location data for a non-default location, because an IPA DNS server may very well be serving multiple locations. > E.g. client has preferred location "brno" but the client is connected to > network in "nyc", i.e. DNS queries are sent to servers in NYC. NYC server has > own "_locations" sub-tree with trivial mapping "brno DNAME nyc". > > How to read the result: Location "Brno" is too far from "NYC", use "NYC" > anyway! Also, "default" location could prefer local server over remote ones, > i.e. local clients without any configuration will prefer local servers. I am not sure how this is different from my proposal, the problem I see is that you loose the ability to force a configuration for select client by actually creating real DNAME records. > There is another nice feature: "old" _ntp._udp.domain SRV records could > contain aliases pointing to SRV records in some location, e.g. "default". In > that case also old clients will prefer local servers over remote ones - almost > with no price and with no client reconfiguration. > > No new concepts, no new code :-) We can do that with a DNAME in theory, but I would rather keep current domain records as is for now. > There is still _location DNAME record under client's name, that stay > unchanged. Personally, I don't like any on-the-fly record generation. Is it > really necessary? Who creates this record for new clients ? How to you handle 3 locations on a single DNS server ? Say I have a headquarters DNS setup where I want to send clients to the engineering, sales or accounting locations depending on the client but I have a shared local network configuration so all clients use the same DNS server. > In case described above I don't think so. Roaming between locations don't > require changing any record, so configuration is static. Yep 'static' is the issue here, we want it more dynamic, the point of generating is that we can change the way we manage locations in future w/o having to jump through more hops. > Old clients would see "default" location and _location record for new clients > could be created during ipa host-add or something similar. We can't give out the privilege to create arbitrary DNAME records to lower level admins, so we would have to add special code. > > DNS Slave server problem > Without dynamic record generation it would be possible to do zone transfers > without any change to current code. Only one new zone (i.e. _locations part of > DNS sub-tree) has to be set on each slaves and we are done. 
This is true, and we can opt for this fallback initially, but I do not want to restrict manageability just to make the job easier for one of the cases. > > Server Implementation > > TODO: interaction with DNSSEC > That it *very* important part. I have fear from so many dynamic things inside. Yes this is indeed going to add complexity. No doubt. Simo. -- Simo Sorce * Red Hat, Inc * New York From pviktori at redhat.com Tue Jan 22 15:04:06 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 22 Jan 2013 16:04:06 +0100 Subject: [Freeipa-devel] [PATCHES] 127-136 LDAP code refactoring (Part 2) In-Reply-To: <50FD7CF8.2070401@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> Message-ID: <50FEAA66.4010704@redhat.com> On 01/21/2013 06:38 PM, Petr Viktorin wrote: > On 01/17/2013 06:27 PM, Petr Viktorin wrote: >> Hello, >> This is the first batch of changes aimed to consolidate our LDAP code. >> Each should be a self-contained change that doesn't break anything. >> >> These patches do some general cleanup (some of the changes might seem >> trivial but help a lot when grepping through the code); merge the common >> parts LDAPEntry, Entry and Entity classes; and move stuff that depends >> on an installed server out of IPASimpleLDAPObject and SchemaCache. >> >> I'm posting them early so you can see where I'm going, and so you can >> find out if your work will conflict with mine. > > Here is a second batch of patches. I'm keeping them small so they're > easier to digest (and rebase, should that be necessary). If you prefer > big patches, just squash them. > Obviously, they apply on top of the previous batch. > > Here we introduce a base class, LDAPConnection, from which IPAdmin and > ldap2 inherit. This makes IPAdmin support both interfaces, its own old > one and the new common one. > Both IPAdmin and ldap2 work with the common Entry class, LDAPEntry, > which also has both the old and new interface for the moment. > To use a hyperbole, the rest of the work is simply rewriting the > codebase to use the new interfaces. > And testing. > I moved the wrong method in patch 0133, which broke the memberof logic. Attaching a fixed version of patches 0132-0135 (so many because the change introduced minor merge conflicts). -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0132-02-Move-filter-making-methods-to-LDAPConnection.patch Type: text/x-patch Size: 13423 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0133-02-Move-entry-finding-methods-to-LDAPConnection.patch Type: text/x-patch Size: 28182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0134-02-Remove-unused-proxydn-functionality-from-IPAdmin.patch Type: text/x-patch Size: 5781 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0135-02-Move-entry-add-update-remove-rename-to-LDAPConnectio.patch Type: text/x-patch Size: 12320 bytes Desc: not available URL:

From atkac at redhat.com Tue Jan 22 15:18:12 2013 From: atkac at redhat.com (Adam Tkac) Date: Tue, 22 Jan 2013 16:18:12 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358816342.20683.227.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> Message-ID: <20130122151810.GA21883@redhat.com>

On Mon, Jan 21, 2013 at 07:59:02PM -0500, Simo Sorce wrote:
> Hello FreeIPA developers and other followers,
>
> we have thought for quite a while about how to best implement location-based discovery for our clients, so that we can easily redirect groups of clients to specific servers in order to better use resources and keep traffic local to clients.
>
> I recently came up with a new take on how to resolve the problem, and I've written it up here after some minimal discussion with Petr Spacek and others.
>
> It is available here:
> http://www.freeipa.org/page/V3/DNS_Location_Mechanism
>
> I've also inlined a copy below for easier commenting. Please trim out unnecessary text if you choose to reply and comment, and keep only the relevant sections of text if you comment inline.

Before we start talking about using DNS for this purpose, have you considered using IP anycast? You can simply create multiple servers with the same IP address in different places around the world. You then announce this IP address from multiple places simultaneously via BGP, and BGP automatically routes each client to the closest node. The advantage is that this is already implemented and in use, and nothing has to be modified.

Regards, Adam
This standard > defines a DNS RR with the mnemonic SRV and usage rules around it. It > allows a client to discover servers that offer a given service over a > given protocol. > > A drawback of SRV Records is that it assumes clients know the correct > DNS domain to use as the query target. Without any further information, > the client's options includes using their own DNS domain and the name of > the security domain in which the client operates (such as the domain > name associated to the Kerberos REALM the client refers to). Neither > option is likely to yield optimal results however. One key service > discovery requirement, especially in large distributed enterprises, is > to choose a server that is close to a client. However in many situation > binding the client name to a specific zone that is tied to a physical > location is not possible. And even if it were it would not address the > needs of a roaming client that moves across the networks. We cannot have > the name of a client change when it move across networks as this break > the Kerberos protocol that associates keys to a fixed host name for > Service Principals. > > The incompleteness of RFC 2782 is acknowledged by systems such as Active > Directory that contain extra functionality to augment SRV lookups to > make them site aware. The basic idea is to use a target in SRV service > discovery that is specific to a location or "site" in AD parlance. > Unfortunately AD clients rely on additional configureation or side > protocols to determine the client "site" and it is quite specific to > Microsoft technologies so the method they use is not easily portable and > reusable in other contexts nor documented in Standards or informative > IETF RFCs. > > This document specifies a protocol for location discovery based > exclusively on DNS. While the protocol itself can be fully implemented > with a normal DNS the FreeIPA mechanism is augmented by additional > 'location' awarness and will depend on additional computation done by > the FreeIPA DNS plugins. > > > The Discovery Protocol > The main point about this protocol is the recognition that all we really > need is to find a specific (set of) server(s) that a specific client > needs to find. The second point is that the client has 1 bit of > information that is inequivocal: its own fully qualified name. Whether > this is fixed or assigned by the network (via DHCP) is not important, > what matters is that the name univocally identifies this specific host > in the network. > > Based on these two points the idea is to make the client query for a > special set of SRV records keyed on the client's own DNS name. From the > client perspective this is the simplest protocol possible, it requires > no knowledge or hard decisions about what DNS domain name to query or > how to discover it. At the same time is allows the Domain Administrators > a lot of flexibility on how to configure these records per-client. > > The failure mode for this protocol is to simply keep using the previous > heuristics, we will not define these heuristics as they are not > standardized and are implementation and deployment specific to some > extent. Suffice to say that this new protocol should not impact in any > way on previous heuristics and DNS setups and can be safely implemented > in clients with no ill effects save for an additional initial query. 
> Local negative chaching may help in avoiding excessive queries if the > administratoir chooses not to configure the servers to support per > client SRV Records and otherwise adds little overhead. > > > Client Implementation > Because currently used SRV records are multiple and to allow the case > where a host may actually be using a domain name that is also already > used as a zone name (ie the name X.example.com identifies both an actual > host and is a subdomain where clients Y.X.example.com normally searches > for SRV records) we group all per-client location SRV records under the > _location. sub name. > > So for example, a client named X.example.com would search for its own > per-client records for the LDAP service over the TCP protocol by using > the name: _ldap._tcp._location.X.example.com > > With current practices a client normally looks for > _ldap._tcp.example.com instead. > > It is a simple as that, the only difference between a client supporting > this new mechanism and a generic client is only about what name is used > as the 'base domain name'. Everything else is identical. Many clients > can probably be already configured to use this new base domain. And > clients that may not support it (either because the base domain is > always derived in some way and not directly configurable or because > clients refuse to use _location as a valid bade DNS name component due > to the leading '_' character) can be easily changed. Those that can't be > changed will simply fall back to use the classic SRV records on the base > domain and will simply not be location aware. > > The additional advantage of using this scheme is that clients can now > use per-client SRV searches by default if they so choose because there > is no risk of ending up using unrelated servers due to unfortunate host > naming. If the administrator took the pain to configure per-client SRV > records there is an overwhelming chance those are indeed the records the > client is supposed to use. By using this as default it is possible to > make client configuration free by default which is a real boon on > networks with many hosts. > > Changing defaults requires careful consideration of security > implications, please read the #Security Considerations section for more > information. > > > Server side implementation > Basic solution > The simplest way to implement this scheme on the server side is to just > create a set of records for each client. However this is a very > heavyweight and error prone process as it requires the creation of many > records for each client. > > > A more rational solution > A simple but more manageable solution may be to use DNAME records as > defined by RFC 6672. The administrator in this case can set up a single > set of SRV records per location and then use a DNAME record to glue each > client to this subtree. > > This solution is much more lightweight and less error prone as each > client would need one single additional record that points to a well > maintained subtree. > > So a client X.example.com could have a DNAME record like this: > _location.X.example.com. DNAME Y._locations.example.com. > > When the client X tries to search for its own per-client records for the > LDAP service over the TCP protocol by using the name > _ldap._tcp._location.X.example.com it would be automatically redirected > to the record _ldap._tcp.Y._locations.example.com > > > Advanced FreeIPA solution > Although the above implementation works fine for most cases it has 2 > major drawbacks. 
The first one is poor support for roaming clients as > they would be permanently referring to a specific location even when > they travel across potentially very geografically dispersed locations. > The other big drawback is that admins will have to create the DNAME > records for each client which is a lot of work. In FreeIPA we can have > more smarts given we can influence the bind-dyndb-ldap plugin behavior. > > So one first very simple yet very effective simplification would be to > change the bind-dyndb-ldap plugin to create a phantom per-client > location DNAME record that points to a 'default' location. > > This means DNAME records wouldn't be directly stored in LDAP but would > be synthesized by the driver if not present using a default > configuration. However to make this more useful the plugin shouldn't > just use one single default, but should have a default 'per server'. > > > Roaming/Remote clients > Roaming clients or Remote clients have one big problem, although they > may have a default preferred location they move across networks and the > definition of 'location' and 'closest' server changes as they move. Yet > their name is still fixed. With a classic Bind setup this problem can > somewhat be handled by using views and changing the DNAME returned or > directly the SRV records depending on the client IP address. However > using source IP address is not always a good indicator. Clients may be > behind a NAT or maybe IP addressing is shared between multiple logical > locations within a physical network. or the client may be getting the IP > address over a VPN tunnel and so on. In general relying on IP address > information may or may not work. (There is also the minor issue that we > do not yet support views in the bind-dyndb-ldap plugin.) > > > Addressing the multiple locations problem > The reason to define multiple locations is that we want to redirect > clients to different servers depending on the location they belong to. > This only really makes sense if each location has its own (set of) > FreeIPA server(s). > > Also usually a location corresponds to a different network so it can be > assumed the if at least one of the FreeIPA servers in each location is a > DNS enabled server and the local network configuration (DHCP) server > serves this DNS server as the primary server for the client then we can > make the reasonable assumption that a client in a specific location will > generally query a FreeIPA server in that same location for > location-specific information. > > If this holds true then changing the 'default' location base on the > server's own location would effectively make clients stick to the local > servers (Assuming the location's SRV records are properly configured to > contain only local server, which we can insure through appropriate > checks in the framework) > > This is another simple optimization and works for a lot of cases but not > necessarily all. However this optimization leads to another problem. > What if the client needs to belong to a specific location indipendetly > from what server they ask to, or what if we really only have a few > FreeIPA DNS servers but want to use more locations ? > > One way of course is to create a fixed DNAME record for these clients, > so the defaults do not kick in. However this is rather final. Maybe the > clients needs a preference but that preference can be overridden in some > circumstances. 
>
>
> Choosing the right location
> So the right location for a client may be a combination of a preference
> and a set of requirements. One example of a requirement that can trump
> any preference is a bandwidth-constrained location.
>
> Assume we have a client that normally resides in a large location. This
> location has been segmented into smaller sublocations to better
> distribute load, so the client has a preferred location. If we use a
> fixed DNAME to represent this preference, then when this client roams to
> a bandwidth-constrained network it will try to use the slow link to call
> 'home' to its usual location. This may be a serious problem.
>
> However, if we generate the default location dynamically, we can easily
> have rules on the bandwidth-constrained location's DNS servers so that,
> no matter what the preference is, any client asking for location-based
> SRV records will always be redirected to the local location, which
> includes only local servers in its SRV records.
>
> This is quite powerful and would neatly solve many issues connected with
> roaming clients.
>
>
> DNS Slave server problem
> Dynamically choosing locations may cause issues with DNS slave servers,
> as they wouldn't be able to implement this dynamic mechanism.
>
> One way to handle this problem is to operate in a 'degraded' mode where
> DNAME records are effectively created and the location is not dynamic
> per client anymore. We can still have 'different' defaults per server if
> we decide to filter DNAME records from replication. However filtering
> DNAME records is also a problem, because we would not be able to filter
> only location-based ones; it would be an all-or-nothing filter, which
> would render DNAME records unusable for any other purpose. This
> restriction is a bit extreme.
>
> Another way might be to always compute all zone DNAME records, based on
> the available host records, on the fly at DNS server startup, and then
> keep them cached (and updated) by the bind-dyndb-ldap plugin, which
> would include these records in AXFR transfers but would not write them
> back to the LDAP server, keeping them local. This solution might be the
> golden egg, as it might allow all the advantages of dynamic generation
> as well as response performance, and solve the slave server issue and
> perhaps even DNSSEC related issues. It has a major drawback: it would
> make the code a lot more complicated and critical.
>
>
> Overall implementation proposal
> Given that the basic solution is relatively simple and requires minimal,
> if any, client changes, we should consider implementing at least part of
> this proposal as soon as possible. Implementing DNAME record support in
> bind-dyndb-ldap seems a prerequisite, and adding client support in the
> SSSD IPA provider would allow us to test at least the basic setup. This
> basic support should be implemented sooner rather than later, so that
> full dynamic support can later be easily added to bind-dyndb-ldap, along
> with the necessary additional schema and UI in the FreeIPA framework to
> mark and group clients and locations.
>
>
> Security Considerations
> Client Implementation
> As always, DNS replies can be spoofed relatively easily. We recommend
> that SRV record resolution is used only for those clients that normally
> use an additional security protocol to talk to network resources and can
> use additional mechanisms to authenticate these resources.
For example a > client that uses an LDAP server for security related information like > user identity information should only trust SRV record discovery for the > LDAP service if LDAPS or STARTTLS over LDAP are mandatory and > certificate verification is fully turned on, or if SASL/GSSAPI is used > with mutual authentication, integrity and confidentiality options > required. Use of DNSSEC and full DNS signature verification may be > considered an additional requirement in some cases. > > > Server Implementation > TODO: interaction with DNSSEC > > > References > SRV Records: RFC 2782 > DNAME Records: RFC 6672 > > > -- > Simo Sorce * Red Hat, Inc * New York > -- Adam Tkac, Red Hat, Inc. From pviktori at redhat.com Tue Jan 22 15:19:37 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 22 Jan 2013 16:19:37 +0100 Subject: [Freeipa-devel] [PATCHES] 127-136 LDAP code refactoring (Part 2) In-Reply-To: <50FEA964.5090400@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEA964.5090400@redhat.com> Message-ID: <50FEAE09.9040500@redhat.com> On 01/22/2013 03:59 PM, Jan Cholasta wrote: > Hi, > > On 21.1.2013 18:38, Petr Viktorin wrote: >> Here is a second batch of patches. I'm keeping them small so they're >> easier to digest (and rebase, should that be necessary). If you prefer >> big patches, just squash them. >> Obviously, they apply on top of the previous batch. >> >> Here we introduce a base class, LDAPConnection, from which IPAdmin and >> ldap2 inherit. This makes IPAdmin support both interfaces, its own old >> one and the new common one. >> Both IPAdmin and ldap2 work with the common Entry class, LDAPEntry, >> which also has both the old and new interface for the moment. >> To use a hyperbole, the rest of the work is simply rewriting the >> codebase to use the new interfaces. >> And testing. >> > > So far I have just one small nitpick: could you please rename > LDAPConnection to LDAPClient? It does more than just connecting, so it > should have more suitable name IMHO. Also, the "conn" attribute of > LDAPConnection might be confusing to someone (connection in connection?). > > Honza > Yes. I'll do it in a later patch so I don't have to rebase my work in progress. -- Petr? From simo at redhat.com Tue Jan 22 15:25:21 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 10:25:21 -0500 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <20130122151810.GA21883@redhat.com> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> Message-ID: <1358868321.20683.266.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > Before we start talking about using DNS for this purpose, have you > considered > to use IP anycast for this? You can simply create multiple servers > with same IP > address on different places over the world. After that you announce > this IP > address from multiple places simultaneounsly via BGP and BGP > automatically > routes all clients to the closest node. Advantage is that this is > already > implemented, used and nothing have to be modified. > > Regards, Adam > We cannot assume our customers can influence or have access to change BGP routing, so I excluded multicast solutions from the get go. Also it requires more changes on the clients which is another heavy minus. Simo. 
-- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Tue Jan 22 15:57:49 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 10:57:49 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50FEA74A.5060908@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> <1358439519.20683.14.camel@willson.li.ssimo.org> <50FEA74A.5060908@redhat.com> Message-ID: <1358870269.20683.292.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 15:50 +0100, Tomas Babej wrote: > Here I bring the updated version of the patch. Please note, that I > *added* a flag attribute to ipadb_ldap_attr_to_krb5_timestamp > function, that controls whether the timestamp will be checked for > overflow or not. The reasoning behind this is that some attributes > will not be set to future dates, due to their inherent nature - such > as krbLastSuccessfulAuth or krbLastAdminUnlock. > > These are all related to past dates, and it would make no sense to set > them to future dates, even manually. Therefore I'd rather represent > negative values in these attributes as past dates. They would have to > be set manually anyway, because they would represent timestamps before > the beginning of the unix epoch, however, I find this approach better > than pushing them up to year 2038 in case such things happens. > > Any objections to this approach? > I am not sure I understand what is the point of giving this option to callers. A) How does an API user know when to use one or the other option. B) What good does it make to have the same date return different results based on a flag ? What will happen later on when MIT will 'fix' the 2038 limit by changing the meaning of negative timestamps ? Keep in mind that right now negative timestamps are not really valid in the MIT code. Unless there is a 'use' for getting negative timestamps I think it is only harmful to allow it and consumers would only be confused on whether it should be used or not. So my first impression is that you are a bit overthinking here and we should instead always force the same behavior for all callers and always check and enforce endoftime dates. Simo. -- Simo Sorce * Red Hat, Inc * New York From atkac at redhat.com Tue Jan 22 16:02:06 2013 From: atkac at redhat.com (Adam Tkac) Date: Tue, 22 Jan 2013 17:02:06 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358868321.20683.266.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> <1358868321.20683.266.camel@willson.li.ssimo.org> Message-ID: <20130122160205.GA22089@redhat.com> On Tue, Jan 22, 2013 at 10:25:21AM -0500, Simo Sorce wrote: > On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > > Before we start talking about using DNS for this purpose, have you > > considered > > to use IP anycast for this? You can simply create multiple servers > > with same IP > > address on different places over the world. 
After that you announce > > this IP > > address from multiple places simultaneounsly via BGP and BGP > > automatically > > routes all clients to the closest node. Advantage is that this is > > already > > implemented, used and nothing have to be modified. > > > > Regards, Adam > > > We cannot assume our customers can influence or have access to change > BGP routing, so I excluded multicast solutions from the get go. > Also it requires more changes on the clients which is another heavy > minus. If I understand correctly, target customers of IPA are companies and they use IPA to maintain resources in their internal networks, aren't they? In this case I see two basic solutions how to solve the "location" issue. 1. BGP routing between multiple internal networks If customer wants to interconnect multiple networks (for example networks in different offices) so resources in network 1 will be accessible from network 2, he must use some kind of routing. All traffic from network 1 must go through border router and is accepted by border router in network 2: network1 <-> router1 <-> router2 <-> network2 This can be extended to multiple offices and all border routers will talk to each other. In this scenario customer can specify set of rules on each router and route traffic to services to specific locations. Please note that there is no need to announce anything to the Internet via BGP. 2. No routing between internal networks In this case networks aren't interconnected so no routing is involved. In this case "location" discovery doesn't make sense because machine in network 1 cannot access resources in network 2. So it will also use the closest service. To summarize my idea, as long as services have _same_ IP addresses in all cooperating IPA installations, which definitely make sense, you don't need to use DNS for location because routing protocol will automatically pick the closest location. I don't see any reason for modifications on clients. Everything what will be modified is routing rules on border routers. Please note that anycast != multicast. Regards, Adam -- Adam Tkac, Red Hat, Inc. From rcritten at redhat.com Tue Jan 22 16:02:59 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 22 Jan 2013 11:02:59 -0500 Subject: [Freeipa-devel] [PATCH] 254 Fix BuildRequires: rhino replaced with java-1.7.0-openjdk In-Reply-To: <50FE715F.6080007@redhat.com> References: <50FE715F.6080007@redhat.com> Message-ID: <50FEB833.5080000@redhat.com> Petr Vobornik wrote: > Rhino is needed for Web UI build. Rhino needs java, but from package > perspective java-1.7.0-openjdk requires rhino. So the correct > BuildRequires is java-1.7.0-openjdk. ACK From pvoborni at redhat.com Tue Jan 22 16:07:14 2013 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 22 Jan 2013 17:07:14 +0100 Subject: [Freeipa-devel] [PATCH] 254 Fix BuildRequires: rhino replaced with java-1.7.0-openjdk In-Reply-To: <50FEB833.5080000@redhat.com> References: <50FE715F.6080007@redhat.com> <50FEB833.5080000@redhat.com> Message-ID: <50FEB932.4000208@redhat.com> On 01/22/2013 05:02 PM, Rob Crittenden wrote: > Petr Vobornik wrote: >> Rhino is needed for Web UI build. Rhino needs java, but from package >> perspective java-1.7.0-openjdk requires rhino. So the correct >> BuildRequires is java-1.7.0-openjdk. > > ACK > Pushed to master. 
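For reference, the change under discussion boils down to something like this in the spec file (illustrative diff only):

    -BuildRequires:  rhino
    +BuildRequires:  java-1.7.0-openjdk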
-- Petr Vobornik From simo at redhat.com Tue Jan 22 16:19:30 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 11:19:30 -0500 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <20130122160205.GA22089@redhat.com> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> <1358868321.20683.266.camel@willson.li.ssimo.org> <20130122160205.GA22089@redhat.com> Message-ID: <1358871570.20683.298.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 17:02 +0100, Adam Tkac wrote: > On Tue, Jan 22, 2013 at 10:25:21AM -0500, Simo Sorce wrote: > > On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > > > Before we start talking about using DNS for this purpose, have you > > > considered > > > to use IP anycast for this? You can simply create multiple servers > > > with same IP > > > address on different places over the world. After that you announce > > > this IP > > > address from multiple places simultaneounsly via BGP and BGP > > > automatically > > > routes all clients to the closest node. Advantage is that this is > > > already > > > implemented, used and nothing have to be modified. > > > > > > Regards, Adam > > > > > We cannot assume our customers can influence or have access to change > > BGP routing, so I excluded multicast solutions from the get go. > > Also it requires more changes on the clients which is another heavy > > minus. > > If I understand correctly, target customers of IPA are companies and they use > IPA to maintain resources in their internal networks, aren't they? > > In this case I see two basic solutions how to solve the "location" issue. > > 1. BGP routing between multiple internal networks Sorry Adam, I do not want to be dismissive, and I know that in an ideal world this would be an awesome solution. Just trust me that for most cases asking someone to change their network architecture is simply impossible. We have users telling us their network admins don't even want change firewall configurations in some cases, so you can well see how they would respond to someone asking them to change their routing or enabling and using multicast. Sorry but it simply is not a solution we can consider. Simo. -- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Tue Jan 22 17:30:17 2013 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 22 Jan 2013 18:30:17 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358866866.20683.264.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <50FEA0D7.60507@redhat.com> <1358866866.20683.264.camel@willson.li.ssimo.org> Message-ID: <50FECCA9.50705@redhat.com> On 22.1.2013 16:01, Simo Sorce wrote: Replying to myself for the beginning: > On Tue, 2013-01-22 at 15:23 +0100, Petr Spacek wrote: >>> Server Implementation >>> TODO: interaction with DNSSEC >> That it *very* important part. I have fear from so many dynamic things inside. There is less dynamic things than I thought :-) The only dynamic thing is _location.client.domain DNAME record. Proposal of "filters" was omitted in this version. My biggest concern is related to dynamical parts, I like the idea itself. > Yes this is indeed going to add complexity. No doubt. 
>> Creating per-server _locations sub-tree is very easy with current code: Simply >> copy&paste new bind-dyndb-ldap section to /etc/named.conf and point base DN to >> some server-specific part of LDAP tree: >> >> dynamic-db "ipa-local" { >> // >> arg "base cn=srv2.example.com, cn=dns-local, dc=example,dc=com"; >> } > > Unless you have a way to mange it via LDAP this is unworkable. Locations > should be managed via the Web UI. So you need to be able to create new > locations on the fly and change server's locations dynamically, possibly > w/o requiring a server restart, but certainly w/o requiring the DNS > admin to have direct SSH access to all boxes to go and manually change > named.conf Sure, admin will never touch lines above. All data *are* directly in LDAP, so any tool can read & change _locations configuration on the fly. >> Server specific _locations records live in this sub-tree and each server has >> have own view of _locations, i.e. each server could specify mapping between >> locations in own way. DNS clients will see merged DNS tree, no change on >> client side is required. > > But this would require to manually change multiple records for multiple > servers in the same location, which could go wrong quite easily. I agree. This is a problem. It would require a tool to handle all location stuff. It definitely needs some clever way for management. > Each location configuration should be in a single place so that it is > consistent for all servers of that location and not a burden for > administration. I agree, it could be seen as a problem. With LDAP referrals and right tool it could be reasonable. > Also your methods puts location information out of the actual DNS, so > you can't lookup location data via DNS except for the 'default'. That is not correct. Any client could ask any server for something._locations.domain and the reply will contain server's mapping for particular location. > But that would not be correct, we want to allow a client to lookup > location data for a non-default location, because an IPA DNS server may > very well be serving multiple locations. Sure, that doesn't change. >> E.g. client has preferred location "brno" but the client is connected to >> network in "nyc", i.e. DNS queries are sent to servers in NYC. NYC server has >> own "_locations" sub-tree with trivial mapping "brno DNAME nyc". >> >> How to read the result: Location "Brno" is too far from "NYC", use "NYC" >> anyway! Also, "default" location could prefer local server over remote ones, >> i.e. local clients without any configuration will prefer local servers. > > I am not sure how this is different from my proposal, the problem I see > is that you loose the ability to force a configuration for select client > by actually creating real DNAME records. DNAME record stored in the database is only "preferred" location, it could be overridden on server side (by different content of _locations.domain sub-tree). >> There is another nice feature: "old" _ntp._udp.domain SRV records could >> contain aliases pointing to SRV records in some location, e.g. "default". In >> that case also old clients will prefer local servers over remote ones - almost >> with no price and with no client reconfiguration. >> >> No new concepts, no new code :-) > > We can do that with a DNAME in theory, but I would rather keep current > domain records as is for now. > >> There is still _location DNAME record under client's name, that stay >> unchanged. Personally, I don't like any on-the-fly record generation. 
Is it >> really necessary? > > Who creates this record for new clients ? It was mentioned below - 'ipa host-add' + fall-back to 'domain' for new clients. > How to you handle 3 locations on a single DNS server ? > > Say I have a headquarters DNS setup where I want to send clients to the > engineering, sales or accounting locations depending on the client but I > have a shared local network configuration so all clients use the same > DNS server. Each client machine has record like "_location DNAME eng._locations.domain.". >> In case described above I don't think so. Roaming between locations don't >> require changing any record, so configuration is static. > > Yep 'static' is the issue here, we want it more dynamic, the point of > generating is that we can change the way we manage locations in future > w/o having to jump through more hops. I'm not sure if I understood what "hop" mean. In reality all the CNAME/DNAME alias de-referencing is done in single shot if all data are available locally (which is our case). >> Old clients would see "default" location and _location record for new clients >> could be created during ipa host-add or something similar. > > We can't give out the privilege to create arbitrary DNAME records to > lower level admins, so we would have to add special code. IMHO allowing only defined locations (i.e. checking object existence) should be fairly simple. >>> DNS Slave server problem >> Without dynamic record generation it would be possible to do zone transfers >> without any change to current code. Only one new zone (i.e. _locations part of >> DNS sub-tree) has to be set on each slaves and we are done. > > This is true, and we can opt for this fallback initially, but I do not > want to restrict manageability just to make the job easier for one of > the cases. This problem is closely related to record generation. We need to know at which point record has to be generated etc. What we do when somebody asks for: _location._location._location... _location.blah._location.blah._location... _location._ntp._udp.example.com. _location._ntp._udp._location._ntp._udp.example.com. and other variants. How it will play with wildcards? E.g. what we do when somebody asked for _location.client.sub.example.com. but wildcard *.client.sub.example.com. exists? What about wildcard *.sub.example.com.? Etc. What if labels *preceding* '_location' do not exist? I.e. somebody asked for _location.nonexistent-blah.existing-domain. (Note: Zone can legally contain names like 'blah.blah' without 'blah' alone.) How would it be possible to direct legacy clients (looking for SRV records directly in 'domain') to local/optimized location? Trivial solution came over my mind: _ntp._udp.domain. CNAME _ntp._udp._location.dummy_client.domain. ... but it results in internal reference to generated record ... that is scary! (Note: CNAME/DNAMEs within zone are resolved internally in name server, there is no request-response ping-pong between client and server.) Again, I like the idea itself, but I see a lot of problems. (Still not digging into DNSSEC.) 
-- Petr^2 Spacek From atkac at redhat.com Tue Jan 22 16:46:01 2013 From: atkac at redhat.com (Adam Tkac) Date: Tue, 22 Jan 2013 17:46:01 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358871570.20683.298.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> <1358868321.20683.266.camel@willson.li.ssimo.org> <20130122160205.GA22089@redhat.com> <1358871570.20683.298.camel@willson.li.ssimo.org> Message-ID: <20130122164600.GA22547@redhat.com> On Tue, Jan 22, 2013 at 11:19:30AM -0500, Simo Sorce wrote: > On Tue, 2013-01-22 at 17:02 +0100, Adam Tkac wrote: > > On Tue, Jan 22, 2013 at 10:25:21AM -0500, Simo Sorce wrote: > > > On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > > > > Before we start talking about using DNS for this purpose, have you > > > > considered > > > > to use IP anycast for this? You can simply create multiple servers > > > > with same IP > > > > address on different places over the world. After that you announce > > > > this IP > > > > address from multiple places simultaneounsly via BGP and BGP > > > > automatically > > > > routes all clients to the closest node. Advantage is that this is > > > > already > > > > implemented, used and nothing have to be modified. > > > > > > > > Regards, Adam > > > > > > > We cannot assume our customers can influence or have access to change > > > BGP routing, so I excluded multicast solutions from the get go. > > > Also it requires more changes on the clients which is another heavy > > > minus. > > > > If I understand correctly, target customers of IPA are companies and they use > > IPA to maintain resources in their internal networks, aren't they? > > > > In this case I see two basic solutions how to solve the "location" issue. > > > > 1. BGP routing between multiple internal networks > > Sorry Adam, I do not want to be dismissive, and I know that in an ideal > world this would be an awesome solution. > > Just trust me that for most cases asking someone to change their network > architecture is simply impossible. This is definitely right. However please read my previous post - I don't propose to change network architecture. Do you how to interconnect multiple networks without routers? I don't. So routers are already present in customer's networks. It can be even static routing, not BGP, and admin can simply set rule on router which physical server clients should use. > We have users telling us their network admins don't even want change > firewall configurations in some cases, so you can well see how they > would respond to someone asking them to change their routing or enabling > and using multicast. I think it's same amount of work to add record to DNS or to add record to the static or dynamic routing tables. > Sorry but it simply is not a solution we can consider. Why? Which setup cannot be achieved with routing configuration and can be achieved with location information in DNS? Regards, Adam -- Adam Tkac, Red Hat, Inc. 
From dpal at redhat.com Tue Jan 22 18:39:47 2013 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 22 Jan 2013 13:39:47 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <1358870269.20683.292.camel@willson.li.ssimo.org> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> <1358439519.20683.14.camel@willson.li.ssimo.org> <50FEA74A.5060908@redhat.com> <1358870269.20683.292.camel@willson.li.ssimo.org> Message-ID: <50FEDCF3.2050409@redhat.com> On 01/22/2013 10:57 AM, Simo Sorce wrote: > On Tue, 2013-01-22 at 15:50 +0100, Tomas Babej wrote: >> Here I bring the updated version of the patch. Please note, that I >> *added* a flag attribute to ipadb_ldap_attr_to_krb5_timestamp >> function, that controls whether the timestamp will be checked for >> overflow or not. The reasoning behind this is that some attributes >> will not be set to future dates, due to their inherent nature - such >> as krbLastSuccessfulAuth or krbLastAdminUnlock. >> >> These are all related to past dates, and it would make no sense to set >> them to future dates, even manually. Therefore I'd rather represent >> negative values in these attributes as past dates. They would have to >> be set manually anyway, because they would represent timestamps before >> the beginning of the unix epoch, however, I find this approach better >> than pushing them up to year 2038 in case such things happens. >> >> Any objections to this approach? >> > I am not sure I understand what is the point of giving this option to > callers. A) How does an API user know when to use one or the other > option. B) What good does it make to have the same date return different > results based on a flag ? > > What will happen later on when MIT will 'fix' the 2038 limit by changing > the meaning of negative timestamps ? Keep in mind that right now > negative timestamps are not really valid in the MIT code. > > Unless there is a 'use' for getting negative timestamps I think it is > only harmful to allow it and consumers would only be confused on whether > it should be used or not. > > So my first impression is that you are a bit overthinking here and we > should instead always force the same behavior for all callers and always > check and enforce endoftime dates. > > Simo. > +1 -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From simo at redhat.com Wed Jan 23 00:33:53 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 19:33:53 -0500 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <20130122164600.GA22547@redhat.com> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> <1358868321.20683.266.camel@willson.li.ssimo.org> <20130122160205.GA22089@redhat.com> <1358871570.20683.298.camel@willson.li.ssimo.org> <20130122164600.GA22547@redhat.com> Message-ID: <1358901233.20683.313.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 17:46 +0100, Adam Tkac wrote: > On Tue, Jan 22, 2013 at 11:19:30AM -0500, Simo Sorce wrote: > > On Tue, 2013-01-22 at 17:02 +0100, Adam Tkac wrote: > > > On Tue, Jan 22, 2013 at 10:25:21AM -0500, Simo Sorce wrote: > > > > On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > > > > > Before we start talking about using DNS for this purpose, have you > > > > > considered > > > > > to use IP anycast for this? You can simply create multiple servers > > > > > with same IP > > > > > address on different places over the world. After that you announce > > > > > this IP > > > > > address from multiple places simultaneounsly via BGP and BGP > > > > > automatically > > > > > routes all clients to the closest node. Advantage is that this is > > > > > already > > > > > implemented, used and nothing have to be modified. > > > > > > > > > > Regards, Adam > > > > > > > > > We cannot assume our customers can influence or have access to change > > > > BGP routing, so I excluded multicast solutions from the get go. > > > > Also it requires more changes on the clients which is another heavy > > > > minus. > > > > > > If I understand correctly, target customers of IPA are companies and they use > > > IPA to maintain resources in their internal networks, aren't they? > > > > > > In this case I see two basic solutions how to solve the "location" issue. > > > > > > 1. BGP routing between multiple internal networks > > > > Sorry Adam, I do not want to be dismissive, and I know that in an ideal > > world this would be an awesome solution. > > > > Just trust me that for most cases asking someone to change their network > > architecture is simply impossible. > > This is definitely right. > > However please read my previous post - I don't propose to change network > architecture. Do you how to interconnect multiple networks without routers? > I don't. So routers are already present in customer's networks. It can be even > static routing, not BGP, and admin can simply set rule on router which physical > server clients should use. > > > We have users telling us their network admins don't even want change > > firewall configurations in some cases, so you can well see how they > > would respond to someone asking them to change their routing or enabling > > and using multicast. > > I think it's same amount of work to add record to DNS or to add record to the > static or dynamic routing tables. Adding a record to a DNS server is quite different from changing routing and starting routing multicast packets. > > Sorry but it simply is not a solution we can consider. > > Why? Which setup cannot be achieved with routing configuration and can be achieved > with location information in DNS? Queries from clients behind a VPN that doesn't do multicast ? In general multicast cannot be assumed to be available/configured. And it requires support in clients as well as services. 
Also 'location' doesn't mean necessarily 'local'. My client in NYC may be configured to be bound to servers in Boston for whatever administrative reason. Boston is in no way local to me but is my 'location'. How do you deliver that information in a schema like the one you had in mind ? Simo. -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Wed Jan 23 01:13:35 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 22 Jan 2013 20:13:35 -0500 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <50FECCA9.50705@redhat.com> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <50FEA0D7.60507@redhat.com> <1358866866.20683.264.camel@willson.li.ssimo.org> <50FECCA9.50705@redhat.com> Message-ID: <1358903615.20683.352.camel@willson.li.ssimo.org> On Tue, 2013-01-22 at 18:30 +0100, Petr Spacek wrote: > On 22.1.2013 16:01, Simo Sorce wrote: > > Replying to myself for the beginning: > > > On Tue, 2013-01-22 at 15:23 +0100, Petr Spacek wrote: > >>> Server Implementation > >>> TODO: interaction with DNSSEC > >> That it *very* important part. I have fear from so many dynamic things inside. > There is less dynamic things than I thought :-) The only dynamic thing is > _location.client.domain DNAME record. Proposal of "filters" was omitted in > this version. > > My biggest concern is related to dynamical parts, I like the idea itself. > > > Yes this is indeed going to add complexity. No doubt. > > >> Creating per-server _locations sub-tree is very easy with current code: Simply > >> copy&paste new bind-dyndb-ldap section to /etc/named.conf and point base DN to > >> some server-specific part of LDAP tree: > >> > >> dynamic-db "ipa-local" { > >> // > >> arg "base cn=srv2.example.com, cn=dns-local, dc=example,dc=com"; > >> } > > > > Unless you have a way to mange it via LDAP this is unworkable. Locations > > should be managed via the Web UI. So you need to be able to create new > > locations on the fly and change server's locations dynamically, possibly > > w/o requiring a server restart, but certainly w/o requiring the DNS > > admin to have direct SSH access to all boxes to go and manually change > > named.conf > Sure, admin will never touch lines above. All data *are* directly in LDAP, so > any tool can read & change _locations configuration on the fly. Ok I can see that now. > >> Server specific _locations records live in this sub-tree and each server has > >> have own view of _locations, i.e. each server could specify mapping between > >> locations in own way. DNS clients will see merged DNS tree, no change on > >> client side is required. > > > > But this would require to manually change multiple records for multiple > > servers in the same location, which could go wrong quite easily. > I agree. This is a problem. It would require a tool to handle all location > stuff. It definitely needs some clever way for management. Yes we are shifting complexity from one place to another. > > Each location configuration should be in a single place so that it is > > consistent for all servers of that location and not a burden for > > administration. > I agree, it could be seen as a problem. With LDAP referrals and right tool it > could be reasonable. I am not sure I like the idea of using LDAP referrals for this, I need to think ab out that, I can see how it can be used to reduce some duplication. > > Also your methods puts location information out of the actual DNS, so > > you can't lookup location data via DNS except for the 'default'. > That is not correct. 
Any client could ask any server for > something._locations.domain and the reply will contain server's mapping for > particular location. See this is the problem. Because your DNAME is fixed, in order to do overrides, you have to have a 'server's view of a location. Ie instead of directing the client to the right place, you 'fake' information about the place the client thinks it is fetching info for. This maintains still a great deal of 'duplication' except the data is not identical, each server have different data labeled with a name recurring on other servers in order to cheat clients. This can easily get out of control. Even if the data is not 'duplicated' in the db and it is just all smoke and mirrors and internal redirects it still a complex maze of redirects you have to store. It also prevents to have final absolute rules for some clients. > > But that would not be correct, we want to allow a client to lookup > > location data for a non-default location, because an IPA DNS server may > > very well be serving multiple locations. > Sure, that doesn't change. Ah but it does, in order to force clients to stick to a location in some case you are faking data for all location so all locations point to the same data. In my model _location.client.domain returns you arbitrary (dynamic) data but foo._location.domain is always consistent across all servers. In your model _location.client.domain returns 'stable' data but foo._location.domain may instead return fake data. For some reason I prefer the former rather than the latter, although I recognize it may just be a matter of taste, but is sounds right to me that the _locations.domain tree is the stable one rather than the _location.client.domain one. > >> E.g. client has preferred location "brno" but the client is connected to > >> network in "nyc", i.e. DNS queries are sent to servers in NYC. NYC server has > >> own "_locations" sub-tree with trivial mapping "brno DNAME nyc". > >> > >> How to read the result: Location "Brno" is too far from "NYC", use "NYC" > >> anyway! Also, "default" location could prefer local server over remote ones, > >> i.e. local clients without any configuration will prefer local servers. > > > > I am not sure how this is different from my proposal, the problem I see > > is that you loose the ability to force a configuration for select client > > by actually creating real DNAME records. > DNAME record stored in the database is only "preferred" location, it could be > overridden on server side (by different content of _locations.domain sub-tree). Yeah but see above about that. > >> There is another nice feature: "old" _ntp._udp.domain SRV records could > >> contain aliases pointing to SRV records in some location, e.g. "default". In > >> that case also old clients will prefer local servers over remote ones - almost > >> with no price and with no client reconfiguration. > >> > >> No new concepts, no new code :-) > > > > We can do that with a DNAME in theory, but I would rather keep current > > domain records as is for now. > > > >> There is still _location DNAME record under client's name, that stay > >> unchanged. Personally, I don't like any on-the-fly record generation. Is it > >> really necessary? > > > > Who creates this record for new clients ? > It was mentioned below - 'ipa host-add' + fall-back to 'domain' for new clients. This means it doesn't work for clients that join now. They use DNS update to create records and have no rights to create DNAME. 
So admins regularly need to go and add DNAME records or we need to add some code in DS to automatically create them. So again we've just move around where the complexity is. It is untrue that we have no new code, the new code is just elsewhere. > > How to you handle 3 locations on a single DNS server ? > > > > Say I have a headquarters DNS setup where I want to send clients to the > > engineering, sales or accounting locations depending on the client but I > > have a shared local network configuration so all clients use the same > > DNS server. > Each client machine has record like "_location DNAME eng._locations.domain.". Ok another point here. If I need to rename a location in my model I just do it. If I rename NYC to newyork, all I need to do is change NYC._locations.domain to newyork._locations.domain and in IPA I just change the name of the group or association or whatever object we use to group clients (Cos Attribute for example propagates automatically). In your scheme we would have to also change all the records for each client in that location, making a rename quite burdensome. > >> In case described above I don't think so. Roaming between locations don't > >> require changing any record, so configuration is static. > > > > Yep 'static' is the issue here, we want it more dynamic, the point of > > generating is that we can change the way we manage locations in future > > w/o having to jump through more hops. > I'm not sure if I understood what "hop" mean. In reality all the CNAME/DNAME > alias de-referencing is done in single shot if all data are available locally > (which is our case). I was just saying that if data is staic you need to change actual data if you want to change behavior, if it is synthetic you just need to change the code that synthesize it. Changing actual stored data is costly because involves generating replication traffic. > >> Old clients would see "default" location and _location record for new clients > >> could be created during ipa host-add or something similar. > > > > We can't give out the privilege to create arbitrary DNAME records to > > lower level admins, so we would have to add special code. > IMHO allowing only defined locations (i.e. checking object existence) should > be fairly simple. you are assuming that allowing someone to create redirects to arbitrary locations is harmless. But is it ? What if a junior admin, maliciously or not uses this privilege to redirect clients trafic to a location it shouldn't go to by DNS updating the DNAME to point to the wrong location ? I am not comfortable with that and my solution does not suffer from this problem. In order to use stored DNAMEs we would need code on the server to automatically create them and store them in LDAP,a nd do that according to the location defined for the client. Or in other world the same logical process you would need to know how to synthesize the record, only in a different part of the code. (this is just the logic, of course, a phantom record involves more than that, so not saying it is the same) > >>> DNS Slave server problem > >> Without dynamic record generation it would be possible to do zone transfers > >> without any change to current code. Only one new zone (i.e. _locations part of > >> DNS sub-tree) has to be set on each slaves and we are done. > > > > This is true, and we can opt for this fallback initially, but I do not > > want to restrict manageability just to make the job easier for one of > > the cases. > This problem is closely related to record generation. 
We need to know at which > point record has to be generated etc. What we do when somebody asks for: > _location._location._location... ignore > _location.blah._location.blah._location... ignore > _location._ntp._udp.example.com. ignore > _location._ntp._udp._location._ntp._udp.example.com. ignore > and other variants. Only 1 _location is acceptable, and only 'on top' of an existing A record, and only if an actual record with the same name doesn't actually exist. > How it will play with wildcards? > E.g. what we do when somebody asked for _location.client.sub.example.com. > but > wildcard *.client.sub.example.com. exists? What about wildcard > *.sub.example.com.? Etc. I think I would create the _location record. We just need to decide what is the most appropriate behavior for this case though. What happen if you have a record foo.example.com and also *.example.com ? > What if labels *preceding* '_location' do not exist? I.e. somebody asked for > _location.nonexistent-blah.existing-domain. > (Note: Zone can legally contain names like 'blah.blah' without 'blah' alone.) We return an error. As said above the 'base' must be an A (or AAAA) record. > How would it be possible to direct legacy clients (looking for SRV records > directly in 'domain') to local/optimized location? Not sure what is the problem here. > Trivial solution came over my mind: > _ntp._udp.domain. CNAME _ntp._udp._location.dummy_client.domain. > ... but it results in internal reference to generated record ... that is > scary! (Note: CNAME/DNAMEs within zone are resolved internally in name server, > there is no request-response ping-pong between client and server.) I wasn't going to change the legacy SRV records, but it might be an idea as well to redirect them to the 'local' default. I am not sure it is a good idea. However you wouldn't redirect to the client own location I don't think. I think you would redirect to the server default location at most. And the locations are actual existing records. > Again, I like the idea itself, but I see a lot of problems. (Still not digging > into DNSSEC.) Another compromise option I was thinking is to keep the complexity in LDAP by making up fake DNAME records in LDAP, like we do with the compat plugin, but then I think we would break persistent searches, so I do not like that one too much. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Wed Jan 23 08:10:45 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 23 Jan 2013 09:10:45 +0100 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command In-Reply-To: <1358620528.20683.98.camel@willson.li.ssimo.org> References: <50F98561.1040503@redhat.com> <1358620528.20683.98.camel@willson.li.ssimo.org> Message-ID: <50FF9B05.3020207@redhat.com> On 01/19/2013 07:35 PM, Simo Sorce wrote: > On Fri, 2013-01-18 at 18:24 +0100, Martin Kosek wrote: >> How this works: >> 1. When a trusted domain user is tested, AD GC is searched >> for the user entry Distinguished Name > > My head is not clear today but it looks to me you are doing 2 searches. > One to go from samAccountName -> DNa dn then a second for DN -> SID. > > Why are you doing 2 searches ? The first one can return you the > ObjectSid already. > > Simo. I had to do 2 searches because GC refuses to give me tokenGroups attribute content when I do not search with exact DN and LDAP SCOPE_BASE. So I have to do the first search to find out the DN of the searched user and then a second query to get the tokenGroups (and ObjectSid). 
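For illustration, the two-step lookup described above could be sketched with python-ldap along these lines (not the actual patch; the GC URI, base DN and bind method are placeholders):

    import ldap

    def lookup_ad_user(gc_uri, base_dn, username):
        # The Global Catalog only returns tokenGroups for a base-scope search
        # on the exact entry DN, hence the two searches.
        conn = ldap.initialize(gc_uri)   # e.g. ldap://gc.ad.example.com:3268
        conn.simple_bind_s()             # a real deployment would use SASL/GSSAPI

        # step 1: subtree search to resolve sAMAccountName to a DN
        res = conn.search_s(base_dn, ldap.SCOPE_SUBTREE,
                            '(sAMAccountName=%s)' % username, ['objectSid'])
        if not res:
            return None
        user_dn = res[0][0]

        # step 2: base-scope read of that DN to obtain tokenGroups (and the SID)
        res = conn.search_s(user_dn, ldap.SCOPE_BASE, '(objectClass=*)',
                            ['objectSid', 'tokenGroups'])
        return res[0][1]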
Martin From atkac at redhat.com Wed Jan 23 09:35:27 2013 From: atkac at redhat.com (Adam Tkac) Date: Wed, 23 Jan 2013 10:35:27 +0100 Subject: [Freeipa-devel] A new proopsal for Location Based Discovery In-Reply-To: <1358901233.20683.313.camel@willson.li.ssimo.org> References: <1358816342.20683.227.camel@willson.li.ssimo.org> <20130122151810.GA21883@redhat.com> <1358868321.20683.266.camel@willson.li.ssimo.org> <20130122160205.GA22089@redhat.com> <1358871570.20683.298.camel@willson.li.ssimo.org> <20130122164600.GA22547@redhat.com> <1358901233.20683.313.camel@willson.li.ssimo.org> Message-ID: <20130123093527.GA1943@redhat.com> On Tue, Jan 22, 2013 at 07:33:53PM -0500, Simo Sorce wrote: > On Tue, 2013-01-22 at 17:46 +0100, Adam Tkac wrote: > > On Tue, Jan 22, 2013 at 11:19:30AM -0500, Simo Sorce wrote: > > > On Tue, 2013-01-22 at 17:02 +0100, Adam Tkac wrote: > > > > On Tue, Jan 22, 2013 at 10:25:21AM -0500, Simo Sorce wrote: > > > > > On Tue, 2013-01-22 at 16:18 +0100, Adam Tkac wrote: > > > > > > Before we start talking about using DNS for this purpose, have you > > > > > > considered > > > > > > to use IP anycast for this? You can simply create multiple servers > > > > > > with same IP > > > > > > address on different places over the world. After that you announce > > > > > > this IP > > > > > > address from multiple places simultaneounsly via BGP and BGP > > > > > > automatically > > > > > > routes all clients to the closest node. Advantage is that this is > > > > > > already > > > > > > implemented, used and nothing have to be modified. > > > > > > > > > > > > Regards, Adam > > > > > > > > > > > We cannot assume our customers can influence or have access to change > > > > > BGP routing, so I excluded multicast solutions from the get go. > > > > > Also it requires more changes on the clients which is another heavy > > > > > minus. > > > > > > > > If I understand correctly, target customers of IPA are companies and they use > > > > IPA to maintain resources in their internal networks, aren't they? > > > > > > > > In this case I see two basic solutions how to solve the "location" issue. > > > > > > > > 1. BGP routing between multiple internal networks > > > > > > Sorry Adam, I do not want to be dismissive, and I know that in an ideal > > > world this would be an awesome solution. > > > > > > Just trust me that for most cases asking someone to change their network > > > architecture is simply impossible. > > > > This is definitely right. > > > > However please read my previous post - I don't propose to change network > > architecture. Do you how to interconnect multiple networks without routers? > > I don't. So routers are already present in customer's networks. It can be even > > static routing, not BGP, and admin can simply set rule on router which physical > > server clients should use. > > > > > We have users telling us their network admins don't even want change > > > firewall configurations in some cases, so you can well see how they > > > would respond to someone asking them to change their routing or enabling > > > and using multicast. > > > > I think it's same amount of work to add record to DNS or to add record to the > > static or dynamic routing tables. > > Adding a record to a DNS server is quite different from changing routing > and starting routing multicast packets. Please note anycast != multicast. Anycast is unicast so no multicast is involved. > > > Sorry but it simply is not a solution we can consider. > > > > Why? 
Which setup cannot be achieved with routing configuration and can be achieved > > with location information in DNS? > > Queries from clients behind a VPN that doesn't do multicast ? > > In general multicast cannot be assumed to be available/configured. > > And it requires support in clients as well as services. > > Also 'location' doesn't mean necessarily 'local'. > > My client in NYC may be configured to be bound to servers in Boston for > whatever administrative reason. Boston is in no way local to me but is > my 'location'. How do you deliver that information in a schema like the > one you had in mind ? This is not possible with my anycast proposal. Thanks for explanation, I just didn't imagine which schema cannot be configured on routing level and this is the one. Regards, Adam -- Adam Tkac, Red Hat, Inc. From simo at redhat.com Wed Jan 23 13:23:15 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 23 Jan 2013 08:23:15 -0500 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command In-Reply-To: <50FF9B05.3020207@redhat.com> References: <50F98561.1040503@redhat.com> <1358620528.20683.98.camel@willson.li.ssimo.org> <50FF9B05.3020207@redhat.com> Message-ID: <1358947395.20683.365.camel@willson.li.ssimo.org> On Wed, 2013-01-23 at 09:10 +0100, Martin Kosek wrote: > On 01/19/2013 07:35 PM, Simo Sorce wrote: > > On Fri, 2013-01-18 at 18:24 +0100, Martin Kosek wrote: > >> How this works: > >> 1. When a trusted domain user is tested, AD GC is searched > >> for the user entry Distinguished Name > > > > My head is not clear today but it looks to me you are doing 2 searches. > > One to go from samAccountName -> DNa dn then a second for DN -> SID. > > > > Why are you doing 2 searches ? The first one can return you the > > ObjectSid already. > > > > Simo. > > I had to do 2 searches because GC refuses to give me tokenGroups attribute > content when I do not search with exact DN and LDAP SCOPE_BASE. So I have to do > the first search to find out the DN of the searched user and then a second > query to get the tokenGroups (and ObjectSid). I see, yes that makes sense, would you mind adding a comment to this effect so we do not try to 'optimize' at some point ? I have no additional concerns then. Simo. -- Simo Sorce * Red Hat, Inc * New York From tbabej at redhat.com Wed Jan 23 13:00:00 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 23 Jan 2013 14:00:00 +0100 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50FEDCF3.2050409@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> <1358439519.20683.14.camel@willson.li.ssimo.org> <50FEA74A.5060908@redhat.com> <1358870269.20683.292.camel@willson.li.ssimo.org> <50FEDCF3.2050409@redhat.com> Message-ID: <50FFDED0.2070703@redhat.com> On 01/22/2013 07:39 PM, Dmitri Pal wrote: > On 01/22/2013 10:57 AM, Simo Sorce wrote: >> On Tue, 2013-01-22 at 15:50 +0100, Tomas Babej wrote: >>> Here I bring the updated version of the patch. 
Please note, that I >>> *added* a flag attribute to ipadb_ldap_attr_to_krb5_timestamp >>> function, that controls whether the timestamp will be checked for >>> overflow or not. The reasoning behind this is that some attributes >>> will not be set to future dates, due to their inherent nature - such >>> as krbLastSuccessfulAuth or krbLastAdminUnlock. >>> >>> These are all related to past dates, and it would make no sense to set >>> them to future dates, even manually. Therefore I'd rather represent >>> negative values in these attributes as past dates. They would have to >>> be set manually anyway, because they would represent timestamps before >>> the beginning of the unix epoch, however, I find this approach better >>> than pushing them up to year 2038 in case such things happens. >>> >>> Any objections to this approach? >>> >> I am not sure I understand what is the point of giving this option to >> callers. A) How does an API user know when to use one or the other >> option. B) What good does it make to have the same date return different >> results based on a flag ? >> >> What will happen later on when MIT will 'fix' the 2038 limit by changing >> the meaning of negative timestamps ? Keep in mind that right now >> negative timestamps are not really valid in the MIT code. >> >> Unless there is a 'use' for getting negative timestamps I think it is >> only harmful to allow it and consumers would only be confused on whether >> it should be used or not. >> >> So my first impression is that you are a bit overthinking here and we >> should instead always force the same behavior for all callers and always >> check and enforce endoftime dates. >> >> Simo. >> > +1 Ok, the patch does not distinguish between 'past' and 'future' timestamps anymore. Please respond if you see any issues. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0026-7-Prevent-integer-overflow-when-setting-krbPasswordExp.patch Type: text/x-patch Size: 7751 bytes Desc: not available URL: From simo at redhat.com Wed Jan 23 22:16:12 2013 From: simo at redhat.com (Simo Sorce) Date: Wed, 23 Jan 2013 17:16:12 -0500 Subject: [Freeipa-devel] Announcing FreeIPA 3.1.2 Message-ID: <1358979372.20683.384.camel@willson.li.ssimo.org> The FreeIPA team is proud to announce version FreeIPA v3.1.2 This release contains Security Updates It can be downloaded from http://www.freeipa.org/page/Downloads. == Highlights == During the past few months a number of security Issues have been found that affect FreeIPA. Three Security Advisories have been released: * CVE-2012-4546: Incorrect CRLs publishing * CVE-2012-5484: MITM Attack during Join process * CVE-2013-0199: Cross-Realm Trust key leak The FreeIPA Team would like to thank the Red Hat Security Response Team and in particular Vincent Danen for the invaluable assistance provided for the assessment and resolution of these issues. For CVE-2012-5484 we would like to thank Petr Men??k for reporting the issue. == Upgrading == Please consult each CVE announcement for related Upgrading instructions. An IPA server can be upgraded simply by installing updated rpms. The server does not need to be shut down in advance. Please note, that the referential integrity extension requires an extended set of indexes to be configured. RPM update for an IPA server with a excessive number of hosts, SUDO or HBAC entries may require several minutes to finish. If you have multiple servers you may upgrade them one at a time. 
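On current Fedora systems this typically amounts to running something like 'yum update freeipa*' on each server in turn; the exact package names may differ by distribution.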
It is expected that all servers will be upgraded in a relatively short period (days or weeks, not months). They should be able to co-exist peacefully, but new features will not be available on old servers, and enrolling a new client against an old server will result in the SSH keys not being uploaded. Downgrading a server once upgraded is not supported. Upgrading from 2.2.0 is supported. Upgrading from previous versions is not supported and has not been tested. An enrolled client does not need the new packages installed unless you want to re-enroll it. SSH keys for already installed clients are not uploaded; you will have to re-enroll the client or manually upload the keys. == Feedback == Please provide comments, bugs and other feedback via the freeipa-devel mailing list: http://www.redhat.com/mailman/listinfo/freeipa-devel == Detailed Changelog since 3.1.1 == Alexander Bokovoy (1): Update plugin to upload CA certificate to LDAP Ana Krivokapic (1): Raise ValidationError for incorrect subtree option. John Dennis (1): Use secure method to acquire IPA CA certificate Martin Kosek (5): permission-find no longer crashes with --targetgroup Avoid CRL migration error message Sort LDAP updates properly Upgrade process should not crash on named restart Installer should not connect to 127.0.0.1 Rob Crittenden (5): Convert uniqueMember members into DN objects. Do SSL CA verification and hostname validation. Don't initialize NSS if we don't have to, clean up unused cert refs Update anonymous access ACI to protect secret attributes. Become IPA 3.1.2 Simo Sorce (1): Upload CA cert in the directory on install ################################################################################ == CVE-2012-4546: Incorrect CRLs publishing == == Summary == It was found that the current default configuration of IPA servers did not publish correct CRLs (Certificate Revocation Lists). The default configuration specifies that every replica is to generate its own CRL; however, this can result in inconsistencies in the CRL contents provided to clients from different Identity Management replicas. More specifically, if a certificate is revoked on one Identity Management replica, it will not show up on another Identity Management replica. To avoid this inconsistency, the solution is to configure CRL generation to only take place on one Identity Management server. To do so, the CRL configuration must be changed on all Identity Management servers. == Affected Versions == All 2.x and 3.x versions using multiple CA replicas == Impact == Low == Acknowledgements == The bug was found by the FreeIPA team during an internal review. == Upgrade Instructions == Upgrading to the latest 3.0 or 3.1 FreeIPA versions should be sufficient to resolve the issue. == Manual Instructions == To manually resolve the problem, the CRL configuration must be changed on all Identity Management servers. One IPA master needs to be picked as the CRL generator. It does not matter which master, and the following procedure should be used: On the non-CRL generating masters: 1. Configure the clones to point to the CRL generator to get the CRL: 1a. Edit /etc/httpd/conf.d/ipa-pki-proxy.conf 1b. Add "|^/ca/ee/ca/getCRL" to the end of the first LocationMatch. After editing, the first LocationMatch entry in ipa-pki-proxy.conf should look like this: 1c. At the end of this file add: RewriteRule ^/ipa/crl/MasterCRL.bin https://$FQDN/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC] 1d. Replace $FQDN with the hostname of the IPA master picked as the CRL generator.
1e. The httpd service will need to be restarted after making this change: # service httpd restart 2. Update the CRL generator to include certificates revoked from other masters in its CRL: 2a. Edit the CA configuration file in /var/lib/pki-ca/conf/CS.cfg: These two settings should be true by default: ca.crl.MasterCRL.enableCRLCache=true ca.crl.MasterCRL.enableCRLUpdates=true 2b. Set this directive to true (this can be appended to the end of CS.cfg): ca.listenToCloneModifications=true 2c. The CA will need to be restarted after making this change: # service pki-cad restart It may be noted that this configuration creates a single point of failure. If the CRL generator server goes down, then the other IPA masters will not be able to retrieve a CRL and one will not be generated. In this case one would need to choose a new master as the CRL generator and perform the steps above. It is recommended that a DNS CNAME be created to refer to the server that provides the CRL (with a relatively short TTL). This provides flexibility in case the CRL generator needs to be changed, without having to reconfigure any clients that retrieve the CRL. == Patches == A patch to resolve this issue is available through our git repository: http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=392097f20673708a684da168aec302da7ccda9a6 ################################################################################ == CVE-2012-5484: MITM Attack during Join process == A weakness was found in the way an IPA client communicates with an IPA server when attempting to join an IPA domain. When an IPA client attempts to join an IPA domain, an attacker could run a man-in-the-middle attack to try to intercept and hijack the initial communication. A join initiated by an administrative user would grant the attacker administrative rights to the IPA server, whereas a join initiated by an unprivileged user would only grant the attacker limited privilege (typically just the ability to join the domain). The weakness is caused by the way the CA certificate is retrieved from the server. The following SSL communication may then be intercepted and subverted. Note that no credentials are exposed through this attack, and it is effective only if it is performed during the join procedure and the network traffic can be redirected or intercepted. Mere observation of the network traffic is not sufficient to grant an attacker any privilege. == Affected Versions == All 2.x and 3.x versions == Impact == Low == Acknowledgements == The FreeIPA team would like to thank Petr Menšík for reporting this issue. == Upgrade Instructions == The resolution for this issue consists of allowing clients to download the CA certificate exclusively via a mutually authenticated LDAP connection or by providing the CA cert via an external method to the client. At least one IPA server in a domain needs to be updated using the provided patches, so that the CA certificate is made available via LDAP. All clients should be upgraded to use the updated ipa-client-install script that downloads the CA cert via an authenticated LDAP connection.
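For illustration only, the kind of lookup the fixed client performs can be sketched with python-ldap roughly as follows. This is not the actual ipa-client-install code; the host name and suffix are placeholders, and the sketch assumes a Kerberos ticket for the enrollment principal has already been obtained, so the SASL GSSAPI bind mutually authenticates client and server without needing the CA certificate up front. The cn=CAcert entry and the cACertificate;binary attribute follow the standard IPA directory layout.

    import ldap
    import ldap.sasl

    # Assumes kinit has already been run for the enrollment principal;
    # GSSAPI gives mutual authentication without relying on TLS or the CA cert.
    conn = ldap.initialize('ldap://ipa.example.com')
    conn.sasl_interactive_bind_s('', ldap.sasl.sasl({}, 'GSSAPI'))

    # The CA certificate is stored DER-encoded in the cn=CAcert entry.
    dn = 'cn=CAcert,cn=ipa,cn=etc,dc=example,dc=com'
    result = conn.search_s(dn, ldap.SCOPE_BASE, '(objectClass=*)',
                           ['cACertificate;binary'])
    der_cert = result[0][1]['cACertificate;binary'][0]  # DER bytes of the CA cert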
== Patches == Patches to resolve this issue are available through our git repository: * http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=18eea90ebb24a9c22248f0b7e18646cc6e3e3e0f * http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=a40285c5a0288669b72f9d991508d4405885bffc * http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=91f4af7e6af53e1c6bf17ed36cb2161863eddae4 * http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=a1991aeac19c3fec1fdd0d184c6760c90c9f9fc9 * http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=31e41eea6c2322689826e6065ceba82551c565aa ################################################################################ == CVE-2013-0199: Cross-Realm Trust key leak == FreeIPA 3.0 introduced Cross-Realm Kerberos trusts with Active Directory, a feature that allows IPA administrators to create a Kerberos trust with an AD domain. This allows IPA users to access resources in trusted AD domains and vice versa. When the Kerberos trust is created, outgoing and incoming keys are stored in the IPA LDAP backend (in the ipaNTTrustAuthIncoming and ipaNTTrustAuthOutgoing attributes). However, the IPA LDAP ACIs allow anonymous read access to these attributes, which could allow an unprivileged user to read the keys. With these keys an attacker could impersonate users and services of the opposite domain by crafting special Kerberos tickets. == Affected Versions == All 3.x versions. The vulnerability is present only if AD Trusts are enabled and a trust relationship is in place. == Impact == Medium == Acknowledgements == The bug was found by the FreeIPA team during an internal review. == Upgrade Instructions == Administrators are advised to change their ACIs to block access to the ipaNTTrustAuthIncoming and ipaNTTrustAuthOutgoing attributes by non-administrative users. Once the new ACIs are in place, it is recommended to change the trust password. This can be accomplished by temporarily deleting and then recreating the trust agreement between the two domains using the ipa trust CLI commands. == Patches == A patch to resolve this issue is available through our git repository: http://git.fedorahosted.org/cgit/freeipa.git/commit/?id=d5966bde802d8ef84c202a3e7c85f17b9e305a30 Applying the patch prevents further access to the keys but does NOT change the trust secret. ################################################################################ From rcritten at redhat.com Wed Jan 23 22:45:53 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 23 Jan 2013 17:45:53 -0500 Subject: [Freeipa-devel] [PATCHES] 91-92 Add support for RFC 6594 SSHFP DNS records In-Reply-To: <50EEA00B.8090505@redhat.com> References: <50EE49EC.7000500@redhat.com> <50EEA00B.8090505@redhat.com> Message-ID: <51006821.9000901@redhat.com> Jan Cholasta wrote: > On 10.1.2013 05:56, Jan Cholasta wrote: >> Hi, >> >> Patch 91 removes module ipapython.compat. The code that uses it doesn't >> work with ancient Python versions anyway, so there's no need to keep it >> around. >> >> Patch 92 adds support for automatic generation of RFC 6594 SSHFP DNS >> records to ipa-client-install and host plugin, as described in >> . Note that >> still applies. >> >> https://fedorahosted.org/freeipa/ticket/2642 >> >> Honza >> > > Self-NACK, forgot to actually remove ipapython/compat.py in the first > patch. Also removed an unnecessary try block from the second patch. > > Honza These look good. I'm a little concerned about the magic numbers in the SSHFP code. I know these come from the RFCs.
Can you add a comment there so future developers know where the values for key type and fingerprint type come from? rob From mkosek at redhat.com Thu Jan 24 07:15:20 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 24 Jan 2013 08:15:20 +0100 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command In-Reply-To: <1358947395.20683.365.camel@willson.li.ssimo.org> References: <50F98561.1040503@redhat.com> <1358620528.20683.98.camel@willson.li.ssimo.org> <50FF9B05.3020207@redhat.com> <1358947395.20683.365.camel@willson.li.ssimo.org> Message-ID: <5100DF88.8010502@redhat.com> On 01/23/2013 02:23 PM, Simo Sorce wrote: > On Wed, 2013-01-23 at 09:10 +0100, Martin Kosek wrote: >> On 01/19/2013 07:35 PM, Simo Sorce wrote: >>> On Fri, 2013-01-18 at 18:24 +0100, Martin Kosek wrote: >>>> How this works: >>>> 1. When a trusted domain user is tested, AD GC is searched >>>> for the user entry Distinguished Name >>> >>> My head is not clear today but it looks to me you are doing 2 searches. >>> One to go from samAccountName -> DNa dn then a second for DN -> SID. >>> >>> Why are you doing 2 searches ? The first one can return you the >>> ObjectSid already. >>> >>> Simo. >> >> I had to do 2 searches because GC refuses to give me tokenGroups attribute >> content when I do not search with exact DN and LDAP SCOPE_BASE. So I have to do >> the first search to find out the DN of the searched user and then a second >> query to get the tokenGroups (and ObjectSid). > > I see, yes that makes sense, would you mind adding a comment to this > effect so we do not try to 'optimize' at some point ? > I have no additional concerns then. > > Simo. > Hello Simo, Thanks for review. Anyway, there is already a relevant comment in dcerpc.py, where the double search is performed: ... def get_trusted_domain_user_and_groups(self, object_name): ... entries = self.get_trusted_domain_objects(components.get('domain'), components.get('flatname'), filter, attrs, _ldap.SCOPE_SUBTREE) # Get SIDs of user object and it's groups # tokenGroups attribute must be read with scope BASE to avoid search error attrs = ['objectSID', 'tokenGroups'] ... I think it's enough to avoid "optimizing" this process - we would find out the "optimization" soon anyway, as the tokenGroups search would return error :-) Martin From pviktori at redhat.com Thu Jan 24 09:43:46 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 24 Jan 2013 10:43:46 +0100 Subject: [Freeipa-devel] [PATCHES] 127-136 LDAP code refactoring (Part 2) In-Reply-To: <50FEAA66.4010704@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> Message-ID: <51010252.1030200@redhat.com> On 01/22/2013 04:04 PM, Petr Viktorin wrote: > On 01/21/2013 06:38 PM, Petr Viktorin wrote: >> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>> Hello, >>> This is the first batch of changes aimed to consolidate our LDAP code. >>> Each should be a self-contained change that doesn't break anything. >>> >>> These patches do some general cleanup (some of the changes might seem >>> trivial but help a lot when grepping through the code); merge the common >>> parts LDAPEntry, Entry and Entity classes; and move stuff that depends >>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>> >>> I'm posting them early so you can see where I'm going, and so you can >>> find out if your work will conflict with mine. Patch 0120 grew a conflict with master, attaching a rebased version. -- Petr? 
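To make the two Global Catalog searches Martin describes above easier to follow, here is a rough python-ldap sketch; it is not the dcerpc.py code itself, and the GC host, credentials, base DN and user name are placeholders. The point is that tokenGroups is a constructed attribute that AD only returns from a base-scoped read of the user's own entry, which is why a subtree search for the account name has to come first.

    import ldap

    conn = ldap.initialize('ldap://gc.ad.example.com:3268')  # Global Catalog port
    conn.set_option(ldap.OPT_REFERRALS, 0)
    conn.simple_bind_s('AD\\trustagent', 'secret')

    # Search 1: subtree scope, sAMAccountName -> user DN (objectSid comes along too)
    res = conn.search_s('DC=ad,DC=example,DC=com', ldap.SCOPE_SUBTREE,
                        '(sAMAccountName=jdoe)', ['objectSid'])
    user_dn = next(dn for dn, attrs in res if dn is not None)

    # Search 2: base scope on that exact DN; this is the only way the GC will
    # return tokenGroups, the flattened list of SIDs of the user's groups.
    res = conn.search_s(user_dn, ldap.SCOPE_BASE, '(objectClass=*)',
                        ['objectSid', 'tokenGroups'])
    group_sids = res[0][1]['tokenGroups']  # list of binary SID values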
-------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0120-02-Remove-some-unused-imports.patch Type: text/x-patch Size: 11260 bytes Desc: not available URL: From mkosek at redhat.com Thu Jan 24 11:01:36 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 24 Jan 2013 12:01:36 +0100 Subject: [Freeipa-devel] [PATCH] 355 Avoid internal error when user is not Trust admin Message-ID: <51011490.10101@redhat.com> When a user tries to perform any action requiring communication with a trusted domain, the IPA server tries to retrieve a trust secret on the user's behalf to be able to establish the connection. This happens, for example, during the group-add-member command when an external user is being resolved in AD. When the user is not a member of the Trust admins group, the retrieval crashes and reports an internal error. Catch this exception and report a properly formatted ACIError instead. ---- I hit this error after updating to the latest FreeIPA version with the AD CVE fixed. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-355-avoid-internal-error-when-user-is-not-trust-admin.patch Type: text/x-patch Size: 2916 bytes Desc: not available URL: From simo at redhat.com Thu Jan 24 14:04:36 2013 From: simo at redhat.com (Simo Sorce) Date: Thu, 24 Jan 2013 09:04:36 -0500 Subject: [Freeipa-devel] [PATCH] 352-354 Add support for AD users to hbactest command In-Reply-To: <5100DF88.8010502@redhat.com> References: <50F98561.1040503@redhat.com> <1358620528.20683.98.camel@willson.li.ssimo.org> <50FF9B05.3020207@redhat.com> <1358947395.20683.365.camel@willson.li.ssimo.org> <5100DF88.8010502@redhat.com> Message-ID: <1359036276.20683.387.camel@willson.li.ssimo.org> On Thu, 2013-01-24 at 08:15 +0100, Martin Kosek wrote: > On 01/23/2013 02:23 PM, Simo Sorce wrote: > > On Wed, 2013-01-23 at 09:10 +0100, Martin Kosek wrote: > >> On 01/19/2013 07:35 PM, Simo Sorce wrote: > >>> On Fri, 2013-01-18 at 18:24 +0100, Martin Kosek wrote: > >>>> How this works: > >>>> 1. When a trusted domain user is tested, AD GC is searched > >>>> for the user entry Distinguished Name > >>> > >>> My head is not clear today but it looks to me you are doing 2 searches. > >>> One to go from samAccountName -> DN and then a second for DN -> SID. > >>> > >>> Why are you doing 2 searches ? The first one can return you the > >>> ObjectSid already. > >>> > >>> Simo. > >> > >> I had to do 2 searches because GC refuses to give me tokenGroups attribute > >> content when I do not search with exact DN and LDAP SCOPE_BASE. So I have to do > >> the first search to find out the DN of the searched user and then a second > >> query to get the tokenGroups (and ObjectSid). > > > > I see, yes that makes sense, would you mind adding a comment to this > > effect so we do not try to 'optimize' at some point ? > > I have no additional concerns then. > > > > Simo. > > > > Hello Simo, > > Thanks for review. Anyway, there is already a relevant comment in dcerpc.py, > where the double search is performed: > > ... > def get_trusted_domain_user_and_groups(self, object_name): > ... > entries = self.get_trusted_domain_objects(components.get('domain'), > components.get('flatname'), filter, attrs, _ldap.SCOPE_SUBTREE) > > # Get SIDs of user object and it's groups > # tokenGroups attribute must be read with scope BASE to avoid search error > attrs = ['objectSID', 'tokenGroups'] > ...
> > I think it's enough to avoid "optimizing" this process - we would find out the > "optimization" soon anyway, as the tokenGroups search would return error :-) Perfect! /me just had an eye vision exam, will complain to his doctor :-) -- Simo Sorce * Red Hat, Inc * New York From pviktori at redhat.com Thu Jan 24 14:06:13 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 24 Jan 2013 15:06:13 +0100 Subject: [Freeipa-devel] [PATCHES] 137-144 LDAP code refactoring (Part 3) In-Reply-To: <51010252.1030200@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> Message-ID: <51013FD5.8030004@redhat.com> On 01/24/2013 10:43 AM, Petr Viktorin wrote: > On 01/22/2013 04:04 PM, Petr Viktorin wrote: >> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>> Hello, >>>> This is the first batch of changes aimed to consolidate our LDAP code. >>>> Each should be a self-contained change that doesn't break anything. >>>> >>>> These patches do some general cleanup (some of the changes might seem >>>> trivial but help a lot when grepping through the code); merge the >>>> common >>>> parts LDAPEntry, Entry and Entity classes; and move stuff that depends >>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>> >>>> I'm posting them early so you can see where I'm going, and so you can >>>> find out if your work will conflict with mine. > Here is a third set of patches. These apply on top of jcholast's patches 94-96. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0137-Replace-setValue-by-keyword-arguments-when-creating-.patch Type: text/x-patch Size: 25694 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0138-Use-update_entry-with-a-single-entry-in-adtrustinsta.patch Type: text/x-patch Size: 2823 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0139-ldapupdate-Treat-Entries-with-only-a-DN-as-empty.patch Type: text/x-patch Size: 1115 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0140-Replace-entry.getValues-by-entry.get.patch Type: text/x-patch Size: 8982 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0141-Replace-entry.setValue-setValues-by-item-assignment.patch Type: text/x-patch Size: 9518 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0142-Replace-add_s-and-delete_s-by-their-newer-equivalent.patch Type: text/x-patch Size: 3703 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0143-Change-add-update-delete-_entry-to-take-LDAPEntries.patch Type: text/x-patch Size: 4458 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0144-Remove-unused-imports-from-ipaserver-install.patch Type: text/x-patch Size: 11102 bytes Desc: not available URL: From akrivoka at redhat.com Thu Jan 24 15:13:38 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Thu, 24 Jan 2013 16:13:38 +0100 Subject: [Freeipa-devel] [PATCH] 0004 Take into consideration services when deleting replicas Message-ID: <51014FA2.9020503@redhat.com> When deleting a replica from IPA domain, warn if the installation is left without CA and/or DNS. Ticket: https://fedorahosted.org/freeipa/ticket/2879 -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-akrivoka-0004-Take-into-consideration-services-when-deleting-repli.patch Type: text/x-patch Size: 2340 bytes Desc: not available URL: From rcritten at redhat.com Thu Jan 24 15:17:03 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 24 Jan 2013 10:17:03 -0500 Subject: [Freeipa-devel] [PATCH] 0004 Take into consideration services when deleting replicas In-Reply-To: <51014FA2.9020503@redhat.com> References: <51014FA2.9020503@redhat.com> Message-ID: <5101506F.5000108@redhat.com> Ana Krivokapic wrote: > When deleting a replica from IPA domain, warn if the installation is > left without CA and/or DNS. > > Ticket: https://fedorahosted.org/freeipa/ticket/2879 IMHO we should not give an option to delete the last CA. DNS can be more easily re-added if needed. Should we not ask this if the --force flag is set? rob From akrivoka at redhat.com Thu Jan 24 17:49:17 2013 From: akrivoka at redhat.com (Ana Krivokapic) Date: Thu, 24 Jan 2013 18:49:17 +0100 Subject: [Freeipa-devel] [PATCH] 0004 Take into consideration services when deleting replicas In-Reply-To: <5101506F.5000108@redhat.com> References: <51014FA2.9020503@redhat.com> <5101506F.5000108@redhat.com> Message-ID: <5101741D.5090406@redhat.com> On 01/24/2013 04:17 PM, Rob Crittenden wrote: > Ana Krivokapic wrote: >> When deleting a replica from IPA domain, warn if the installation is >> left without CA and/or DNS. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/2879 > > IMHO we should not give an option to delete the last CA. > > DNS can be more easily re-added if needed. > > Should we not ask this if the --force flag is set? > > rob > Makes sense, thanks Rob. I have changed the behaviour to: * abort the deletion if the last CA is about to be deleted * warn if the last DNS is about to be deleted (but only require user confirmation if the --force flag is not set) Updated patch is attached. -- Regards, Ana Krivokapic Associate Software Engineer FreeIPA team Red Hat Inc. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-akrivoka-0004-02-Take-into-consideration-services-when-deleting-repli.patch Type: text/x-patch Size: 2355 bytes Desc: not available URL: From pviktori at redhat.com Thu Jan 24 17:47:19 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 24 Jan 2013 18:47:19 +0100 Subject: [Freeipa-devel] [PATCH] 0145 Add the CA cert to LDAP after the CA install Message-ID: <510173A7.6080603@redhat.com> This makes a benign "CRITICAL" message in ipa-server-install go away. https://fedorahosted.org/freeipa/ticket/3375 -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0145-Add-the-CA-cert-to-LDAP-after-the-CA-install.patch Type: text/x-patch Size: 2153 bytes Desc: not available URL: From rcritten at redhat.com Thu Jan 24 21:19:33 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 24 Jan 2013 16:19:33 -0500 Subject: [Freeipa-devel] [PATCH] 1082 restart certmonger before upgrading Message-ID: <5101A565.5020004@redhat.com> certmonger may provide new CAs, as in the case from upgrading IPA 2.2 to 3.x. We need these new CAs available during the upgrade process. The certmonger package does its own condrestart as part of %postun which runs after the %post script of freeipa-server, so we need to restart it ourselves before upgrading. rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1082-certmonger-master.patch Type: text/x-diff Size: 2030 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rcrit-1082-certmonger-ipa31.patch Type: text/x-diff Size: 2088 bytes Desc: not available URL: From simo at redhat.com Thu Jan 24 23:45:52 2013 From: simo at redhat.com (Simo Sorce) Date: Thu, 24 Jan 2013 18:45:52 -0500 Subject: [Freeipa-devel] [PATCH 0026] Prevent integer overflow when setting krbPasswordExpiration In-Reply-To: <50FFDED0.2070703@redhat.com> References: <50F42859.1070807@redhat.com> <1358257702.15136.124.camel@willson.li.ssimo.org> <50F5BE38.1020901@redhat.com> <50F5C1D0.6060404@redhat.com> <1358283576.4590.18.camel@willson.li.ssimo.org> <50F5D9E2.7030800@redhat.com> <1358290532.4590.21.camel@willson.li.ssimo.org> <50F6946C.3040905@redhat.com> <1358344021.4590.31.camel@willson.li.ssimo.org> <50F6DC0C.1090106@redhat.com> <1358355708.4590.47.camel@willson.li.ssimo.org> <50F6E412.7020402@redhat.com> <50F74C57.6030604@redhat.com> <50F80AD1.20408@redhat.com> <1358439519.20683.14.camel@willson.li.ssimo.org> <50FEA74A.5060908@redhat.com> <1358870269.20683.292.camel@willson.li.ssimo.org> <50FEDCF3.2050409@redhat.com> <50FFDED0.2070703@redhat.com> Message-ID: <1359071152.20683.510.camel@willson.li.ssimo.org> On Wed, 2013-01-23 at 14:00 +0100, Tomas Babej wrote: > On 01/22/2013 07:39 PM, Dmitri Pal wrote: > > On 01/22/2013 10:57 AM, Simo Sorce wrote: > >> On Tue, 2013-01-22 at 15:50 +0100, Tomas Babej wrote: > >>> Here I bring the updated version of the patch. Please note, that I > >>> *added* a flag attribute to ipadb_ldap_attr_to_krb5_timestamp > >>> function, that controls whether the timestamp will be checked for > >>> overflow or not. The reasoning behind this is that some attributes > >>> will not be set to future dates, due to their inherent nature - such > >>> as krbLastSuccessfulAuth or krbLastAdminUnlock. > >>> > >>> These are all related to past dates, and it would make no sense to set > >>> them to future dates, even manually. Therefore I'd rather represent > >>> negative values in these attributes as past dates. They would have to > >>> be set manually anyway, because they would represent timestamps before > >>> the beginning of the unix epoch, however, I find this approach better > >>> than pushing them up to year 2038 in case such things happens. > >>> > >>> Any objections to this approach? > >>> > >> I am not sure I understand what is the point of giving this option to > >> callers. A) How does an API user know when to use one or the other > >> option. B) What good does it make to have the same date return different > >> results based on a flag ? 
> >> > >> What will happen later on when MIT will 'fix' the 2038 limit by changing > >> the meaning of negative timestamps ? Keep in mind that right now > >> negative timestamps are not really valid in the MIT code. > >> > >> Unless there is a 'use' for getting negative timestamps I think it is > >> only harmful to allow it and consumers would only be confused on whether > >> it should be used or not. > >> > >> So my first impression is that you are a bit overthinking here and we > >> should instead always force the same behavior for all callers and always > >> check and enforce endoftime dates. > >> > >> Simo. > >> > > +1 > Ok, the patch does not distinguish between 'past' and 'future' > timestamps anymore. > > Please respond if you see any issues. I am sorry I haven't replied yet. I meant to test the patch this time before ACKing given how fiddly this time issue seem to be but haven't had found the time so far. Just want to say I do not see any problem in the last incarnation of the patch, so if someone beats me to testing you have my blessing for an ACK. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Fri Jan 25 08:19:22 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 25 Jan 2013 09:19:22 +0100 Subject: [Freeipa-devel] [PATCH] 0145 Add the CA cert to LDAP after the CA install In-Reply-To: <510173A7.6080603@redhat.com> References: <510173A7.6080603@redhat.com> Message-ID: <5102400A.2030609@redhat.com> On 01/24/2013 06:47 PM, Petr Viktorin wrote: > This makes a benign "CRITICAL" message in ipa-server-install go away. > > https://fedorahosted.org/freeipa/ticket/3375 > ACK, works fine. We may wait with a push of this one until a proper triage. Martin From mkosek at redhat.com Fri Jan 25 09:16:34 2013 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 25 Jan 2013 10:16:34 +0100 Subject: [Freeipa-devel] [PATCH] 1082 restart certmonger before upgrading In-Reply-To: <5101A565.5020004@redhat.com> References: <5101A565.5020004@redhat.com> Message-ID: <51024D72.1030207@redhat.com> On 01/24/2013 10:19 PM, Rob Crittenden wrote: > certmonger may provide new CAs, as in the case from upgrading IPA 2.2 to 3.x. > We need these new CAs available during the upgrade process. > > The certmonger package does its own condrestart as part of %postun which runs > after the %post script of freeipa-server, so we need to restart it ourselves > before upgrading. > > rob > I tested the change and it restarted the certmonger as expected. ACK. Pushed to master, ipa-3-1, ipa-3-0. Martin From pviktori at redhat.com Fri Jan 25 13:54:25 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Fri, 25 Jan 2013 14:54:25 +0100 Subject: [Freeipa-devel] [PATCHES] 137-144 LDAP code refactoring (Part 3) In-Reply-To: <51013FD5.8030004@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> Message-ID: <51028E91.4070601@redhat.com> On 01/24/2013 03:06 PM, Petr Viktorin wrote: > On 01/24/2013 10:43 AM, Petr Viktorin wrote: >> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>> Hello, >>>>> This is the first batch of changes aimed to consolidate our LDAP code. >>>>> Each should be a self-contained change that doesn't break anything. 
>>>>> >>>>> These patches do some general cleanup (some of the changes might seem >>>>> trivial but help a lot when grepping through the code); merge the >>>>> common >>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that depends >>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>> >>>>> I'm posting them early so you can see where I'm going, and so you can >>>>> find out if your work will conflict with mine. >> > > Here is a third set of patches. These apply on top of jcholast's patches > 94-96. > I found mistakes in two of the patches, attaching fixed versions. Since this patchset is becoming unwieldy, I've put it in a public repo that I'll keep updated. The following command will fetch it into your "pviktori-ldap-refactor" branch: git fetch git://github.com/encukou/freeipa ldap-refactor:pviktori-ldap-refactor -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0129-02-Make-IPAdmin-not-inherit-from-IPASimpleLDAPObject.patch Type: text/x-patch Size: 7287 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0137-02-Replace-setValue-by-keyword-arguments-when-creating-.patch Type: text/x-patch Size: 25695 bytes Desc: not available URL: From jcholast at redhat.com Mon Jan 28 08:34:12 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 28 Jan 2013 09:34:12 +0100 Subject: [Freeipa-devel] [PATCHES] 137-144 LDAP code refactoring (Part 3) In-Reply-To: <51028E91.4070601@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> Message-ID: <51063804.4060507@redhat.com> On 25.1.2013 14:54, Petr Viktorin wrote: > On 01/24/2013 03:06 PM, Petr Viktorin wrote: >> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>> Hello, >>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>> code. >>>>>> Each should be a self-contained change that doesn't break anything. >>>>>> >>>>>> These patches do some general cleanup (some of the changes might seem >>>>>> trivial but help a lot when grepping through the code); merge the >>>>>> common >>>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that >>>>>> depends >>>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>>> >>>>>> I'm posting them early so you can see where I'm going, and so you can >>>>>> find out if your work will conflict with mine. >>> >> >> Here is a third set of patches. These apply on top of jcholast's patches >> 94-96. >> > > I found mistakes in two of the patches, attaching fixed versions. > > > > Since this patchset is becoming unwieldy, I've put it in a public repo > that I'll keep updated. The following command will fetch it into your > "pviktori-ldap-refactor" branch: > > git fetch git://github.com/encukou/freeipa > ldap-refactor:pviktori-ldap-refactor > > I don't think patch 139 is necessary, I fixed this problem in patch 95 by not including 'dn' as attribute in _entry_to_entity. 
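As a rough illustration of where this refactoring is heading (a sketch only -- the get_entry/update_entry/get_single names below are taken from the patch subjects in this series and the exact final signatures may differ), entries end up being treated as dicts keyed by attribute name instead of going through the old Entry/Entity helper methods:

    def set_description(ldap_conn, dn, text):
        # Old style (Entry/Entity helpers):
        #     entry = ldap_conn.getEntry(dn, ['description'])
        #     entry.setValue('description', text)
        # New style: an LDAPEntry behaves like a dict of attribute -> list of values.
        entry = ldap_conn.get_entry(dn, ['description'])
        entry['description'] = [text]            # item assignment replaces setValue()
        ldap_conn.update_entry(entry)            # update_entry() takes the entry itself
        return entry.get_single('description')   # helper for single-valued attributes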
-- Jan Cholasta From jcholast at redhat.com Mon Jan 28 09:29:46 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 28 Jan 2013 10:29:46 +0100 Subject: [Freeipa-devel] [PATCH] 89 Raise ValidationError on invalid CSV values In-Reply-To: <50F3F272.2000501@redhat.com> References: <50EDA4CE.1060200@redhat.com> <50F3F272.2000501@redhat.com> Message-ID: <5106450A.2000206@redhat.com> On 14.1.2013 12:56, Petr Viktorin wrote: > On 01/09/2013 06:11 PM, Jan Cholasta wrote: >> Hi, >> >> this patch fixes . >> >> Honza >> > > The patch works well, but could you also add a test to ensure we don't > regress in the future? > > Test added. -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-89.1-Raise-ValidationError-on-invalid-CSV-values.patch Type: text/x-patch Size: 2099 bytes Desc: not available URL: From jcholast at redhat.com Mon Jan 28 09:30:34 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 28 Jan 2013 10:30:34 +0100 Subject: [Freeipa-devel] [PATCHES] 91-92 Add support for RFC 6594 SSHFP DNS records In-Reply-To: <51006821.9000901@redhat.com> References: <50EE49EC.7000500@redhat.com> <50EEA00B.8090505@redhat.com> <51006821.9000901@redhat.com> Message-ID: <5106453A.60104@redhat.com> On 23.1.2013 23:45, Rob Crittenden wrote: > Jan Cholasta wrote: >> On 10.1.2013 05:56, Jan Cholasta wrote: >>> Hi, >>> >>> Patch 91 removes module ipapython.compat. The code that uses it doesn't >>> work with ancient Python versions anyway, so there's no need to keep it >>> around. >>> >>> Patch 92 adds support for automatic generation of RFC 6594 SSHFP DNS >>> records to ipa-client-install and host plugin, as described in >>> . Note that >>> still applies. >>> >>> https://fedorahosted.org/freeipa/ticket/2642 >>> >>> Honza >>> >> >> Self-NACK, forgot to actually remove ipapython/compat.py in the first >> patch. Also removed an unnecessary try block from the second patch. >> >> Honza > > These look good. I'm a little concerned about the magic numbers in the > SSHFP code. I know these come from the RFCs. Can you add a comment there > so future developers know where the values for key type and fingerprint > type come from? > > rob Comment added. -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-91.2-Drop-ipapython.compat.patch Type: text/x-patch Size: 6600 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-92.2-Add-support-for-RFC-6594-SSHFP-DNS-records.patch Type: text/x-patch Size: 3073 bytes Desc: not available URL: From pspacek at redhat.com Mon Jan 28 10:23:11 2013 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 28 Jan 2013 11:23:11 +0100 Subject: [Freeipa-devel] FYI: new RFC 6844 on DNS Certification Authority Authorization (CAA) Resource Record In-Reply-To: <20130125224314.8967172E11A@rfc-editor.org> References: <20130125224314.8967172E11A@rfc-editor.org> Message-ID: <5106518F.2000206@redhat.com> Hello list, FYI: New potentially interesting RFC was published. http://tools.ietf.org/html/rfc6844 I'm sorry to people subscribed to rfc-dist list. 
Petr^2 Spacek -------- Original Message -------- Subject: [rfc-dist] RFC 6844 on DNS Certification Authority Authorization (CAA) Resource Record Date: Fri, 25 Jan 2013 14:43:14 -0800 (PST) From: rfc-editor at rfc-editor.org To: ietf-announce at ietf.org, rfc-dist at rfc-editor.org CC: pkix at ietf.org, rfc-editor at rfc-editor.org A new Request for Comments is now available in online RFC libraries. RFC 6844 Title: DNS Certification Authority Authorization (CAA) Resource Record Author: P. Hallam-Baker, R. Stradling Status: Standards Track Stream: IETF Date: January 2013 Mailbox: philliph at comodo.com, rob.stradling at comodo.com Pages: 18 Characters: 36848 Updates/Obsoletes/SeeAlso: None I-D Tag: draft-ietf-pkix-caa-15.txt URL: http://www.rfc-editor.org/rfc/rfc6844.txt The Certification Authority Authorization (CAA) DNS Resource Record allows a DNS domain name holder to specify one or more Certification Authorities (CAs) authorized to issue certificates for that domain. CAA Resource Records allow a public Certification Authority to implement additional controls to reduce the risk of unintended certificate mis-issue. This document defines the syntax of the CAA record and rules for processing CAA records by certificate issuers. [STANDARDS-TRACK] This document is a product of the Public-Key Infrastructure (X.509) Working Group of the IETF. This is now a Proposed Standard. STANDARDS TRACK: This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the Internet Official Protocol Standards (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. This announcement is sent to the IETF-Announce and rfc-dist lists. To subscribe or unsubscribe, see http://www.ietf.org/mailman/listinfo/ietf-announce http://mailman.rfc-editor.org/mailman/listinfo/rfc-dist For searching the RFC series, see http://www.rfc-editor.org/rfcsearch.html. For downloading RFCs, see http://www.rfc-editor.org/rfc.html. Requests for special distribution should be addressed to either the author of the RFC in question, or to rfc-editor at rfc-editor.org. Unless specifically noted otherwise on the RFC itself, all RFCs are for unlimited distribution. The RFC Editor Team Association Management Solutions, LLC _______________________________________________ rfc-dist mailing list rfc-dist at rfc-editor.org https://www.rfc-editor.org/mailman/listinfo/rfc-dist http://www.rfc-editor.org From jcholast at redhat.com Mon Jan 28 14:25:43 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 28 Jan 2013 15:25:43 +0100 Subject: [Freeipa-devel] [PATCH] 97 Pylint cleanup Message-ID: <51068A67.1050804@redhat.com> Hi, this patch fixes . Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-97-Pylint-cleanup.patch Type: text/x-patch Size: 20434 bytes Desc: not available URL: From pviktori at redhat.com Mon Jan 28 14:30:31 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 28 Jan 2013 15:30:31 +0100 Subject: [Freeipa-devel] [PATCH] 89 Raise ValidationError on invalid CSV values In-Reply-To: <5106450A.2000206@redhat.com> References: <50EDA4CE.1060200@redhat.com> <50F3F272.2000501@redhat.com> <5106450A.2000206@redhat.com> Message-ID: <51068B87.5030601@redhat.com> On 01/28/2013 10:29 AM, Jan Cholasta wrote: > On 14.1.2013 12:56, Petr Viktorin wrote: >> On 01/09/2013 06:11 PM, Jan Cholasta wrote: >>> Hi, >>> >>> this patch fixes . >>> >>> Honza >>> >> >> The patch works well, but could you also add a test to ensure we don't >> regress in the future? >> >> > > Test added. > ACK -- Petr? From pviktori at redhat.com Mon Jan 28 15:09:17 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 28 Jan 2013 16:09:17 +0100 Subject: [Freeipa-devel] [PATCHES] 146-164 LDAP code refactoring (Part 4) In-Reply-To: <51063804.4060507@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> Message-ID: <5106949D.408@redhat.com> On 01/28/2013 09:34 AM, Jan Cholasta wrote: > On 25.1.2013 14:54, Petr Viktorin wrote: >> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>> Hello, >>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>> code. >>>>>>> Each should be a self-contained change that doesn't break anything. >>>>>>> >>>>>>> These patches do some general cleanup (some of the changes might >>>>>>> seem >>>>>>> trivial but help a lot when grepping through the code); merge the >>>>>>> common >>>>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that >>>>>>> depends >>>>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>>>> >>>>>>> I'm posting them early so you can see where I'm going, and so you >>>>>>> can >>>>>>> find out if your work will conflict with mine. >>>> >>> >>> Here is a third set of patches. These apply on top of jcholast's patches >>> 94-96. >>> >> >> I found mistakes in two of the patches, attaching fixed versions. >> >> >> >> Since this patchset is becoming unwieldy, I've put it in a public repo >> that I'll keep updated. The following command will fetch it into your >> "pviktori-ldap-refactor" branch: >> >> git fetch git://github.com/encukou/freeipa >> ldap-refactor:pviktori-ldap-refactor >> >> > > I don't think patch 139 is necessary, I fixed this problem in patch 95 > by not including 'dn' as attribute in _entry_to_entity. > You're right. I'm retiring patch 139. We'll need to use entry.dn everywhere, and add an assert so that entry['dn'] is never set. Here is a fourth set of patches. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0146-Remove-unused-bindcert-and-bindkey-arguments-to-IPAd.patch Type: text/x-patch Size: 1739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0147-Turn-the-LDAPError-handler-into-a-context-manager.patch Type: text/x-patch Size: 9528 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0148-Remove-dbdir-binddn-bindpwd-from-IPAdmin.patch Type: text/x-patch Size: 4029 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0149-Remove-IPAdmin.updateEntry-calls-from-fix_replica_ag.patch Type: text/x-patch Size: 2072 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0150-Remove-IPAdmin.get_dns_sorted_by_length.patch Type: text/x-patch Size: 5327 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0151-Replace-IPAdmin.checkTask-by-replication.wait_for_ta.patch Type: text/x-patch Size: 4245 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0152-Introduce-LDAPEntry.get_single-for-getting-single-va.patch Type: text/x-patch Size: 1640 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0153-Remove-special-casing-for-missing-and-single-valued-.patch Type: text/x-patch Size: 1053 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0154-Replace-entry.getValue-by-entry.get_single.patch Type: text/x-patch Size: 29752 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0155-Replace-getList-by-a-get_entries-method.patch Type: text/x-patch Size: 21763 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0156-Remove-toTupleList-and-attrList-from-LDAPEntry.patch Type: text/x-patch Size: 2693 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0157-Rename-LDAPConnection-to-LDAPClient.patch Type: text/x-patch Size: 2833 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0158-Replace-addEntry-with-add_entry.patch Type: text/x-patch Size: 10552 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0159-Replace-deleteEntry-with-delete_entry.patch Type: text/x-patch Size: 6389 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0160-Fix-typo-and-traceback-suppression-in-replication.py.patch Type: text/x-patch Size: 1173 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0161-replace-getEntry-with-get_entry-or-get_entries-if-sc.patch Type: text/x-patch Size: 21243 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0162-Inline-inactivateEntry-in-its-only-caller.patch Type: text/x-patch Size: 2251 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0163-Inline-waitForEntry-in-its-only-caller.patch Type: text/x-patch Size: 4567 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0164-Proxy-LDAP-methods-explicitly-rather-than-using-__ge.patch Type: text/x-patch Size: 2470 bytes Desc: not available URL: From pviktori at redhat.com Mon Jan 28 15:36:17 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Mon, 28 Jan 2013 16:36:17 +0100 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework In-Reply-To: <50E6DC9E.5050702@redhat.com> References: <50E580E2.8090502@redhat.com> <50E58E01.3060402@redhat.com> <50E6DC9E.5050702@redhat.com> Message-ID: <51069AF1.2020705@redhat.com> On 01/04/2013 02:43 PM, Petr Viktorin wrote: > On 01/03/2013 02:56 PM, John Dennis wrote: >> On 01/03/2013 08:00 AM, Petr Viktorin wrote: >>> Hello, >>> >>> The first patch implements logging-related changes to the admintool >>> framework and ipa-ldap-updater (the only thing ported to it so far). >>> The design document is at http://freeipa.org/page/V3/Logging_and_output >>> >>> John, I decided to go ahead and put an explicit "logger" attribute on >>> the tool class rather than adding debug, info, warn. etc methods >>> dynamically using log_mgr.get_logger. I believe it's the cleanest >>> solution. >>> We had a discussion about this in this thread: >>> https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; I >>> didn't get a reaction to my conclusion so I'm letting you know in case >>> you have more to say. >> >> I'm fine with not directly adding the debug, info, warn etc. methods, >> that practice was historical dating back to the days of Jason. However I >> do think it's useful to use a named logger and not the global >> root_logger. I'd prefer we got away from using the root_logger, it's >> continued existence is historical as well and the idea was over time we >> would slowly eliminate it's usage. FWIW the log_mgr.get_logger() is >> still useful for what you want to do. >> >> def get_logger(self, who, bind_logger_names=False) >> >> If you don't set bind_logger_names to True (and pass the class instance >> as who) you won't get the offensive debug, info, etc. methods added to >> the class instance. But it still does all the other bookeeping. >> >> The 'who' in this instance could be either the name of the admin tool or >> the class instance. >> >> Also I'd prefer using the attribute 'log' rather than 'logger'. That >> would make it consistent with code which does already use get_logger() >> passing a class instance because it's adds a 'log' attribute which is >> the logger. Also 'log' is twice as succinct than 'logger' (shorter line >> lengths). >> >> Thus if you do: >> >> log_mgr.get_logger(self) >> >> I think you'll get exactly what you want. A logger named for the class >> and being able to say >> >> self.log.debug() >> self.log.error() >> >> inside the class. >> >> In summary, just drop the True from the get_logger() call. >> > > Thanks! Yes, this works better. Updated patches attached. > Here is patch 117 rebased to current master. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0117-03-Better-logging-for-AdminTool-and-ipa-ldap-updater.patch Type: text/x-patch Size: 15729 bytes Desc: not available URL: From jcholast at redhat.com Tue Jan 29 09:21:16 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 29 Jan 2013 10:21:16 +0100 Subject: [Freeipa-devel] [PATCHES] 137-144 LDAP code refactoring (Part 3) In-Reply-To: <51063804.4060507@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> Message-ID: <5107948C.40105@redhat.com> On 28.1.2013 09:34, Jan Cholasta wrote: > On 25.1.2013 14:54, Petr Viktorin wrote: >> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>> Hello, >>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>> code. >>>>>>> Each should be a self-contained change that doesn't break anything. >>>>>>> >>>>>>> These patches do some general cleanup (some of the changes might >>>>>>> seem >>>>>>> trivial but help a lot when grepping through the code); merge the >>>>>>> common >>>>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that >>>>>>> depends >>>>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>>>> >>>>>>> I'm posting them early so you can see where I'm going, and so you >>>>>>> can >>>>>>> find out if your work will conflict with mine. >>>> >>> >>> Here is a third set of patches. These apply on top of jcholast's patches >>> 94-96. >>> >> >> I found mistakes in two of the patches, attaching fixed versions. >> >> >> >> Since this patchset is becoming unwieldy, I've put it in a public repo >> that I'll keep updated. The following command will fetch it into your >> "pviktori-ldap-refactor" branch: >> >> git fetch git://github.com/encukou/freeipa >> ldap-refactor:pviktori-ldap-refactor >> >> > > I don't think patch 139 is necessary, I fixed this problem in patch 95 > by not including 'dn' as attribute in _entry_to_entity. > A patch from this patchset (part 3) causes some of the dns plugin tests to fail (idnsallowdynupdate is missing in dnszone_add output). Honza -- Jan Cholasta From mkosek at redhat.com Tue Jan 29 12:09:49 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 29 Jan 2013 13:09:49 +0100 Subject: [Freeipa-devel] [PATCH] 356 Add trusconfig-show and trustconfig-mod commands Message-ID: <5107BC0D.6070200@redhat.com> Global trust configuration is generated ipa-adtrust-install script is run. Add convenience commands to show auto-generated options like SID or GUID or options chosen by user (NetBIOS). Most of these options are not modifiable via trustconfig-mod command as it would break current trusts. Unit test file covering these new commands was added. https://fedorahosted.org/freeipa/ticket/3333 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mkosek-356-add-trusconfig-show-and-trustconfig-mod-commands.patch Type: text/x-patch Size: 14811 bytes Desc: not available URL: From sbose at redhat.com Tue Jan 29 13:10:46 2013 From: sbose at redhat.com (Sumit Bose) Date: Tue, 29 Jan 2013 14:10:46 +0100 Subject: [Freeipa-devel] [RFE] Read and use per-service PAC type Message-ID: <20130129131046.GB10477@localhost.localdomain> Hi, please find the design page for ticket https://fedorahosted.org/freeipa/ticket/2960 at http://freeipa.org/page/V3/Read_and_use_per_service_pac_type or below. There are some questions in the text. bye, Sumit = Overview = In ticket #2184 a new option --pac_type was introduced to ipa service-mod to individually set the PAC type for each service ticket. The allowed values are: * MS-PAC, the PAC used by Microsoft and specified in * http://msdn.microsoft.com/en-us/library/cc237917.aspx * PAD, Posix Authorization Data, currently work in progress, draft can * be found at http://tools.ietf.org/html/draft-ietf-krb-wg-pad-01 * NONE, do not add any authorization data to the ticket If the attribute is not set the default values specified in the ipakrbauthzdata attribute of the ipaConfig object is used. ''Question: What about modifying ipaKrbAuthzData for host objects? Currently I think it is only possible with --setattr.'' ''Question: Since ipakrbauthzdata is optional in the ipaGuiConfig objectclass, someone might delete it manually, which hardcoded default do we want to use in this case?'' ''Question: Since ipakrbauthzdata is a multi-value attribute how should the case handled where on value is 'NONE'? I assume 'NONE' wins in this case.' = Use Cases = * For services which do not use the authorization data from the PAC the * PAC type can be set to 'NONE'. This will keep the tickets smaller and * avoids unneeded load on the FreeIPA server during the PAC processing. * For services which can only handle Kerberos tickets up to a maximal * size, e.g. NFS, see #3263, the PAC type should be set to 'NONE'. The * authorization data in the PAC can become quite large and can make the * Kerberos ticket grow larger than the service can handle * For services which cannot handle the default PAC type but need a * different one. (This is for future versions since currently only the * MS-PAC is supported) = Design= Since the UI/CLI changes were already handled by #2184 no additional user interaction is needed here. For the needed changes to the IPA KDC backend the changes done to fix #3263 can be used as a starting point. = Implementation = To avoid issues during upgrade I think all changes done to fix #3263 should be preserved, i.e. the NFS service will have a hardcoded default 'NONE'. Otherwise the LDAP objects of the NFS services must be modified during upgrade. In ipadb_sign_authdata() a call like
ret = get_service_pac_type(server->princ, &pac_type);
can be added, where get_service_pac_type() runs a LDAP search with a filter like '(&(objectclass=ipaService)(krbPrincipalName=SERVER_PRINCIPAL))' which looks for the ipakrbauthzdata attribute. = Feature Managment = === UI === UI related changes were added by ticket #2184. ''Question: it looks like the PAC type cannot be explicitly set to 'NONE' in the UI, I guess this needs fixing?'' === CLI === CLI related changes were added by ticket #2184. = Major configuration options and enablement = This feature does not need any options and is enabled by default. The behaviour is controlled by the value of the ipaKrbAuthzData of the service object and the ipaConfig object. = Replication = Changes only affect the KDC IPA backend. If not all server and replicas are updated with this feature KDCs which are not updated might still return Kerberos tickets with the default PAC type although there is a different one mentioned in the service object. = Updates and Upgrades = Any impact on the updates and upgrades? = Dependencies = This feature does not add any new dependencies. = External Impact = The changes will only touch the KDC IPA backend, no external impact is expected. = RFE Author = [[User:Sbose|Sumit Bose]] From mkosek at redhat.com Tue Jan 29 14:40:49 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 29 Jan 2013 15:40:49 +0100 Subject: [Freeipa-devel] [PATCH] 97 Pylint cleanup In-Reply-To: <51068A67.1050804@redhat.com> References: <51068A67.1050804@redhat.com> Message-ID: <5107DF71.3080800@redhat.com> On 01/28/2013 03:25 PM, Jan Cholasta wrote: > Hi, > > this patch fixes . > > Honza ACK. Pushed to master, ipa-3-1. Martin From mkosek at redhat.com Tue Jan 29 14:43:04 2013 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 29 Jan 2013 15:43:04 +0100 Subject: [Freeipa-devel] [PATCH] 0145 Add the CA cert to LDAP after the CA install In-Reply-To: <5102400A.2030609@redhat.com> References: <510173A7.6080603@redhat.com> <5102400A.2030609@redhat.com> Message-ID: <5107DFF8.4080401@redhat.com> On 01/25/2013 09:19 AM, Martin Kosek wrote: > On 01/24/2013 06:47 PM, Petr Viktorin wrote: >> This makes a benign "CRITICAL" message in ipa-server-install go away. >> >> https://fedorahosted.org/freeipa/ticket/3375 >> > > ACK, works fine. We may wait with a push of this one until a proper triage. > > Martin > Pushed to master, ipa-3-1. Martin From simo at redhat.com Tue Jan 29 15:13:12 2013 From: simo at redhat.com (Simo Sorce) Date: Tue, 29 Jan 2013 10:13:12 -0500 Subject: [Freeipa-devel] [RFE] Read and use per-service PAC type In-Reply-To: <20130129131046.GB10477@localhost.localdomain> References: <20130129131046.GB10477@localhost.localdomain> Message-ID: <1359472392.1358.7.camel@willson.li.ssimo.org> On Tue, 2013-01-29 at 14:10 +0100, Sumit Bose wrote: > = Implementation = > > To avoid issues during upgrade I think all changes done to fix #3263 > should be preserved, i.e. the NFS service will have a hardcoded > default > 'NONE'. Otherwise the LDAP objects of the NFS services must be > modified > during upgrade. > > In ipadb_sign_authdata() a call like >
> ret = get_service_pac_type(server->princ, &pac_type);
>
> can be added, where get_service_pac_type() runs a LDAP search with a > filter like > '(&(objectclass=ipaService)(krbPrincipalName=SERVER_PRINCIPAL))' which > looks for the ipakrbauthzdata attribute. > In ipa-kdb we can keep around data when the principal is retrieved from LDAP. So we should keep around data about the pac_type and then retrieve it through krb5_entry. If we are missing the krb5_entry we should ask MIT to change the interface to pass it in. We should *not* perform additional searches, they are costly. Simo. -- Simo Sorce * Red Hat, Inc * New York From pviktori at redhat.com Tue Jan 29 15:39:53 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 29 Jan 2013 16:39:53 +0100 Subject: [Freeipa-devel] [PATCHES] 146-164 LDAP code refactoring (Part 4) In-Reply-To: <5106949D.408@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> <5106949D.408@redhat.com> Message-ID: <5107ED49.4000502@redhat.com> On 01/28/2013 04:09 PM, Petr Viktorin wrote: > On 01/28/2013 09:34 AM, Jan Cholasta wrote: >> On 25.1.2013 14:54, Petr Viktorin wrote: >>> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>>> Hello, >>>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>>> code. >>>>>>>> Each should be a self-contained change that doesn't break anything. >>>>>>>> >>>>>>>> These patches do some general cleanup (some of the changes might >>>>>>>> seem >>>>>>>> trivial but help a lot when grepping through the code); merge the >>>>>>>> common >>>>>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that >>>>>>>> depends >>>>>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>>>>> >>>>>>>> I'm posting them early so you can see where I'm going, and so you >>>>>>>> can >>>>>>>> find out if your work will conflict with mine. >>>>> >>>> >>>> Here is a third set of patches. These apply on top of jcholast's >>>> patches >>>> 94-96. >>>> >>> >>> I found mistakes in two of the patches, attaching fixed versions. >>> >>> >>> >>> Since this patchset is becoming unwieldy, I've put it in a public repo >>> that I'll keep updated. The following command will fetch it into your >>> "pviktori-ldap-refactor" branch: >>> >>> git fetch git://github.com/encukou/freeipa >>> ldap-refactor:pviktori-ldap-refactor >>> >>> >> >> I don't think patch 139 is necessary, I fixed this problem in patch 95 >> by not including 'dn' as attribute in _entry_to_entity. >> > > You're right. I'm retiring patch 139. > We'll need to use entry.dn everywhere, and add an assert so that > entry['dn'] is never set. > > > Here is a fourth set of patches. > Honza noticed test failures caused by patch 143. Patch 123 grew a conflict with master. Fixes attached. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0143-02-Change-add-update-delete-_entry-to-take-LDAPEntries.patch Type: text/x-patch Size: 4603 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0123-02-Use-explicit-loggers-in-ldap2-code.patch Type: text/x-patch Size: 9594 bytes Desc: not available URL: From sbose at redhat.com Tue Jan 29 15:52:01 2013 From: sbose at redhat.com (Sumit Bose) Date: Tue, 29 Jan 2013 16:52:01 +0100 Subject: [Freeipa-devel] [RFE] Read and use per-service PAC type In-Reply-To: <1359472392.1358.7.camel@willson.li.ssimo.org> References: <20130129131046.GB10477@localhost.localdomain> <1359472392.1358.7.camel@willson.li.ssimo.org> Message-ID: <20130129155201.GA5885@localhost.localdomain> On Tue, Jan 29, 2013 at 10:13:12AM -0500, Simo Sorce wrote: > On Tue, 2013-01-29 at 14:10 +0100, Sumit Bose wrote: > > = Implementation = > > > > To avoid issues during upgrade I think all changes done to fix #3263 > > should be preserved, i.e. the NFS service will have a hardcoded > > default > > 'NONE'. Otherwise the LDAP objects of the NFS services must be > > modified > > during upgrade. > > > > In ipadb_sign_authdata() a call like > >
> > ret = get_service_pac_type(server->princ, &pac_type);
> >
> > can be added, where get_service_pac_type() runs a LDAP search with a > > filter like > > '(&(objectclass=ipaService)(krbPrincipalName=SERVER_PRINCIPAL))' which > > looks for the ipakrbauthzdata attribute. > > > In ipa-kdb we can keep around data when the principal is retrieved from > LDAP. So we should keep around data about the pac_type and then retrieve > it through krb5_entry. > > If we are missing the krb5_entry we should ask MIT to change the > interface to pass it in. ipadb_e_data is already used for extra data. I will update the page accordingly. bye, Sumit > > We should *not* perform additional searches, they are costly. > > Simo. > > -- > Simo Sorce * Red Hat, Inc * New York > From pviktori at redhat.com Tue Jan 29 16:06:15 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 29 Jan 2013 17:06:15 +0100 Subject: [Freeipa-devel] [PATCHES] 0104-0106 Provide means of displaying warning and informational messages on clients In-Reply-To: <50E71D5F.4000304@redhat.com> References: <50C9A709.8070806@redhat.com> <50C9F7A4.80909@redhat.com> <50CA0BDB.4000807@redhat.com> <50CADD74.1090802@redhat.com> <50E71D5F.4000304@redhat.com> Message-ID: <5107F377.5020007@redhat.com> On 01/04/2013 07:20 PM, Petr Viktorin wrote: > On 12/14/2012 09:04 AM, Jan Cholasta wrote: >> On 13.12.2012 18:09, Petr Viktorin wrote: >>> On 12/13/2012 04:43 PM, Martin Kosek wrote: >>>> On 12/13/2012 10:59 AM, Petr Viktorin wrote: >>>>> It's time to give this to another set of eyes :) >>>>> >>>>> Design document: http://freeipa.org/page/V3/Messages >>>>> Ticket: https://fedorahosted.org/freeipa/ticket/2732 >>>>> >>>>> More info is in commit messages. >>>>> >>>>> >>>>> Because of https://fedorahosted.org/freeipa/ticket/3294, I needed to >>>>> change the >>>>> design document: when the client doesn't send the API version, it is >>>>> assumed >>>>> it's at a version before capabilities were introduced (i.e. 2.47). >>>>> The client still gets a warning if the version is missing. Except for >>>>> those >>>>> commands where IPA didn't send a version -- ping, cert-show, etc. -- >>>>> the >>>>> warning wouldn't pass validation on old clients. (I'm assuming that >>>>> our client >>>>> is so far the only one that validates so strictly.) >>>> >>>> I did a basic test of this patch and also quickly read through the >>>> patches and >>>> besides nitpicks (like unused inspect module in >>>> tests/test_ipalib/test_messages.py in patch 0105) I did not find any >>>> obvious >>>> errors in the Python code. >>> >>> Noted, will fix in future versions of the patch. >>> >>>> >>>> However, this patch breaks WebUI badly, I did not even get to a log in >>>> screen. >>>> Cooperation with Petr Vobornik will be needed. In my case, I got blank >>>> screen >>>> and Javascript error: >>>> >>>> TypeError: IPA.messages.dialogs is undefined >>>> https://vm-037.idm.lab.bos.redhat.com/ipa/ui/ipa.js >>>> Line 1460 >>>> >>>> I assume this is related to the Internal Error that was returned in >>>> the JSON call >>>> >>>> { >>>> "error": null, >>>> "id": null, >>>> "principal": "admin at IDM.LAB.BOS.REDHAT.COM", >>>> "result": { >>>> "count": 5, >>>> "results": [ >>>> { >>>> "error": "an internal error has occurred", >>>> "error_code": 903, >>>> "error_name": "InternalError" >>>> }, >>>> { >>>> ... 
>>>> >>>> This can be reproduced with: >>>> >>>> # curl -v -H "Content-Type:application/json" -H >>>> "referer:https://`hostname`/ipa" -H "Accept:applicaton/json" >>>> --negotiate -u : >>>> --cacert /etc/ipa/ca.crt -d >>>> '{"method":"i18n_messages","params":[[],{}],"id":0}' -X POST >>>> https://`hostname`/ipa/json >>> >>> Good catch! The i18n_messages plugin already defines a "messages" >>> output. When I renamed this from "warnings" to "messages" I forgot to >>> check for clashes. >>> Since i18n_messages is an internal command only used by the Web UI, we >>> can rename its output to "texts" without breaking compatibility. >>> >>> I'm attaching a preliminary fix (for both backend and UI), but hopefully >>> it won't be necessary, see below. >>> >>>> I am also not sure I like the requirement of a specific version option >>>> to be >>>> always passed. I would prefer that missing version option would mean >>>> "I use the >>>> most recent version of API" instead - it would make the custom >>>> JSONRPC/XMLRPC >>>> calls easier to use. >>>> >>>> But since the version option was not being sent for some commands, we >>>> may not >>>> have a choice anyway if we do not want to break old clients in case we >>>> add some >>>> capabilities to these commands. >>>> >>> >>> I see three other options, all worse: >>> - Do not use capabilities for the affected commands, meaning no new >>> functionality can be added to them (and by extension, no new >>> functionality common to all commands can be added). >>> - Treat a missing version as the current version >>> - Break backwards compatibility >>> >>> And one possibly better (thanks to Petr? and Martin for opening my eyes >>> off-list!): >>> - Deprecate XML-RPC. All XML-RPC requests would be pinned to current >>> version (2.47). Capabilities/messages would only apply to JSON-RPC. >>> >>> This would also allow us to solve the above name-clashing problem >>> elegantly. Here is a reminder of what a JSON response looks like: >>> >>> { >>> "error": null, >>> "id": 0, >>> "principal": "admin at IDM.LAB.BOS.REDHAT.COM", >>> "result": { >>> "summary": "IPA server version 3.1.0GIT2e4bd02. API version >>> 2.47" >>> }, >>> "version": "3.1.0GIT2e4bd02" >>> } >>> >>> A XML-RPC response only contains the "result" part of that. >>> So with JSON, we can put the messages in the top level, which is much >>> better design. > > Custom info in the "top level" seems to be a violation of the JSON-RPC > spec. I'd rather not do more of those, so I'm withdrawing this idea. > >>> >>> XML-RPC sucks in other ways too. We already have a workaround for its >>> inability to attach extra info to errors (commit >>> 88262a75ffe7a25640333dcc4da2100830cae821, Add instructions support to >>> PublicError). >>> >>> I've opened a RFC here: https://fedorahosted.org/freeipa/ticket/3299. >>> >> >> +1, XML-RPC sucks. This should have been done a long time ago. >> >> Honza >> > > Here are new patches. > > XML-RPC requests with missing version are assumed to be old (the version > before capabilities are introduced, 2.47). This takes care of backcompat > with clients with bug 3294. > JSON-RPC requests with missing version are assumed to be testing calls > (e.g. curl), they get behavior of the latest version but also a warning. > I've also added this info to the design doc. > > It turns out that these patches don't depend on whether our client uses > XML-RPC or JSON-RPC. If/when it supports both, I'll be able to add some > extra unit tests. 
> Patch 106 had a minor conflict with master, attaching fixed version. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0106-03-Add-client-capabilities-enable-messages.patch Type: text/x-patch Size: 18925 bytes Desc: not available URL: From rcritten at redhat.com Tue Jan 29 17:54:37 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 29 Jan 2013 12:54:37 -0500 Subject: [Freeipa-devel] [PATCH] 1079 address CA subsystem renewal issues In-Reply-To: <50F5893C.5040000@redhat.com> References: <50E9D7F2.1060800@redhat.com> <50EAC0F5.8050607@redhat.com> <50EAD70D.8010000@redhat.com> <50EAE65B.60409@redhat.com> <50EAFAFE.1040200@redhat.com> <50EAFE68.3030109@redhat.com> <50EB1E85.2070109@redhat.com> <50F0A4F4.5030304@redhat.com> <50F40C5E.9010302@redhat.com> <50F47F1A.30702@redhat.com> <50F56A82.4090806@redhat.com> <50F57B73.6070403@redhat.com> <50F580A7.2070004@redhat.com> <50F5893C.5040000@redhat.com> Message-ID: <51080CDD.2010109@redhat.com> Petr Viktorin wrote: > On 01/15/2013 05:15 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> On 01/15/2013 03:41 PM, Petr Viktorin wrote: >>>> On 01/14/2013 10:56 PM, Rob Crittenden wrote: >>>>> Petr Viktorin wrote: >>>>>> On 01/12/2013 12:49 AM, Rob Crittenden wrote: >>>>>>> Rob Crittenden wrote: >>>>>>>> Petr Viktorin wrote: >>>>>>>>> On 01/07/2013 05:42 PM, Rob Crittenden wrote: >>>>>>>>>> Petr Viktorin wrote: >>>>>>>>>>> On 01/07/2013 03:09 PM, Rob Crittenden wrote: >>>>>>>>>>>> Petr Viktorin wrote: >>>>>>>>> [...] >>>>>>>>>>>>> >>>>>>>>>>>>> Works for me, but I have some questions (this is an area I >>>>>>>>>>>>> know >>>>>>>>>>>>> little >>>>>>>>>>>>> about). >>>>>>>>>>>>> >>>>>>>>>>>>> Can we be 100% sure these certs are always renewed >>>>>>>>>>>>> together? Is >>>>>>>>>>>>> certmonger the only possible mechanism to update them? >>>>>>>>>>>> >>>>>>>>>>>> You raise a good point. If though some mechanism someone >>>>>>>>>>>> replaces >>>>>>>>>>>> one of >>>>>>>>>>>> these certs it will cause the script to fail. Some >>>>>>>>>>>> notification of >>>>>>>>>>>> this >>>>>>>>>>>> failure will be logged though, and of course, the certs >>>>>>>>>>>> won't be >>>>>>>>>>>> renewed. >>>>>>>>>>>> >>>>>>>>>>>> One could conceivably manually renew one of these certificates. >>>>>>>>>>>> It is >>>>>>>>>>>> probably a very remote possibility but it is non-zero. >>>>>>>>>>>> >>>>>>>>>>>>> Can we be sure certmonger always does the updates in parallel? >>>>>>>>>>>>> If it >>>>>>>>>>>>> managed to update the audit cert before starting on the >>>>>>>>>>>>> others, >>>>>>>>>>>>> we'd >>>>>>>>>>>>> get >>>>>>>>>>>>> no CA restart for the others. >>>>>>>>>>>> >>>>>>>>>>>> These all get issued at the same time so should expire at the >>>>>>>>>>>> same >>>>>>>>>>>> time >>>>>>>>>>>> as well (see problem above). The script will hang around for 10 >>>>>>>>>>>> minutes >>>>>>>>>>>> waiting for the renewal to complete, then give up. >>>>>>>>>>> >>>>>>>>>>> The certs might take different amounts of time to update, right? >>>>>>>>>>> Eventually, the expirations could go out of sync enough for >>>>>>>>>>> it to >>>>>>>>>>> matter. 
>>>>>>>>>>> AFAICS, without proper locking we still get a race condition >>>>>>>>>>> when >>>>>>>>>>> the >>>>>>>>>>> other certs start being renewed some time (much less than 10 >>>>>>>>>>> min) >>>>>>>>>>> after >>>>>>>>>>> the audit one: >>>>>>>>>>> >>>>>>>>>>> (time axis goes down) >>>>>>>>>>> >>>>>>>>>>> audit cert other cert >>>>>>>>>>> ---------- ---------- >>>>>>>>>>> certmonger does renew . >>>>>>>>>>> post-renew script starts . >>>>>>>>>>> check state of other certs: OK . >>>>>>>>>>> . certmonger starts renew >>>>>>>>>>> certutil modifies NSS DB + certmonger modifies NSS DB == >>>>>>>>>>> boom! >>>>>>>>>> >>>>>>>>>> This can't happen because we count the # of expected certs and >>>>>>>>>> wait >>>>>>>>>> until all are in MONITORING before continuing. >>>>>>>>> >>>>>>>>> The problem is that they're also in MONITORING before the whole >>>>>>>>> renewal >>>>>>>>> starts. If the script happens to check just before the state >>>>>>>>> changes >>>>>>>>> from MONITORING to GENERATING_CSR or whatever, we can get >>>>>>>>> corruption. >>>>>>>>> >>>>>>>>>> The worse that would >>>>>>>>>> happen is the trust wouldn't be set on the audit cert and dogtag >>>>>>>>>> wouldn't be restarted. >>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>>> The state the system would be in is this: >>>>>>>>>>>> >>>>>>>>>>>> - audit cert trust not updated, so next restart of CA will fail >>>>>>>>>>>> - CA is not restarted so will not use updated certificates >>>>>>>>>>>> >>>>>>>>>>>>> And anyway, why does certmonger do renewals in parallel? It >>>>>>>>>>>>> seems >>>>>>>>>>>>> that >>>>>>>>>>>>> if it did one at a time, always waiting until the post-renew >>>>>>>>>>>>> script is >>>>>>>>>>>>> done, this patch wouldn't be necessary. >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> From what Nalin told me certmonger has some coarse locking >>>>>>>>>>>> such >>>>>>>>>>>> that >>>>>>>>>>>> renewals in a the same NSS database are serialized. As you >>>>>>>>>>>> point >>>>>>>>>>>> out, it >>>>>>>>>>>> would be nice to extend this locking to the post renewal >>>>>>>>>>>> scripts. We >>>>>>>>>>>> can >>>>>>>>>>>> ask Nalin about it. That would fix the potential corruption >>>>>>>>>>>> issue. >>>>>>>>>>>> It is >>>>>>>>>>>> still much nicer to not have to restart dogtag 4 times. >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Well, three extra restarts every few years seems like a small >>>>>>>>>>> price to >>>>>>>>>>> pay for robustness. >>>>>>>>>> >>>>>>>>>> It is a bit of a problem though because the certs all renew >>>>>>>>>> within >>>>>>>>>> seconds so end up fighting over who is restarting dogtag. This >>>>>>>>>> can >>>>>>>>>> cause >>>>>>>>>> some renewals go into a failure state to be retried later. >>>>>>>>>> This is >>>>>>>>>> fine >>>>>>>>>> functionally but makes QE a bit of a pain. You then have to make >>>>>>>>>> sure >>>>>>>>>> that renewal is basically done, then restart certmonger and check >>>>>>>>>> everything again, over and over until all the certs are renewed. >>>>>>>>>> This is >>>>>>>>>> difficult to automate. >>>>>>>>> >>>>>>>>> So we need to extend the certmonger lock, and wait until Dogtag is >>>>>>>>> back >>>>>>>>> up before exiting the script. That way it'd still take longer >>>>>>>>> than 1 >>>>>>>>> restart, but all the renews should succeed. >>>>>>>>> >>>>>>>> >>>>>>>> Right, but older dogtag versions don't have the handy servlet to >>>>>>>> tell >>>>>>>> that the service is actually up and responding. 
So it is >>>>>>>> difficult to >>>>>>>> tell from tomcat alone whether the CA is actually up and handling >>>>>>>> requests. >>>>>>>> >>>>>>> >>>>>>> Revised patch that takes advantage of new version of certmonger. >>>>>>> certmonger-0.65 adds locking from the time renewal begins to the >>>>>>> end of >>>>>>> the post_save_command. This lets us be sure that no other certmonger >>>>>>> renewals will have the NSS database open in read-write mode. >>>>>>> >>>>>>> We need to be sure that tomcat is shut down before we let certmonger >>>>>>> save the certificate to the NSS database because dogtag opens its >>>>>>> database read/write and two writers can cause corruption. >>>>>>> >>>>>>> rob >>>>>>> >>>>>> >>>>>> stop_pkicad and start_pkicad need the Dogtag version check to select >>>>>> pki_cad/pki_tomcatd. >>>>> >>>>> Fixed. >>>>> >>>>>> >>>>>> A more serious issue is that stop_pkicad needs to be installed on >>>>>> upgrades. Currently the whole enable_certificate_renewal step in >>>>>> ipa-upgradeconfig is skipped if it was done before. >>>>> >>>>> I added a separate upgrade test for this. It currently won't work in >>>>> SELinux enforcing mode because certmonger isn't allowed to talk to >>>>> dbus >>>>> in an rpm post script. It's being looked at. >>>>> >>>>>> In stop_pkicad can you change the first log message to "certmonger >>>>>> stopping %sd"? It's before the action so we don't want past tense. >>>>> >>>>> Fixed. >>>>> >>>>> rob >>>> >>>> I get a bunch of errors when installing the RPM: >>>> >>> [...] >>>> >>> >>> This is the SELinux issue you were talking about. Sorry for not catching >>> that. >>> >>> With enforcing off, the patch looks & works well for me. I'm just >>> concerned about this change in ipa-upgradeconfig: >>> >>> @@ -707,7 +754,7 @@ def main(): >>> # configuration has changed, restart the name server >>> root_logger.info('Changes to named.conf have been made, >>> restart named') >>> bindinstance.BindInstance(fstore).restart() >>> - ca_restart = ca_restart or enable_certificate_renewal(ca) or >>> upgrade_ipa_profile(ca, api.env.domain, fqdn) >>> + ca_restart = ca_restart or enable_certificate_renewal(ca) or >>> upgrade_ipa_profile(ca, api.env.domain, fqdn) or >>> certificate_renewal_stop_ca(ca) >>> >>> If the enable_certificate_renewal step was done already, but >>> upgrade_ipa_profile requests a CA restart, then the short-circuiting >>> `or` will be satisfied and certificate_renewal_stop_ca won't be run. >>> >>> Since each upgrade step has its own checking, I think it would be safer >>> to use something like: >>> ca_restart = certificate_renewal_stop_ca(ca) or ca_restart >>> >>> or even: >>> ca_restart = any([ >>> ca_restart, >>> enable_certificate_renewal(ca), >>> upgrade_ipa_profile(ca, api.env.domain, fqdn), >>> certificate_renewal_stop_ca(ca), >>> ]) >>> >> >> I like this suggestion very much. Updated patch attached. >> >> rob >> > > ACK, just remove the trailing space in the `]) ` line. > > > We'll need to make sure the SELinux issue isn't forgotten. > I was waiting for a fixed selinux-policy package to get pushed and it finally has. Pushed patch to master, ipa-3-1 and ipa-3-0 rob From abokovoy at redhat.com Tue Jan 29 20:50:02 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 29 Jan 2013 22:50:02 +0200 Subject: [Freeipa-devel] krb5.conf on IPA server and SSSD setup Message-ID: <20130129204918.GB4506@redhat.com> Hi! I've been chasing few bugs in FreeIPA's trusted domains support and found out some grave bugs in both SSSD and FreeIPA. 
On the FreeIPA server side we configure krb5.conf using the following settings:
-------------------------------------------------
includedir /var/lib/sss/pubconf/krb5.include.d
[libdefaults]
...
default_realm = EXAMPLE.COM
dns_lookup_realm = false
dns_lookup_kdc = true
...
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
--------------------------------------------------
Then SSSD generates files which contain the domain_realm mapping for trusted domains in /var/lib/sss/pubconf/krb5.include.d and libkrb5 will read them as part of the krb5.conf sourcing. A few problems here:
1. KDC needs to know this mapping information in order to issue referrals to the clients. There is a heuristic in libkrb5 that uses the domain_realm mapping first and the default_realm value if the mapping didn't catch the principal which was not found in the database.
2. krb5.conf is parsed by applications usually only on startup. KDC is not an exception, so any changes to krb5.conf would require restarting the KDC if we want them to be noticed.
3. Adding a new trust therefore implies a KDC restart. It also implies that SSSD should have updated the mapping, which is not necessarily true time-wise.
As a result, operations like mapping trusted domain users via external groups in IPA might fail, as IPA code running on the IPA server needs to contact the LDAP service at the trusted domain's Global Catalog using SASL GSSAPI authentication. When a ticket is obtained, we don't explicitly specify the realm of the service principal, it is constructed by the underlying libldap/libsasl code. If an explicit domain_realm mapping is in place on the client side (and here the client is the server, as the request is issued from IPA httpd code), the trusted domain's Global Catalog host will be automatically mapped to the trusted domain realm. Otherwise the KDC will hint the client with a referral to the proper KDC for the trusted domain realm. This is the step that might fail if the trusted domain is a sub-domain of the IPA domain, for example, ad.example.com. In this case our explicit mapping for example.com will prevail and requests will always be sent for a principal in the EXAMPLE.COM realm. On top of that, since the client and the KDC are the same host, the KDC will use the domain_realm mapping as well and hint the client with a referral to itself (since .example.com = EXAMPLE.COM). Obtaining the ticket will fail again.
So, I was trying to solve this issue and I've got to the following setup with Nalin's help:
1. Define the following settings in [libdefaults] of krb5.conf:
default_realm = EXAMPLE.COM
dns_lookup_realm = false
dns_lookup_kdc = true
realm_try_domains = 0
realm_try_domains = 0 forces libkrb5 to fall back to discovering the realm from the domain of the host via DNS if there is no other explicit mapping.
2. Remove any explicit domain_realm mapping for our default realm since it will be implicitly generated from the default_realm value by the fallback code anyway.
With these changes both the KDC and libkrb5 will be able to properly serve both own-domain and trusted-domain requests. At some point SSSD will kick in with its explicit mapping for the trusted domain realm. Still, the KDC will not be able to see this mapping until restart, but in Krb5 1.12 we are getting a new pluggable interface that will allow refreshing the KDC configuration.
And here I'm coming to a grave error in the SSSD code: the name of the explicit mapping file contains the non-filtered domain name, which contains a dot. The krb5.conf manual page states that includedir only sources files whose names are constructed from alpha-numeric chars, dashes and underscores. Files with other characters are ignored.
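To make the file-name constraint concrete, here is a minimal Python sketch (illustrative only, this is not the SSSD code and the helper name is made up) of the kind of name mangling includedir forces on us, using the dot-to-underscore and IDN/Punycode handling suggested below:

import re

def krb5_include_file_name(domain):
    # 'domain' is a unicode domain name, e.g. u'example.com' or an IDN
    # domain.  IDNA encoding turns IDN labels into pure-ASCII Punycode
    # (RFC 3492); anything outside [A-Za-z0-9_-] is then replaced with
    # an underscore, because includedir silently skips any other name.
    ascii_domain = domain.encode('idna')
    return 'domain_realm_' + re.sub(r'[^a-zA-Z0-9_-]', '_', ascii_domain)

# krb5_include_file_name(u'example.com') == 'domain_realm_example_com'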
So dots as in domain_realm_example.com are ignored and our mapping is never sourced. For IDN domains we will also need to transform the name into its Punycode form (RFC 3492) to avoid breaking out of the alpha-numeric space. I'd suggest replacing dots with underscores. The file name is irrelevant to libkrb5 after it has been read as part of includedir processing, and the files are only written by the SSSD. -- / Alexander Bokovoy From jhrozek at redhat.com Tue Jan 29 21:03:38 2013 From: jhrozek at redhat.com (Jakub Hrozek) Date: Tue, 29 Jan 2013 22:03:38 +0100 Subject: [Freeipa-devel] [SSSD] krb5.conf on IPA server and SSSD setup In-Reply-To: <20130129204918.GB4506@redhat.com> References: <20130129204918.GB4506@redhat.com> Message-ID: <20130129210338.GO5523@hendrix.brq.redhat.com> On Tue, Jan 29, 2013 at 10:50:02PM +0200, Alexander Bokovoy wrote: > And here I'm coming to grave error in the SSSD code: the name of > explicit mapping file contains non-filtered domain name, which contains > dot.
krb5.conf manual page states that includedir allows to source all > > files which names are constructed from alpha-numeric chars, dashes and > > underscores. > > > > Files with other characters are ignored. So dots as in > > domain_realm_example.com are ignored and our mapping is never sourced. > > > > For IDN domains we also will need to transform the name into its > > Punycode (RFC3492) to avoid breaking out of alpha-numeric space. > > > > I'd suggest replacing dots with underscores. > > Please file a ticket OK, I cloned the F18 bug into: https://fedorahosted.org/sssd/ticket/1795 From dpal at redhat.com Wed Jan 30 04:35:33 2013 From: dpal at redhat.com (Dmitri Pal) Date: Tue, 29 Jan 2013 23:35:33 -0500 Subject: [Freeipa-devel] OTP Design Message-ID: <5108A315.2000204@redhat.com> Hello, We started to shape a page for the OTP prototyping work we are doing. It is work in progress but it has enough information to share and discuss. http://freeipa.org/page/V3/OTP Comments welcome! -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pviktori at redhat.com Wed Jan 30 09:53:47 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 10:53:47 +0100 Subject: [Freeipa-devel] [PATCHES] 146-164 LDAP code refactoring (Part 4) In-Reply-To: <5107ED49.4000502@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> <5106949D.408@redhat.com> <5107ED49.4000502@redhat.com> Message-ID: <5108EDAB.9040406@redhat.com> On 01/29/2013 04:39 PM, Petr Viktorin wrote: > On 01/28/2013 04:09 PM, Petr Viktorin wrote: >> On 01/28/2013 09:34 AM, Jan Cholasta wrote: >>> On 25.1.2013 14:54, Petr Viktorin wrote: >>>> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>>>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>>>> Hello, >>>>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>>>> code. >>>>>>>>> Each should be a self-contained change that doesn't break >>>>>>>>> anything. >>>>>>>>> >>>>>>>>> These patches do some general cleanup (some of the changes might >>>>>>>>> seem >>>>>>>>> trivial but help a lot when grepping through the code); merge the >>>>>>>>> common >>>>>>>>> parts LDAPEntry, Entry and Entity classes; and move stuff that >>>>>>>>> depends >>>>>>>>> on an installed server out of IPASimpleLDAPObject and SchemaCache. >>>>>>>>> >>>>>>>>> I'm posting them early so you can see where I'm going, and so you >>>>>>>>> can >>>>>>>>> find out if your work will conflict with mine. >>>>>> >>>>> >>>>> Here is a third set of patches. These apply on top of jcholast's >>>>> patches >>>>> 94-96. >>>>> >>>> >>>> I found mistakes in two of the patches, attaching fixed versions. >>>> >>>> >>>> >>>> Since this patchset is becoming unwieldy, I've put it in a public repo >>>> that I'll keep updated. The following command will fetch it into your >>>> "pviktori-ldap-refactor" branch: >>>> >>>> git fetch git://github.com/encukou/freeipa >>>> ldap-refactor:pviktori-ldap-refactor >>>> >>>> >>> >>> I don't think patch 139 is necessary, I fixed this problem in patch 95 >>> by not including 'dn' as attribute in _entry_to_entity. 
>>> >> >> You're right. I'm retiring patch 139. >> We'll need to use entry.dn everywhere, and add an assert so that >> entry['dn'] is never set. >> >> >> Here is a fourth set of patches. >> > > Honza noticed test failures caused by patch 143. Patch 123 grew a > conflict with master. Fixes attached. > Today, patch 144 had a merge conflict. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0144-02-Remove-unused-imports-from-ipaserver-install.patch Type: text/x-patch Size: 11144 bytes Desc: not available URL: From pviktori at redhat.com Wed Jan 30 10:08:21 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 11:08:21 +0100 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework In-Reply-To: <51069AF1.2020705@redhat.com> References: <50E580E2.8090502@redhat.com> <50E58E01.3060402@redhat.com> <50E6DC9E.5050702@redhat.com> <51069AF1.2020705@redhat.com> Message-ID: <5108F115.9080902@redhat.com> On 01/28/2013 04:36 PM, Petr Viktorin wrote: > On 01/04/2013 02:43 PM, Petr Viktorin wrote: >> On 01/03/2013 02:56 PM, John Dennis wrote: >>> On 01/03/2013 08:00 AM, Petr Viktorin wrote: >>>> Hello, >>>> >>>> The first patch implements logging-related changes to the admintool >>>> framework and ipa-ldap-updater (the only thing ported to it so far). >>>> The design document is at http://freeipa.org/page/V3/Logging_and_output >>>> >>>> John, I decided to go ahead and put an explicit "logger" attribute on >>>> the tool class rather than adding debug, info, warn. etc methods >>>> dynamically using log_mgr.get_logger. I believe it's the cleanest >>>> solution. >>>> We had a discussion about this in this thread: >>>> https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; >>>> I >>>> didn't get a reaction to my conclusion so I'm letting you know in case >>>> you have more to say. >>> >>> I'm fine with not directly adding the debug, info, warn etc. methods, >>> that practice was historical dating back to the days of Jason. However I >>> do think it's useful to use a named logger and not the global >>> root_logger. I'd prefer we got away from using the root_logger, it's >>> continued existence is historical as well and the idea was over time we >>> would slowly eliminate it's usage. FWIW the log_mgr.get_logger() is >>> still useful for what you want to do. >>> >>> def get_logger(self, who, bind_logger_names=False) >>> >>> If you don't set bind_logger_names to True (and pass the class instance >>> as who) you won't get the offensive debug, info, etc. methods added to >>> the class instance. But it still does all the other bookeeping. >>> >>> The 'who' in this instance could be either the name of the admin tool or >>> the class instance. >>> >>> Also I'd prefer using the attribute 'log' rather than 'logger'. That >>> would make it consistent with code which does already use get_logger() >>> passing a class instance because it's adds a 'log' attribute which is >>> the logger. Also 'log' is twice as succinct than 'logger' (shorter line >>> lengths). >>> >>> Thus if you do: >>> >>> log_mgr.get_logger(self) >>> >>> I think you'll get exactly what you want. A logger named for the class >>> and being able to say >>> >>> self.log.debug() >>> self.log.error() >>> >>> inside the class. >>> >>> In summary, just drop the True from the get_logger() call. >>> >> >> Thanks! Yes, this works better. Updated patches attached. >> > > > Here is patch 117 rebased to current master. > Rebased again. 
-- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0117-04-Better-logging-for-AdminTool-and-ipa-ldap-updater.patch Type: text/x-patch Size: 15394 bytes Desc: not available URL: From pviktori at redhat.com Wed Jan 30 12:04:17 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 13:04:17 +0100 Subject: [Freeipa-devel] Move ipaldap to ipalib? Message-ID: <51090C41.8020807@redhat.com> Hello, I've noticed that the client installer uses python-ldap directly, and duplicates some code we already have. Most recently, a convert_ldap_error function appeared in ipautil. Similar mechanisms are already in ipaldap, ipa-csreplica-manage, ipa-replica-manage, and my patches remove one in ldap2. Are there any objections to moving ipaldap.py to ipalib? -- Petr? From mkosek at redhat.com Wed Jan 30 12:42:18 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 30 Jan 2013 13:42:18 +0100 Subject: [Freeipa-devel] [PATCH] 357 Use fully qualified CCACHE names Message-ID: <5109152A.9030205@redhat.com> Some parts of install scripts used only ccache name as returned by krbV.CCache.name attribute. However, when this name is used again to initialize krbV.CCache object or when it is used in KRB5CCNAME environmental variable, it fails for new DIR type of CCACHE. We should always use both CCACHE type and name when referring to them to avoid these crashes. ldap2 backend was also updated to accept directly krbV.CCache object which contains everything we need to authenticate with ccache. https://fedorahosted.org/freeipa/ticket/3381 --- Please note, that this fix is rather a short/medium-term fix for Fedora 18. In a long term we should consolidate our CCACHE manipulation code, it now uses several different wrappers or just uses krbV python library directly. I did not do any global refactoring in this patch, this should be done after we decide if we want to create a new, more usable krb5 library bindings as was already discussed in the past. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-357-use-fully-qualified-ccache-names.patch Type: text/x-patch Size: 7625 bytes Desc: not available URL: From jdennis at redhat.com Wed Jan 30 14:03:21 2013 From: jdennis at redhat.com (John Dennis) Date: Wed, 30 Jan 2013 09:03:21 -0500 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <51090C41.8020807@redhat.com> References: <51090C41.8020807@redhat.com> Message-ID: <51092829.8060503@redhat.com> On 01/30/2013 07:04 AM, Petr Viktorin wrote: > Hello, > > I've noticed that the client installer uses python-ldap directly, and > duplicates some code we already have. > > Most recently, a convert_ldap_error function appeared in ipautil. > Similar mechanisms are already in ipaldap, ipa-csreplica-manage, > ipa-replica-manage, and my patches remove one in ldap2. see ticket https://fedorahosted.org/freeipa/ticket/3296 It discusses the convert_ldap_error function(s), their duplication and it's move to ipautil. > > Are there any objections to moving ipaldap.py to ipalib? > That would make it common code shared between all components which makes sense to me. -- John Dennis Looking to carve out IT costs? 
www.redhat.com/carveoutcosts/ From mkosek at redhat.com Wed Jan 30 14:12:06 2013 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 30 Jan 2013 15:12:06 +0100 Subject: [Freeipa-devel] [PATCH] 358-359 Fix openldap migration errors Message-ID: <51092A36.8000103@redhat.com> These 2 attached patches were generated based on my debugging session with "tsunamie" and helping him dealing with migration from his openldap DS. With these applied, migrate-ds command no longer crashes with an error. I can lend my openldap instance I used when developing these patches. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-358-fix-migration-for-openldap-ds.patch Type: text/x-patch Size: 3832 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-359-allow-extra-whitespaces-in-attr-params.patch Type: text/x-patch Size: 84856 bytes Desc: not available URL: From rcritten at redhat.com Wed Jan 30 14:28:55 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 30 Jan 2013 09:28:55 -0500 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <51090C41.8020807@redhat.com> References: <51090C41.8020807@redhat.com> Message-ID: <51092E27.1030701@redhat.com> Petr Viktorin wrote: > Hello, > > I've noticed that the client installer uses python-ldap directly, and > duplicates some code we already have. > > Most recently, a convert_ldap_error function appeared in ipautil. > Similar mechanisms are already in ipaldap, ipa-csreplica-manage, > ipa-replica-manage, and my patches remove one in ldap2. > > Are there any objections to moving ipaldap.py to ipalib? > ipaldap.py is going away eventually, right? It is probably better suited for ipapython if we move it though. rob From pviktori at redhat.com Wed Jan 30 14:41:37 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 15:41:37 +0100 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <51092E27.1030701@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> Message-ID: <51093121.5090002@redhat.com> On 01/30/2013 03:28 PM, Rob Crittenden wrote: > Petr Viktorin wrote: >> Hello, >> >> I've noticed that the client installer uses python-ldap directly, and >> duplicates some code we already have. >> >> Most recently, a convert_ldap_error function appeared in ipautil. >> Similar mechanisms are already in ipaldap, ipa-csreplica-manage, >> ipa-replica-manage, and my patches remove one in ldap2. >> >> Are there any objections to moving ipaldap.py to ipalib? >> > > ipaldap.py is going away eventually, right? No, the common code is in ipaldap.py. ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the magic per-thread connections, and has some extra helpers that can assume IPA is installed. > It is probably better suited for ipapython if we move it though. Sure, makes no difference to me :) To be honest though, I never got the difference between ipalib and ipapython. Could you explain it for me? -- Petr? From dpal at redhat.com Wed Jan 30 14:56:39 2013 From: dpal at redhat.com (Dmitri Pal) Date: Wed, 30 Jan 2013 09:56:39 -0500 Subject: [Freeipa-devel] Move ipaldap to ipalib? 
In-Reply-To: <51093121.5090002@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> <51093121.5090002@redhat.com> Message-ID: <510934A7.1070006@redhat.com> On 01/30/2013 09:41 AM, Petr Viktorin wrote: > On 01/30/2013 03:28 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> Hello, >>> >>> I've noticed that the client installer uses python-ldap directly, and >>> duplicates some code we already have. >>> >>> Most recently, a convert_ldap_error function appeared in ipautil. >>> Similar mechanisms are already in ipaldap, ipa-csreplica-manage, >>> ipa-replica-manage, and my patches remove one in ldap2. >>> >>> Are there any objections to moving ipaldap.py to ipalib? >>> >> >> ipaldap.py is going away eventually, right? > > No, the common code is in ipaldap.py. > ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the > magic per-thread connections, and has some extra helpers that can > assume IPA is installed. > >> It is probably better suited for ipapython if we move it though. > > Sure, makes no difference to me :) > To be honest though, I never got the difference between ipalib and > ipapython. Could you explain it for me? > Though I am all for the consolidation and no duplication please do not forget that LDAP connection from the install script is someone different. I remember there was a discussion somewhere about that. Just keep in mind that there are some special cases that might be lost during consolidation that would cause regressions so please be careful. -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. ------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From pviktori at redhat.com Wed Jan 30 15:13:14 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 16:13:14 +0100 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <510934A7.1070006@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> <51093121.5090002@redhat.com> <510934A7.1070006@redhat.com> Message-ID: <5109388A.3060700@redhat.com> On 01/30/2013 03:56 PM, Dmitri Pal wrote: > On 01/30/2013 09:41 AM, Petr Viktorin wrote: >> On 01/30/2013 03:28 PM, Rob Crittenden wrote: [...] >>> >>> ipaldap.py is going away eventually, right? >> >> No, the common code is in ipaldap.py. >> ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the >> magic per-thread connections, and has some extra helpers that can >> assume IPA is installed. >> [...] > > Though I am all for the consolidation and no duplication please do not > forget that LDAP connection from the install script is someone > different. I remember there was a discussion somewhere about that. Just > keep in mind that there are some special cases that might be lost during > consolidation that would cause regressions so please be careful. > Right. IPAdmin is staying in ipaldap. Its has own connection code and a bit of legacy stuff I'm now trying to get rid of, everything else is shared with ldap2 via a superclass. -- Petr? From jdennis at redhat.com Wed Jan 30 15:20:55 2013 From: jdennis at redhat.com (John Dennis) Date: Wed, 30 Jan 2013 10:20:55 -0500 Subject: [Freeipa-devel] Move ipaldap to ipalib? 
In-Reply-To: <51093121.5090002@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> <51093121.5090002@redhat.com> Message-ID: <51093A57.4000505@redhat.com> On 01/30/2013 09:41 AM, Petr Viktorin wrote: > On 01/30/2013 03:28 PM, Rob Crittenden wrote: >> Petr Viktorin wrote: >>> Hello, >>> >>> I've noticed that the client installer uses python-ldap directly, and >>> duplicates some code we already have. >>> >>> Most recently, a convert_ldap_error function appeared in ipautil. >>> Similar mechanisms are already in ipaldap, ipa-csreplica-manage, >>> ipa-replica-manage, and my patches remove one in ldap2. >>> >>> Are there any objections to moving ipaldap.py to ipalib? >>> >> >> ipaldap.py is going away eventually, right? > > No, the common code is in ipaldap.py. > ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the > magic per-thread connections, and has some extra helpers that can assume > IPA is installed. > >> It is probably better suited for ipapython if we move it though. > > Sure, makes no difference to me :) > To be honest though, I never got the difference between ipalib and > ipapython. Could you explain it for me? > I asked the same question a while ago, see this thread for the answers https://www.redhat.com/archives/freeipa-devel/2011-October/msg00392.html FWIW I still couldn't fully reconcile what was said with what is in the code, I suspect that's because we might not have been disciplined when adding new code. I continue to find it a source of confusion FWIW. What was said in that thread implies the client should not depend on ipalib, but it does, go figure. Getting everything partitioned into their logical locations, namespaces, imports, etc. is a code clean-up effort that should be addressed at some point. Hopefully we'll iterate on that piecemeal, but only if we actually understand the rules :-) -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Wed Jan 30 15:28:53 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 30 Jan 2013 10:28:53 -0500 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <51093A57.4000505@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> <51093121.5090002@redhat.com> <51093A57.4000505@redhat.com> Message-ID: <51093C35.6050509@redhat.com> John Dennis wrote: > On 01/30/2013 09:41 AM, Petr Viktorin wrote: >> On 01/30/2013 03:28 PM, Rob Crittenden wrote: >>> Petr Viktorin wrote: >>>> Hello, >>>> >>>> I've noticed that the client installer uses python-ldap directly, and >>>> duplicates some code we already have. >>>> >>>> Most recently, a convert_ldap_error function appeared in ipautil. >>>> Similar mechanisms are already in ipaldap, ipa-csreplica-manage, >>>> ipa-replica-manage, and my patches remove one in ldap2. >>>> >>>> Are there any objections to moving ipaldap.py to ipalib? >>>> >>> >>> ipaldap.py is going away eventually, right? >> >> No, the common code is in ipaldap.py. >> ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the >> magic per-thread connections, and has some extra helpers that can assume >> IPA is installed. >> >>> It is probably better suited for ipapython if we move it though. >> >> Sure, makes no difference to me :) >> To be honest though, I never got the difference between ipalib and >> ipapython. Could you explain it for me? 
>> > > I asked the same question a while ago, see this thread for the answers > > https://www.redhat.com/archives/freeipa-devel/2011-October/msg00392.html > > FWIW I still couldn't fully reconcile what was said with what is in the > code, I suspect that's because we might not have been disciplined when > adding new code. I continue to find it a source of confusion FWIW. What > was said in that thread implies the client should not depend on ipalib, > but it does, go figure. > > Getting everything partitioned into their logical locations, namespaces, > imports, etc. is a code clean-up effort that should be addressed at some > point. Hopefully we'll iterate on that piecemeal, but only if we > actually understand the rules :-) > ipalib contains framework code. ipapython contains other common code between client, server and tools. rob From pviktori at redhat.com Wed Jan 30 15:33:00 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Wed, 30 Jan 2013 16:33:00 +0100 Subject: [Freeipa-devel] Move ipaldap to ipalib? In-Reply-To: <51093C35.6050509@redhat.com> References: <51090C41.8020807@redhat.com> <51092E27.1030701@redhat.com> <51093121.5090002@redhat.com> <51093A57.4000505@redhat.com> <51093C35.6050509@redhat.com> Message-ID: <51093D2C.1090301@redhat.com> On 01/30/2013 04:28 PM, Rob Crittenden wrote: > John Dennis wrote: >> On 01/30/2013 09:41 AM, Petr Viktorin wrote: >>> On 01/30/2013 03:28 PM, Rob Crittenden wrote: >>>> Petr Viktorin wrote: >>>>> Hello, >>>>> >>>>> I've noticed that the client installer uses python-ldap directly, and >>>>> duplicates some code we already have. >>>>> >>>>> Most recently, a convert_ldap_error function appeared in ipautil. >>>>> Similar mechanisms are already in ipaldap, ipa-csreplica-manage, >>>>> ipa-replica-manage, and my patches remove one in ldap2. >>>>> >>>>> Are there any objections to moving ipaldap.py to ipalib? >>>>> >>>> >>>> ipaldap.py is going away eventually, right? >>> >>> No, the common code is in ipaldap.py. >>> ldap2.py only has the ldap2 class, which is a CRUDBackend, manages the >>> magic per-thread connections, and has some extra helpers that can assume >>> IPA is installed. >>> >>>> It is probably better suited for ipapython if we move it though. >>> >>> Sure, makes no difference to me :) >>> To be honest though, I never got the difference between ipalib and >>> ipapython. Could you explain it for me? >>> >> >> I asked the same question a while ago, see this thread for the answers >> >> https://www.redhat.com/archives/freeipa-devel/2011-October/msg00392.html >> >> FWIW I still couldn't fully reconcile what was said with what is in the >> code, I suspect that's because we might not have been disciplined when >> adding new code. I continue to find it a source of confusion FWIW. What >> was said in that thread implies the client should not depend on ipalib, >> but it does, go figure. >> >> Getting everything partitioned into their logical locations, namespaces, >> imports, etc. is a code clean-up effort that should be addressed at some >> point. Hopefully we'll iterate on that piecemeal, but only if we >> actually understand the rules :-) >> > > ipalib contains framework code. > > ipapython contains other common code between client, server and tools. > Thank you both! That makes it much clearer. -- Petr? 
From tbabej at redhat.com Wed Jan 30 16:12:33 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 30 Jan 2013 17:12:33 +0100 Subject: [Freeipa-devel] [PATCH 0027] Add checks for SELinux in install scripts Message-ID: <51094671.30701@redhat.com> Hi, The checks make sure that SELinux is: - installed and enabled (on server install) - installed and enabled OR not installed (on client install) Please note that client installs with SELinux not installed are allowed since freeipa-client package has no dependency on SELinux. (any objections to this approach?) The (unsupported) option --allow-no-selinux has been added. It can used to bypass the checks. Parts of platform-dependant code were refactored to use newly added is_selinux_enabled() function. https://fedorahosted.org/freeipa/ticket/3359 Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0027-Add-checks-for-SElinux-in-install-scripts.patch Type: text/x-patch Size: 12397 bytes Desc: not available URL: From tbabej at redhat.com Wed Jan 30 16:58:29 2013 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 30 Jan 2013 17:58:29 +0100 Subject: [Freeipa-devel] [PATCH 0027] Add checks for SELinux in install scripts In-Reply-To: <51094671.30701@redhat.com> References: <51094671.30701@redhat.com> Message-ID: <51095135.4040501@redhat.com> On 01/30/2013 05:12 PM, Tomas Babej wrote: > Hi, > > The checks make sure that SELinux is: > - installed and enabled (on server install) > - installed and enabled OR not installed (on client install) > > Please note that client installs with SELinux not installed are > allowed since freeipa-client package has no dependency on SELinux. > (any objections to this approach?) > > The (unsupported) option --allow-no-selinux has been added. It can > used to bypass the checks. > > Parts of platform-dependant code were refactored to use newly added > is_selinux_enabled() function. > > https://fedorahosted.org/freeipa/ticket/3359 > > Tomas I forgot to edit the man pages. Thanks Rob! Updated patch attached. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0027-2-Add-checks-for-SELinux-in-install-scripts.patch Type: text/x-patch Size: 13815 bytes Desc: not available URL: From pspacek at redhat.com Thu Jan 31 09:34:41 2013 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 31 Jan 2013 10:34:41 +0100 Subject: [Freeipa-devel] OTP Design In-Reply-To: <5108A315.2000204@redhat.com> References: <5108A315.2000204@redhat.com> Message-ID: <510A3AB1.2090104@redhat.com> On 30.1.2013 05:35, Dmitri Pal wrote: > Hello, > > We started to shape a page for the OTP prototyping work we are doing. > It is work in progress but it has enough information to share and discuss. > http://freeipa.org/page/V3/OTP > > Comments welcome! I gave it a quick look. Generally, the core seems correct to me. I have only nitpicks: I see big amount of new ipa* specific attributes. How other OTP solutions store tokens/configuration? Is there any standard/semi-standard LDAP schema with attributes describing tokens? MIT KDC has own ("native") LDAP driver. It would be nice to coordinate OID allocation and schema definition with MIT and share as much attributes as possible. Do they plan to support OTP configuration in LDAP? (I don't see any note about LDAP support in http://k5wiki.kerberos.org/wiki/Projects/OTPOverRADIUS .) Is the author of https://fedoraproject.org/wiki/Features/EnterpriseTwoFactorAuthentication aware of our effort? 
What about re-using http://www.dynalogin.org/ server for TOTP/HOTP implementation (rather than writing own OTP-in-389 implementation)? I haven't looked to the dynalogin code ... Could be (old) draft "SASL and GSS-API Mechanism for Two Factor Authentication based on a Password and a One-Time Password (OTP): CROTP" from http://tools.ietf.org/html/draft-josefsson-kitten-crotp-00 interesting for us (in future)? Is it worth to resurrect this effort? -- Petr^2 Spacek From pviktori at redhat.com Thu Jan 31 10:00:32 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 31 Jan 2013 11:00:32 +0100 Subject: [Freeipa-devel] [PATCHES] 146-164 LDAP code refactoring (Part 4) In-Reply-To: <5108EDAB.9040406@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> <5106949D.408@redhat.com> <5107ED49.4000502@redhat.com> <5108EDAB.9040406@redhat.com> Message-ID: <510A40C0.5060406@redhat.com> On 01/30/2013 10:53 AM, Petr Viktorin wrote: > On 01/29/2013 04:39 PM, Petr Viktorin wrote: >> On 01/28/2013 04:09 PM, Petr Viktorin wrote: >>> On 01/28/2013 09:34 AM, Jan Cholasta wrote: >>>> On 25.1.2013 14:54, Petr Viktorin wrote: >>>>> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>>>>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>>>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>>>>> Hello, >>>>>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>>>>> code. >>>>>>>>>> Each should be a self-contained change that doesn't break >>>>>>>>>> anything. >>>>>>>>>> [...] >>>>> Since this patchset is becoming unwieldy, I've put it in a public repo >>>>> that I'll keep updated. The following command will fetch it into your >>>>> "pviktori-ldap-refactor" branch: >>>>> >>>>> git fetch git://github.com/encukou/freeipa >>>>> ldap-refactor:pviktori-ldap-refactor >>>>> >>>>> [...] I found a bug in patch 143, here is a fixed version. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0143-03-Change-add-update-delete-_entry-to-take-LDAPEntries.patch Type: text/x-patch Size: 4782 bytes Desc: not available URL: From pviktori at redhat.com Thu Jan 31 10:03:19 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 31 Jan 2013 11:03:19 +0100 Subject: [Freeipa-devel] 0165-0174 LDAP code refactoring (Part 5) In-Reply-To: <510A40C0.5060406@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> <5106949D.408@redhat.com> <5107ED49.4000502@redhat.com> <5108EDAB.9040406@redhat.com> <510A40C0.5060406@redhat.com> Message-ID: <510A4167.5030001@redhat.com> On 01/31/2013 11:00 AM, Petr Viktorin wrote: > On 01/30/2013 10:53 AM, Petr Viktorin wrote: >> On 01/29/2013 04:39 PM, Petr Viktorin wrote: >>> On 01/28/2013 04:09 PM, Petr Viktorin wrote: >>>> On 01/28/2013 09:34 AM, Jan Cholasta wrote: >>>>> On 25.1.2013 14:54, Petr Viktorin wrote: >>>>>> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>>>>>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>>>>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>>>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>>>>>> Hello, >>>>>>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>>>>>> code. >>>>>>>>>>> Each should be a self-contained change that doesn't break >>>>>>>>>>> anything. >>>>>>>>>>> > [...] >>>>>> Since this patchset is becoming unwieldy, I've put it in a public >>>>>> repo >>>>>> that I'll keep updated. The following command will fetch it into your >>>>>> "pviktori-ldap-refactor" branch: >>>>>> >>>>>> git fetch git://github.com/encukou/freeipa >>>>>> ldap-refactor:pviktori-ldap-refactor >>>>>> >>>>>> > [...] > > I found a bug in patch 143, here is a fixed version. And hee is another batch of patches. This one is about converting the legacy IPAdmin and raw python-ldap calls to the new wrappers. -- Petr? -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0165-Remove-search_s-and-search_ext_s-from-IPAdmin.patch Type: text/x-patch Size: 8983 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0166-Replace-IPAdmin.start_tls_s-by-an-__init__-argument.patch Type: text/x-patch Size: 3363 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0167-Remove-IPAdmin.sasl_interactive_bind_s.patch Type: text/x-patch Size: 2593 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0168-Remove-IPAdmin.simple_bind_s.patch Type: text/x-patch Size: 3825 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0169-Remove-IPAdmin.unbind_s-keep-unbind.patch Type: text/x-patch Size: 5551 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0170-Use-ldap-instead-of-_ldap-in-ipaldap.patch Type: text/x-patch Size: 11949 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pviktori-0171-Do-not-use-global-variables-in-migration.py.patch Type: text/x-patch Size: 3419 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0172-Use-IPAdmin-rather-than-raw-python-ldap-in-migration.patch Type: text/x-patch Size: 2348 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0173-Use-IPAdmin-rather-than-raw-python-ldap-in-ipactl.patch Type: text/x-patch Size: 5417 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pviktori-0174-Remove-some-uses-of-raw-python-ldap.patch Type: text/x-patch Size: 37152 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 31 10:29:28 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 31 Jan 2013 11:29:28 +0100 Subject: [Freeipa-devel] [PATCHES] 94-96 Remove Entry and Entity classes In-Reply-To: <50FEA307.7080502@redhat.com> References: <50FEA307.7080502@redhat.com> Message-ID: <510A4788.2080304@redhat.com> On 22.1.2013 15:32, Jan Cholasta wrote: > Hi, > > these patches remove the Entry and Entity classes and move instantiation > of LDAPEntry objects to LDAPConnection.make_entry factory method. > > Apply on top of Petr Viktorin's LDAP code refactoring (part 1 & 2) patches. > > Honza > Slightly changed patch 95 and rebased all the patches on top of current master and LDAP code refactoring part 1 & 2. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-94.1-Add-make_entry-factory-method-to-LDAPConnection.patch Type: text/x-patch Size: 15020 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-95.1-Remove-the-Entity-class.patch Type: text/x-patch Size: 6331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-96.1-Remove-the-Entry-class.patch Type: text/x-patch Size: 3678 bytes Desc: not available URL: From tbabej at redhat.com Thu Jan 31 11:03:56 2013 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 31 Jan 2013 12:03:56 +0100 Subject: [Freeipa-devel] [PATCH 0028] Prevent backtrace in ipa-replica-prepare Message-ID: <510A4F9C.5050707@redhat.com> Hi, This was a regression due to change from DatabaseError to NetworkError when LDAP server is down. https://fedorahosted.org/freeipa/ticket/2939 Tomas From tbabej at redhat.com Thu Jan 31 11:05:40 2013 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 31 Jan 2013 12:05:40 +0100 Subject: [Freeipa-devel] [PATCH 0028] Prevent backtrace in ipa-replica-prepare In-Reply-To: <510A4F9C.5050707@redhat.com> References: <510A4F9C.5050707@redhat.com> Message-ID: <510A5004.20303@redhat.com> On 01/31/2013 12:03 PM, Tomas Babej wrote: > Hi, > > This was a regression due to change from DatabaseError to NetworkError > when LDAP server is down. > > https://fedorahosted.org/freeipa/ticket/2939 > > Tomas > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel Clicking send too soon, patch attached :) Tomas -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-tbabej-0028-Prevent-backtrace-in-ipa-replica-prepare.patch Type: text/x-patch Size: 1164 bytes Desc: not available URL: From mkosek at redhat.com Thu Jan 31 12:18:31 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 13:18:31 +0100 Subject: [Freeipa-devel] [PATCH] 360 Add autodiscovery section in ipa-client-install man pages Message-ID: <510A6117.80507@redhat.com> Explain how autodiscovery and failover works and which options are important for these elements. https://fedorahosted.org/freeipa/ticket/3383 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-360-ipa-client-install-man-pages.patch Type: text/x-patch Size: 6514 bytes Desc: not available URL: From mkosek at redhat.com Thu Jan 31 12:35:48 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 13:35:48 +0100 Subject: [Freeipa-devel] [PATCH 0028] Prevent backtrace in ipa-replica-prepare In-Reply-To: <510A5004.20303@redhat.com> References: <510A4F9C.5050707@redhat.com> <510A5004.20303@redhat.com> Message-ID: <510A6524.105@redhat.com> On 01/31/2013 12:05 PM, Tomas Babej wrote: > On 01/31/2013 12:03 PM, Tomas Babej wrote: >> Hi, >> >> This was a regression due to change from DatabaseError to NetworkError >> when LDAP server is down. >> >> https://fedorahosted.org/freeipa/ticket/2939 >> >> Tomas >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel > Clicking send too soon, patch attached :) > > Tomas I don't think that removing errors.DatabaseError is necessary. By the way, would this error (and many similar errors) be solved by a server tool refactoring that Petr Viktorin is working on? IIRC, he was about to wrap ipa-replica-prepare in a similar framework like ipa-ldap-updater. With a framework like this one, we would not have to specify separate try..catch lists in all our server manipulation tools. Martin From tbabej at redhat.com Thu Jan 31 13:07:22 2013 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 31 Jan 2013 14:07:22 +0100 Subject: [Freeipa-devel] [PATCH 0029] Fix a typo in ipa-adtrust-install help Message-ID: <510A6C8A.7060205@redhat.com> Hi, this is a fix for a benign typo in ipa-adtrust-install --help description. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0029-Fix-a-typo-in-ipa-adtrust-install-help.patch Type: text/x-patch Size: 1179 bytes Desc: not available URL: From mkosek at redhat.com Thu Jan 31 13:12:01 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 14:12:01 +0100 Subject: [Freeipa-devel] [PATCH 0029] Fix a typo in ipa-adtrust-install help In-Reply-To: <510A6C8A.7060205@redhat.com> References: <510A6C8A.7060205@redhat.com> Message-ID: <510A6DA1.4010200@redhat.com> On 01/31/2013 02:07 PM, Tomas Babej wrote: > Hi, > > this is a fix for a benign typo in ipa-adtrust-install --help description. > > Tomas > ACK. Pushed to master, ipa-3-1. 
Martin

From sbose at redhat.com  Thu Jan 31 13:15:58 2013
From: sbose at redhat.com (Sumit Bose)
Date: Thu, 31 Jan 2013 14:15:58 +0100
Subject: [Freeipa-devel] [PATCH 0029] Fix a typo in ipa-adtrust-install help
In-Reply-To: <510A6C8A.7060205@redhat.com>
References: <510A6C8A.7060205@redhat.com>
Message-ID: <20130131131557.GH24414@localhost.localdomain>

On Thu, Jan 31, 2013 at 02:07:22PM +0100, Tomas Babej wrote:
> Hi,
>
> this is a fix for a benign typo in ipa-adtrust-install --help description.
>
> Tomas

thanks for catching this. Usually I prefer to add the space at the end of the
truncated line instead of at the beginning of the new line. Do we/the python
community have a common rule about this?

bye,
Sumit

From tbabej at redhat.com  Thu Jan 31 13:37:36 2013
From: tbabej at redhat.com (Tomas Babej)
Date: Thu, 31 Jan 2013 14:37:36 +0100
Subject: [Freeipa-devel] [PATCH 0027] Add checks for SELinux in install scripts
In-Reply-To: <51095135.4040501@redhat.com>
References: <51094671.30701@redhat.com> <51095135.4040501@redhat.com>
Message-ID: <510A73A0.6020005@redhat.com>

On 01/30/2013 05:58 PM, Tomas Babej wrote:
> On 01/30/2013 05:12 PM, Tomas Babej wrote:
>> Hi,
>>
>> The checks make sure that SELinux is:
>> - installed and enabled (on server install)
>> - installed and enabled OR not installed (on client install)
>>
>> Please note that client installs with SELinux not installed are
>> allowed since freeipa-client package has no dependency on SELinux.
>> (any objections to this approach?)
>>
>> The (unsupported) option --allow-no-selinux has been added. It can
>> used to bypass the checks.
>>
>> Parts of platform-dependant code were refactored to use newly added
>> is_selinux_enabled() function.
>>
>> https://fedorahosted.org/freeipa/ticket/3359
>>
>> Tomas
>
> I forgot to edit the man pages. Thanks Rob!
>
> Updated patch attached.
>
> Tomas

Just for the record, since this is an RFE, I updated the 3.2 minor
enhancements page: http://www.freeipa.org/page/V3_Minor_Enhancements

Tomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pviktori at redhat.com  Thu Jan 31 13:40:52 2013
From: pviktori at redhat.com (Petr Viktorin)
Date: Thu, 31 Jan 2013 14:40:52 +0100
Subject: [Freeipa-devel] [PATCH 0028] Prevent backtrace in ipa-replica-prepare
In-Reply-To: <510A6524.105@redhat.com>
References: <510A4F9C.5050707@redhat.com> <510A5004.20303@redhat.com> <510A6524.105@redhat.com>
Message-ID: <510A7464.2060103@redhat.com>

On 01/31/2013 01:35 PM, Martin Kosek wrote:
> On 01/31/2013 12:05 PM, Tomas Babej wrote:
>> On 01/31/2013 12:03 PM, Tomas Babej wrote:
>>> Hi,
>>>
>>> This was a regression due to change from DatabaseError to NetworkError
>>> when LDAP server is down.
>>>
>>> https://fedorahosted.org/freeipa/ticket/2939
>>>
>
> I don't think that removing errors.DatabaseError is necessary. By the way,
> would this error (and many similar errors) be solved by a server tool
> refactoring that Petr Viktorin is working on? IIRC, he was about to wrap
> ipa-replica-prepare in a similar framework like ipa-ldap-updater.
>
> With a framework like this one, we would not have to specify separate
> try..catch lists in all our server manipulation tools.
>

That patch is on the list. And yes, the framework tries to handle errors
sanely, so this `sys.exit("\n"+e.error)` nonsense is not necessary there.

-- 
Petr³
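[Editorial note: a minimal sketch of the pattern discussed in the message
above follows. This is not the actual FreeIPA AdminTool implementation; the
class names, the ScriptError exception and the return values are illustrative
only, shown to make concrete the idea of one central try/except in the
framework instead of per-tool error lists and ad-hoc sys.exit() calls.]

    import logging
    import sys


    class ScriptError(Exception):
        """An expected, user-reportable failure (illustrative stand-in)."""
        def __init__(self, msg, rval=1):
            super(ScriptError, self).__init__(msg)
            self.rval = rval


    class AdminToolSketch(object):
        """Base class: subclasses implement run() and simply raise on failure."""
        log = logging.getLogger('ipa.admintool.sketch')

        def run(self):
            raise NotImplementedError

        def execute(self):
            """Translate failures from run() into a clean exit status."""
            try:
                self.run()
            except ScriptError as e:
                self.log.error('%s', e)           # short message, no traceback
                return e.rval
            except KeyboardInterrupt:
                self.log.error('Cancelled.')
                return 1
            except Exception:
                self.log.exception('Unexpected error')  # traceback goes to the log
                return 1
            return 0


    class ReplicaPrepareSketch(AdminToolSketch):
        def run(self):
            # A real tool would do its work here; an expected failure is
            # reported by raising, not by calling sys.exit() with a message.
            raise ScriptError('LDAP server is not responding', rval=2)


    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        sys.exit(ReplicaPrepareSketch().execute())

With this shape, an individual tool reports an expected failure by raising,
and the framework decides in one place how it is logged and which exit status
the process gets.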
From pviktori at redhat.com Thu Jan 31 13:43:22 2013 From: pviktori at redhat.com (Petr Viktorin) Date: Thu, 31 Jan 2013 14:43:22 +0100 Subject: [Freeipa-devel] [PATCH 0029] Fix a typo in ipa-adtrust-install help In-Reply-To: <20130131131557.GH24414@localhost.localdomain> References: <510A6C8A.7060205@redhat.com> <20130131131557.GH24414@localhost.localdomain> Message-ID: <510A74FA.1000609@redhat.com> On 01/31/2013 02:15 PM, Sumit Bose wrote: > On Thu, Jan 31, 2013 at 02:07:22PM +0100, Tomas Babej wrote: >> Hi, >> >> this is a fix for a benign typo in ipa-adtrust-install --help description. >> >> Tomas > > thanks for catching this. Usually I prefer to add the space at the end > truncated line instead at the beginning of the new line. Do we/the > python community have a common rule about this? > > bye, > Sumit Personally, I always put the space at the end (and I have reformatted quite a few of such lines in IPA). I'm not aware of a documented consensus though. -- Petr? From pspacek at redhat.com Thu Jan 31 13:44:19 2013 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 31 Jan 2013 14:44:19 +0100 Subject: [Freeipa-devel] [PATCH] 360 Add autodiscovery section in ipa-client-install man pages In-Reply-To: <510A6117.80507@redhat.com> References: <510A6117.80507@redhat.com> Message-ID: <510A7533.50903@redhat.com> On 31.1.2013 13:18, Martin Kosek wrote: > Explain how autodiscovery and failover works and which options > are important for these elements. > > https://fedorahosted.org/freeipa/ticket/3383 Could you add some note about "how ipa-client installer will be confused by AD"? One paragraph with some explanation could help. -- Petr^2 Spacek From mkosek at redhat.com Thu Jan 31 14:13:40 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 15:13:40 +0100 Subject: [Freeipa-devel] [PATCH] 361 ipa-adtrust-install should ask for SID generation Message-ID: <510A7C14.4080701@redhat.com> When ipa-adtrust-install is run, check if there are any objects that need to have SID generated. If yes, interactively ask the user if the sidgen task should be run. https://fedorahosted.org/freeipa/ticket/3195 -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-361-ipa-adtrust-install-should-ask-for-sid-generation.patch Type: text/x-patch Size: 3515 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 31 15:18:45 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 31 Jan 2013 16:18:45 +0100 Subject: [Freeipa-devel] [PATCHES] 98-101 Preserve case of LDAP attribute names Message-ID: <510A8B55.70109@redhat.com> Hi, these patches implement attribute name case preservation in LDAPEntry. Apply on top of Petr Viktorin's LDAP code refactoring patchset (up to part 5). Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-98-Use-the-dn-attribute-of-LDAPEntry-to-set-get-DNs-of-.patch Type: text/x-patch Size: 17429 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-99-Preserve-case-of-attribute-names-in-LDAPEntry.patch Type: text/x-patch Size: 8729 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-100-Aggregate-IPASimpleLDAPObject-in-LDAPEntry.patch Type: text/x-patch Size: 3367 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-101-Support-attributes-with-multiple-names-in-LDAPEntry.patch Type: text/x-patch Size: 2136 bytes Desc: not available URL: From abokovoy at redhat.com Thu Jan 31 15:29:10 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 31 Jan 2013 17:29:10 +0200 Subject: [Freeipa-devel] [PATCH] 361 ipa-adtrust-install should ask for SID generation In-Reply-To: <510A7C14.4080701@redhat.com> References: <510A7C14.4080701@redhat.com> Message-ID: <20130131152910.GG4506@redhat.com> On Thu, 31 Jan 2013, Martin Kosek wrote: >When ipa-adtrust-install is run, check if there are any objects >that need to have SID generated. If yes, interactively ask the user >if the sidgen task should be run. > >https://fedorahosted.org/freeipa/ticket/3195 >From bd6512628d83d1f4bdfc9f414689c8a67bd01c7c Mon Sep 17 00:00:00 2001 >From: Martin Kosek >Date: Thu, 31 Jan 2013 15:08:08 +0100 >Subject: [PATCH] ipa-adtrust-install should ask for SID generation > >When ipa-adtrust-install is run, check if there are any objects >that need have SID generated. If yes, interactively ask the user >if the sidgen task should be run. > >https://fedorahosted.org/freeipa/ticket/3195 >--- > install/tools/ipa-adtrust-install | 42 +++++++++++++++++++++++++++++++++------ > 1 file changed, 36 insertions(+), 6 deletions(-) > >diff --git a/install/tools/ipa-adtrust-install b/install/tools/ipa-adtrust-install >index 17f2f0e98d08863c9e48595d219bffb148490921..e127fd63e9a43b2630325d1fc3aa645f2ef8951a 100755 >--- a/install/tools/ipa-adtrust-install >+++ b/install/tools/ipa-adtrust-install >@@ -275,12 +275,6 @@ def main(): > ip_address = str(ip) > root_logger.debug("will use ip_address: %s\n", ip_address) > >- if not options.unattended: >- print "" >- print "The following operations may take some minutes to complete." >- print "Please wait until the prompt is returned." >- print "" >- > admin_password = options.admin_password > if not (options.unattended or admin_password): > admin_password = read_admin_password(options.admin_name) >@@ -320,6 +314,42 @@ def main(): > set_and_check_netbios_name(options.netbios_name, > options.unattended) > >+ if not options.unattended and not options.add_sids: >+ # The filter corresponds to ipa_sidgen_task.c LDAP search filter >+ filter = '(&(objectclass=ipaobject)(!(objectclass=mepmanagedentry))' \ >+ '(|(objectclass=posixaccount)(objectclass=posixgroup)' \ >+ '(objectclass=ipaidobject))(!(ipantsecurityidentifier=*)))' >+ try: >+ (entries, truncated) = api.Backend.ldap2.find_entries(filter=filter, >+ base_dn=api.env.basedn, attrs_list=['']) >+ except errors.NotFound: >+ # All objects have SIDs assigned >+ pass >+ except (errors.DatabaseError, errors.NetworkError), e: >+ print "Could not retrieve a list of entries that needs a SID generation:" >+ print " %s" % e >+ else: >+ object_count = len(entries) >+ if object_count > 0: >+ print "" >+ print "%d existing users or groups do not have a SID identifier assigned." \ >+ % len(entries) >+ print "Installer can run a task to have ipa-sidgen Directory Server plugin generate" >+ print "the SID identifier for all these users. Please note, the in case of a high" >+ print "number of users and groups, the operation might lead to high replication" >+ print "traffic and performance degradation. Refer to ipa-adtrust-install(1) man page" >+ print "for details." 
>+ print "" >+ if ipautil.user_input("Do you want to run the ipa-sidgen task?", default=False, >+ allow_empty=False): >+ options.add_sids = True I would still run this check in options.unattended mode and reported warning, for accounting purposes. Could you please make so? -- / Alexander Bokovoy From mkosek at redhat.com Thu Jan 31 15:41:38 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 16:41:38 +0100 Subject: [Freeipa-devel] [PATCH] 360 Add autodiscovery section in ipa-client-install man pages In-Reply-To: <510A7533.50903@redhat.com> References: <510A6117.80507@redhat.com> <510A7533.50903@redhat.com> Message-ID: <510A90B2.902@redhat.com> On 01/31/2013 02:44 PM, Petr Spacek wrote: > On 31.1.2013 13:18, Martin Kosek wrote: >> Explain how autodiscovery and failover works and which options >> are important for these elements. >> >> https://fedorahosted.org/freeipa/ticket/3383 > > Could you add some note about "how ipa-client installer will be confused by > AD"? One paragraph with some explanation could help. > Sure, makes sense. Updated patch attached. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-360-2-ipa-client-install-man-pages.patch Type: text/x-patch Size: 7473 bytes Desc: not available URL: From rcritten at redhat.com Thu Jan 31 15:54:46 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 10:54:46 -0500 Subject: [Freeipa-devel] [PATCHES] 0117-0118 Port ipa-replica-prepare to the admintool framework In-Reply-To: <5108F115.9080902@redhat.com> References: <50E580E2.8090502@redhat.com> <50E58E01.3060402@redhat.com> <50E6DC9E.5050702@redhat.com> <51069AF1.2020705@redhat.com> <5108F115.9080902@redhat.com> Message-ID: <510A93C6.7050105@redhat.com> Petr Viktorin wrote: > On 01/28/2013 04:36 PM, Petr Viktorin wrote: >> On 01/04/2013 02:43 PM, Petr Viktorin wrote: >>> On 01/03/2013 02:56 PM, John Dennis wrote: >>>> On 01/03/2013 08:00 AM, Petr Viktorin wrote: >>>>> Hello, >>>>> >>>>> The first patch implements logging-related changes to the admintool >>>>> framework and ipa-ldap-updater (the only thing ported to it so far). >>>>> The design document is at >>>>> http://freeipa.org/page/V3/Logging_and_output >>>>> >>>>> John, I decided to go ahead and put an explicit "logger" attribute on >>>>> the tool class rather than adding debug, info, warn. etc methods >>>>> dynamically using log_mgr.get_logger. I believe it's the cleanest >>>>> solution. >>>>> We had a discussion about this in this thread: >>>>> https://www.redhat.com/archives/freeipa-devel/2012-July/msg00223.html; >>>>> I >>>>> didn't get a reaction to my conclusion so I'm letting you know in case >>>>> you have more to say. >>>> >>>> I'm fine with not directly adding the debug, info, warn etc. methods, >>>> that practice was historical dating back to the days of Jason. >>>> However I >>>> do think it's useful to use a named logger and not the global >>>> root_logger. I'd prefer we got away from using the root_logger, it's >>>> continued existence is historical as well and the idea was over time we >>>> would slowly eliminate it's usage. FWIW the log_mgr.get_logger() is >>>> still useful for what you want to do. >>>> >>>> def get_logger(self, who, bind_logger_names=False) >>>> >>>> If you don't set bind_logger_names to True (and pass the class instance >>>> as who) you won't get the offensive debug, info, etc. methods added to >>>> the class instance. But it still does all the other bookeeping. 
>>>> >>>> The 'who' in this instance could be either the name of the admin >>>> tool or >>>> the class instance. >>>> >>>> Also I'd prefer using the attribute 'log' rather than 'logger'. That >>>> would make it consistent with code which does already use get_logger() >>>> passing a class instance because it's adds a 'log' attribute which is >>>> the logger. Also 'log' is twice as succinct than 'logger' (shorter line >>>> lengths). >>>> >>>> Thus if you do: >>>> >>>> log_mgr.get_logger(self) >>>> >>>> I think you'll get exactly what you want. A logger named for the class >>>> and being able to say >>>> >>>> self.log.debug() >>>> self.log.error() >>>> >>>> inside the class. >>>> >>>> In summary, just drop the True from the get_logger() call. >>>> >>> >>> Thanks! Yes, this works better. Updated patches attached. >>> >> >> >> Here is patch 117 rebased to current master. >> > > Rebased again. Just a few minor points. Patch 117: The n-v-r should be -14. ipa-ldap-updater is no longer runable as non-root. Was this intentional? Patch 118: Seems to work as it did though as a side effect of the new logging some things are displayed that we may want to suppress, specifically: request 'https://dart.example.com:8443/ca/ee/ca/profileSubmitSSLClient' I think changing the log level to DEBUG is probably the way to go. While you're at it you might consider replacing the ipa_replica_prepare remove_file() with the one in installutils. They differ slightly in implementation but basically do the same thing. rob From mkosek at redhat.com Thu Jan 31 15:58:33 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 16:58:33 +0100 Subject: [Freeipa-devel] [PATCH] 361 ipa-adtrust-install should ask for SID generation In-Reply-To: <20130131152910.GG4506@redhat.com> References: <510A7C14.4080701@redhat.com> <20130131152910.GG4506@redhat.com> Message-ID: <510A94A9.8040708@redhat.com> On 01/31/2013 04:29 PM, Alexander Bokovoy wrote: > On Thu, 31 Jan 2013, Martin Kosek wrote: >> When ipa-adtrust-install is run, check if there are any objects >> that need to have SID generated. If yes, interactively ask the user >> if the sidgen task should be run. >> >> https://fedorahosted.org/freeipa/ticket/3195 > ... > I would still run this check in options.unattended mode and reported > warning, for accounting purposes. > > Could you please make so? > Sure! Updated patch attached. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-361-2-ipa-adtrust-install-should-ask-for-sid-generation.patch Type: text/x-patch Size: 3725 bytes Desc: not available URL: From abokovoy at redhat.com Thu Jan 31 16:01:22 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 31 Jan 2013 18:01:22 +0200 Subject: [Freeipa-devel] [PATCH] 357 Use fully qualified CCACHE names In-Reply-To: <5109152A.9030205@redhat.com> References: <5109152A.9030205@redhat.com> Message-ID: <20130131160122.GH4506@redhat.com> On Wed, 30 Jan 2013, Martin Kosek wrote: >Some parts of install scripts used only ccache name as returned by >krbV.CCache.name attribute. However, when this name is used again >to initialize krbV.CCache object or when it is used in KRB5CCNAME >environmental variable, it fails for new DIR type of CCACHE. > >We should always use both CCACHE type and name when referring to >them to avoid these crashes. ldap2 backend was also updated to >accept directly krbV.CCache object which contains everything we need >to authenticate with ccache. 
> >https://fedorahosted.org/freeipa/ticket/3381 Minor comment: there are few cleanups of 'import krbV' in places where Kerberos functions are not used. Maybe it would be better to separate them into their own patch to avoid rebasing issues in future? >Please note, that this fix is rather a short/medium-term fix for Fedora 18. In >a long term we should consolidate our CCACHE manipulation code, it now uses >several different wrappers or just uses krbV python library directly. I did not >do any global refactoring in this patch, this should be done after we decide if >we want to create a new, more usable krb5 library bindings as was already >discussed in the past. Yes. John has published his current code for new Python bindings to libkrb5 at https://github.com/jdennis/python-krb. It is far from finished but gives more pythony feeling and additional contributions are highly welcomed. Once it is ready, we can start looking migrating to it. > from ipalib import api, errors > from ipalib.crud import CrudBackend > from ipalib.request import context >@@ -783,7 +781,7 @@ class ldap2(CrudBackend): > > Keyword arguments: > ldapuri -- the LDAP server to connect to >- ccache -- Kerberos V5 ccache name >+ ccache -- Kerberos V5 ccache object or name > bind_dn -- dn used to bind to the server > bind_pw -- password used to bind to the server > debug_level -- LDAP debug level option >@@ -821,10 +819,17 @@ class ldap2(CrudBackend): > if maxssf < minssf: > conn.set_option(_ldap.OPT_X_SASL_SSF_MAX, minssf) > if ccache is not None: >+ if isinstance(ccache, krbV.CCache): >+ principal = ccache.principal().name >+ # get a fully qualified CCACHE name (schema+name) >+ ccache = "%(type)s:%(name)s" % dict(type=ccache.type, >+ name=ccache.name) May be a comment could be added here that we don't use krbV.CCache instance afterwards and it is OK to override refernce to it by a string? >+ else: >+ principal = krbV.CCache(name=ccache, >+ context=krbV.default_context()).principal().name >+ > os.environ['KRB5CCNAME'] = ccache > conn.sasl_interactive_bind_s(None, SASL_AUTH) >- principal = krbV.CCache(name=ccache, >- context=krbV.default_context()).principal().name > setattr(context, 'principal', principal) > else: > # no kerberos ccache, use simple bind or external sasl -- / Alexander Bokovoy From mkosek at redhat.com Thu Jan 31 16:02:59 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 17:02:59 +0100 Subject: [Freeipa-devel] [PATCH] 360 Add autodiscovery section in ipa-client-install man pages In-Reply-To: <510A90B2.902@redhat.com> References: <510A6117.80507@redhat.com> <510A7533.50903@redhat.com> <510A90B2.902@redhat.com> Message-ID: <510A95B3.4060504@redhat.com> On 01/31/2013 04:41 PM, Martin Kosek wrote: > On 01/31/2013 02:44 PM, Petr Spacek wrote: >> On 31.1.2013 13:18, Martin Kosek wrote: >>> Explain how autodiscovery and failover works and which options >>> are important for these elements. >>> >>> https://fedorahosted.org/freeipa/ticket/3383 >> >> Could you add some note about "how ipa-client installer will be confused by >> AD"? One paragraph with some explanation could help. >> > > Sure, makes sense. Updated patch attached. > > Martin > Petr noticed a typo in the updated section. Fixed version attached. Martin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mkosek-360-3-ipa-client-install-man-pages.patch Type: text/x-patch Size: 7467 bytes Desc: not available URL: From rcritten at redhat.com Thu Jan 31 16:20:40 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 11:20:40 -0500 Subject: [Freeipa-devel] [PATCH 0005] Clarified error message with ipa-client-automount In-Reply-To: <50F0502D.6010907@redhat.com> References: <1129149924.5633962.1353923519895.JavaMail.root@redhat.com> <50B926A9.5030107@redhat.com> <50BCA720.50305@redhat.com> <50F0502D.6010907@redhat.com> Message-ID: <510A99D8.2050500@redhat.com> Lynn Root wrote: > On Mon 03 Dec 2012 05:20:32 AM PST, Lynn Root wrote: >> On 11/30/2012 10:35 PM, Rob Crittenden wrote: >>> Lynn Root wrote: >>>> Returns a clearer hint when user is running ipa-client-automount with >>>> possible firewall up and blocking need ports. >>>> >>>> Not sure if this patch is worded correctly in order to address the >>>> potential firewall block when running ipa-client-automount. Perhaps a >>>> different error should be thrown, rather than NOT_IPA_SERVER. >>>> >>>> Ticket: https://fedorahosted.org/freeipa/ticket/3080 >>> >>> Tomas made a similar change recently in ipa-client-install which >>> includes more information on the ports we need. You may want to take >>> a look at that. It was for ticket >>> https://fedorahosted.org/freeipa/ticket/2816 >>> >>> rob >> Thank you Rob - I adapted the same approach in this updated patch. Let >> me know if it addresses the blocked port issue better. >> >> Thanks! > > Just bumping this thread - I think this might have fallen on the > way-side; certainly lost track of it myself after returning home/holidays. > > However I noticed that this ticket > (https://fedorahosted.org/freeipa/ticket/3080) now has an RFE tag - > don't _believe_ that was there when I started working on it in late > November. I believe the whole design doc conversation was going on > around then. I assume I'll need to start one for this? > > Thanks! > I think this is still not quite right, and I think could be improved in ipa-client-install as well. ipacheckldap() only tries to connect to port 389 (optionally with StartTLS). It returns a number of different possible errors, I think we should have some way to report more specific error messages based on those (can't connect to server Y on port 389, Unable to find Kerberos container, etc) in addition to "Unable to confirm that X is an IPA server". We probably want to do something about the v2 part as well. I think a table in ipadiscovery to translate the possible return vals from ipacheckldap() into a string that can logged is the way to go. rob From mkosek at redhat.com Thu Jan 31 16:26:40 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 17:26:40 +0100 Subject: [Freeipa-devel] [PATCH] 357 Use fully qualified CCACHE names In-Reply-To: <20130131160122.GH4506@redhat.com> References: <5109152A.9030205@redhat.com> <20130131160122.GH4506@redhat.com> Message-ID: <510A9B40.50900@redhat.com> On 01/31/2013 05:01 PM, Alexander Bokovoy wrote: > On Wed, 30 Jan 2013, Martin Kosek wrote: >> Some parts of install scripts used only ccache name as returned by >> krbV.CCache.name attribute. However, when this name is used again >> to initialize krbV.CCache object or when it is used in KRB5CCNAME >> environmental variable, it fails for new DIR type of CCACHE. >> >> We should always use both CCACHE type and name when referring to >> them to avoid these crashes. 
ldap2 backend was also updated to >> accept directly krbV.CCache object which contains everything we need >> to authenticate with ccache. >> >> https://fedorahosted.org/freeipa/ticket/3381 > Minor comment: there are few cleanups of 'import krbV' in places where > Kerberos functions are not used. Maybe it would be better to separate > them into their own patch to avoid rebasing issues in future? Sure, good idea. Attaching both patches. > >> Please note, that this fix is rather a short/medium-term fix for Fedora 18. In >> a long term we should consolidate our CCACHE manipulation code, it now uses >> several different wrappers or just uses krbV python library directly. I did not >> do any global refactoring in this patch, this should be done after we decide if >> we want to create a new, more usable krb5 library bindings as was already >> discussed in the past. > Yes. John has published his current code for new Python bindings to > libkrb5 at https://github.com/jdennis/python-krb. It is far from > finished but gives more pythony feeling and additional contributions are > highly welcomed. > > Once it is ready, we can start looking migrating to it. Agreed. During the migration, it would then make sense to also refactor and consolidate a our CCACHE manupulation code. > >> from ipalib import api, errors >> from ipalib.crud import CrudBackend >> from ipalib.request import context >> @@ -783,7 +781,7 @@ class ldap2(CrudBackend): >> >> Keyword arguments: >> ldapuri -- the LDAP server to connect to >> - ccache -- Kerberos V5 ccache name >> + ccache -- Kerberos V5 ccache object or name >> bind_dn -- dn used to bind to the server >> bind_pw -- password used to bind to the server >> debug_level -- LDAP debug level option >> @@ -821,10 +819,17 @@ class ldap2(CrudBackend): >> if maxssf < minssf: >> conn.set_option(_ldap.OPT_X_SASL_SSF_MAX, minssf) >> if ccache is not None: >> + if isinstance(ccache, krbV.CCache): >> + principal = ccache.principal().name >> + # get a fully qualified CCACHE name (schema+name) >> + ccache = "%(type)s:%(name)s" % dict(type=ccache.type, >> + name=ccache.name) > May be a comment could be added here that we don't use krbV.CCache > instance afterwards and it is OK to override refernce to it by a > string? Comment added. > >> + else: >> + principal = krbV.CCache(name=ccache, >> + context=krbV.default_context()).principal().name >> + >> os.environ['KRB5CCNAME'] = ccache >> conn.sasl_interactive_bind_s(None, SASL_AUTH) >> - principal = krbV.CCache(name=ccache, >> - context=krbV.default_context()).principal().name >> setattr(context, 'principal', principal) >> else: >> # no kerberos ccache, use simple bind or external sasl > Updated patches attached. Martin -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkosek-357.1-2-remove-unused-krbv-imports.patch Type: text/x-patch Size: 2305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mkosek-357.2-2-use-fully-qualified-ccache-names.patch Type: text/x-patch Size: 5935 bytes Desc: not available URL: From jcholast at redhat.com Thu Jan 31 18:01:31 2013 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 31 Jan 2013 19:01:31 +0100 Subject: [Freeipa-devel] [PATCHES] 146-164 LDAP code refactoring (Part 4) In-Reply-To: <510A40C0.5060406@redhat.com> References: <50F83498.9050104@redhat.com> <50FD7CF8.2070401@redhat.com> <50FEAA66.4010704@redhat.com> <51010252.1030200@redhat.com> <51013FD5.8030004@redhat.com> <51028E91.4070601@redhat.com> <51063804.4060507@redhat.com> <5106949D.408@redhat.com> <5107ED49.4000502@redhat.com> <5108EDAB.9040406@redhat.com> <510A40C0.5060406@redhat.com> Message-ID: <510AB17B.80909@redhat.com> On 31.1.2013 11:00, Petr Viktorin wrote: > On 01/30/2013 10:53 AM, Petr Viktorin wrote: >> On 01/29/2013 04:39 PM, Petr Viktorin wrote: >>> On 01/28/2013 04:09 PM, Petr Viktorin wrote: >>>> On 01/28/2013 09:34 AM, Jan Cholasta wrote: >>>>> On 25.1.2013 14:54, Petr Viktorin wrote: >>>>>> On 01/24/2013 03:06 PM, Petr Viktorin wrote: >>>>>>> On 01/24/2013 10:43 AM, Petr Viktorin wrote: >>>>>>>> On 01/22/2013 04:04 PM, Petr Viktorin wrote: >>>>>>>>> On 01/21/2013 06:38 PM, Petr Viktorin wrote: >>>>>>>>>> On 01/17/2013 06:27 PM, Petr Viktorin wrote: >>>>>>>>>>> Hello, >>>>>>>>>>> This is the first batch of changes aimed to consolidate our LDAP >>>>>>>>>>> code. >>>>>>>>>>> Each should be a self-contained change that doesn't break >>>>>>>>>>> anything. >>>>>>>>>>> > [...] >>>>>> Since this patchset is becoming unwieldy, I've put it in a public >>>>>> repo >>>>>> that I'll keep updated. The following command will fetch it into your >>>>>> "pviktori-ldap-refactor" branch: >>>>>> >>>>>> git fetch git://github.com/encukou/freeipa >>>>>> ldap-refactor:pviktori-ldap-refactor >>>>>> >>>>>> > [...] > > I found a bug in patch 143, here is a fixed version. > I would prefer if you used the semantics of .get() for .get_single() as well (i.e. when no default value is provided, None is assumed) in patch 152. Or is there a reason not to? Honza -- Jan Cholasta From abokovoy at redhat.com Thu Jan 31 18:06:11 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 31 Jan 2013 20:06:11 +0200 Subject: [Freeipa-devel] [PATCH] 361 ipa-adtrust-install should ask for SID generation In-Reply-To: <510A94A9.8040708@redhat.com> References: <510A7C14.4080701@redhat.com> <20130131152910.GG4506@redhat.com> <510A94A9.8040708@redhat.com> Message-ID: <20130131180611.GI4506@redhat.com> On Thu, 31 Jan 2013, Martin Kosek wrote: >On 01/31/2013 04:29 PM, Alexander Bokovoy wrote: >> On Thu, 31 Jan 2013, Martin Kosek wrote: >>> When ipa-adtrust-install is run, check if there are any objects >>> that need to have SID generated. If yes, interactively ask the user >>> if the sidgen task should be run. >>> >>> https://fedorahosted.org/freeipa/ticket/3195 >> >... >> I would still run this check in options.unattended mode and reported >> warning, for accounting purposes. >> >> Could you please make so? >> > >Sure! Updated patch attached. Thanks! I have only small addition: >+ object_count = len(entries) >+ if object_count > 0: >+ print "" >+ print "WARNING: %d existing users or groups do not have a SID identifier assigned." \ >+ % len(entries) >+ print "Installer can run a task to have ipa-sidgen Directory Server plugin generate" >+ print "the SID identifier for all these users. 
Please note, the in case of a high" >+ print "number of users and groups, the operation might lead to high replication" >+ print "traffic and performance degradation. Refer to ipa-adtrust-install(1) man page" >+ print "for details." >+ print "" >+ if not options.unattended: >+ if ipautil.user_input("Do you want to run the ipa-sidgen task?", default=False, >+ allow_empty=False): >+ options.add_sids = True ... to make the text of warning consistent it would be good to add + else: + print "Unattended mode was selected, installer will *not* run ipa-sidgen task!" -- / Alexander Bokovoy From abokovoy at redhat.com Thu Jan 31 18:07:50 2013 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 31 Jan 2013 20:07:50 +0200 Subject: [Freeipa-devel] [PATCH] 357 Use fully qualified CCACHE names In-Reply-To: <510A9B40.50900@redhat.com> References: <5109152A.9030205@redhat.com> <20130131160122.GH4506@redhat.com> <510A9B40.50900@redhat.com> Message-ID: <20130131180750.GJ4506@redhat.com> On Thu, 31 Jan 2013, Martin Kosek wrote: >On 01/31/2013 05:01 PM, Alexander Bokovoy wrote: >> On Wed, 30 Jan 2013, Martin Kosek wrote: >>> Some parts of install scripts used only ccache name as returned by >>> krbV.CCache.name attribute. However, when this name is used again >>> to initialize krbV.CCache object or when it is used in KRB5CCNAME >>> environmental variable, it fails for new DIR type of CCACHE. >>> >>> We should always use both CCACHE type and name when referring to >>> them to avoid these crashes. ldap2 backend was also updated to >>> accept directly krbV.CCache object which contains everything we need >>> to authenticate with ccache. >>> >>> https://fedorahosted.org/freeipa/ticket/3381 >> Minor comment: there are few cleanups of 'import krbV' in places where >> Kerberos functions are not used. Maybe it would be better to separate >> them into their own patch to avoid rebasing issues in future? > >Sure, good idea. Attaching both patches. ACK to both now. Thanks! -- / Alexander Bokovoy From rcritten at redhat.com Thu Jan 31 18:35:43 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 13:35:43 -0500 Subject: [Freeipa-devel] [PATCHES] 0107-0114 Fix Confusing ipa tool online help organization In-Reply-To: <50CB0DCF.7050102@redhat.com> References: <50C9F27E.50808@redhat.com> <50CA76D6.8090404@redhat.com> <50CB0DCF.7050102@redhat.com> Message-ID: <510AB97F.2080608@redhat.com> Petr Viktorin wrote: > On 12/14/2012 01:46 AM, Dmitri Pal wrote: >> On 12/13/2012 10:21 AM, Petr Viktorin wrote: >>> https://fedorahosted.org/freeipa/ticket/3060 >>> >>> Here is a collection of smallish fixes to `ipa help` and `ipa >>> --help`. >>> This should address most of Nikolai's proposal. >>> Additionally, it's now possible to run `ipa --help` without >>> a Kerberos ticket. And there are some new tests. >>> >>> I've not included the "Often used commands" in `ipa help`; I think >>> that is material for a manual/tutorial, not a help command. Selecting >>> a topic from `ipa topics` and then choosing a command from `ipa help >>> ` is a better way to use the help than the verbose `ipa help >>> commands` or proposed incomplete "Often used commands". >> >> Since the ticket has a bit of discussion and you indicate that you did >> not to address everything can you please extract what have been >> addressed and put it into a design page. >> I know it is not RFE but it would help to validate the changes by >> testers. >> Please put the wiki link into the ticket. 
>> > > http://freeipa.org/page/V3/Help > > What is the purpose of the no-option outfile? Do you anticipate at some point opening this up as a real option or making it easier to log while using the api directly? The help for help is a little confusing: ----- Purpose: Display help for a command or topic. Usage: ipa [global-options] help [TOPIC] [options] Positional arguments: TOPIC The topic or command name. Options: -h, --help show this help message and exit ----- Should [TOPIC] be [TOPIC | COMMAND] or something else? On my fresh F-18 install one of the new unit tests fails: ====================================================================== FAIL: Test that `help user-add` & `user-add -h` are equivalent and contain doc ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest self.test(*self.arg) File "/home/rcrit/redhat/freeipa/tests/test_cmdline/test_help.py", line 111, in test_command_help assert h_ctx.stdout == help_ctx.stdout AssertionError I'm not sure the errors to stderr are working either: $ ipa user-show foo bar baz 2 > /dev/null ipa: ERROR: command 'user_show' takes at most 1 argument rob From dpal at redhat.com Thu Jan 31 18:51:14 2013 From: dpal at redhat.com (Dmitri Pal) Date: Thu, 31 Jan 2013 13:51:14 -0500 Subject: [Freeipa-devel] OTP Design In-Reply-To: <510A3AB1.2090104@redhat.com> References: <5108A315.2000204@redhat.com> <510A3AB1.2090104@redhat.com> Message-ID: <510ABD22.70505@redhat.com> On 01/31/2013 04:34 AM, Petr Spacek wrote: > On 30.1.2013 05:35, Dmitri Pal wrote: >> Hello, >> >> We started to shape a page for the OTP prototyping work we are doing. >> It is work in progress but it has enough information to share and >> discuss. >> http://freeipa.org/page/V3/OTP >> >> Comments welcome! > > I gave it a quick look. Generally, the core seems correct to me. I > have only nitpicks: > > I see big amount of new ipa* specific attributes. > > How other OTP solutions store tokens/configuration? Is there any > standard/semi-standard LDAP schema with attributes describing tokens? No. Not that we are aware of. > > MIT KDC has own ("native") LDAP driver. Which they do not like and do not want to do more with it. We effectively wrote our own. > It would be nice to coordinate OID allocation and schema definition > with MIT and share as much attributes as possible. Do they plan to > support OTP configuration in LDAP? (I don't see any note about LDAP > support in http://k5wiki.kerberos.org/wiki/Projects/OTPOverRADIUS .) They do not plan. And we do not plan to extend the driver. This is the reason for the current design. > > Is the author of > https://fedoraproject.org/wiki/Features/EnterpriseTwoFactorAuthentication > aware of our effort? No I need to reach out to him. > > What about re-using http://www.dynalogin.org/ server for TOTP/HOTP > implementation (rather than writing own OTP-in-389 implementation)? I > haven't looked to the dynalogin code ... The TOTP/HOTP algorithm is very simple there is really no much to reuse. > > Could be (old) draft "SASL and GSS-API Mechanism for Two Factor > Authentication based on a Password and a One-Time Password (OTP): > CROTP" from > http://tools.ietf.org/html/draft-josefsson-kitten-crotp-00 interesting > for us (in future)? Is it worth to resurrect this effort? > Not sure. We will see. -- Thank you, Dmitri Pal Sr. Engineering Manager for IdM portfolio Red Hat Inc. 
------------------------------- Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From rcritten at redhat.com Thu Jan 31 18:59:08 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 13:59:08 -0500 Subject: [Freeipa-devel] [PATCHES] 91-92 Add support for RFC 6594 SSHFP DNS records In-Reply-To: <5106453A.60104@redhat.com> References: <50EE49EC.7000500@redhat.com> <50EEA00B.8090505@redhat.com> <51006821.9000901@redhat.com> <5106453A.60104@redhat.com> Message-ID: <510ABEFC.807@redhat.com> Jan Cholasta wrote: > On 23.1.2013 23:45, Rob Crittenden wrote: >> Jan Cholasta wrote: >>> On 10.1.2013 05:56, Jan Cholasta wrote: >>>> Hi, >>>> >>>> Patch 91 removes module ipapython.compat. The code that uses it doesn't >>>> work with ancient Python versions anyway, so there's no need to keep it >>>> around. >>>> >>>> Patch 92 adds support for automatic generation of RFC 6594 SSHFP DNS >>>> records to ipa-client-install and host plugin, as described in >>>> . Note that >>>> still applies. >>>> >>>> https://fedorahosted.org/freeipa/ticket/2642 >>>> >>>> Honza >>>> >>> >>> Self-NACK, forgot to actually remove ipapython/compat.py in the first >>> patch. Also removed an unnecessary try block from the second patch. >>> >>> Honza >> >> These look good. I'm a little concerned about the magic numbers in the >> SSHFP code. I know these come from the RFCs. Can you add a comment there >> so future developers know where the values for key type and fingerprint >> type come from? >> >> rob > > Comment added. > Sorry, I just noticed that this is an RFE and there is no design page. Can you write one up real quick, then I'll push both. I went back and forth a few times on whether we should have a ticket on the dropping of compat, if only to codify that we're giving up an python 2.6, but since this has been a given for a while I think we're ok. rob From rcritten at redhat.com Thu Jan 31 19:36:31 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 14:36:31 -0500 Subject: [Freeipa-devel] [PATCH] 358-359 Fix openldap migration errors In-Reply-To: <51092A36.8000103@redhat.com> References: <51092A36.8000103@redhat.com> Message-ID: <510AC7BF.5020404@redhat.com> Martin Kosek wrote: > These 2 attached patches were generated based on my debugging session with > "tsunamie" and helping him dealing with migration from his openldap DS. With > these applied, migrate-ds command no longer crashes with an error. > > I can lend my openldap instance I used when developing these patches. > > Martin Doesn't the second patch break the rule where the same enforcement is done on entering the data via a named option and setattr? If I understand this correctly the implication is that you couldn't do: ipa user-mod --description=' foo ' But you could do ipa user-mod --setattr description=' foo ' rob From rcritten at redhat.com Thu Jan 31 20:35:12 2013 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 31 Jan 2013 15:35:12 -0500 Subject: [Freeipa-devel] [PATCH] 358-359 Fix openldap migration errors In-Reply-To: <510AD457.6070104@redhat.com> References: <51092A36.8000103@redhat.com> <510AC7BF.5020404@redhat.com> <510AD457.6070104@redhat.com> Message-ID: <510AD580.6060001@redhat.com> Martin Kosek wrote: > On 01/31/2013 08:36 PM, Rob Crittenden wrote: >> Martin Kosek wrote: >>> These 2 attached patches were generated based on my debugging session >>> with >>> "tsunamie" and helping him dealing with migration from his openldap >>> DS. 
With >>> these applied, migrate-ds command no longer crashes with an error. >>> >>> I can lend my openldap instance I used when developing these patches. >>> >>> Martin >> >> Doesn't the second patch break the rule where the same enforcement is >> done on >> entering the data via a named option and setattr? If I understand this >> correctly the implication is that you couldn't do: >> >> ipa user-mod --description=' foo ' >> >> But you could do >> >> ipa user-mod --setattr description=' foo ' >> >> rob >> > > I don't think so. This patch just removes this restriction from *attr > parameters themselves, the underlying parameter validators (i.e. > description parameter) should be still applied. Though in case of the > leading and trailing spaces, they somehow get trimmed: > > # ipa group-mod foo --setattr "description= some spaces " > -------------------- > Modified group "foo" > -------------------- > Group name: foo > Description: some spaces > GID: 1416400004 > > But as I wanted to have this patch only because of the failing user_mod > operation in the migration.py plugin and since you plan to replace it in > your WIP migration performance patch with direct LDAP mod operation, I > do not insist on pushing patch 359 and patch 358 would be sufficient. > > Martin Ok, and patch 358 works fine, ACK. rob From mkosek at redhat.com Thu Jan 31 20:30:15 2013 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 31 Jan 2013 21:30:15 +0100 Subject: [Freeipa-devel] [PATCH] 358-359 Fix openldap migration errors In-Reply-To: <510AC7BF.5020404@redhat.com> References: <51092A36.8000103@redhat.com> <510AC7BF.5020404@redhat.com> Message-ID: <510AD457.6070104@redhat.com> On 01/31/2013 08:36 PM, Rob Crittenden wrote: > Martin Kosek wrote: >> These 2 attached patches were generated based on my debugging session with >> "tsunamie" and helping him dealing with migration from his openldap DS. With >> these applied, migrate-ds command no longer crashes with an error. >> >> I can lend my openldap instance I used when developing these patches. >> >> Martin > > Doesn't the second patch break the rule where the same enforcement is done on > entering the data via a named option and setattr? If I understand this > correctly the implication is that you couldn't do: > > ipa user-mod --description=' foo ' > > But you could do > > ipa user-mod --setattr description=' foo ' > > rob > I don't think so. This patch just removes this restriction from *attr parameters themselves, the underlying parameter validators (i.e. description parameter) should be still applied. Though in case of the leading and trailing spaces, they somehow get trimmed: # ipa group-mod foo --setattr "description= some spaces " -------------------- Modified group "foo" -------------------- Group name: foo Description: some spaces GID: 1416400004 But as I wanted to have this patch only because of the failing user_mod operation in the migration.py plugin and since you plan to replace it in your WIP migration performance patch with direct LDAP mod operation, I do not insist on pushing patch 359 and patch 358 would be sufficient. 
Martin

From ondrej at hamada.cz  Thu Jan 31 23:09:51 2013
From: ondrej at hamada.cz (Ondrej Hamada)
Date: Fri, 01 Feb 2013 00:09:51 +0100
Subject: [Freeipa-devel] More types of replicas in FreeIPA
In-Reply-To: <50167111.9090605@redhat.com>
References: <4FC8CF97.8000202@redhat.com> <1338828433.8230.235.camel@willson.li.ssimo.org>
 <4FCDE22B.5090000@redhat.com> <1338899122.8230.253.camel@willson.li.ssimo.org>
 <4FCE003A.800@redhat.com> <1338901087.8230.258.camel@willson.li.ssimo.org>
 <4FCE08D6.4080005@redhat.com> <1338905200.8230.270.camel@willson.li.ssimo.org>
 <4FCE1CA7.6010308@redhat.com> <1338909050.8230.278.camel@willson.li.ssimo.org>
 <4FCE3B63.1040806@redhat.com> <4FCE3DB1.7020002@redhat.com>
 <1338918705.8230.280.camel@willson.li.ssimo.org> <4FCF54A2.8080706@redhat.com>
 <1338990180.8230.304.camel@willson.li.ssimo.org> <50127FEC.8020908@redhat.com>
 <1343394730.2666.31.camel@willson.li.ssimo.org> <50167111.9090605@redhat.com>
Message-ID: <510AF9BF.9060901@hamada.cz>

Hello,
I'm starting to work on my thesis about 'More types of replicas in FreeIPA'
again. One of the main problems is how the read-only replicas should deal
with the KDC, because they are not supposed to possess the Kerberos (krb)
master key. The task was to investigate how this is solved in Active
Directory and its Read Only Domain Controllers (RODCs).

I found that the basics of RODC behaviour are described on a Technet page
(http://technet.microsoft.com/en-us/library/cc754218%28v=ws.10%29.aspx).

Login situation:
The RODC by default forwards the KRB requests to the DC. The RODC then
forwards the response back to the client and also requests the password to be
replicated to the RODC. Both the user and his host must be members of the
'Allowed RODC Password Replication' group in order for the user's passwords
to be replicated to RODCs.

Requesting services that the RODC doesn't have credentials for:
The client sends a TGS-REQ to the RODC. The RODC can read the TGT in the
request, but doesn't have credentials for the service, so the request is
forwarded to the DC. The DC can decrypt the TGT that was created by the RODC
and sends back the TGS-REP, which is forwarded to the client (but it does not
trust the RODC, so it recalculates the privilege attribute certificate). The
RODC does not cache the credentials for the service.

During my experiments the credentials got replicated to the RODC on the first
logon of the user. The user's KRB requests were first forwarded to the DC.
When the user got the krbtgt and TGS tickets for host, ldap and cifs, his TGT
was revoked by the RODC. He ran through the authentication process again, but
this time the requests were served by the RODC only - no forwarding - and no
TGS for host was requested.

Unfortunately I still cannot tell how the keys are processed. There's barely
any RPC communication - only one DCERPC packet exchange between the RODC and
the DC, which takes place when the user sends his first TGS request (this
exchange also happens for clients with disabled replication). It looks to me
like the DC knows all the RODC keys. According to Technet, the MS
implementation of Kerberos is able to recognize the key owner from the Key
Version Number value.

I think I can't get more info from the network traffic examination. Do you
have any ideas or hints on further investigation of the problem?
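[Editorial note: below is a rough, purely illustrative model of the RODC
behaviour described in the message above - serve the request locally when the
principal's keys are already replicated, otherwise forward to a full DC and
ask for replication if the password replication policy allows it. None of the
names correspond to a real KDC or AD API; this only restates the observed
flow as a sketch.]

    class FullDC(object):
        def __init__(self, principals):
            self.principals = principals          # principal -> secrets

        def answer(self, request):
            return 'ticket for %s issued by full DC' % request

        def replicate_to_rodc(self, principal, rodc):
            # Secrets are handed out only for principals allowed by the
            # 'Allowed RODC Password Replication' policy.
            if principal in rodc.allowed_to_cache:
                rodc.cache[principal] = self.principals[principal]


    class RODC(object):
        def __init__(self, hub_dc, allowed_to_cache):
            self.hub_dc = hub_dc
            self.allowed_to_cache = set(allowed_to_cache)
            self.cache = {}                       # locally replicated secrets

        def handle(self, principal):
            if principal in self.cache:
                # Keys already replicated: answer locally, no forwarding.
                return 'ticket for %s issued by RODC' % principal
            # Otherwise forward to the full DC and request replication so
            # that the next request from this principal is served locally.
            reply = self.hub_dc.answer(principal)
            self.hub_dc.replicate_to_rodc(principal, self)
            return reply


    if __name__ == '__main__':
        dc = FullDC({'alice': 'secret-keys'})
        rodc = RODC(dc, allowed_to_cache=['alice'])
        print(rodc.handle('alice'))   # first request: forwarded to the full DC
        print(rodc.handle('alice'))   # second request: served by the RODC itself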