From jcholast at redhat.com Mon Mar 2 06:49:14 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 02 Mar 2015 07:49:14 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54ECBE7E.7040108@redhat.com> References: <54ECBE7E.7040108@redhat.com> Message-ID: <54F407EA.9040101@redhat.com> Hi, Dne 24.2.2015 v 19:10 Martin Basti napsal(a): > Hello all, > > please read the design page, any objections/suggestions appreciated > http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring > 1) " * Merge server update commands into the one command (ipa-server-upgrade) " So there is "ipa-server-install" to install the server, "ipa-server-install --uninstall" to uninstall the server and "ipa-server-upgrade" to upgrade the server. Maybe we should bring some consistency here and have one of: a) "ipa-server-install [--install]", "ipa-server-install --uninstall", "ipa-server-install --upgrade" b) "ipa-server-install [install]", "ipa-server-install uninstall", "ipa-server-install upgrade" c) "ipa-server-install", "ipa-server-uninstall", "ipa-server-upgrade" 2) " * Prevent to run IPA service, if code version and configuration version does not match * ipactl should execute ipa-server-upgrade if needed " There should be no configuration version, configuration update should be run always. It's fast and hence does not need to be optimized like data update by using a monolithic version number, which brings more than a few problems on its own. 3) " * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater " Even without arguments? Is ipactl now the only right place to trigger manual update? 4) " Plugins are called from update files, using new directive update-plugin: " Why "update-plugin" and not just "plugin"? Do you expect other kinds of plugins to be called from update files in the future? (I certainly don't.) 5) " New class UpdatePlugin is used for all update plugins. " Just reuse the existing Updater class, no need to reinvent the wheel. 6) I wonder why configuration update is done after data update and not before. I know it's been like that for a long time, but it seems kind of unnatural to me, especially now when schema update is separate from data update. (Rob?) 7) " keep --test option and fix the plugins which do not respect the option " Just a note, I believe this ticket is related: . Good work overall! Honza -- Jan Cholasta From mkosek at redhat.com Mon Mar 2 11:23:48 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Mar 2015 12:23:48 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F407EA.9040101@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> Message-ID: <54F44844.8070106@redhat.com> On 03/02/2015 07:49 AM, Jan Cholasta wrote: > Hi, > > Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >> Hello all, >> >> please read the design page, any objections/suggestions appreciated >> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >> > > 1) > > " > * Merge server update commands into the one command (ipa-server-upgrade) > " > > So there is "ipa-server-install" to install the server, "ipa-server-install > --uninstall" to uninstall the server and "ipa-server-upgrade" to upgrade the > server. 
Maybe we should bring some consistency here and have one of: > > a) "ipa-server-install [--install]", "ipa-server-install --uninstall", > "ipa-server-install --upgrade" > > b) "ipa-server-install [install]", "ipa-server-install uninstall", > "ipa-server-install upgrade" > > c) "ipa-server-install", "ipa-server-uninstall", "ipa-server-upgrade" Long term, I think we want C. Besides other advantages, it will let us have independent sets of options, based on what you want to do. > 2) > > " > * Prevent to run IPA service, if code version and configuration version does > not match > * ipactl should execute ipa-server-upgrade if needed > " > > There should be no configuration version, configuration update should be run > always. It's fast and hence does not need to be optimized like data update by > using a monolithic version number, which brings more than a few problems on its > own. I do not agree in this section. Why would you like to run it always, even if it was fast? No run is still faster than fast run. In the past, I do not recall ipa-upgradeconfig as being really fast, especially certmonger/Dogtag related updates were really slow thank to service restarts, etc. > 3) > > " > * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater > " > > Even without arguments? Is ipactl now the only right place to trigger manual > update? > > 4) > > " > Plugins are called from update files, using new directive > update-plugin: > " > > Why "update-plugin" and not just "plugin"? Do you expect other kinds of plugins > to be called from update files in the future? (I certainly don't.) I have no strong feelings on this one, but IMO it is always better to have some "plan B" if we choose to indeed implement some yet unforeseen plugin based capability... > 5) > > " > New class UpdatePlugin is used for all update plugins. > " > > Just reuse the existing Updater class, no need to reinvent the wheel. > > 6) > > I wonder why configuration update is done after data update and not before. I > know it's been like that for a long time, but it seems kind of unnatural to me, > especially now when schema update is separate from data update. (Rob?) > > 7) > > " > keep --test option and fix the plugins which do not respect the option > " > > Just a note, I believe this ticket is related: > . > > > Good work overall! > > Honza > From mbasti at redhat.com Mon Mar 2 11:58:32 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 02 Mar 2015 12:58:32 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F0C96F.8060907@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> Message-ID: <54F45068.9030306@redhat.com> On 27/02/15 20:45, Rob Crittenden wrote: > Martin Basti wrote: >> On 26/02/15 10:45, Petr Spacek wrote: >>> On 25.2.2015 17:49, Martin Basti wrote: >>>> On 25/02/15 17:15, Petr Spacek wrote: >>>>> On 24.2.2015 19:10, Martin Basti wrote: >>>>>> Hello all, >>>>>> >>>>>> please read the design page, any objections/suggestions appreciated >>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>> Thank you for the design, I have only few nitpicks. >>>>> >>>>>> Increase update files numbers range >>>>>> Update files number will be extended into 4 digits values. >>>>> IMHO the dependency on particular number format should be removed >>>>> altogether. 
>>>>> It should be perfectly enough to say that updates are executed in ASCII >>>>> lexicographic order and be done with it. >>>> 4.1.3-2 > 4.1.3-12 in lexicographic order, this will not fit. >>> Well, sure, but it allows you to use >>> 00-a >>> 01-b >>> >>> and renumber it to >>> >>> 001-a >>> 002-b >>> >>> at will without changes to code. (Lexicographic order is what 'ls' >>> uses by >>> default so you can see the real ordering at any time very easily.) >>> >>> Also, as you pointed out, it allows you to do things like >>> 12.345-a >>> 12.666-bbb >>> without changing code, again :-) >> Oh stupid me, I read it wrong, I replied with IPA version compare. >> >> sounds good to me, any objections anyone? > This makes sense as long as we don't abuse it. The numbers are there to > apply some amount of order but flexibility is good, and will avoid the > problem of having humongous update files. So it is not clear to me, should we use 4 digit numbers, or just lexicographic order? > I'm fine with allowing DM given that it allows running as non-root > (pretty much the only condition that ldapi wouldn't work), but I think a > full upgrade will fail w/o root given that you are combining the two > commands. I'm not sure, if I get it. The ipa-server-upgrade has to be run under root user (ldapi, service upgrades). If LDAPI failed, user may use DM password to do LDAP update (this should fix LDAPI settings) but root user is still required to update services. So root user is always required to do this. > > On ipactl, would it be overkill if there is a tty to prompt the user to > upgrade? In a non-container world it might be surprising to have an > upgrade happen esp since upgrades take a while. In non-container enviroment, we can still use upgrade during RPM transaction. So you suggest not to do update automaticaly, just write Error the IPA upgrade is required? > > With --skip-version-check what sorts of problems can we foresee? I > assume a big warning will be added to at least the man page, if not the cli? For this big warning everywhere. The main problem is try to run older IPA with newer data. In containers the problem is to run different platform specific versions which differ in functionality/bugfixes etc.. > > Where does platform come from? I'm wondering how Debian will handle this. platform is derived from ipaplatform file which is used with the particular build. Debian should have own platform file. > > Looks really good. > > rob > -- Martin Basti From jcholast at redhat.com Mon Mar 2 12:12:05 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 02 Mar 2015 13:12:05 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F44844.8070106@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> Message-ID: <54F45395.4020209@redhat.com> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): > On 03/02/2015 07:49 AM, Jan Cholasta wrote: >> Hi, >> >> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>> Hello all, >>> >>> please read the design page, any objections/suggestions appreciated >>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>> >> >> 1) >> >> " >> * Merge server update commands into the one command (ipa-server-upgrade) >> " >> >> So there is "ipa-server-install" to install the server, "ipa-server-install >> --uninstall" to uninstall the server and "ipa-server-upgrade" to upgrade the >> server. 
Maybe we should bring some consistency here and have one of: >> >> a) "ipa-server-install [--install]", "ipa-server-install --uninstall", >> "ipa-server-install --upgrade" >> >> b) "ipa-server-install [install]", "ipa-server-install uninstall", >> "ipa-server-install upgrade" >> >> c) "ipa-server-install", "ipa-server-uninstall", "ipa-server-upgrade" > > Long term, I think we want C. Besides other advantages, it will let us have > independent sets of options, based on what you want to do. > >> 2) >> >> " >> * Prevent to run IPA service, if code version and configuration version does >> not match >> * ipactl should execute ipa-server-upgrade if needed >> " >> >> There should be no configuration version, configuration update should be run >> always. It's fast and hence does not need to be optimized like data update by >> using a monolithic version number, which brings more than a few problems on its >> own. > > I do not agree in this section. Why would you like to run it always, even if it > was fast? No run is still faster than fast run. In the ideal case the installer would be idempotent and upgrade would be re-running the installer and we should aim to do just that. We kind of do that already, but there is a lot of code duplication in installers and ipa-upgradeconfig (I would like to fix that when refactoring installers). IMO it's better to always make 100% sure the configuration is correct rather than to save a second or two. > In the past, I do not recall > ipa-upgradeconfig as being really fast, especially certmonger/Dogtag related > updates were really slow thank to service restarts, etc. Correct, but I was talking about configuration file updates, not (re)starts, which have to always be done in ipactl anyway. > >> 3) >> >> " >> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >> " >> >> Even without arguments? Is ipactl now the only right place to trigger manual >> update? >> >> 4) >> >> " >> Plugins are called from update files, using new directive >> update-plugin: >> " >> >> Why "update-plugin" and not just "plugin"? Do you expect other kinds of plugins >> to be called from update files in the future? (I certainly don't.) > > I have no strong feelings on this one, but IMO it is always better to have some > "plan B" if we choose to indeed implement some yet unforeseen plugin based > capability... I doubt that will happen, but if it does, we can always add "plan-b-plugin" directive. > >> 5) >> >> " >> New class UpdatePlugin is used for all update plugins. >> " >> >> Just reuse the existing Updater class, no need to reinvent the wheel. >> >> 6) >> >> I wonder why configuration update is done after data update and not before. I >> know it's been like that for a long time, but it seems kind of unnatural to me, >> especially now when schema update is separate from data update. (Rob?) >> >> 7) >> >> " >> keep --test option and fix the plugins which do not respect the option >> " >> >> Just a note, I believe this ticket is related: >> . >> >> >> Good work overall! 
>> >> Honza >> > -- Jan Cholasta From mbasti at redhat.com Mon Mar 2 12:51:49 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 02 Mar 2015 13:51:49 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F45395.4020209@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> Message-ID: <54F45CE5.1000108@redhat.com> On 02/03/15 13:12, Jan Cholasta wrote: > Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>> Hi, >>> >>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>> Hello all, >>>> >>>> please read the design page, any objections/suggestions appreciated >>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>> >>> >>> 1) >>> >>> " >>> * Merge server update commands into the one command >>> (ipa-server-upgrade) >>> " >>> >>> So there is "ipa-server-install" to install the server, >>> "ipa-server-install >>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>> upgrade the >>> server. Maybe we should bring some consistency here and have one of: >>> >>> a) "ipa-server-install [--install]", "ipa-server-install >>> --uninstall", >>> "ipa-server-install --upgrade" >>> >>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>> "ipa-server-install upgrade" >>> >>> c) "ipa-server-install", "ipa-server-uninstall", "ipa-server-upgrade" >> >> Long term, I think we want C. Besides other advantages, it will let >> us have >> independent sets of options, based on what you want to do. >> >>> 2) >>> >>> " >>> * Prevent to run IPA service, if code version and configuration >>> version does >>> not match >>> * ipactl should execute ipa-server-upgrade if needed >>> " >>> >>> There should be no configuration version, configuration update >>> should be run >>> always. It's fast and hence does not need to be optimized like data >>> update by >>> using a monolithic version number, which brings more than a few >>> problems on its >>> own. >> >> I do not agree in this section. Why would you like to run it always, >> even if it >> was fast? No run is still faster than fast run. > > In the ideal case the installer would be idempotent and upgrade would > be re-running the installer and we should aim to do just that. We kind > of do that already, but there is a lot of code duplication in > installers and ipa-upgradeconfig (I would like to fix that when > refactoring installers). IMO it's better to always make 100% sure the > configuration is correct rather than to save a second or two. I doesn't like this idea, if user wants to fix something, the one should use --skip-version-check option, and the IPA upgrade will be executed. What if a service changes in a way, the IPA configuration will not work? The user will need to change it manually, but after each restart, upgrade will change the value back into IPA required configuration which will not work. Yes, we have upgrade state file, but then the comparing of one value is faster then checking each state if was executed. My personal opinion is, application should not try to fix itself every restart. > >> In the past, I do not recall >> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >> related >> updates were really slow thank to service restarts, etc. > > Correct, but I was talking about configuration file updates, not > (re)starts, which have to always be done in ipactl anyway. 
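To make the version check in point 2) concrete: what is proposed for ipactl is really just comparing the version recorded by the last successful upgrade with the version of the installed code, and refusing to start services on mismatch. A standalone sketch, not the actual implementation -- the function name and error text below are invented for illustration:

    # Illustrative only: refuse to start IPA services when the recorded
    # data/configuration version differs from the installed code version.
    def check_version(code_version, stored_version):
        if stored_version is None or stored_version != code_version:
            raise RuntimeError(
                "IPA data version %r does not match code version %r, "
                "please run ipa-server-upgrade first"
                % (stored_version, code_version))

    # e.g. check_version("4.2.0", "4.1.4")  # raises, upgrade needed
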
> >> >>> 3) >>> >>> " >>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>> " >>> >>> Even without arguments? Is ipactl now the only right place to >>> trigger manual >>> update? Sorry, I will add more details there, if this is not clear. ipa-upgrateconfig will be removed ipa-ldap-updater will not be able to do overall update, you will need to specify options and update file. >>> >>> 4) >>> >>> " >>> Plugins are called from update files, using new directive >>> update-plugin: >>> " >>> >>> Why "update-plugin" and not just "plugin"? Do you expect other kinds >>> of plugins >>> to be called from update files in the future? (I certainly don't.) >> >> I have no strong feelings on this one, but IMO it is always better to >> have some >> "plan B" if we choose to indeed implement some yet unforeseen plugin >> based >> capability... > > I doubt that will happen, but if it does, we can always add > "plan-b-plugin" directive. I do not insist on "update-plugin", I just wanted to be more specific which type of plugin is expected there. > >> >>> 5) >>> >>> " >>> New class UpdatePlugin is used for all update plugins. >>> " >>> >>> Just reuse the existing Updater class, no need to reinvent the wheel. >>> >>> 6) >>> >>> I wonder why configuration update is done after data update and not >>> before. I >>> know it's been like that for a long time, but it seems kind of >>> unnatural to me, >>> especially now when schema update is separate from data update. (Rob?) We need schema update first, but I haven't found any services which need to have updated data (I might be wrong) >>> >>> 7) >>> >>> " >>> keep --test option and fix the plugins which do not respect the option >>> " >>> >>> Just a note, I believe this ticket is related: >>> . >>> >>> >>> Good work overall! >>> >>> Honza >>> >> > > -- Martin Basti From rcritten at redhat.com Mon Mar 2 14:43:51 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Mar 2015 09:43:51 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F45068.9030306@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> Message-ID: <54F47727.9030203@redhat.com> Martin Basti wrote: > On 27/02/15 20:45, Rob Crittenden wrote: >> Martin Basti wrote: >>> On 26/02/15 10:45, Petr Spacek wrote: >>>> On 25.2.2015 17:49, Martin Basti wrote: >>>>> On 25/02/15 17:15, Petr Spacek wrote: >>>>>> On 24.2.2015 19:10, Martin Basti wrote: >>>>>>> Hello all, >>>>>>> >>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>> Thank you for the design, I have only few nitpicks. >>>>>> >>>>>>> Increase update files numbers range >>>>>>> Update files number will be extended into 4 digits values. >>>>>> IMHO the dependency on particular number format should be removed >>>>>> altogether. >>>>>> It should be perfectly enough to say that updates are executed in >>>>>> ASCII >>>>>> lexicographic order and be done with it. >>>>> 4.1.3-2 > 4.1.3-12 in lexicographic order, this will not fit. >>>> Well, sure, but it allows you to use >>>> 00-a >>>> 01-b >>>> >>>> and renumber it to >>>> >>>> 001-a >>>> 002-b >>>> >>>> at will without changes to code. (Lexicographic order is what 'ls' >>>> uses by >>>> default so you can see the real ordering at any time very easily.) 
>>>> >>>> Also, as you pointed out, it allows you to do things like >>>> 12.345-a >>>> 12.666-bbb >>>> without changing code, again :-) >>> Oh stupid me, I read it wrong, I replied with IPA version compare. >>> >>> sounds good to me, any objections anyone? >> This makes sense as long as we don't abuse it. The numbers are there to >> apply some amount of order but flexibility is good, and will avoid the >> problem of having humongous update files. > So it is not clear to me, should we use 4 digit numbers, or just > lexicographic order? Use lexicographic order with number prefixes so things "look" the way the always have but it will be more flexible going forward. I don't know that IPA will ever actually run out of enough number prefixes for this to be a serious problem but plan for the worst I guess. I'd just update the current README that recommends certain levels with information on the new sorting. The order within a given numeric subgroup is only important in a minority of cases. > >> I'm fine with allowing DM given that it allows running as non-root >> (pretty much the only condition that ldapi wouldn't work), but I think a >> full upgrade will fail w/o root given that you are combining the two >> commands. > I'm not sure, if I get it. > > The ipa-server-upgrade has to be run under root user (ldapi, service > upgrades). > If LDAPI failed, user may use DM password to do LDAP update (this should > fix LDAPI settings) but root user is still required to update services. > So root user is always required to do this. But you haven't explained any case why LDAPI would fail. If LDAPI fails then you've got more serious problems that I'm not sure binding as DM is going to solve. The only case where DM would be handy IMHO is either some worst case scenario upgrade where 389-ds is up but not binding to LDAPI or if you want to allow running LDAP updates as non-root. >> >> On ipactl, would it be overkill if there is a tty to prompt the user to >> upgrade? In a non-container world it might be surprising to have an >> upgrade happen esp since upgrades take a while. > In non-container enviroment, we can still use upgrade during RPM > transaction. > > So you suggest not to do update automaticaly, just write Error the IPA > upgrade is required? People do all sorts of strange things. Installing the packages with --no-script isn't in the range of impossible. A prompt, and I'm not saying it's a great idea, is 2 lines of code. I guess it just makes me nervous. >> >> With --skip-version-check what sorts of problems can we foresee? I >> assume a big warning will be added to at least the man page, if not >> the cli? > For this big warning everywhere. > The main problem is try to run older IPA with newer data. In containers > the problem is to run different platform specific versions which differ > in functionality/bugfixes etc.. Ok, pretty much the things I was thinking as well. A scary warning definitely seems warranted. >> >> Where does platform come from? I'm wondering how Debian will handle this. > platform is derived from ipaplatform file which is used with the > particular build. Debian should have own platform file. Ok, I'd add that detail to the design. 
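Back on the update file ordering: the difference between numeric and lexicographic sorting is easy to demonstrate, and something like this could go into the README verbatim (plain Python, the file names are made up):

    >>> sorted(['10-config.update', '9-user.update'])
    ['10-config.update', '9-user.update']
    >>> sorted(['0010-config.update', '0009-user.update'])
    ['0009-user.update', '0010-config.update']

The first call shows why 9 ends up after 10 in plain lexicographic order ('1' sorts before '9'), the second shows that zero-padding the prefix restores the intended order without the updater having to parse numbers at all -- which is also the order 'ls' will show.
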
rob From pvoborni at redhat.com Mon Mar 2 15:13:25 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 02 Mar 2015 16:13:25 +0100 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54DE4DD8.7060206@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> Message-ID: <54F47E15.8060700@redhat.com> >>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 this patch seems to be a bit forgotten. It works, looks fine. One minor issue: trailing whitespaces in the man page. I also wonder if it shouldn't be used in other tools which call kinit with keytab: * ipa-client-automount:434 * ipa-client-install:2591 (this usage should be fine since it's used for server installation) * dcerpc.py:545 * rpcserver.py: 971, 981 (armor for web ui forms base auth) Most importantly the ipa-client-automount because it's called from ipa-client-install (if location is specified) and therefore it might fail during client installation. Or also, kinit call with admin creadentials worked for the user but I wonder if it was just a coincidence and may break under slightly different but similar conditions. -- Petr Vobornik From abokovoy at redhat.com Mon Mar 2 15:19:59 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 2 Mar 2015 17:19:59 +0200 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54F47E15.8060700@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> <54F47E15.8060700@redhat.com> Message-ID: <20150302151959.GE25455@redhat.com> On Mon, 02 Mar 2015, Petr Vobornik wrote: >>>>>>>On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>>related to ticket https://fedorahosted.org/freeipa/ticket/4808 > >this patch seems to be a bit forgotten. > >It works, looks fine. > >One minor issue: trailing whitespaces in the man page. > >I also wonder if it shouldn't be used in other tools which call kinit >with keytab: >* ipa-client-automount:434 >* ipa-client-install:2591 (this usage should be fine since it's used >for server installation) >* dcerpc.py:545 >* rpcserver.py: 971, 981 (armor for web ui forms base auth) > >Most importantly the ipa-client-automount because it's called from >ipa-client-install (if location is specified) and therefore it might >fail during client installation. > >Or also, kinit call with admin creadentials worked for the user but I >wonder if it was just a coincidence and may break under slightly >different but similar conditions. dcerpc.py use is special, don't change anything there please. 
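For the other call sites the pattern being discussed is just a bounded retry around kinit. A standalone sketch, not the actual patch -- the attempt count, delay, principal and keytab path are examples only:

    import subprocess
    import time

    # Illustrative retry loop: try to get a TGT from the keytab a few
    # times before giving up, to cope with the KDC not being fully up yet.
    def kinit_with_retries(principal, keytab, attempts=5, delay=1):
        for _ in range(attempts):
            if subprocess.call(['kinit', '-k', '-t', keytab, principal]) == 0:
                return True
            time.sleep(delay)
        return False

    # e.g. kinit_with_retries('host/client.example.com@EXAMPLE.COM',
    #                         '/etc/krb5.keytab')
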
-- / Alexander Bokovoy From rcritten at redhat.com Mon Mar 2 15:28:14 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Mon, 02 Mar 2015 10:28:14 -0500 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54F47E15.8060700@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> <54F47E15.8060700@redhat.com> Message-ID: <54F4818E.1060408@redhat.com> Petr Vobornik wrote: >>>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 > > this patch seems to be a bit forgotten. > > It works, looks fine. > > One minor issue: trailing whitespaces in the man page. > > I also wonder if it shouldn't be used in other tools which call kinit > with keytab: > * ipa-client-automount:434 > * ipa-client-install:2591 (this usage should be fine since it's used for > server installation) > * dcerpc.py:545 > * rpcserver.py: 971, 981 (armor for web ui forms base auth) > > Most importantly the ipa-client-automount because it's called from > ipa-client-install (if location is specified) and therefore it might > fail during client installation. > > Or also, kinit call with admin creadentials worked for the user but I > wonder if it was just a coincidence and may break under slightly > different but similar conditions. I think that's a fine idea. In fact there is already a function that could be extended, kinit_hostprincipal(). rob From mbasti at redhat.com Mon Mar 2 17:12:35 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 02 Mar 2015 18:12:35 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F47727.9030203@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> Message-ID: <54F49A03.8030704@redhat.com> On 02/03/15 15:43, Rob Crittenden wrote: > Martin Basti wrote: >> On 27/02/15 20:45, Rob Crittenden wrote: >>> Martin Basti wrote: >>>> On 26/02/15 10:45, Petr Spacek wrote: >>>>> On 25.2.2015 17:49, Martin Basti wrote: >>>>>> On 25/02/15 17:15, Petr Spacek wrote: >>>>>>> On 24.2.2015 19:10, Martin Basti wrote: >>>>>>>> Hello all, >>>>>>>> >>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>> Thank you for the design, I have only few nitpicks. >>>>>>> >>>>>>>> Increase update files numbers range >>>>>>>> Update files number will be extended into 4 digits values. >>>>>>> IMHO the dependency on particular number format should be removed >>>>>>> altogether. >>>>>>> It should be perfectly enough to say that updates are executed in >>>>>>> ASCII >>>>>>> lexicographic order and be done with it. >>>>>> 4.1.3-2 > 4.1.3-12 in lexicographic order, this will not fit. >>>>> Well, sure, but it allows you to use >>>>> 00-a >>>>> 01-b >>>>> >>>>> and renumber it to >>>>> >>>>> 001-a >>>>> 002-b >>>>> >>>>> at will without changes to code. (Lexicographic order is what 'ls' >>>>> uses by >>>>> default so you can see the real ordering at any time very easily.) 
>>>>> >>>>> Also, as you pointed out, it allows you to do things like >>>>> 12.345-a >>>>> 12.666-bbb >>>>> without changing code, again :-) >>>> Oh stupid me, I read it wrong, I replied with IPA version compare. >>>> >>>> sounds good to me, any objections anyone? >>> This makes sense as long as we don't abuse it. The numbers are there to >>> apply some amount of order but flexibility is good, and will avoid the >>> problem of having humongous update files. >> So it is not clear to me, should we use 4 digit numbers, or just >> lexicographic order? > Use lexicographic order with number prefixes so things "look" the way > the always have but it will be more flexible going forward. I don't know > that IPA will ever actually run out of enough number prefixes for this > to be a serious problem but plan for the worst I guess. > > I'd just update the current README that recommends certain levels with > information on the new sorting. The order within a given numeric > subgroup is only important in a minority of cases. Ok > >>> I'm fine with allowing DM given that it allows running as non-root >>> (pretty much the only condition that ldapi wouldn't work), but I think a >>> full upgrade will fail w/o root given that you are combining the two >>> commands. >> I'm not sure, if I get it. >> >> The ipa-server-upgrade has to be run under root user (ldapi, service >> upgrades). >> If LDAPI failed, user may use DM password to do LDAP update (this should >> fix LDAPI settings) but root user is still required to update services. >> So root user is always required to do this. > But you haven't explained any case why LDAPI would fail. If LDAPI fails > then you've got more serious problems that I'm not sure binding as DM is > going to solve. > > The only case where DM would be handy IMHO is either some worst case > scenario upgrade where 389-ds is up but not binding to LDAPI or if you > want to allow running LDAP updates as non-root. I don't know cases when LDAPI would failed, except the case LDAPI is misconfigured by user, or disabled by user. It is not big effort to keep both DM binding and LDAPI in code. A user can always found som unexpected use case for LDAP update with DM password. >>> On ipactl, would it be overkill if there is a tty to prompt the user to >>> upgrade? In a non-container world it might be surprising to have an >>> upgrade happen esp since upgrades take a while. >> In non-container enviroment, we can still use upgrade during RPM >> transaction. >> >> So you suggest not to do update automaticaly, just write Error the IPA >> upgrade is required? > People do all sorts of strange things. Installing the packages with > --no-script isn't in the range of impossible. A prompt, and I'm not > saying it's a great idea, is 2 lines of code. > > I guess it just makes me nervous. So lets summarize this: * DO upgrade if possible during RPM transaction * ipactl will NOT run upgrade, just print Error: 'please upgrade ....' * User has to run ipa-server-upgrade manually Does I understand it correctly? > >>> With --skip-version-check what sorts of problems can we foresee? I >>> assume a big warning will be added to at least the man page, if not >>> the cli? >> For this big warning everywhere. >> The main problem is try to run older IPA with newer data. In containers >> the problem is to run different platform specific versions which differ >> in functionality/bugfixes etc.. > Ok, pretty much the things I was thinking as well. A scary warning > definitely seems warranted. > >>> Where does platform come from? 
I'm wondering how Debian will handle this. >> platform is derived from ipaplatform file which is used with the >> particular build. Debian should have own platform file. > Ok, I'd add that detail to the design. > > rob -- Martin Basti From mkosek at redhat.com Mon Mar 2 17:28:56 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 02 Mar 2015 18:28:56 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F49A03.8030704@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> Message-ID: <54F49DD8.4090801@redhat.com> On 03/02/2015 06:12 PM, Martin Basti wrote: > On 02/03/15 15:43, Rob Crittenden wrote: >> Martin Basti wrote: ... >> But you haven't explained any case why LDAPI would fail. If LDAPI fails >> then you've got more serious problems that I'm not sure binding as DM is >> going to solve. >> >> The only case where DM would be handy IMHO is either some worst case >> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >> want to allow running LDAP updates as non-root. > I don't know cases when LDAPI would failed, except the case LDAPI is > misconfigured by user, or disabled by user. Wasn't LDAPI needed for the DM password less upgrade so that upgrader could simply bind as root with EXTERNAL auth? > It is not big effort to keep both DM binding and LDAPI in code. A user can > always found som unexpected use case for LDAP update with DM password. > >>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>> upgrade? In a non-container world it might be surprising to have an >>>> upgrade happen esp since upgrades take a while. >>> In non-container enviroment, we can still use upgrade during RPM >>> transaction. >>> >>> So you suggest not to do update automaticaly, just write Error the IPA >>> upgrade is required? >> People do all sorts of strange things. Installing the packages with >> --no-script isn't in the range of impossible. A prompt, and I'm not >> saying it's a great idea, is 2 lines of code. >> >> I guess it just makes me nervous. > So lets summarize this: > * DO upgrade if possible during RPM transaction Umm, I thought we want to get rid of running upgrade during RPM transaction. It is extremely difficult to debug upgrade stuck during RPM transaction, it also makes RPM upgrade run longer than needed. It also makes admins nervous when their rpm upgrade is suddenly waiting right before the end. I even see the fingers slowly reaching to CTRL+C combo... (You can see the consequences) > * ipactl will NOT run upgrade, just print Error: 'please upgrade ....' > * User has to run ipa-server-upgrade manually > > Does I understand it correctly? >> >>>> With --skip-version-check what sorts of problems can we foresee? I >>>> assume a big warning will be added to at least the man page, if not >>>> the cli? >>> For this big warning everywhere. >>> The main problem is try to run older IPA with newer data. In containers >>> the problem is to run different platform specific versions which differ >>> in functionality/bugfixes etc.. >> Ok, pretty much the things I was thinking as well. A scary warning >> definitely seems warranted. >> >>>> Where does platform come from? I'm wondering how Debian will handle this. 
>>> platform is derived from ipaplatform file which is used with the >>> particular build. Debian should have own platform file. >> Ok, I'd add that detail to the design. >> >> rob > > From sbose at redhat.com Mon Mar 2 17:45:07 2015 From: sbose at redhat.com (Sumit Bose) Date: Mon, 2 Mar 2015 18:45:07 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() Message-ID: <20150302174507.GK3271@p.redhat.com> Hi, the attached patches add ERANGE handling to getpwnam(), getpwuid(), getgrnam() and getgrgid(). I added a configurable limit to the buffer allocation which defaults to 128MB. If you think this limit should be change or there should be no limit at all please let me know. bye, Sumit -------------- next part -------------- From 0b4e302866f734b93176d9104bd78a2e55702c40 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:29:00 +0100 Subject: [PATCH 134/136] Add configure check for cwrap libraries Currently only nss-wrapper is checked, checks for other crwap libraries can be added e.g. as AM_CHECK_WRAPPER(uid_wrapper, HAVE_UID_WRAPPER) --- daemons/configure.ac | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/daemons/configure.ac b/daemons/configure.ac index 97cd25115f371e9e549d209401df9325c7e112c1..7c979fe2d0b91e9d71fe4ca5a50ad78a4de79298 100644 --- a/daemons/configure.ac +++ b/daemons/configure.ac @@ -236,6 +236,30 @@ PKG_CHECK_EXISTS(cmocka, ) AM_CONDITIONAL([HAVE_CMOCKA], [test x$have_cmocka = xyes]) +dnl A macro to check presence of a cwrap (http://cwrap.org) wrapper on the system +dnl Usage: +dnl AM_CHECK_WRAPPER(name, conditional) +dnl If the cwrap library is found, sets the HAVE_$name conditional +AC_DEFUN([AM_CHECK_WRAPPER], +[ + FOUND_WRAPPER=0 + + AC_MSG_CHECKING([for $1]) + PKG_CHECK_EXISTS([$1], + [ + AC_MSG_RESULT([yes]) + FOUND_WRAPPER=1 + ], + [ + AC_MSG_RESULT([no]) + AC_MSG_WARN([cwrap library $1 not found, some tests will not run]) + ]) + + AM_CONDITIONAL($2, [ test x$FOUND_WRAPPER = x1]) +]) + +AM_CHECK_WRAPPER(nss_wrapper, HAVE_NSS_WRAPPER) + dnl -- dirsrv is needed for the extdom unit tests -- PKG_CHECK_MODULES([DIRSRV], [dirsrv >= 1.3.0]) dnl -- sss_idmap is needed by the extdom exop -- -- 2.1.0 -------------- next part -------------- From 9960f8f87bfedaaa72db9b27adfc3f86fce88cf8 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:33:39 +0100 Subject: [PATCH 135/136] extdom: handle ERANGE return code for getXXYYY_r() calls The getXXYYY_r() calls require a buffer to store the variable data of the passwd and group structs. If the provided buffer is too small ERANGE is returned and the caller can try with a larger buffer again. Cmocka/cwrap based unit-tests for get*_r_wrapper() are added. 
Resolves https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 31 +- .../ipa-extdom-extop/ipa_extdom.h | 8 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 186 ++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 327 ++++++++++++++------- .../ipa-extdom-extop/test_data/group | 2 + .../ipa-extdom-extop/test_data/passwd | 2 + .../ipa-extdom-extop/test_data/test_setup.sh | 3 + 7 files changed, 456 insertions(+), 103 deletions(-) create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index 0008476796f5b20f62f2c32e7b291b787fa7a6fc..a1679812ef3c5de8c6e18433cbb991a99ad0b6c8 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -35,9 +35,20 @@ libipa_extdom_extop_la_LIBADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +TESTS = +check_PROGRAMS = + if HAVE_CHECK -TESTS = extdom_tests -check_PROGRAMS = extdom_tests +TESTS += extdom_tests +check_PROGRAMS += extdom_tests +endif + +if HAVE_CMOCKA +if HAVE_NSS_WRAPPER +TESTS_ENVIRONMENT = . ./test_data/test_setup.sh; +TESTS += extdom_cmocka_tests +check_PROGRAMS += extdom_cmocka_tests +endif endif extdom_tests_SOURCES = \ @@ -55,6 +66,22 @@ extdom_tests_LDADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +extdom_cmocka_tests_SOURCES = \ + ipa_extdom_cmocka_tests.c \ + ipa_extdom_common.c \ + $(NULL) +extdom_cmocka_tests_CFLAGS = $(CMOCKA_CFLAGS) +extdom_cmocka_tests_LDFLAGS = \ + -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ + $(NULL) +extdom_cmocka_tests_LDADD = \ + $(CMOCKA_LIBS) \ + $(LDAP_LIBS) \ + $(DIRSRV_LIBS) \ + $(SSSNSSIDMAP_LIBS) \ + $(NULL) + + appdir = $(IPA_DATA_DIR) app_DATA = \ ipa-extdom-extop-conf.ldif \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 56ca5009b1aa427f6c059b78ac392c768e461e2e..3231ee224f159545bb92a5c7433bae09869ad17e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -174,4 +174,12 @@ int check_request(struct extdom_req *req, enum extdom_version version); int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, struct berval **berval); int pack_response(struct extdom_res *res, struct berval **ret_val); +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len); +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c new file mode 100644 index 0000000000000000000000000000000000000000..2143f4a90a1ca61cf4612774f77a520b3793443b --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -0,0 +1,186 @@ +/* + 
Authors: + Sumit Bose + + Copyright (C) 2015 Red Hat + + Extdom tests + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . +*/ + +#include +#include +#include +#include +#include + +#include +#include + + +#include "ipa_extdom.h" + +#define MAX_BUF (1024*1024*1024) + +void test_getpwnam_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = getpwnam_r_wrapper(MAX_BUF, "non_exisiting_user", &pwd, &buf, + &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwnam_r_wrapper(MAX_BUF, "user", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = getpwnam_r_wrapper(MAX_BUF, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 1000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = getpwnam_r_wrapper(1024, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); +} + +void test_getpwuid_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = getpwuid_r_wrapper(MAX_BUF, 99999, &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwuid_r_wrapper(MAX_BUF, 12345, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = getpwuid_r_wrapper(MAX_BUF, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 1000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = getpwuid_r_wrapper(1024, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); +} + +void test_getgrnam_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = getgrnam_r_wrapper(MAX_BUF, "non_exisiting_group", &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrnam_r_wrapper(MAX_BUF, "group", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + 
assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = getgrnam_r_wrapper(MAX_BUF, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = getgrnam_r_wrapper(1024, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); +} + +void test_getgrgid_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = getgrgid_r_wrapper(MAX_BUF, 99999, &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrgid_r_wrapper(MAX_BUF, 11111, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = getgrgid_r_wrapper(MAX_BUF, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = getgrgid_r_wrapper(1024, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); +} + +int main(int argc, const char *argv[]) +{ + const UnitTest tests[] = { + unit_test(test_getpwnam_r_wrapper), + unit_test(test_getpwuid_r_wrapper), + unit_test(test_getgrnam_r_wrapper), + unit_test(test_getgrgid_r_wrapper), + }; + + return run_tests(tests); +} diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,6 +49,220 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' +#define MAX_BUF (1024*1024*1024) + + + +static int get_buffer(size_t *_buf_len, char **_buf) +{ + long pw_max; + long gr_max; + size_t buf_len; + char *buf; + + pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); + gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); + + if (pw_max == -1 && gr_max == -1) { + buf_len = 16384; + } else { + buf_len = MAX(pw_max, gr_max); + } + + buf = malloc(sizeof(char) * buf_len); + if (buf == NULL) { + return LDAP_OPERATIONS_ERROR; + } + + *_buf_len = buf_len; + *_buf = buf; + + return LDAP_SUCCESS; +} + +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + ret = get_buffer(&buf_len, &buf); + if (ret != LDAP_SUCCESS) { + return ENOMEM; + } + + while (buf != NULL + && (ret = getpwnam_r(name, pwd, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = 
ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + if (ret == 0) { + *_buf = buf; + *_buf_len = buf_len; + } else { + free(buf); + } + + return ret; +} + +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + ret = get_buffer(&buf_len, &buf); + if (ret != LDAP_SUCCESS) { + return ENOMEM; + } + + while (buf != NULL + && (ret = getpwuid_r(uid, pwd, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + if (ret == 0) { + *_buf = buf; + *_buf_len = buf_len; + } else { + free(buf); + } + + return ret; +} + +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + ret = get_buffer(&buf_len, &buf); + if (ret != LDAP_SUCCESS) { + return ENOMEM; + } + + while (buf != NULL + && (ret = getgrnam_r(name, grp, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + if (ret == 0) { + *_buf = buf; + *_buf_len = buf_len; + } else { + free(buf); + } + + return ret; +} + +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + ret = get_buffer(&buf_len, &buf); + if (ret != LDAP_SUCCESS) { + return ENOMEM; + } + + while (buf != NULL + && (ret = getgrgid_r(gid, grp, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + if (ret == 0) { + *_buf = buf; + *_buf_len = buf_len; + } else { + free(buf); + } + + return ret; +} int parse_request_data(struct berval *req_val, struct extdom_req **_req) { @@ -191,33 +405,6 @@ int check_request(struct extdom_req *req, enum extdom_version version) return LDAP_SUCCESS; } -static int get_buffer(size_t *_buf_len, char **_buf) -{ - long pw_max; - long gr_max; - size_t buf_len; - char *buf; - - pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); - gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); - - if (pw_max == -1 && gr_max == -1) { - buf_len = 16384; - } else { - buf_len = MAX(pw_max, gr_max); - } - - buf = malloc(sizeof(char) * buf_len); - if (buf == NULL) { - return LDAP_OPERATIONS_ERROR; - } - - *_buf_len = buf_len; - *_buf = buf; - - return LDAP_SUCCESS; -} - static int get_user_grouplist(const char *name, gid_t gid, size_t *_ngroups, gid_t **_groups ) { @@ -323,7 +510,6 @@ static int pack_ber_user(enum response_types response_type, size_t buf_len; char *buf = NULL; struct group grp; - struct group *grp_result; size_t c; char *locat; char *short_user_name = NULL; @@ -357,11 +543,6 @@ static int pack_ber_user(enum response_types response_type, 
goto done; } - ret = get_buffer(&buf_len, &buf); - if (ret != LDAP_SUCCESS) { - goto done; - } - ret = ber_printf(ber,"sss", gecos, homedir, shell); if (ret == -1) { ret = LDAP_OPERATIONS_ERROR; @@ -375,15 +556,11 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r(groups[c], &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } ret = ber_printf(ber, "s", grp.gr_name); if (ret == -1) { @@ -542,18 +719,12 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; char *buf = NULL; struct sss_nss_kv *kv_list = NULL; - ret = get_buffer(&buf_len, &buf); - if (ret != LDAP_SUCCESS) { - return ret; - } - if (request_type == REQ_SIMPLE) { ret = sss_nss_getsidbyid(uid, &sid_str, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID @@ -568,15 +739,11 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r(uid, &pwd, buf, buf_len, &pwd_result); + ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); @@ -610,18 +777,12 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, { int ret; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; char *buf = NULL; struct sss_nss_kv *kv_list = NULL; - ret = get_buffer(&buf_len, &buf); - if (ret != LDAP_SUCCESS) { - return ret; - } - if (request_type == REQ_SIMPLE) { ret = sss_nss_getsidbyid(gid, &sid_str, &id_type); if (ret != 0 || id_type != SSS_ID_TYPE_GID) { @@ -635,15 +796,11 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r(gid, &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); @@ -676,9 +833,7 @@ static int handle_sid_request(enum request_types request_type, const char *sid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *domain_name = NULL; char *fq_name = NULL; char *object_name = NULL; @@ -716,25 +871,15 @@ static int handle_sid_request(enum request_types request_type, const char *sid, goto done; } - ret = get_buffer(&buf_len, &buf); - if (ret != LDAP_SUCCESS) { - goto done; - } - switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 
0 || !(id_type == SSS_ID_TYPE_UID @@ -755,17 +900,12 @@ static int handle_sid_request(enum request_types request_type, const char *sid, pwd.pw_shell, kv_list, berval); break; case SSS_ID_TYPE_GID: - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID @@ -806,9 +946,7 @@ static int handle_name_request(enum request_types request_type, int ret; char *fq_name = NULL; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -837,20 +975,8 @@ static int handle_name_request(enum request_types request_type, ret = pack_ber_sid(sid_str, berval); } else { - ret = get_buffer(&buf_len, &buf); - if (ret != LDAP_SUCCESS) { - goto done; - } - - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); - if (ret != 0) { - /* according to the man page there are a couple of error codes - * which can indicate that the user was not found. To be on the - * safe side we fail back to the group lookup on all errors. */ - pwd_result = NULL; - } - - if (pwd_result != NULL) { + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID @@ -869,17 +995,16 @@ static int handle_name_request(enum request_types request_type, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); } else { /* no user entry found */ - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + /* according to the getpwnam() man page there are a couple of + * error codes which can indicate that the user was not found. To + * be on the safe side we fail back to the group lookup on all + * errors. 
*/ + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group new file mode 100644 index 0000000000000000000000000000000000000000..972a6ef390aceeee2e29dccb1fddef2f4ad10fb9 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group @@ -0,0 +1,2 @@ +group:x:11111:member0001,member0002 +group_big:x:22222:member0001,member0002,member0003,member0004,member0005,member0006,member0007,member0008,member0009,member0010,member0011,member0012,member0013,member0014,member0015,member0016,member0017,member0018,member0019,member0020,member0021,member0022,member0023,member0024,member0025,member0026,member0027,member0028,member0029,member0030,member0031,member0032,member0033,member0034,member0035,member0036,member0037,member0038,member0039,member0040,member0041,member0042,member0043,member0044,member0045,member0046,member0047,member0048,member0049,member0050,member0051,member0052,member0053,member0054,member0055,member0056,member0057,member0058,member0059,member0060,member0061,member0062,member0063,member0064,member0065,member0066,member0067,member0068,member0069,member0070,member0071,member0072,member0073,member0074,member0075,member0076,member0077,member0078,member0079,member0080,member0081,member0082,member0083,member0084,member0085,member0086,member0087,member0088,member0089,member0090,member0091,member0092,member0093,member0094,member0095,member0096,member0097,member0098,member0099,member0100,member0101,member0102,member0103,member0104,member0105,member0106,member0107,member0108,member0109,member0110,member0111,member0112,member0113,member0114,member0115,member0116,member0117,member0118,member0119,member0120,member0121,member0122,member0123,member0124,member0125,member0126,member0127,member0128,member0129,member0130,member0131,member0132,member0133,member0134,member0135,member0136,member0137,member0138,member0139,member0140,member0141,member0142,member0143,member0144,member0145,member0146,member0147,member0148,member0149,member0150,member0151,member0152,member0153,member0154,member0155,member0156,member0157,member0158,member0159,member0160,member0161,member0162,member0163,member0164,member0165,member0166,member0167,member0168,member0169,member0170,member0171,member0172,member0173,member0174,member0175,member0176,member0177,member0178,member0179,member0180,member0181,member0182,member0183,member0184,member0185,member0186,member0187,member0188,member0189,member0190,member0191,member0192,member0193,member0194,member0195,member0196,member0197,member0198,member0199,member0200,member0201,member0202,member0203,member0204,member0205,member0206,member0207,member0208,member0209,member0210,member0211,member0212,member0213,member0214,member0215,member0216,member0217,member0218,member0219,member0220,member0221,member0222,member0223,member0224,member0225,member0226,member0227,member0228,member0229,member0230,member0231,member0232,member0233,member0234,member0235,member0236,member0237,member0238,member0239,member0240,member0241,member0242,member0243,member0244,member0245,member0246,member0247,member0248,member0249,member0250,member0251,member0252,member0253,member0254,member0255,me
mber0256,member0257,member0258,member0259,member0260,member0261,member0262,member0263,member0264,member0265,member0266,member0267,member0268,member0269,member0270,member0271,member0272,member0273,member0274,member0275,member0276,member0277,member0278,member0279,member0280,member0281,member0282,member0283,member0284,member0285,member0286,member0287,member0288,member0289,member0290,member0291,member0292,member0293,member0294,member0295,member0296,member0297,member0298,member0299,member0300,member0301,member0302,member0303,member0304,member0305,member0306,member0307,member0308,member0309,member0310,member0311,member0312,member0313,member0314,member0315,member0316,member0317,member0318,member0319,member0320,member0321,member0322,member0323,member0324,member0325,member0326,member0327,member0328,member0329,member0330,member0331,member0332,member0333,member0334,member0335,member0336,member0337,member0338,member0339,member0340,member0341,member0342,member0343,member0344,member0345,member0346,member0347,member0348,member0349,member0350,member0351,member0352,member0353,member0354,member0355,member0356,member0357,member0358,member0359,member0360,member0361,member0362,member0363,member0364,member0365,member0366,member0367,member0368,member0369,member0370,member0371,member0372,member0373,member0374,member0375,member0376,member0377,member0378,member0379,member0380,member0381,member0382,member0383,member0384,member0385,member0386,member0387,member0388,member0389,member0390,member0391,member0392,member0393,member0394,member0395,member0396,member0397,member0398,member0399,member0400,member0401,member0402,member0403,member0404,member0405,member0406,member0407,member0408,member0409,member0410,member0411,member0412,member0413,member0414,member0415,member0416,member0417,member0418,member0419,member0420,member0421,member0422,member0423,member0424,member0425,member0426,member0427,member0428,member0429,member0430,member0431,member0432,member0433,member0434,member0435,member0436,member0437,member0438,member0439,member0440,member0441,member0442,member0443,member0444,member0445,member0446,member0447,member0448,member0449,member0450,member0451,member0452,member0453,member0454,member0455,member0456,member0457,member0458,member0459,member0460,member0461,member0462,member0463,member0464,member0465,member0466,member0467,member0468,member0469,member0470,member0471,member0472,member0473,member0474,member0475,member0476,member0477,member0478,member0479,member0480,member0481,member0482,member0483,member0484,member0485,member0486,member0487,member0488,member0489,member0490,member0491,member0492,member0493,member0494,member0495,member0496,member0497,member0498,member0499,member0500,member0501,member0502,member0503,member0504,member0505,member0506,member0507,member0508,member0509,member0510,member0511,member0512,member0513,member0514,member0515,member0516,member0517,member0518,member0519,member0520,member0521,member0522,member0523,member0524,member0525,member0526,member0527,member0528,member0529,member0530,member0531,member0532,member0533,member0534,member0535,member0536,member0537,member0538,member0539,member0540,member0541,member0542,member0543,member0544,member0545,member0546,member0547,member0548,member0549,member0550,member0551,member0552,member0553,member0554,member0555,member0556,member0557,member0558,member0559,member0560,member0561,member0562,member0563,member0564,member0565,member0566,member0567,member0568,member0569,member0570,member0571,member0572,member0573,member0574,member0575,member0576,member0577,member0578,memb
er0579,member0580,member0581,member0582,member0583,member0584,member0585,member0586,member0587,member0588,member0589,member0590,member0591,member0592,member0593,member0594,member0595,member0596,member0597,member0598,member0599,member0600,member0601,member0602,member0603,member0604,member0605,member0606,member0607,member0608,member0609,member0610,member0611,member0612,member0613,member0614,member0615,member0616,member0617,member0618,member0619,member0620,member0621,member0622,member0623,member0624,member0625,member0626,member0627,member0628,member0629,member0630,member0631,member0632,member0633,member0634,member0635,member0636,member0637,member0638,member0639,member0640,member0641,member0642,member0643,member0644,member0645,member0646,member0647,member0648,member0649,member0650,member0651,member0652,member0653,member0654,member0655,member0656,member0657,member0658,member0659,member0660,member0661,member0662,member0663,member0664,member0665,member0666,member0667,member0668,member0669,member0670,member0671,member0672,member0673,member0674,member0675,member0676,member0677,member0678,member0679,member0680,member0681,member0682,member0683,member0684,member0685,member0686,member0687,member0688,member0689,member0690,member0691,member0692,member0693,member0694,member0695,member0696,member0697,member0698,member0699,member0700,member0701,member0702,member0703,member0704,member0705,member0706,member0707,member0708,member0709,member0710,member0711,member0712,member0713,member0714,member0715,member0716,member0717,member0718,member0719,member0720,member0721,member0722,member0723,member0724,member0725,member0726,member0727,member0728,member0729,member0730,member0731,member0732,member0733,member0734,member0735,member0736,member0737,member0738,member0739,member0740,member0741,member0742,member0743,member0744,member0745,member0746,member0747,member0748,member0749,member0750,member0751,member0752,member0753,member0754,member0755,member0756,member0757,member0758,member0759,member0760,member0761,member0762,member0763,member0764,member0765,member0766,member0767,member0768,member0769,member0770,member0771,member0772,member0773,member0774,member0775,member0776,member0777,member0778,member0779,member0780,member0781,member0782,member0783,member0784,member0785,member0786,member0787,member0788,member0789,member0790,member0791,member0792,member0793,member0794,member0795,member0796,member0797,member0798,member0799,member0800,member0801,member0802,member0803,member0804,member0805,member0806,member0807,member0808,member0809,member0810,member0811,member0812,member0813,member0814,member0815,member0816,member0817,member0818,member0819,member0820,member0821,member0822,member0823,member0824,member0825,member0826,member0827,member0828,member0829,member0830,member0831,member0832,member0833,member0834,member0835,member0836,member0837,member0838,member0839,member0840,member0841,member0842,member0843,member0844,member0845,member0846,member0847,member0848,member0849,member0850,member0851,member0852,member0853,member0854,member0855,member0856,member0857,member0858,member0859,member0860,member0861,member0862,member0863,member0864,member0865,member0866,member0867,member0868,member0869,member0870,member0871,member0872,member0873,member0874,member0875,member0876,member0877,member0878,member0879,member0880,member0881,member0882,member0883,member0884,member0885,member0886,member0887,member0888,member0889,member0890,member0891,member0892,member0893,member0894,member0895,member0896,member0897,member0898,member0899,member0900,member0901,member
0902,member0903,member0904,member0905,member0906,member0907,member0908,member0909,member0910,member0911,member0912,member0913,member0914,member0915,member0916,member0917,member0918,member0919,member0920,member0921,member0922,member0923,member0924,member0925,member0926,member0927,member0928,member0929,member0930,member0931,member0932,member0933,member0934,member0935,member0936,member0937,member0938,member0939,member0940,member0941,member0942,member0943,member0944,member0945,member0946,member0947,member0948,member0949,member0950,member0951,member0952,member0953,member0954,member0955,member0956,member0957,member0958,member0959,member0960,member0961,member0962,member0963,member0964,member0965,member0966,member0967,member0968,member0969,member0970,member0971,member0972,member0973,member0974,member0975,member0976,member0977,member0978,member0979,member0980,member0981,member0982,member0983,member0984,member0985,member0986,member0987,member0988,member0989,member0990,member0991,member0992,member0993,member0994,member0995,member0996,member0997,member0998,member0999,member1000, diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd new file mode 100644 index 0000000000000000000000000000000000000000..5962ad57178ed4bc28df27bda8537b970dedee15 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd @@ -0,0 +1,2 @@ +user:x:12345:23456:gecos:/home/user:/bin/shell +user_big:x:12346:23457:gecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgec
osgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecos:/home/user_big:/bin/shell diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh new file mode 100644 index 0000000000000000000000000000000000000000..ad839f340efe989a91cd6902f59c9a41483f68e0 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh @@ -0,0 +1,3 @@ +export LD_PRELOAD=$(pkg-config --libs nss_wrapper) +export NSS_WRAPPER_PASSWD=./test_data/passwd +export NSS_WRAPPER_GROUP=./test_data/group -- 2.1.0 -------------- next part -------------- From 
39fcf6c01f8f7b50a4b03456fb7c7b7033d7f704 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Mar 2015 10:59:34 +0100 Subject: [PATCH 136/136] extdom: make nss buffer configurable The get*_r_wrapper() calls expect a maximum buffer size to avoid memory shortage if too many threads try to allocate buffers e.g. for large groups. With this patch this size can be configured by setting ipaExtdomMaxNssBufSize in the plugin config object cn=ipa_extdom_extop,cn=plugins,cn=config. Related to https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_common.c | 59 ++++++++++++++-------- .../ipa-extdom-extop/ipa_extdom_extop.c | 10 ++++ 3 files changed, 48 insertions(+), 22 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 3231ee224f159545bb92a5c7433bae09869ad17e..512633f045cba21d3593faa898739684da94b7a4 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -150,6 +150,7 @@ struct extdom_res { struct ipa_extdom_ctx { Slapi_ComponentId *plugin_id; char *base_dn; + size_t max_nss_buf_size; }; struct domain_info { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 84aeb28066f25f05a89d0c2d42e8b060e2399501..ea609d1d5d5a1d384ec9fbdb5e5469a90930df17 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,9 +49,6 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' -#define MAX_BUF (1024*1024*1024) - - static int get_buffer(size_t *_buf_len, char **_buf) { @@ -496,7 +493,8 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(enum response_types response_type, +static int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, const char *domain_name, const char *user_name, uid_t uid, gid_t gid, const char *gecos, const char *homedir, @@ -556,7 +554,8 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, + groups[c], &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -714,7 +713,8 @@ static int pack_ber_name(const char *domain_name, const char *name, return LDAP_SUCCESS; } -static int handle_uid_request(enum request_types request_type, uid_t uid, +static int handle_uid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { int ret; @@ -739,7 +739,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); + ret = getpwuid_r_wrapper(ctx->max_nss_buf_size, uid, &pwd, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -758,7 +759,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? 
RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -772,7 +774,8 @@ done: return ret; } -static int handle_gid_request(enum request_types request_type, gid_t gid, +static int handle_gid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { int ret; @@ -796,7 +799,8 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, gid, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -828,7 +832,8 @@ done: return ret; } -static int handle_sid_request(enum request_types request_type, const char *sid, +static int handle_sid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *sid, struct berval **berval) { int ret; @@ -874,7 +879,8 @@ static int handle_sid_request(enum request_types request_type, const char *sid, switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -893,14 +899,16 @@ static int handle_sid_request(enum request_types request_type, const char *sid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); break; case SSS_ID_TYPE_GID: - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -939,7 +947,8 @@ done: return ret; } -static int handle_name_request(enum request_types request_type, +static int handle_name_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) { @@ -975,7 +984,8 @@ static int handle_name_request(enum request_types request_type, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); @@ -989,7 +999,8 @@ static int handle_name_request(enum request_types request_type, goto done; } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -999,7 +1010,8 @@ static int handle_name_request(enum request_types request_type, * error codes which can indicate that the user was not found. To * be on the safe side we fail back to the group lookup on all * errors. 
*/ - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -1041,20 +1053,23 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(req->request_type, req->data.posix_uid.uid, + ret = handle_uid_request(ctx, req->request_type, + req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(req->request_type, req->data.posix_gid.gid, + ret = handle_gid_request(ctx, req->request_type, + req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); break; case INP_NAME: - ret = handle_name_request(req->request_type, req->data.name.object_name, + ret = handle_name_request(ctx, req->request_type, + req->data.name.object_name, req->data.name.domain_name, berval); break; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index aa66c145bc6cf2b77fdfe37be18da67588dc0439..e53f968db040a37fbd6a193f87b3671eeabda89d 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -40,6 +40,8 @@ #include "ipa_extdom.h" #include "util.h" +#define DEFAULT_MAX_NSS_BUFFER (128*1024*1024) + Slapi_PluginDesc ipa_extdom_plugin_desc = { IPA_EXTDOM_FEATURE_DESC, "FreeIPA project", @@ -185,6 +187,14 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) goto done; } + ctx->max_nss_buf_size = slapi_entry_attr_get_uint(e, + "ipaExtdomMaxNssBufSize"); + if (ctx->max_nss_buf_size == 0) { + ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; + } + LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); + + ret = 0; done: if (ret) { -- 2.1.0 From mbasti at redhat.com Mon Mar 2 17:54:26 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 02 Mar 2015 18:54:26 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F49DD8.4090801@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> Message-ID: <54F4A3D2.50207@redhat.com> On 02/03/15 18:28, Martin Kosek wrote: > On 03/02/2015 06:12 PM, Martin Basti wrote: >> On 02/03/15 15:43, Rob Crittenden wrote: >>> Martin Basti wrote: > ... >>> But you haven't explained any case why LDAPI would fail. If LDAPI fails >>> then you've got more serious problems that I'm not sure binding as DM is >>> going to solve. >>> >>> The only case where DM would be handy IMHO is either some worst case >>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >>> want to allow running LDAP updates as non-root. >> I don't know cases when LDAPI would failed, except the case LDAPI is >> misconfigured by user, or disabled by user. > Wasn't LDAPI needed for the DM password less upgrade so that upgrader could > simply bind as root with EXTERNAL auth? 
We can do the upgrade both ways, using LDAPI or using the DM password; LDAPI is preferred. The question is, what is the use case for using the DM password instead of LDAPI during upgrade? > >> It is not big effort to keep both DM binding and LDAPI in code. A user can >> always found som unexpected use case for LDAP update with DM password. >> >>>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>>> upgrade? In a non-container world it might be surprising to have an >>>>> upgrade happen esp since upgrades take a while. >>>> In non-container enviroment, we can still use upgrade during RPM >>>> transaction. >>>> >>>> So you suggest not to do update automaticaly, just write Error the IPA >>>> upgrade is required? >>> People do all sorts of strange things. Installing the packages with >>> --no-script isn't in the range of impossible. A prompt, and I'm not >>> saying it's a great idea, is 2 lines of code. >>> >>> I guess it just makes me nervous. >> So lets summarize this: >> * DO upgrade if possible during RPM transaction > Umm, I thought we want to get rid of running upgrade during RPM transaction. It > is extremely difficult to debug upgrade stuck during RPM transaction, it also > makes RPM upgrade run longer than needed. It also makes admins nervous when > their rpm upgrade is suddenly waiting right before the end. I even see the > fingers slowly reaching to CTRL+C combo... (You can see the consequences) People are used to having IPA upgraded and ready after an RPM upgrade. They may be shocked if the IPA services are left shut down after the RPM transaction. I have no more objections. > >> * ipactl will NOT run upgrade, just print Error: 'please upgrade ....' >> * User has to run ipa-server-upgrade manually >> >> Does I understand it correctly? >>>>> With --skip-version-check what sorts of problems can we foresee? I >>>>> assume a big warning will be added to at least the man page, if not >>>>> the cli? >>>> For this big warning everywhere. >>>> The main problem is try to run older IPA with newer data. In containers >>>> the problem is to run different platform specific versions which differ >>>> in functionality/bugfixes etc.. >>> Ok, pretty much the things I was thinking as well. A scary warning >>> definitely seems warranted. >>> >>>>> Where does platform come from? I'm wondering how Debian will handle this. >>>> platform is derived from ipaplatform file which is used with the >>>> particular build. Debian should have own platform file. >>> Ok, I'd add that detail to the design. >>> >>> rob >> -- Martin Basti From abokovoy at redhat.com Mon Mar 2 19:02:32 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 2 Mar 2015 21:02:32 +0200 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150302174507.GK3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> Message-ID: <20150302190232.GF25455@redhat.com> On Mon, 02 Mar 2015, Sumit Bose wrote: >Hi, > >the attached patches add ERANGE handling to getpwnam(), getpwuid(), >getgrnam() and getgrgid(). > >I added a configurable limit to the buffer allocation which defaults to >128MB. If you think this limit should be change or there should be no >limit at all please let me know. I'd recommend increasing the minimal buffer size to 16KB or the sysconf value, whichever is higher (the current sysconf initial value is 1KB, which is low for our needs). 128MB as the maximum before bailing out seems OK. I haven't reviewed the rest of the patches yet.
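
For reference, the retry scheme behind the get*_r_wrapper() helpers in these patches boils down to roughly the sketch below. This is an illustration only, not the patch as posted: the function name, the errno-style return codes and the 16KB floor are mine, and the max_buf argument stands for the hard-coded MAX_BUF from patch 135 or, once patch 136 is applied, for the value read from ipaExtdomMaxNssBufSize on cn=ipa_extdom_extop,cn=plugins,cn=config (128MB by default).

#include <errno.h>
#include <pwd.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

static int getpwnam_r_sketch(size_t max_buf, const char *name,
                             struct passwd *pwd, char **_buf, size_t *_buf_len)
{
    struct passwd *result = NULL;
    char *buf = NULL;
    char *tmp;
    size_t buf_len;
    long sc;
    int ret;

    /* Start from the sysconf() hint, but never below 16 KiB. */
    sc = sysconf(_SC_GETPW_R_SIZE_MAX);
    buf_len = (sc > 0) ? (size_t) sc : 1024;
    if (buf_len < 16 * 1024) {
        buf_len = 16 * 1024;
    }

    for (;;) {
        if (buf_len > max_buf) {
            ret = ERANGE;          /* would exceed the configured maximum */
            goto fail;
        }

        tmp = realloc(buf, buf_len);
        if (tmp == NULL) {
            ret = ENOMEM;
            goto fail;
        }
        buf = tmp;

        ret = getpwnam_r(name, pwd, buf, buf_len, &result);
        if (ret == 0 && result == NULL) {
            ret = ENOENT;          /* no such user */
        }
        if (ret != ERANGE) {
            break;                 /* success or a non-recoverable error */
        }
        buf_len *= 2;              /* buffer too small, grow and retry */
    }

    if (ret != 0) {
        goto fail;
    }

    *_buf = buf;                   /* caller owns and frees the buffer */
    *_buf_len = buf_len;
    return 0;

fail:
    free(buf);
    return ret;
}

The same pattern would apply to getpwuid_r(), getgrnam_r() and getgrgid_r(); as in the posted patches, the callers then map a non-zero return to LDAP_NO_SUCH_OBJECT as before.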
-- / Alexander Bokovoy From jcholast at redhat.com Tue Mar 3 06:31:08 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 07:31:08 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F45CE5.1000108@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> Message-ID: <54F5552C.9040508@redhat.com> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): > On 02/03/15 13:12, Jan Cholasta wrote: >> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>> Hi, >>>> >>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>> Hello all, >>>>> >>>>> please read the design page, any objections/suggestions appreciated >>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>> >>>> >>>> 1) >>>> >>>> " >>>> * Merge server update commands into the one command >>>> (ipa-server-upgrade) >>>> " >>>> >>>> So there is "ipa-server-install" to install the server, >>>> "ipa-server-install >>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>> upgrade the >>>> server. Maybe we should bring some consistency here and have one of: >>>> >>>> a) "ipa-server-install [--install]", "ipa-server-install >>>> --uninstall", >>>> "ipa-server-install --upgrade" >>>> >>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>> "ipa-server-install upgrade" >>>> >>>> c) "ipa-server-install", "ipa-server-uninstall", "ipa-server-upgrade" >>> >>> Long term, I think we want C. Besides other advantages, it will let >>> us have >>> independent sets of options, based on what you want to do. >>> >>>> 2) >>>> >>>> " >>>> * Prevent to run IPA service, if code version and configuration >>>> version does >>>> not match >>>> * ipactl should execute ipa-server-upgrade if needed >>>> " >>>> >>>> There should be no configuration version, configuration update >>>> should be run >>>> always. It's fast and hence does not need to be optimized like data >>>> update by >>>> using a monolithic version number, which brings more than a few >>>> problems on its >>>> own. >>> >>> I do not agree in this section. Why would you like to run it always, >>> even if it >>> was fast? No run is still faster than fast run. >> >> In the ideal case the installer would be idempotent and upgrade would >> be re-running the installer and we should aim to do just that. We kind >> of do that already, but there is a lot of code duplication in >> installers and ipa-upgradeconfig (I would like to fix that when >> refactoring installers). IMO it's better to always make 100% sure the >> configuration is correct rather than to save a second or two. > I doesn't like this idea, if user wants to fix something, the one should > use --skip-version-check option, and the IPA upgrade will be executed. Well, what I don't like is dealing with meaningless version numbers. They are causing us grief in API versioning and I don't see why it would be any different here. > What if a service changes in a way, the IPA configuration will not work? Then it's a bug and needs to be fixed, like any other bug. IIRC there was only one or two occurences of such bug in the past 3 years (I remember sshd_config), so I don't think you have a strong case here. > The user will need to change it manually, but after each restart, > upgrade will change the value back into IPA required configuration which > will not work. Says who? 
It's our code, we can do whatever we want, it doesn't have to be dumb like this. > Yes, we have upgrade state file, but then the comparing of one value is > faster then checking each state if was executed. How faster is that, like, few milliseconds? Are you seriously considering this the right optimization in a process that is magnitudes slower? > > My personal opinion is, application should not try to fix itself every > restart. One could say that application should not try to upgrade itself every restart, but here we are, doing it anyway... >> >>> In the past, I do not recall >>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>> related >>> updates were really slow thank to service restarts, etc. >> >> Correct, but I was talking about configuration file updates, not >> (re)starts, which have to always be done in ipactl anyway. >> >>> >>>> 3) >>>> >>>> " >>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>> " >>>> >>>> Even without arguments? Is ipactl now the only right place to >>>> trigger manual >>>> update? > Sorry, I will add more details there, if this is not clear. > ipa-upgrateconfig will be removed > ipa-ldap-updater will not be able to do overall update, you will need to > specify options and update file. >>>> >>>> 4) >>>> >>>> " >>>> Plugins are called from update files, using new directive >>>> update-plugin: >>>> " >>>> >>>> Why "update-plugin" and not just "plugin"? Do you expect other kinds >>>> of plugins >>>> to be called from update files in the future? (I certainly don't.) >>> >>> I have no strong feelings on this one, but IMO it is always better to >>> have some >>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>> based >>> capability... >> >> I doubt that will happen, but if it does, we can always add >> "plan-b-plugin" directive. > I do not insist on "update-plugin", I just wanted to be more specific > which type of plugin is expected there. Well, the names of the files end with .update and they are located in /usr/share/ipa/updates, I think that's enough hints as to what type of plugin is expected. >> >>> >>>> 5) >>>> >>>> " >>>> New class UpdatePlugin is used for all update plugins. >>>> " >>>> >>>> Just reuse the existing Updater class, no need to reinvent the wheel. >>>> >>>> 6) >>>> >>>> I wonder why configuration update is done after data update and not >>>> before. I >>>> know it's been like that for a long time, but it seems kind of >>>> unnatural to me, >>>> especially now when schema update is separate from data update. (Rob?) > We need schema update first, but I haven't found any services which need > to have updated data (I might be wrong) >>>> >>>> 7) >>>> >>>> " >>>> keep --test option and fix the plugins which do not respect the option >>>> " >>>> >>>> Just a note, I believe this ticket is related: >>>> . >>>> >>>> >>>> Good work overall! 
>>>> >>>> Honza >>>> >>> >> >> > > -- Jan Cholasta From mbabinsk at redhat.com Tue Mar 3 07:58:33 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Tue, 03 Mar 2015 08:58:33 +0100 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54F4818E.1060408@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> <54F47E15.8060700@redhat.com> <54F4818E.1060408@redhat.com> Message-ID: <54F569A9.6010609@redhat.com> On 03/02/2015 04:28 PM, Rob Crittenden wrote: > Petr Vobornik wrote: >>>>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 >> >> this patch seems to be a bit forgotten. >> >> It works, looks fine. >> >> One minor issue: trailing whitespaces in the man page. >> >> I also wonder if it shouldn't be used in other tools which call kinit >> with keytab: >> * ipa-client-automount:434 >> * ipa-client-install:2591 (this usage should be fine since it's used for >> server installation) >> * dcerpc.py:545 >> * rpcserver.py: 971, 981 (armor for web ui forms base auth) >> >> Most importantly the ipa-client-automount because it's called from >> ipa-client-install (if location is specified) and therefore it might >> fail during client installation. >> >> Or also, kinit call with admin creadentials worked for the user but I >> wonder if it was just a coincidence and may break under slightly >> different but similar conditions. > > I think that's a fine idea. In fact there is already a function that > could be extended, kinit_hostprincipal(). > > rob > So in principle we could add multiple TGT retries to "kinit_hostprincipal()" and then plug this function to all the places Petr mentioned in order to provide this functionality each time TGT is requested using keytab. Do I understand it correctly? -- Martin^3 Babinsky From mbasti at redhat.com Tue Mar 3 08:06:47 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 09:06:47 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5552C.9040508@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> Message-ID: <54F56B97.3080709@redhat.com> On 03/03/15 07:31, Jan Cholasta wrote: > Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >> On 02/03/15 13:12, Jan Cholasta wrote: >>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>> Hi, >>>>> >>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>> Hello all, >>>>>> >>>>>> please read the design page, any objections/suggestions appreciated >>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>> >>>>> >>>>> 1) >>>>> >>>>> " >>>>> * Merge server update commands into the one command >>>>> (ipa-server-upgrade) >>>>> " >>>>> >>>>> So there is "ipa-server-install" to install the server, >>>>> "ipa-server-install >>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>> upgrade the >>>>> server. 
Maybe we should bring some consistency here and have one of: >>>>> >>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>> --uninstall", >>>>> "ipa-server-install --upgrade" >>>>> >>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>> "ipa-server-install upgrade" >>>>> >>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>> "ipa-server-upgrade" >>>> >>>> Long term, I think we want C. Besides other advantages, it will let >>>> us have >>>> independent sets of options, based on what you want to do. >>>> >>>>> 2) >>>>> >>>>> " >>>>> * Prevent to run IPA service, if code version and configuration >>>>> version does >>>>> not match >>>>> * ipactl should execute ipa-server-upgrade if needed >>>>> " >>>>> >>>>> There should be no configuration version, configuration update >>>>> should be run >>>>> always. It's fast and hence does not need to be optimized like data >>>>> update by >>>>> using a monolithic version number, which brings more than a few >>>>> problems on its >>>>> own. >>>> >>>> I do not agree in this section. Why would you like to run it always, >>>> even if it >>>> was fast? No run is still faster than fast run. >>> >>> In the ideal case the installer would be idempotent and upgrade would >>> be re-running the installer and we should aim to do just that. We kind >>> of do that already, but there is a lot of code duplication in >>> installers and ipa-upgradeconfig (I would like to fix that when >>> refactoring installers). IMO it's better to always make 100% sure the >>> configuration is correct rather than to save a second or two. >> I doesn't like this idea, if user wants to fix something, the one should >> use --skip-version-check option, and the IPA upgrade will be executed. > > Well, what I don't like is dealing with meaningless version numbers. > They are causing us grief in API versioning and I don't see why it > would be any different here. However you must keep the version because of schema and data upgrade, so why not to execute update as one batch instead of doing config upgrade all the time, and then data upgrade only if required. Some configuration upgrades, like adding new DNS related services, requires new schema, how we can handle this? Running schema upgrade every time? > >> What if a service changes in a way, the IPA configuration will not work? > > Then it's a bug and needs to be fixed, like any other bug. IIRC there > was only one or two occurences of such bug in the past 3 years (I > remember sshd_config), so I don't think you have a strong case here. Ok > >> The user will need to change it manually, but after each restart, >> upgrade will change the value back into IPA required configuration which >> will not work. > > Says who? It's our code, we can do whatever we want, it doesn't have > to be dumb like this. > >> Yes, we have upgrade state file, but then the comparing of one value is >> faster then checking each state if was executed. > > How faster is that, like, few milliseconds? Are you seriously > considering this the right optimization in a process that is > magnitudes slower? Ok the speed is not so important, but I still do not like the idea of executing the code which is not needed to be executed, because I know the version is the same as was before last restart, so nothing changed. > >> >> My personal opinion is, application should not try to fix itself every >> restart. > > One could say that application should not try to upgrade itself every > restart, but here we are, doing it anyway... 
I want to do upgrade only if needed not every restart. > >>> >>>> In the past, I do not recall >>>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>>> related >>>> updates were really slow thank to service restarts, etc. >>> >>> Correct, but I was talking about configuration file updates, not >>> (re)starts, which have to always be done in ipactl anyway. >>> >>>> >>>>> 3) >>>>> >>>>> " >>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>> " >>>>> >>>>> Even without arguments? Is ipactl now the only right place to >>>>> trigger manual >>>>> update? >> Sorry, I will add more details there, if this is not clear. >> ipa-upgrateconfig will be removed >> ipa-ldap-updater will not be able to do overall update, you will need to >> specify options and update file. >>>>> >>>>> 4) >>>>> >>>>> " >>>>> Plugins are called from update files, using new directive >>>>> update-plugin: >>>>> " >>>>> >>>>> Why "update-plugin" and not just "plugin"? Do you expect other kinds >>>>> of plugins >>>>> to be called from update files in the future? (I certainly don't.) >>>> >>>> I have no strong feelings on this one, but IMO it is always better to >>>> have some >>>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>>> based >>>> capability... >>> >>> I doubt that will happen, but if it does, we can always add >>> "plan-b-plugin" directive. >> I do not insist on "update-plugin", I just wanted to be more specific >> which type of plugin is expected there. > > Well, the names of the files end with .update and they are located in > /usr/share/ipa/updates, I think that's enough hints as to what type of > plugin is expected. > okay >>> >>>> >>>>> 5) >>>>> >>>>> " >>>>> New class UpdatePlugin is used for all update plugins. >>>>> " >>>>> >>>>> Just reuse the existing Updater class, no need to reinvent the wheel. >>>>> >>>>> 6) >>>>> >>>>> I wonder why configuration update is done after data update and not >>>>> before. I >>>>> know it's been like that for a long time, but it seems kind of >>>>> unnatural to me, >>>>> especially now when schema update is separate from data update. >>>>> (Rob?) >> We need schema update first, but I haven't found any services which need >> to have updated data (I might be wrong) >>>>> >>>>> 7) >>>>> >>>>> " >>>>> keep --test option and fix the plugins which do not respect the >>>>> option >>>>> " >>>>> >>>>> Just a note, I believe this ticket is related: >>>>> . >>>>> >>>>> >>>>> Good work overall! 
>>>>> >>>>> Honza >>>>> >>>> >>> >>> >> >> > > -- Martin Basti From jcholast at redhat.com Tue Mar 3 08:33:30 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 09:33:30 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F56B97.3080709@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> Message-ID: <54F571DA.6060801@redhat.com> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): > On 03/03/15 07:31, Jan Cholasta wrote: >> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>> On 02/03/15 13:12, Jan Cholasta wrote: >>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>> Hi, >>>>>> >>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>> Hello all, >>>>>>> >>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>> >>>>>> >>>>>> 1) >>>>>> >>>>>> " >>>>>> * Merge server update commands into the one command >>>>>> (ipa-server-upgrade) >>>>>> " >>>>>> >>>>>> So there is "ipa-server-install" to install the server, >>>>>> "ipa-server-install >>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>> upgrade the >>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>> >>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>> --uninstall", >>>>>> "ipa-server-install --upgrade" >>>>>> >>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>> "ipa-server-install upgrade" >>>>>> >>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>> "ipa-server-upgrade" >>>>> >>>>> Long term, I think we want C. Besides other advantages, it will let >>>>> us have >>>>> independent sets of options, based on what you want to do. >>>>> >>>>>> 2) >>>>>> >>>>>> " >>>>>> * Prevent to run IPA service, if code version and configuration >>>>>> version does >>>>>> not match >>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>> " >>>>>> >>>>>> There should be no configuration version, configuration update >>>>>> should be run >>>>>> always. It's fast and hence does not need to be optimized like data >>>>>> update by >>>>>> using a monolithic version number, which brings more than a few >>>>>> problems on its >>>>>> own. >>>>> >>>>> I do not agree in this section. Why would you like to run it always, >>>>> even if it >>>>> was fast? No run is still faster than fast run. >>>> >>>> In the ideal case the installer would be idempotent and upgrade would >>>> be re-running the installer and we should aim to do just that. We kind >>>> of do that already, but there is a lot of code duplication in >>>> installers and ipa-upgradeconfig (I would like to fix that when >>>> refactoring installers). IMO it's better to always make 100% sure the >>>> configuration is correct rather than to save a second or two. >>> I doesn't like this idea, if user wants to fix something, the one should >>> use --skip-version-check option, and the IPA upgrade will be executed. >> >> Well, what I don't like is dealing with meaningless version numbers. >> They are causing us grief in API versioning and I don't see why it >> would be any different here. 
> However you must keep the version because of schema and data upgrade, so > why not to execute update as one batch instead of doing config upgrade > all the time, and then data upgrade only if required. Because there is no exact mapping between version number and what features are actually available. A state file is tons better than a single version number. > > Some configuration upgrades, like adding new DNS related services, > requires new schema, how we can handle this? This does not sound right. Could you be more specific? > Running schema upgrade every time? >> >>> What if a service changes in a way, the IPA configuration will not work? >> >> Then it's a bug and needs to be fixed, like any other bug. IIRC there >> was only one or two occurences of such bug in the past 3 years (I >> remember sshd_config), so I don't think you have a strong case here. > Ok >> >>> The user will need to change it manually, but after each restart, >>> upgrade will change the value back into IPA required configuration which >>> will not work. >> >> Says who? It's our code, we can do whatever we want, it doesn't have >> to be dumb like this. >> >>> Yes, we have upgrade state file, but then the comparing of one value is >>> faster then checking each state if was executed. >> >> How faster is that, like, few milliseconds? Are you seriously >> considering this the right optimization in a process that is >> magnitudes slower? > Ok the speed is not so important, but I still do not like the idea of > executing the code which is not needed to be executed, because I know > the version is the same as was before last restart, so nothing changed. Weren't "clever" optimizations like this what got us into this whole refactoring bussiness in the first place? >> >>> >>> My personal opinion is, application should not try to fix itself every >>> restart. >> >> One could say that application should not try to upgrade itself every >> restart, but here we are, doing it anyway... > I want to do upgrade only if needed not every restart. If there is nothing to upgrade, nothing will be upgraded. The effect is the same. >> >>>> >>>>> In the past, I do not recall >>>>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>>>> related >>>>> updates were really slow thank to service restarts, etc. >>>> >>>> Correct, but I was talking about configuration file updates, not >>>> (re)starts, which have to always be done in ipactl anyway. >>>> >>>>> >>>>>> 3) >>>>>> >>>>>> " >>>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>>> " >>>>>> >>>>>> Even without arguments? Is ipactl now the only right place to >>>>>> trigger manual >>>>>> update? >>> Sorry, I will add more details there, if this is not clear. >>> ipa-upgrateconfig will be removed >>> ipa-ldap-updater will not be able to do overall update, you will need to >>> specify options and update file. >>>>>> >>>>>> 4) >>>>>> >>>>>> " >>>>>> Plugins are called from update files, using new directive >>>>>> update-plugin: >>>>>> " >>>>>> >>>>>> Why "update-plugin" and not just "plugin"? Do you expect other kinds >>>>>> of plugins >>>>>> to be called from update files in the future? (I certainly don't.) >>>>> >>>>> I have no strong feelings on this one, but IMO it is always better to >>>>> have some >>>>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>>>> based >>>>> capability... >>>> >>>> I doubt that will happen, but if it does, we can always add >>>> "plan-b-plugin" directive. 
>>> I do not insist on "update-plugin", I just wanted to be more specific >>> which type of plugin is expected there. >> >> Well, the names of the files end with .update and they are located in >> /usr/share/ipa/updates, I think that's enough hints as to what type of >> plugin is expected. >> > okay >>>> >>>>> >>>>>> 5) >>>>>> >>>>>> " >>>>>> New class UpdatePlugin is used for all update plugins. >>>>>> " >>>>>> >>>>>> Just reuse the existing Updater class, no need to reinvent the wheel. >>>>>> >>>>>> 6) >>>>>> >>>>>> I wonder why configuration update is done after data update and not >>>>>> before. I >>>>>> know it's been like that for a long time, but it seems kind of >>>>>> unnatural to me, >>>>>> especially now when schema update is separate from data update. >>>>>> (Rob?) >>> We need schema update first, but I haven't found any services which need >>> to have updated data (I might be wrong) >>>>>> >>>>>> 7) >>>>>> >>>>>> " >>>>>> keep --test option and fix the plugins which do not respect the >>>>>> option >>>>>> " >>>>>> >>>>>> Just a note, I believe this ticket is related: >>>>>> . >>>>>> >>>>>> >>>>>> Good work overall! >>>>>> >>>>>> Honza >>>>>> >>>>> >>>> >>>> >>> >>> >> >> > > -- Jan Cholasta From pspacek at redhat.com Tue Mar 3 08:36:13 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 03 Mar 2015 09:36:13 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F571DA.6060801@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> Message-ID: <54F5727D.7020005@redhat.com> On 3.3.2015 09:33, Jan Cholasta wrote: > Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >> On 03/03/15 07:31, Jan Cholasta wrote: >>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>> Hello all, >>>>>>>> >>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>> >>>>>>> >>>>>>> 1) >>>>>>> >>>>>>> " >>>>>>> * Merge server update commands into the one command >>>>>>> (ipa-server-upgrade) >>>>>>> " >>>>>>> >>>>>>> So there is "ipa-server-install" to install the server, >>>>>>> "ipa-server-install >>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>> upgrade the >>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>> >>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>> --uninstall", >>>>>>> "ipa-server-install --upgrade" >>>>>>> >>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>> "ipa-server-install upgrade" >>>>>>> >>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>> "ipa-server-upgrade" >>>>>> >>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>> us have >>>>>> independent sets of options, based on what you want to do. 
>>>>>> >>>>>>> 2) >>>>>>> >>>>>>> " >>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>> version does >>>>>>> not match >>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>> " >>>>>>> >>>>>>> There should be no configuration version, configuration update >>>>>>> should be run >>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>> update by >>>>>>> using a monolithic version number, which brings more than a few >>>>>>> problems on its >>>>>>> own. >>>>>> >>>>>> I do not agree in this section. Why would you like to run it always, >>>>>> even if it >>>>>> was fast? No run is still faster than fast run. >>>>> >>>>> In the ideal case the installer would be idempotent and upgrade would >>>>> be re-running the installer and we should aim to do just that. We kind >>>>> of do that already, but there is a lot of code duplication in >>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>> configuration is correct rather than to save a second or two. >>>> I doesn't like this idea, if user wants to fix something, the one should >>>> use --skip-version-check option, and the IPA upgrade will be executed. >>> >>> Well, what I don't like is dealing with meaningless version numbers. >>> They are causing us grief in API versioning and I don't see why it >>> would be any different here. >> However you must keep the version because of schema and data upgrade, so >> why not to execute update as one batch instead of doing config upgrade >> all the time, and then data upgrade only if required. > > Because there is no exact mapping between version number and what features are > actually available. A state file is tons better than a single version number. > >> >> Some configuration upgrades, like adding new DNS related services, >> requires new schema, how we can handle this? > > This does not sound right. Could you be more specific? > >> Running schema upgrade every time? >>> >>>> What if a service changes in a way, the IPA configuration will not work? >>> >>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>> was only one or two occurences of such bug in the past 3 years (I >>> remember sshd_config), so I don't think you have a strong case here. >> Ok >>> >>>> The user will need to change it manually, but after each restart, >>>> upgrade will change the value back into IPA required configuration which >>>> will not work. >>> >>> Says who? It's our code, we can do whatever we want, it doesn't have >>> to be dumb like this. >>> >>>> Yes, we have upgrade state file, but then the comparing of one value is >>>> faster then checking each state if was executed. >>> >>> How faster is that, like, few milliseconds? Are you seriously >>> considering this the right optimization in a process that is >>> magnitudes slower? >> Ok the speed is not so important, but I still do not like the idea of >> executing the code which is not needed to be executed, because I know >> the version is the same as was before last restart, so nothing changed. > > Weren't "clever" optimizations like this what got us into this whole > refactoring bussiness in the first place? I very much agree with Honza. We should always start with something stupidly-simply and enhance it later, when it is clear if it is really necessary. Do not over-engineer it from the very beginning. 
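To make the stupidly-simple, run-it-every-time option concrete: below is a minimal sketch (plain Python, not FreeIPA code; the helper name and file handling are invented for illustration) of a configuration step that is safe to repeat on every restart, because it only writes when the wanted content is actually missing.

import os
import tempfile

def ensure_config_line(path, wanted_line):
    """Idempotent step: make sure wanted_line is present in path.

    Safe to run on every restart: if the line is already there the file
    is not touched at all.  Returns True only when something changed."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()

    if wanted_line in lines:
        return False          # nothing to upgrade -> nothing is upgraded

    lines.append(wanted_line)
    # write to a temporary file and rename, so an interrupted upgrade
    # cannot leave a half-written config behind
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(lines) + "\n")
    os.rename(tmp, path)
    return True

A step written like this also speaks to the "application should not fix itself every restart" objection: the decision is made per item against the actual state of the system, not against a global version number.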
Petr^2 Spacek >>>> My personal opinion is, application should not try to fix itself every >>>> restart. >>> >>> One could say that application should not try to upgrade itself every >>> restart, but here we are, doing it anyway... >> I want to do upgrade only if needed not every restart. > > If there is nothing to upgrade, nothing will be upgraded. The effect is the same. > >>> >>>>> >>>>>> In the past, I do not recall >>>>>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>>>>> related >>>>>> updates were really slow thank to service restarts, etc. >>>>> >>>>> Correct, but I was talking about configuration file updates, not >>>>> (re)starts, which have to always be done in ipactl anyway. >>>>> >>>>>> >>>>>>> 3) >>>>>>> >>>>>>> " >>>>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>>>> " >>>>>>> >>>>>>> Even without arguments? Is ipactl now the only right place to >>>>>>> trigger manual >>>>>>> update? >>>> Sorry, I will add more details there, if this is not clear. >>>> ipa-upgrateconfig will be removed >>>> ipa-ldap-updater will not be able to do overall update, you will need to >>>> specify options and update file. >>>>>>> >>>>>>> 4) >>>>>>> >>>>>>> " >>>>>>> Plugins are called from update files, using new directive >>>>>>> update-plugin: >>>>>>> " >>>>>>> >>>>>>> Why "update-plugin" and not just "plugin"? Do you expect other kinds >>>>>>> of plugins >>>>>>> to be called from update files in the future? (I certainly don't.) >>>>>> >>>>>> I have no strong feelings on this one, but IMO it is always better to >>>>>> have some >>>>>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>>>>> based >>>>>> capability... >>>>> >>>>> I doubt that will happen, but if it does, we can always add >>>>> "plan-b-plugin" directive. >>>> I do not insist on "update-plugin", I just wanted to be more specific >>>> which type of plugin is expected there. >>> >>> Well, the names of the files end with .update and they are located in >>> /usr/share/ipa/updates, I think that's enough hints as to what type of >>> plugin is expected. >>> >> okay >>>>> >>>>>> >>>>>>> 5) >>>>>>> >>>>>>> " >>>>>>> New class UpdatePlugin is used for all update plugins. >>>>>>> " >>>>>>> >>>>>>> Just reuse the existing Updater class, no need to reinvent the wheel. >>>>>>> >>>>>>> 6) >>>>>>> >>>>>>> I wonder why configuration update is done after data update and not >>>>>>> before. I >>>>>>> know it's been like that for a long time, but it seems kind of >>>>>>> unnatural to me, >>>>>>> especially now when schema update is separate from data update. >>>>>>> (Rob?) >>>> We need schema update first, but I haven't found any services which need >>>> to have updated data (I might be wrong) >>>>>>> >>>>>>> 7) >>>>>>> >>>>>>> " >>>>>>> keep --test option and fix the plugins which do not respect the >>>>>>> option >>>>>>> " >>>>>>> >>>>>>> Just a note, I believe this ticket is related: >>>>>>> . >>>>>>> >>>>>>> >>>>>>> Good work overall! 
>>>>>>> >>>>>>> Honza From mbasti at redhat.com Tue Mar 3 08:55:52 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 09:55:52 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F571DA.6060801@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> Message-ID: <54F57718.2010607@redhat.com> On 03/03/15 09:33, Jan Cholasta wrote: > Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >> On 03/03/15 07:31, Jan Cholasta wrote: >>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>> Hello all, >>>>>>>> >>>>>>>> please read the design page, any objections/suggestions >>>>>>>> appreciated >>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>> >>>>>>> >>>>>>> 1) >>>>>>> >>>>>>> " >>>>>>> * Merge server update commands into the one command >>>>>>> (ipa-server-upgrade) >>>>>>> " >>>>>>> >>>>>>> So there is "ipa-server-install" to install the server, >>>>>>> "ipa-server-install >>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>> upgrade the >>>>>>> server. Maybe we should bring some consistency here and have one >>>>>>> of: >>>>>>> >>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>> --uninstall", >>>>>>> "ipa-server-install --upgrade" >>>>>>> >>>>>>> b) "ipa-server-install [install]", "ipa-server-install >>>>>>> uninstall", >>>>>>> "ipa-server-install upgrade" >>>>>>> >>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>> "ipa-server-upgrade" >>>>>> >>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>> us have >>>>>> independent sets of options, based on what you want to do. >>>>>> >>>>>>> 2) >>>>>>> >>>>>>> " >>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>> version does >>>>>>> not match >>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>> " >>>>>>> >>>>>>> There should be no configuration version, configuration update >>>>>>> should be run >>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>> update by >>>>>>> using a monolithic version number, which brings more than a few >>>>>>> problems on its >>>>>>> own. >>>>>> >>>>>> I do not agree in this section. Why would you like to run it always, >>>>>> even if it >>>>>> was fast? No run is still faster than fast run. >>>>> >>>>> In the ideal case the installer would be idempotent and upgrade would >>>>> be re-running the installer and we should aim to do just that. We >>>>> kind >>>>> of do that already, but there is a lot of code duplication in >>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>> configuration is correct rather than to save a second or two. >>>> I doesn't like this idea, if user wants to fix something, the one >>>> should >>>> use --skip-version-check option, and the IPA upgrade will be executed. >>> >>> Well, what I don't like is dealing with meaningless version numbers. >>> They are causing us grief in API versioning and I don't see why it >>> would be any different here. 
>> However you must keep the version because of schema and data upgrade, so >> why not to execute update as one batch instead of doing config upgrade >> all the time, and then data upgrade only if required. > > Because there is no exact mapping between version number and what > features are actually available. A state file is tons better than a > single version number. > >> >> Some configuration upgrades, like adding new DNS related services, >> requires new schema, how we can handle this? > > This does not sound right. Could you be more specific? at least ipa-dnskeysyncd service, requires updated schema for keys metadata. This service is mandratory for DNS, so it is newly configured during upgrade. Now it works because schema update is the first, so during configuration upgrade we have actual schema. > >> Running schema upgrade every time? >>> >>>> What if a service changes in a way, the IPA configuration will not >>>> work? >>> >>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>> was only one or two occurences of such bug in the past 3 years (I >>> remember sshd_config), so I don't think you have a strong case here. >> Ok >>> >>>> The user will need to change it manually, but after each restart, >>>> upgrade will change the value back into IPA required configuration >>>> which >>>> will not work. >>> >>> Says who? It's our code, we can do whatever we want, it doesn't have >>> to be dumb like this. >>> >>>> Yes, we have upgrade state file, but then the comparing of one >>>> value is >>>> faster then checking each state if was executed. >>> >>> How faster is that, like, few milliseconds? Are you seriously >>> considering this the right optimization in a process that is >>> magnitudes slower? >> Ok the speed is not so important, but I still do not like the idea of >> executing the code which is not needed to be executed, because I know >> the version is the same as was before last restart, so nothing changed. > > Weren't "clever" optimizations like this what got us into this whole > refactoring bussiness in the first place? The "clever" optimizations worked in past, but IPA grown and now contains constraints/requirements which nobody expected. What if we will need some update which needs to execute time-consuming system check during every upgrade in future? User can always run the upgrade manually, with --skip-version-check, and then configuration plugins will decide if the upgrade is needed. > >>> >>>> >>>> My personal opinion is, application should not try to fix itself every >>>> restart. >>> >>> One could say that application should not try to upgrade itself every >>> restart, but here we are, doing it anyway... >> I want to do upgrade only if needed not every restart. > > If there is nothing to upgrade, nothing will be upgraded. The effect > is the same. > >>> >>>>> >>>>>> In the past, I do not recall >>>>>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>>>>> related >>>>>> updates were really slow thank to service restarts, etc. >>>>> >>>>> Correct, but I was talking about configuration file updates, not >>>>> (re)starts, which have to always be done in ipactl anyway. >>>>> >>>>>> >>>>>>> 3) >>>>>>> >>>>>>> " >>>>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>>>> " >>>>>>> >>>>>>> Even without arguments? Is ipactl now the only right place to >>>>>>> trigger manual >>>>>>> update? >>>> Sorry, I will add more details there, if this is not clear. 
>>>> ipa-upgrateconfig will be removed >>>> ipa-ldap-updater will not be able to do overall update, you will >>>> need to >>>> specify options and update file. >>>>>>> >>>>>>> 4) >>>>>>> >>>>>>> " >>>>>>> Plugins are called from update files, using new directive >>>>>>> update-plugin: >>>>>>> " >>>>>>> >>>>>>> Why "update-plugin" and not just "plugin"? Do you expect other >>>>>>> kinds >>>>>>> of plugins >>>>>>> to be called from update files in the future? (I certainly don't.) >>>>>> >>>>>> I have no strong feelings on this one, but IMO it is always >>>>>> better to >>>>>> have some >>>>>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>>>>> based >>>>>> capability... >>>>> >>>>> I doubt that will happen, but if it does, we can always add >>>>> "plan-b-plugin" directive. >>>> I do not insist on "update-plugin", I just wanted to be more specific >>>> which type of plugin is expected there. >>> >>> Well, the names of the files end with .update and they are located in >>> /usr/share/ipa/updates, I think that's enough hints as to what type of >>> plugin is expected. >>> >> okay >>>>> >>>>>> >>>>>>> 5) >>>>>>> >>>>>>> " >>>>>>> New class UpdatePlugin is used for all update plugins. >>>>>>> " >>>>>>> >>>>>>> Just reuse the existing Updater class, no need to reinvent the >>>>>>> wheel. >>>>>>> >>>>>>> 6) >>>>>>> >>>>>>> I wonder why configuration update is done after data update and not >>>>>>> before. I >>>>>>> know it's been like that for a long time, but it seems kind of >>>>>>> unnatural to me, >>>>>>> especially now when schema update is separate from data update. >>>>>>> (Rob?) >>>> We need schema update first, but I haven't found any services which >>>> need >>>> to have updated data (I might be wrong) >>>>>>> >>>>>>> 7) >>>>>>> >>>>>>> " >>>>>>> keep --test option and fix the plugins which do not respect the >>>>>>> option >>>>>>> " >>>>>>> >>>>>>> Just a note, I believe this ticket is related: >>>>>>> . >>>>>>> >>>>>>> >>>>>>> Good work overall! >>>>>>> >>>>>>> Honza >>>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > -- Martin Basti From jpazdziora at redhat.com Tue Mar 3 09:50:57 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Tue, 3 Mar 2015 10:50:57 +0100 Subject: [Freeipa-devel] One-way trust design In-Reply-To: <20150223160253.GX25455@redhat.com> References: <20150223160253.GX25455@redhat.com> Message-ID: <20150303095057.GM23856@redhat.com> On Mon, Feb 23, 2015 at 06:02:53PM +0200, Alexander Bokovoy wrote: > trust-related functionality would be limited to IPA admins or TDO > object in LDAP would have to be more accessible. Given that TDO > credentials can be used to compromise access to our domain, it is not Could you clarify which domain is the "our" domain? > advisable to give a wider access to them. > > As a side-effect of reducing exposure of TDO credentials, FreeIPA lost > ability to establish and use one-way trust to Active Directory. The "Lost ability" might be confusing -- was removed in 3.1 (?) might be better. > purpose of this feature is to regain the one-way trust support, yet > without giving an elevated access to TDO credentials. You might also want to either add a note or a link, explaining why one-way trust is harder than two-way, IOW, why we lost the one-way ability when we have the two-way one. 
-- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From mkosek at redhat.com Tue Mar 3 09:55:42 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 10:55:42 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F57718.2010607@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> Message-ID: <54F5851E.3040500@redhat.com> On 03/03/2015 09:55 AM, Martin Basti wrote: > On 03/03/15 09:33, Jan Cholasta wrote: >> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>> On 03/03/15 07:31, Jan Cholasta wrote: >>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>> Hello all, >>>>>>>>> >>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>> >>>>>>>> >>>>>>>> 1) >>>>>>>> >>>>>>>> " >>>>>>>> * Merge server update commands into the one command >>>>>>>> (ipa-server-upgrade) >>>>>>>> " >>>>>>>> >>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>> "ipa-server-install >>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>> upgrade the >>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>> >>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>> --uninstall", >>>>>>>> "ipa-server-install --upgrade" >>>>>>>> >>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>> "ipa-server-install upgrade" >>>>>>>> >>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>> "ipa-server-upgrade" >>>>>>> >>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>> us have >>>>>>> independent sets of options, based on what you want to do. >>>>>>> >>>>>>>> 2) >>>>>>>> >>>>>>>> " >>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>> version does >>>>>>>> not match >>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>> " >>>>>>>> >>>>>>>> There should be no configuration version, configuration update >>>>>>>> should be run >>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>> update by >>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>> problems on its >>>>>>>> own. >>>>>>> >>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>> even if it >>>>>>> was fast? No run is still faster than fast run. >>>>>> >>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>> of do that already, but there is a lot of code duplication in >>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>> configuration is correct rather than to save a second or two. >>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>> use --skip-version-check option, and the IPA upgrade will be executed. 
>>>> >>>> Well, what I don't like is dealing with meaningless version numbers. >>>> They are causing us grief in API versioning and I don't see why it >>>> would be any different here. >>> However you must keep the version because of schema and data upgrade, so >>> why not to execute update as one batch instead of doing config upgrade >>> all the time, and then data upgrade only if required. >> >> Because there is no exact mapping between version number and what features >> are actually available. A state file is tons better than a single version >> number. >> >>> >>> Some configuration upgrades, like adding new DNS related services, >>> requires new schema, how we can handle this? >> >> This does not sound right. Could you be more specific? > at least ipa-dnskeysyncd service, requires updated schema for keys metadata. > This service is mandratory for DNS, so it is newly configured during upgrade. > Now it works because schema update is the first, so during configuration > upgrade we have actual schema. >> >>> Running schema upgrade every time? >>>> >>>>> What if a service changes in a way, the IPA configuration will not work? >>>> >>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>> was only one or two occurences of such bug in the past 3 years (I >>>> remember sshd_config), so I don't think you have a strong case here. >>> Ok >>>> >>>>> The user will need to change it manually, but after each restart, >>>>> upgrade will change the value back into IPA required configuration which >>>>> will not work. >>>> >>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>> to be dumb like this. >>>> >>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>> faster then checking each state if was executed. >>>> >>>> How faster is that, like, few milliseconds? Are you seriously >>>> considering this the right optimization in a process that is >>>> magnitudes slower? >>> Ok the speed is not so important, but I still do not like the idea of >>> executing the code which is not needed to be executed, because I know >>> the version is the same as was before last restart, so nothing changed. >> >> Weren't "clever" optimizations like this what got us into this whole >> refactoring bussiness in the first place? > The "clever" optimizations worked in past, but IPA grown and now contains > constraints/requirements which nobody expected. What if we will need some > update which needs to execute time-consuming system check during every upgrade > in future? > User can always run the upgrade manually, with --skip-version-check, and then > configuration plugins will decide if the upgrade is needed. I tend to agree with Martin, I would prefer to be on the safe side and not run config upgrades every time, unless we are explicitly asked to or unless we are absolutely sure that our idempotent upgrade scripts perfect for this use case. Which is not what I think we can say for 4.2. Maybe we could do it as gradual steps? Do on-demand config update with the said flag and when were confident about our idempotent upgrader, measure the time impact and start doing it every time? 
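For reference, a rough sketch of the on-demand gate being discussed here, with a force flag playing the role of --skip-version-check; the state file location and names are assumptions for illustration, not the actual ipa-server-upgrade implementation.

import errno

CODE_VERSION = "4.2.0"                        # version built into the code
STATE_FILE = "/var/lib/ipa/upgrade-version"   # hypothetical location

def upgrade_needed(force=False):
    """Return True when ipa-server-upgrade should run.

    force plays the role of a --skip-version-check style override:
    run the upgrade even if the recorded version already matches."""
    if force:
        return True
    try:
        with open(STATE_FILE) as f:
            stored = f.read().strip()
    except IOError as e:
        if e.errno == errno.ENOENT:
            return True        # nothing recorded yet, so upgrade
        raise
    return stored != CODE_VERSION

def record_upgrade_done():
    with open(STATE_FILE, "w") as f:
        f.write(CODE_VERSION + "\n")

ipactl could then refuse to start the services (or point the admin at ipa-server-upgrade) whenever upgrade_needed() returns True, which is roughly the behaviour quoted from the design page earlier in the thread.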
Martin From jcholast at redhat.com Tue Mar 3 09:55:57 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 10:55:57 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F57718.2010607@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> Message-ID: <54F5852D.6090501@redhat.com> Dne 3.3.2015 v 09:55 Martin Basti napsal(a): > On 03/03/15 09:33, Jan Cholasta wrote: >> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>> On 03/03/15 07:31, Jan Cholasta wrote: >>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>> Hello all, >>>>>>>>> >>>>>>>>> please read the design page, any objections/suggestions >>>>>>>>> appreciated >>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>> >>>>>>>> >>>>>>>> 1) >>>>>>>> >>>>>>>> " >>>>>>>> * Merge server update commands into the one command >>>>>>>> (ipa-server-upgrade) >>>>>>>> " >>>>>>>> >>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>> "ipa-server-install >>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>> upgrade the >>>>>>>> server. Maybe we should bring some consistency here and have one >>>>>>>> of: >>>>>>>> >>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>> --uninstall", >>>>>>>> "ipa-server-install --upgrade" >>>>>>>> >>>>>>>> b) "ipa-server-install [install]", "ipa-server-install >>>>>>>> uninstall", >>>>>>>> "ipa-server-install upgrade" >>>>>>>> >>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>> "ipa-server-upgrade" >>>>>>> >>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>> us have >>>>>>> independent sets of options, based on what you want to do. >>>>>>> >>>>>>>> 2) >>>>>>>> >>>>>>>> " >>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>> version does >>>>>>>> not match >>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>> " >>>>>>>> >>>>>>>> There should be no configuration version, configuration update >>>>>>>> should be run >>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>> update by >>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>> problems on its >>>>>>>> own. >>>>>>> >>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>> even if it >>>>>>> was fast? No run is still faster than fast run. >>>>>> >>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>> be re-running the installer and we should aim to do just that. We >>>>>> kind >>>>>> of do that already, but there is a lot of code duplication in >>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>> configuration is correct rather than to save a second or two. >>>>> I doesn't like this idea, if user wants to fix something, the one >>>>> should >>>>> use --skip-version-check option, and the IPA upgrade will be executed. 
>>>> >>>> Well, what I don't like is dealing with meaningless version numbers. >>>> They are causing us grief in API versioning and I don't see why it >>>> would be any different here. >>> However you must keep the version because of schema and data upgrade, so >>> why not to execute update as one batch instead of doing config upgrade >>> all the time, and then data upgrade only if required. >> >> Because there is no exact mapping between version number and what >> features are actually available. A state file is tons better than a >> single version number. >> >>> >>> Some configuration upgrades, like adding new DNS related services, >>> requires new schema, how we can handle this? >> >> This does not sound right. Could you be more specific? > at least ipa-dnskeysyncd service, requires updated schema for keys > metadata. > This service is mandratory for DNS, so it is newly configured during > upgrade. > Now it works because schema update is the first, so during configuration > upgrade we have actual schema. Right, but what's your point? We are not discussing order of updates here, I'm perfectly fine with schema updates being done before configuration updates. >> >>> Running schema upgrade every time? >>>> >>>>> What if a service changes in a way, the IPA configuration will not >>>>> work? >>>> >>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>> was only one or two occurences of such bug in the past 3 years (I >>>> remember sshd_config), so I don't think you have a strong case here. >>> Ok >>>> >>>>> The user will need to change it manually, but after each restart, >>>>> upgrade will change the value back into IPA required configuration >>>>> which >>>>> will not work. >>>> >>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>> to be dumb like this. >>>> >>>>> Yes, we have upgrade state file, but then the comparing of one >>>>> value is >>>>> faster then checking each state if was executed. >>>> >>>> How faster is that, like, few milliseconds? Are you seriously >>>> considering this the right optimization in a process that is >>>> magnitudes slower? >>> Ok the speed is not so important, but I still do not like the idea of >>> executing the code which is not needed to be executed, because I know >>> the version is the same as was before last restart, so nothing changed. >> >> Weren't "clever" optimizations like this what got us into this whole >> refactoring bussiness in the first place? > The "clever" optimizations worked in past, but IPA grown and now > contains constraints/requirements which nobody expected. Yes, then why do we need another one, especially so when it does not provide any significant speed-up? > What if we > will need some update which needs to execute time-consuming system check > during every upgrade in future? Then we deal with the optimization in the future, instead of doing it prematurely now. > User can always run the upgrade manually, with --skip-version-check, and > then configuration plugins will decide if the upgrade is needed. > >> >>>> >>>>> >>>>> My personal opinion is, application should not try to fix itself every >>>>> restart. >>>> >>>> One could say that application should not try to upgrade itself every >>>> restart, but here we are, doing it anyway... >>> I want to do upgrade only if needed not every restart. >> >> If there is nothing to upgrade, nothing will be upgraded. The effect >> is the same. 
>> >>>> >>>>>> >>>>>>> In the past, I do not recall >>>>>>> ipa-upgradeconfig as being really fast, especially certmonger/Dogtag >>>>>>> related >>>>>>> updates were really slow thank to service restarts, etc. >>>>>> >>>>>> Correct, but I was talking about configuration file updates, not >>>>>> (re)starts, which have to always be done in ipactl anyway. >>>>>> >>>>>>> >>>>>>>> 3) >>>>>>>> >>>>>>>> " >>>>>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>>>>> " >>>>>>>> >>>>>>>> Even without arguments? Is ipactl now the only right place to >>>>>>>> trigger manual >>>>>>>> update? >>>>> Sorry, I will add more details there, if this is not clear. >>>>> ipa-upgrateconfig will be removed >>>>> ipa-ldap-updater will not be able to do overall update, you will >>>>> need to >>>>> specify options and update file. >>>>>>>> >>>>>>>> 4) >>>>>>>> >>>>>>>> " >>>>>>>> Plugins are called from update files, using new directive >>>>>>>> update-plugin: >>>>>>>> " >>>>>>>> >>>>>>>> Why "update-plugin" and not just "plugin"? Do you expect other >>>>>>>> kinds >>>>>>>> of plugins >>>>>>>> to be called from update files in the future? (I certainly don't.) >>>>>>> >>>>>>> I have no strong feelings on this one, but IMO it is always >>>>>>> better to >>>>>>> have some >>>>>>> "plan B" if we choose to indeed implement some yet unforeseen plugin >>>>>>> based >>>>>>> capability... >>>>>> >>>>>> I doubt that will happen, but if it does, we can always add >>>>>> "plan-b-plugin" directive. >>>>> I do not insist on "update-plugin", I just wanted to be more specific >>>>> which type of plugin is expected there. >>>> >>>> Well, the names of the files end with .update and they are located in >>>> /usr/share/ipa/updates, I think that's enough hints as to what type of >>>> plugin is expected. >>>> >>> okay >>>>>> >>>>>>> >>>>>>>> 5) >>>>>>>> >>>>>>>> " >>>>>>>> New class UpdatePlugin is used for all update plugins. >>>>>>>> " >>>>>>>> >>>>>>>> Just reuse the existing Updater class, no need to reinvent the >>>>>>>> wheel. >>>>>>>> >>>>>>>> 6) >>>>>>>> >>>>>>>> I wonder why configuration update is done after data update and not >>>>>>>> before. I >>>>>>>> know it's been like that for a long time, but it seems kind of >>>>>>>> unnatural to me, >>>>>>>> especially now when schema update is separate from data update. >>>>>>>> (Rob?) >>>>> We need schema update first, but I haven't found any services which >>>>> need >>>>> to have updated data (I might be wrong) >>>>>>>> >>>>>>>> 7) >>>>>>>> >>>>>>>> " >>>>>>>> keep --test option and fix the plugins which do not respect the >>>>>>>> option >>>>>>>> " >>>>>>>> >>>>>>>> Just a note, I believe this ticket is related: >>>>>>>> . >>>>>>>> >>>>>>>> >>>>>>>> Good work overall! 
>>>>>>>> >>>>>>>> Honza >>>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > -- Jan Cholasta From pspacek at redhat.com Tue Mar 3 09:56:58 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 03 Mar 2015 10:56:58 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F4A3D2.50207@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> Message-ID: <54F5856A.1070505@redhat.com> On 2.3.2015 18:54, Martin Basti wrote: > On 02/03/15 18:28, Martin Kosek wrote: >> On 03/02/2015 06:12 PM, Martin Basti wrote: >>> On 02/03/15 15:43, Rob Crittenden wrote: >>>> Martin Basti wrote: >> ... >>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails >>>> then you've got more serious problems that I'm not sure binding as DM is >>>> going to solve. >>>> >>>> The only case where DM would be handy IMHO is either some worst case >>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >>>> want to allow running LDAP updates as non-root. >>> I don't know cases when LDAPI would failed, except the case LDAPI is >>> misconfigured by user, or disabled by user. >> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could >> simply bind as root with EXTERNAL auth? > We can do upgrade in both way, using LDAPI or using DM password, preferred is > LDAPI. > Question is, what is the use case for using DM password instead of LDAPI > during upgrade. >> >>> It is not big effort to keep both DM binding and LDAPI in code. A user can >>> always found som unexpected use case for LDAP update with DM password. >>> >>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>>>> upgrade? In a non-container world it might be surprising to have an >>>>>> upgrade happen esp since upgrades take a while. >>>>> In non-container enviroment, we can still use upgrade during RPM >>>>> transaction. >>>>> >>>>> So you suggest not to do update automaticaly, just write Error the IPA >>>>> upgrade is required? >>>> People do all sorts of strange things. Installing the packages with >>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>> saying it's a great idea, is 2 lines of code. >>>> >>>> I guess it just makes me nervous. >>> So lets summarize this: >>> * DO upgrade if possible during RPM transaction >> Umm, I thought we want to get rid of running upgrade during RPM transaction. It >> is extremely difficult to debug upgrade stuck during RPM transaction, it also >> makes RPM upgrade run longer than needed. It also makes admins nervous when >> their rpm upgrade is suddenly waiting right before the end. I even see the >> fingers slowly reaching to CTRL+C combo... (You can see the consequences) > People are used to have IPA upgraded and ready after RPM upgrade. > They may be shocked if IPA services will be in shutdown state after RPM > transaction. > > I have no more objections. IMHO the problem with long-running RPM upgrade should be approached from the other way: Just print message 'IPA server upgrade is running, press CTRL+C if you want to destroy your IPA server'. bind-dyndb-ldap prints a message about SELinux in RPM scriptlets for couple releases now and nobody complained (yet? :-). 
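On the LDAPI-versus-DM-password point raised earlier in this message, a minimal python-ldap sketch of what "prefer LDAPI, keep the DM bind as a fallback" can look like; the socket path is instance-specific and the function is purely illustrative, not the updater's actual code.

import ldap
import ldap.sasl

# instance-specific; shown here only as an example path
LDAPI_URI = "ldapi://%2fvar%2frun%2fslapd-EXAMPLE-COM.socket"

def connect(dm_password=None):
    """Bind over LDAPI with SASL EXTERNAL when no password is given,
    otherwise fall back to a Directory Manager simple bind."""
    conn = ldap.initialize(LDAPI_URI)
    if dm_password is None:
        # EXTERNAL maps the calling uid (root) to a directory identity,
        # so no password has to be stored or typed during the upgrade
        conn.sasl_interactive_bind_s("", ldap.sasl.external())
    else:
        conn.simple_bind_s("cn=Directory Manager", dm_password)
    return conn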
Petr^2 Spacek > >> >>> * ipactl will NOT run upgrade, just print Error: 'please upgrade ....' >>> * User has to run ipa-server-upgrade manually >>> >>> Does I understand it correctly? >>>>>> With --skip-version-check what sorts of problems can we foresee? I >>>>>> assume a big warning will be added to at least the man page, if not >>>>>> the cli? >>>>> For this big warning everywhere. >>>>> The main problem is try to run older IPA with newer data. In containers >>>>> the problem is to run different platform specific versions which differ >>>>> in functionality/bugfixes etc.. >>>> Ok, pretty much the things I was thinking as well. A scary warning >>>> definitely seems warranted. >>>> >>>>>> Where does platform come from? I'm wondering how Debian will handle this. >>>>> platform is derived from ipaplatform file which is used with the >>>>> particular build. Debian should have own platform file. >>>> Ok, I'd add that detail to the design. >>>> >>>> rob >>> > > -- Petr^2 Spacek From mkosek at redhat.com Tue Mar 3 09:58:44 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 10:58:44 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5727D.7020005@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> Message-ID: <54F585D4.7010003@redhat.com> On 03/03/2015 09:36 AM, Petr Spacek wrote: > On 3.3.2015 09:33, Jan Cholasta wrote: >> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>> On 03/03/15 07:31, Jan Cholasta wrote: >>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>> Hello all, >>>>>>>>> >>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>> >>>>>>>> >>>>>>>> 1) >>>>>>>> >>>>>>>> " >>>>>>>> * Merge server update commands into the one command >>>>>>>> (ipa-server-upgrade) >>>>>>>> " >>>>>>>> >>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>> "ipa-server-install >>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>> upgrade the >>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>> >>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>> --uninstall", >>>>>>>> "ipa-server-install --upgrade" >>>>>>>> >>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>> "ipa-server-install upgrade" >>>>>>>> >>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>> "ipa-server-upgrade" >>>>>>> >>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>> us have >>>>>>> independent sets of options, based on what you want to do. >>>>>>> >>>>>>>> 2) >>>>>>>> >>>>>>>> " >>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>> version does >>>>>>>> not match >>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>> " >>>>>>>> >>>>>>>> There should be no configuration version, configuration update >>>>>>>> should be run >>>>>>>> always. 
It's fast and hence does not need to be optimized like data >>>>>>>> update by >>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>> problems on its >>>>>>>> own. >>>>>>> >>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>> even if it >>>>>>> was fast? No run is still faster than fast run. >>>>>> >>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>> of do that already, but there is a lot of code duplication in >>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>> configuration is correct rather than to save a second or two. >>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>> >>>> Well, what I don't like is dealing with meaningless version numbers. >>>> They are causing us grief in API versioning and I don't see why it >>>> would be any different here. >>> However you must keep the version because of schema and data upgrade, so >>> why not to execute update as one batch instead of doing config upgrade >>> all the time, and then data upgrade only if required. >> >> Because there is no exact mapping between version number and what features are >> actually available. A state file is tons better than a single version number. >> >>> >>> Some configuration upgrades, like adding new DNS related services, >>> requires new schema, how we can handle this? >> >> This does not sound right. Could you be more specific? >> >>> Running schema upgrade every time? >>>> >>>>> What if a service changes in a way, the IPA configuration will not work? >>>> >>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>> was only one or two occurences of such bug in the past 3 years (I >>>> remember sshd_config), so I don't think you have a strong case here. >>> Ok >>>> >>>>> The user will need to change it manually, but after each restart, >>>>> upgrade will change the value back into IPA required configuration which >>>>> will not work. >>>> >>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>> to be dumb like this. >>>> >>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>> faster then checking each state if was executed. >>>> >>>> How faster is that, like, few milliseconds? Are you seriously >>>> considering this the right optimization in a process that is >>>> magnitudes slower? >>> Ok the speed is not so important, but I still do not like the idea of >>> executing the code which is not needed to be executed, because I know >>> the version is the same as was before last restart, so nothing changed. >> >> Weren't "clever" optimizations like this what got us into this whole >> refactoring bussiness in the first place? > > I very much agree with Honza. We should always start with something > stupidly-simply and enhance it later, when it is clear if it is really necessary. > > Do not over-engineer it from the very beginning. I completely agree with starting stupid and simply and improving in time. However, are we sure that what Honza proposed is the simple and stupid way? Doing config upgrade only when needed and thus not depending on the efficiency and idempotency of the config upgraders seems to me as *the* stupid and simple way for upgrade refactoring. 
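And for contrast, the state-file approach argued for elsewhere in the thread does not have to be heavyweight either; a sketch (hypothetical file location and format) that records completed update steps by name instead of keeping one monolithic version number:

import json
import os

STATE_FILE = "/var/lib/ipa/upgrade-state.json"   # hypothetical location

def load_state():
    if not os.path.exists(STATE_FILE):
        return {}
    with open(STATE_FILE) as f:
        return json.load(f)

def save_state(state):
    with open(STATE_FILE, "w") as f:
        json.dump(state, f, indent=2, sort_keys=True)

def run_steps(steps):
    """steps is a list of (name, callable).  Every callable is expected
    to be idempotent; the state entry only records that it already ran,
    it does not replace the step's own 'is a change needed?' check."""
    state = load_state()
    for name, func in steps:
        if state.get(name) == "done":
            continue
        func()
        state[name] = "done"
        save_state(state)    # persist after each step so an interrupted
                             # upgrade resumes where it stopped

Adding a new step in a later release then never requires bumping or interpreting a global number, which is what the "no exact mapping between version number and features" argument above is about.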
Martin From mbasti at redhat.com Tue Mar 3 10:00:45 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 11:00:45 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5852D.6090501@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> <54F5852D.6090501@redhat.com> Message-ID: <54F5864D.4050909@redhat.com> On 03/03/15 10:55, Jan Cholasta wrote: > Dne 3.3.2015 v 09:55 Martin Basti napsal(a): >> On 03/03/15 09:33, Jan Cholasta wrote: >>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>> Hello all, >>>>>>>>>> >>>>>>>>>> please read the design page, any objections/suggestions >>>>>>>>>> appreciated >>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>> >>>>>>>>> >>>>>>>>> 1) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Merge server update commands into the one command >>>>>>>>> (ipa-server-upgrade) >>>>>>>>> " >>>>>>>>> >>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>> "ipa-server-install >>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>> upgrade the >>>>>>>>> server. Maybe we should bring some consistency here and have one >>>>>>>>> of: >>>>>>>>> >>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>> --uninstall", >>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>> >>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install >>>>>>>>> uninstall", >>>>>>>>> "ipa-server-install upgrade" >>>>>>>>> >>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>> "ipa-server-upgrade" >>>>>>>> >>>>>>>> Long term, I think we want C. Besides other advantages, it will >>>>>>>> let >>>>>>>> us have >>>>>>>> independent sets of options, based on what you want to do. >>>>>>>> >>>>>>>>> 2) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>> version does >>>>>>>>> not match >>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>> " >>>>>>>>> >>>>>>>>> There should be no configuration version, configuration update >>>>>>>>> should be run >>>>>>>>> always. It's fast and hence does not need to be optimized like >>>>>>>>> data >>>>>>>>> update by >>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>> problems on its >>>>>>>>> own. >>>>>>>> >>>>>>>> I do not agree in this section. Why would you like to run it >>>>>>>> always, >>>>>>>> even if it >>>>>>>> was fast? No run is still faster than fast run. >>>>>>> >>>>>>> In the ideal case the installer would be idempotent and upgrade >>>>>>> would >>>>>>> be re-running the installer and we should aim to do just that. We >>>>>>> kind >>>>>>> of do that already, but there is a lot of code duplication in >>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>> refactoring installers). IMO it's better to always make 100% >>>>>>> sure the >>>>>>> configuration is correct rather than to save a second or two. 
>>>>>> I doesn't like this idea, if user wants to fix something, the one >>>>>> should >>>>>> use --skip-version-check option, and the IPA upgrade will be >>>>>> executed. >>>>> >>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>> They are causing us grief in API versioning and I don't see why it >>>>> would be any different here. >>>> However you must keep the version because of schema and data >>>> upgrade, so >>>> why not to execute update as one batch instead of doing config upgrade >>>> all the time, and then data upgrade only if required. >>> >>> Because there is no exact mapping between version number and what >>> features are actually available. A state file is tons better than a >>> single version number. >>> >>>> >>>> Some configuration upgrades, like adding new DNS related services, >>>> requires new schema, how we can handle this? >>> >>> This does not sound right. Could you be more specific? >> at least ipa-dnskeysyncd service, requires updated schema for keys >> metadata. >> This service is mandratory for DNS, so it is newly configured during >> upgrade. >> Now it works because schema update is the first, so during configuration >> upgrade we have actual schema. > > Right, but what's your point? We are not discussing order of updates > here, I'm perfectly fine with schema updates being done before > configuration updates. So you want to run schema update before configuration upgrade every restart? OR you want to run schema update if needed based on version, and then run configuration upgrade? > >>> >>>> Running schema upgrade every time? >>>>> >>>>>> What if a service changes in a way, the IPA configuration will not >>>>>> work? >>>>> >>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>> was only one or two occurences of such bug in the past 3 years (I >>>>> remember sshd_config), so I don't think you have a strong case here. >>>> Ok >>>>> >>>>>> The user will need to change it manually, but after each restart, >>>>>> upgrade will change the value back into IPA required configuration >>>>>> which >>>>>> will not work. >>>>> >>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>> to be dumb like this. >>>>> >>>>>> Yes, we have upgrade state file, but then the comparing of one >>>>>> value is >>>>>> faster then checking each state if was executed. >>>>> >>>>> How faster is that, like, few milliseconds? Are you seriously >>>>> considering this the right optimization in a process that is >>>>> magnitudes slower? >>>> Ok the speed is not so important, but I still do not like the idea of >>>> executing the code which is not needed to be executed, because I know >>>> the version is the same as was before last restart, so nothing >>>> changed. >>> >>> Weren't "clever" optimizations like this what got us into this whole >>> refactoring bussiness in the first place? >> The "clever" optimizations worked in past, but IPA grown and now >> contains constraints/requirements which nobody expected. > > Yes, then why do we need another one, especially so when it does not > provide any significant speed-up? > >> What if we >> will need some update which needs to execute time-consuming system check >> during every upgrade in future? > > Then we deal with the optimization in the future, instead of doing it > prematurely now. > >> User can always run the upgrade manually, with --skip-version-check, and >> then configuration plugins will decide if the upgrade is needed. 
>> >>> >>>>> >>>>>> >>>>>> My personal opinion is, application should not try to fix itself >>>>>> every >>>>>> restart. >>>>> >>>>> One could say that application should not try to upgrade itself every >>>>> restart, but here we are, doing it anyway... >>>> I want to do upgrade only if needed not every restart. >>> >>> If there is nothing to upgrade, nothing will be upgraded. The effect >>> is the same. >>> >>>>> >>>>>>> >>>>>>>> In the past, I do not recall >>>>>>>> ipa-upgradeconfig as being really fast, especially >>>>>>>> certmonger/Dogtag >>>>>>>> related >>>>>>>> updates were really slow thank to service restarts, etc. >>>>>>> >>>>>>> Correct, but I was talking about configuration file updates, not >>>>>>> (re)starts, which have to always be done in ipactl anyway. >>>>>>> >>>>>>>> >>>>>>>>> 3) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Prevent user to use ipa-upgradeconfig and ipa-ldap-updater >>>>>>>>> " >>>>>>>>> >>>>>>>>> Even without arguments? Is ipactl now the only right place to >>>>>>>>> trigger manual >>>>>>>>> update? >>>>>> Sorry, I will add more details there, if this is not clear. >>>>>> ipa-upgrateconfig will be removed >>>>>> ipa-ldap-updater will not be able to do overall update, you will >>>>>> need to >>>>>> specify options and update file. >>>>>>>>> >>>>>>>>> 4) >>>>>>>>> >>>>>>>>> " >>>>>>>>> Plugins are called from update files, using new directive >>>>>>>>> update-plugin: >>>>>>>>> " >>>>>>>>> >>>>>>>>> Why "update-plugin" and not just "plugin"? Do you expect other >>>>>>>>> kinds >>>>>>>>> of plugins >>>>>>>>> to be called from update files in the future? (I certainly >>>>>>>>> don't.) >>>>>>>> >>>>>>>> I have no strong feelings on this one, but IMO it is always >>>>>>>> better to >>>>>>>> have some >>>>>>>> "plan B" if we choose to indeed implement some yet unforeseen >>>>>>>> plugin >>>>>>>> based >>>>>>>> capability... >>>>>>> >>>>>>> I doubt that will happen, but if it does, we can always add >>>>>>> "plan-b-plugin" directive. >>>>>> I do not insist on "update-plugin", I just wanted to be more >>>>>> specific >>>>>> which type of plugin is expected there. >>>>> >>>>> Well, the names of the files end with .update and they are located in >>>>> /usr/share/ipa/updates, I think that's enough hints as to what >>>>> type of >>>>> plugin is expected. >>>>> >>>> okay >>>>>>> >>>>>>>> >>>>>>>>> 5) >>>>>>>>> >>>>>>>>> " >>>>>>>>> New class UpdatePlugin is used for all update plugins. >>>>>>>>> " >>>>>>>>> >>>>>>>>> Just reuse the existing Updater class, no need to reinvent the >>>>>>>>> wheel. >>>>>>>>> >>>>>>>>> 6) >>>>>>>>> >>>>>>>>> I wonder why configuration update is done after data update >>>>>>>>> and not >>>>>>>>> before. I >>>>>>>>> know it's been like that for a long time, but it seems kind of >>>>>>>>> unnatural to me, >>>>>>>>> especially now when schema update is separate from data update. >>>>>>>>> (Rob?) >>>>>> We need schema update first, but I haven't found any services which >>>>>> need >>>>>> to have updated data (I might be wrong) >>>>>>>>> >>>>>>>>> 7) >>>>>>>>> >>>>>>>>> " >>>>>>>>> keep --test option and fix the plugins which do not respect the >>>>>>>>> option >>>>>>>>> " >>>>>>>>> >>>>>>>>> Just a note, I believe this ticket is related: >>>>>>>>> . >>>>>>>>> >>>>>>>>> >>>>>>>>> Good work overall! 
>>>>>>>>> >>>>>>>>> Honza >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> >> > > -- Martin Basti From jcholast at redhat.com Tue Mar 3 10:01:49 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 11:01:49 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5851E.3040500@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> <54F5851E.3040500@redhat.com> Message-ID: <54F5868D.9080500@redhat.com> Dne 3.3.2015 v 10:55 Martin Kosek napsal(a): > On 03/03/2015 09:55 AM, Martin Basti wrote: >> On 03/03/15 09:33, Jan Cholasta wrote: >>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>> Hello all, >>>>>>>>>> >>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>> >>>>>>>>> >>>>>>>>> 1) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Merge server update commands into the one command >>>>>>>>> (ipa-server-upgrade) >>>>>>>>> " >>>>>>>>> >>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>> "ipa-server-install >>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>> upgrade the >>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>> >>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>> --uninstall", >>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>> >>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>> "ipa-server-install upgrade" >>>>>>>>> >>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>> "ipa-server-upgrade" >>>>>>>> >>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>> us have >>>>>>>> independent sets of options, based on what you want to do. >>>>>>>> >>>>>>>>> 2) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>> version does >>>>>>>>> not match >>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>> " >>>>>>>>> >>>>>>>>> There should be no configuration version, configuration update >>>>>>>>> should be run >>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>> update by >>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>> problems on its >>>>>>>>> own. >>>>>>>> >>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>> even if it >>>>>>>> was fast? No run is still faster than fast run. >>>>>>> >>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>> of do that already, but there is a lot of code duplication in >>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>> refactoring installers). 
IMO it's better to always make 100% sure the >>>>>>> configuration is correct rather than to save a second or two. >>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>> >>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>> They are causing us grief in API versioning and I don't see why it >>>>> would be any different here. >>>> However you must keep the version because of schema and data upgrade, so >>>> why not to execute update as one batch instead of doing config upgrade >>>> all the time, and then data upgrade only if required. >>> >>> Because there is no exact mapping between version number and what features >>> are actually available. A state file is tons better than a single version >>> number. >>> >>>> >>>> Some configuration upgrades, like adding new DNS related services, >>>> requires new schema, how we can handle this? >>> >>> This does not sound right. Could you be more specific? >> at least ipa-dnskeysyncd service, requires updated schema for keys metadata. >> This service is mandratory for DNS, so it is newly configured during upgrade. >> Now it works because schema update is the first, so during configuration >> upgrade we have actual schema. >>> >>>> Running schema upgrade every time? >>>>> >>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>> >>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>> was only one or two occurences of such bug in the past 3 years (I >>>>> remember sshd_config), so I don't think you have a strong case here. >>>> Ok >>>>> >>>>>> The user will need to change it manually, but after each restart, >>>>>> upgrade will change the value back into IPA required configuration which >>>>>> will not work. >>>>> >>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>> to be dumb like this. >>>>> >>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>> faster then checking each state if was executed. >>>>> >>>>> How faster is that, like, few milliseconds? Are you seriously >>>>> considering this the right optimization in a process that is >>>>> magnitudes slower? >>>> Ok the speed is not so important, but I still do not like the idea of >>>> executing the code which is not needed to be executed, because I know >>>> the version is the same as was before last restart, so nothing changed. >>> >>> Weren't "clever" optimizations like this what got us into this whole >>> refactoring bussiness in the first place? >> The "clever" optimizations worked in past, but IPA grown and now contains >> constraints/requirements which nobody expected. What if we will need some >> update which needs to execute time-consuming system check during every upgrade >> in future? >> User can always run the upgrade manually, with --skip-version-check, and then >> configuration plugins will decide if the upgrade is needed. > > I tend to agree with Martin, I would prefer to be on the safe side and not run > config upgrades every time, unless we are explicitly asked to or unless we are > absolutely sure that our idempotent upgrade scripts perfect for this use case. > > Which is not what I think we can say for 4.2. Maybe we could do it as gradual > steps? Do on-demand config update with the said flag and when were confident > about our idempotent upgrader, measure the time impact and start doing it every > time? 
I would very much prefer to do it the other way around, so that most bugs in the code are caught early, instead of hidden behind the version comparison. > > Martin > -- Jan Cholasta From pspacek at redhat.com Tue Mar 3 10:04:39 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 03 Mar 2015 11:04:39 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F585D4.7010003@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> Message-ID: <54F58737.8090108@redhat.com> On 3.3.2015 10:58, Martin Kosek wrote: > On 03/03/2015 09:36 AM, Petr Spacek wrote: >> On 3.3.2015 09:33, Jan Cholasta wrote: >>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>> Hello all, >>>>>>>>>> >>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>> >>>>>>>>> >>>>>>>>> 1) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Merge server update commands into the one command >>>>>>>>> (ipa-server-upgrade) >>>>>>>>> " >>>>>>>>> >>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>> "ipa-server-install >>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>> upgrade the >>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>> >>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>> --uninstall", >>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>> >>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>> "ipa-server-install upgrade" >>>>>>>>> >>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>> "ipa-server-upgrade" >>>>>>>> >>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>> us have >>>>>>>> independent sets of options, based on what you want to do. >>>>>>>> >>>>>>>>> 2) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>> version does >>>>>>>>> not match >>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>> " >>>>>>>>> >>>>>>>>> There should be no configuration version, configuration update >>>>>>>>> should be run >>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>> update by >>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>> problems on its >>>>>>>>> own. >>>>>>>> >>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>> even if it >>>>>>>> was fast? No run is still faster than fast run. >>>>>>> >>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>> of do that already, but there is a lot of code duplication in >>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>> refactoring installers). 
IMO it's better to always make 100% sure the >>>>>>> configuration is correct rather than to save a second or two. >>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>> >>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>> They are causing us grief in API versioning and I don't see why it >>>>> would be any different here. >>>> However you must keep the version because of schema and data upgrade, so >>>> why not to execute update as one batch instead of doing config upgrade >>>> all the time, and then data upgrade only if required. >>> >>> Because there is no exact mapping between version number and what features are >>> actually available. A state file is tons better than a single version number. >>> >>>> >>>> Some configuration upgrades, like adding new DNS related services, >>>> requires new schema, how we can handle this? >>> >>> This does not sound right. Could you be more specific? >>> >>>> Running schema upgrade every time? >>>>> >>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>> >>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>> was only one or two occurences of such bug in the past 3 years (I >>>>> remember sshd_config), so I don't think you have a strong case here. >>>> Ok >>>>> >>>>>> The user will need to change it manually, but after each restart, >>>>>> upgrade will change the value back into IPA required configuration which >>>>>> will not work. >>>>> >>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>> to be dumb like this. >>>>> >>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>> faster then checking each state if was executed. >>>>> >>>>> How faster is that, like, few milliseconds? Are you seriously >>>>> considering this the right optimization in a process that is >>>>> magnitudes slower? >>>> Ok the speed is not so important, but I still do not like the idea of >>>> executing the code which is not needed to be executed, because I know >>>> the version is the same as was before last restart, so nothing changed. >>> >>> Weren't "clever" optimizations like this what got us into this whole >>> refactoring bussiness in the first place? >> >> I very much agree with Honza. We should always start with something >> stupidly-simply and enhance it later, when it is clear if it is really necessary. >> >> Do not over-engineer it from the very beginning. > > I completely agree with starting stupid and simply and improving in time. > However, are we sure that what Honza proposed is the simple and stupid way? > > Doing config upgrade only when needed and thus not depending on the efficiency > and idempotency of the config upgraders seems to me as *the* stupid and simple > way for upgrade refactoring. 
Maybe I'm missing something but if not state.get('dns_forward_zones_supported'): # upgrade to forward zones here seems less error-prone than if version < 4.0: # upgrade to forward zones here # make me a sandwich # consult local oracle to guess that other features where backported # to currently running code -- Petr^2 Spacek From jcholast at redhat.com Tue Mar 3 10:05:55 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 11:05:55 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F585D4.7010003@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> Message-ID: <54F58783.30408@redhat.com> Dne 3.3.2015 v 10:58 Martin Kosek napsal(a): > On 03/03/2015 09:36 AM, Petr Spacek wrote: >> On 3.3.2015 09:33, Jan Cholasta wrote: >>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>> Hello all, >>>>>>>>>> >>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>> >>>>>>>>> >>>>>>>>> 1) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Merge server update commands into the one command >>>>>>>>> (ipa-server-upgrade) >>>>>>>>> " >>>>>>>>> >>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>> "ipa-server-install >>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>> upgrade the >>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>> >>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>> --uninstall", >>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>> >>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>> "ipa-server-install upgrade" >>>>>>>>> >>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>> "ipa-server-upgrade" >>>>>>>> >>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>> us have >>>>>>>> independent sets of options, based on what you want to do. >>>>>>>> >>>>>>>>> 2) >>>>>>>>> >>>>>>>>> " >>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>> version does >>>>>>>>> not match >>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>> " >>>>>>>>> >>>>>>>>> There should be no configuration version, configuration update >>>>>>>>> should be run >>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>> update by >>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>> problems on its >>>>>>>>> own. >>>>>>>> >>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>> even if it >>>>>>>> was fast? No run is still faster than fast run. >>>>>>> >>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>> be re-running the installer and we should aim to do just that. 
We kind >>>>>>> of do that already, but there is a lot of code duplication in >>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>>> configuration is correct rather than to save a second or two. >>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>> >>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>> They are causing us grief in API versioning and I don't see why it >>>>> would be any different here. >>>> However you must keep the version because of schema and data upgrade, so >>>> why not to execute update as one batch instead of doing config upgrade >>>> all the time, and then data upgrade only if required. >>> >>> Because there is no exact mapping between version number and what features are >>> actually available. A state file is tons better than a single version number. >>> >>>> >>>> Some configuration upgrades, like adding new DNS related services, >>>> requires new schema, how we can handle this? >>> >>> This does not sound right. Could you be more specific? >>> >>>> Running schema upgrade every time? >>>>> >>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>> >>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>> was only one or two occurences of such bug in the past 3 years (I >>>>> remember sshd_config), so I don't think you have a strong case here. >>>> Ok >>>>> >>>>>> The user will need to change it manually, but after each restart, >>>>>> upgrade will change the value back into IPA required configuration which >>>>>> will not work. >>>>> >>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>> to be dumb like this. >>>>> >>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>> faster then checking each state if was executed. >>>>> >>>>> How faster is that, like, few milliseconds? Are you seriously >>>>> considering this the right optimization in a process that is >>>>> magnitudes slower? >>>> Ok the speed is not so important, but I still do not like the idea of >>>> executing the code which is not needed to be executed, because I know >>>> the version is the same as was before last restart, so nothing changed. >>> >>> Weren't "clever" optimizations like this what got us into this whole >>> refactoring bussiness in the first place? >> >> I very much agree with Honza. We should always start with something >> stupidly-simply and enhance it later, when it is clear if it is really necessary. >> >> Do not over-engineer it from the very beginning. > > I completely agree with starting stupid and simply and improving in time. > However, are we sure that what Honza proposed is the simple and stupid way? > > Doing config upgrade only when needed and thus not depending on the efficiency > and idempotency of the config upgraders seems to me as *the* stupid and simple > way for upgrade refactoring. How exactly is adding another logic layer on top of the configuration update more simple and stupid? 
> > Martin -- Jan Cholasta From jcholast at redhat.com Tue Mar 3 10:06:38 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 11:06:38 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F58737.8090108@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> <54F58737.8090108@redhat.com> Message-ID: <54F587AE.3090305@redhat.com> Dne 3.3.2015 v 11:04 Petr Spacek napsal(a): > On 3.3.2015 10:58, Martin Kosek wrote: >> On 03/03/2015 09:36 AM, Petr Spacek wrote: >>> On 3.3.2015 09:33, Jan Cholasta wrote: >>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>>> Hello all, >>>>>>>>>>> >>>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 1) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Merge server update commands into the one command >>>>>>>>>> (ipa-server-upgrade) >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>>> "ipa-server-install >>>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>>> upgrade the >>>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>>> >>>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>>> --uninstall", >>>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>>> >>>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>>> "ipa-server-install upgrade" >>>>>>>>>> >>>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>>> "ipa-server-upgrade" >>>>>>>>> >>>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>>> us have >>>>>>>>> independent sets of options, based on what you want to do. >>>>>>>>> >>>>>>>>>> 2) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>> version does >>>>>>>>>> not match >>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>> should be run >>>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>>> update by >>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>> problems on its >>>>>>>>>> own. >>>>>>>>> >>>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>>> even if it >>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>> >>>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>> refactoring installers). 
IMO it's better to always make 100% sure the >>>>>>>> configuration is correct rather than to save a second or two. >>>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>>> >>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>> They are causing us grief in API versioning and I don't see why it >>>>>> would be any different here. >>>>> However you must keep the version because of schema and data upgrade, so >>>>> why not to execute update as one batch instead of doing config upgrade >>>>> all the time, and then data upgrade only if required. >>>> >>>> Because there is no exact mapping between version number and what features are >>>> actually available. A state file is tons better than a single version number. >>>> >>>>> >>>>> Some configuration upgrades, like adding new DNS related services, >>>>> requires new schema, how we can handle this? >>>> >>>> This does not sound right. Could you be more specific? >>>> >>>>> Running schema upgrade every time? >>>>>> >>>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>>> >>>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>>> was only one or two occurences of such bug in the past 3 years (I >>>>>> remember sshd_config), so I don't think you have a strong case here. >>>>> Ok >>>>>> >>>>>>> The user will need to change it manually, but after each restart, >>>>>>> upgrade will change the value back into IPA required configuration which >>>>>>> will not work. >>>>>> >>>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>>> to be dumb like this. >>>>>> >>>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>>> faster then checking each state if was executed. >>>>>> >>>>>> How faster is that, like, few milliseconds? Are you seriously >>>>>> considering this the right optimization in a process that is >>>>>> magnitudes slower? >>>>> Ok the speed is not so important, but I still do not like the idea of >>>>> executing the code which is not needed to be executed, because I know >>>>> the version is the same as was before last restart, so nothing changed. >>>> >>>> Weren't "clever" optimizations like this what got us into this whole >>>> refactoring bussiness in the first place? >>> >>> I very much agree with Honza. We should always start with something >>> stupidly-simply and enhance it later, when it is clear if it is really necessary. >>> >>> Do not over-engineer it from the very beginning. >> >> I completely agree with starting stupid and simply and improving in time. >> However, are we sure that what Honza proposed is the simple and stupid way? >> >> Doing config upgrade only when needed and thus not depending on the efficiency >> and idempotency of the config upgraders seems to me as *the* stupid and simple >> way for upgrade refactoring. > > Maybe I'm missing something but > > if not state.get('dns_forward_zones_supported'): > # upgrade to forward zones here > > seems less error-prone than > > if version < 4.0: > # upgrade to forward zones here > # make me a sandwich > # consult local oracle to guess that other features where backported > # to currently running code > Exactly! 
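(A minimal sketch of the state-flag approach above, assuming a plain JSON-backed store and a made-up file path -- not the real FreeIPA sysupgrade module, whose API may differ.)

import json
import os

STATE_FILE = "/var/lib/ipa/upgrade-state.json"  # hypothetical path, illustration only

def get_state(key, default=False):
    # Return a persisted upgrade flag, or `default` if it was never set.
    if not os.path.exists(STATE_FILE):
        return default
    with open(STATE_FILE) as f:
        return json.load(f).get(key, default)

def set_state(key, value):
    # Persist an upgrade flag so this step is skipped on the next run.
    state = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)
    state[key] = value
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

# Each upgrade step checks its own flag instead of comparing version
# numbers, so re-running the whole upgrader stays cheap and idempotent.
if not get_state('dns_forward_zones_supported'):
    # ... migrate to forward zones here ...
    set_state('dns_forward_zones_supported', True)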
-- Jan Cholasta From jcholast at redhat.com Tue Mar 3 10:09:32 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 11:09:32 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5864D.4050909@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> <54F5852D.6090501@redhat.com> <54F5864D.4050909@redhat.com> Message-ID: <54F5885C.3020506@redhat.com> Dne 3.3.2015 v 11:00 Martin Basti napsal(a): > On 03/03/15 10:55, Jan Cholasta wrote: >> Dne 3.3.2015 v 09:55 Martin Basti napsal(a): >>> On 03/03/15 09:33, Jan Cholasta wrote: >>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>> ... >>>>>>>>>> 2) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>> version does >>>>>>>>>> not match >>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>> should be run >>>>>>>>>> always. It's fast and hence does not need to be optimized like >>>>>>>>>> data >>>>>>>>>> update by >>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>> problems on its >>>>>>>>>> own. >>>>>>>>> >>>>>>>>> I do not agree in this section. Why would you like to run it >>>>>>>>> always, >>>>>>>>> even if it >>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>> >>>>>>>> In the ideal case the installer would be idempotent and upgrade >>>>>>>> would >>>>>>>> be re-running the installer and we should aim to do just that. We >>>>>>>> kind >>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>> refactoring installers). IMO it's better to always make 100% >>>>>>>> sure the >>>>>>>> configuration is correct rather than to save a second or two. >>>>>>> I doesn't like this idea, if user wants to fix something, the one >>>>>>> should >>>>>>> use --skip-version-check option, and the IPA upgrade will be >>>>>>> executed. >>>>>> >>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>> They are causing us grief in API versioning and I don't see why it >>>>>> would be any different here. >>>>> However you must keep the version because of schema and data >>>>> upgrade, so >>>>> why not to execute update as one batch instead of doing config upgrade >>>>> all the time, and then data upgrade only if required. >>>> >>>> Because there is no exact mapping between version number and what >>>> features are actually available. A state file is tons better than a >>>> single version number. >>>> >>>>> >>>>> Some configuration upgrades, like adding new DNS related services, >>>>> requires new schema, how we can handle this? >>>> >>>> This does not sound right. Could you be more specific? >>> at least ipa-dnskeysyncd service, requires updated schema for keys >>> metadata. >>> This service is mandratory for DNS, so it is newly configured during >>> upgrade. 
>>> Now it works because schema update is the first, so during configuration >>> upgrade we have actual schema. >> >> Right, but what's your point? We are not discussing order of updates >> here, I'm perfectly fine with schema updates being done before >> configuration updates. > So you want to run schema update before configuration upgrade every > restart? > OR you want to run schema update if needed based on version, and then > run configuration upgrade? I don't really care, as long as the schema is up-to-date, but I guess there is no harm in always running schema update either. -- Jan Cholasta From abokovoy at redhat.com Tue Mar 3 10:33:19 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 3 Mar 2015 12:33:19 +0200 Subject: [Freeipa-devel] One-way trust design In-Reply-To: <20150303095057.GM23856@redhat.com> References: <20150223160253.GX25455@redhat.com> <20150303095057.GM23856@redhat.com> Message-ID: <20150303103319.GL25455@redhat.com> On Tue, 03 Mar 2015, Jan Pazdziora wrote: >On Mon, Feb 23, 2015 at 06:02:53PM +0200, Alexander Bokovoy wrote: >> trust-related functionality would be limited to IPA admins or TDO >> object in LDAP would have to be more accessible. Given that TDO >> credentials can be used to compromise access to our domain, it is not > >Could you clarify which domain is the "our" domain? >From SMB perspective whole IPA realm is a single domain. > >> advisable to give a wider access to them. >> >> As a side-effect of reducing exposure of TDO credentials, FreeIPA lost >> ability to establish and use one-way trust to Active Directory. The > >"Lost ability" might be confusing -- was removed in 3.1 (?) might be >better. We never had it as a feature so support for that wasn't removed. Rather, we lost ability to add that support. >> purpose of this feature is to regain the one-way trust support, yet >> without giving an elevated access to TDO credentials. > >You might also want to either add a note or a link, explaining why >one-way trust is harder than two-way, IOW, why we lost the one-way >ability when we have the two-way one. I think current text covers it clearly. If you have concrete suggestions, feel free to edit the wiki, it is not locked down. 
:) -- / Alexander Bokovoy From mkosek at redhat.com Tue Mar 3 11:08:23 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 12:08:23 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F587AE.3090305@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> <54F58737.8090108@redhat.com> <54F587AE.3090305@redhat.com> Message-ID: <54F59627.6010105@redhat.com> On 03/03/2015 11:06 AM, Jan Cholasta wrote: > Dne 3.3.2015 v 11:04 Petr Spacek napsal(a): >> On 3.3.2015 10:58, Martin Kosek wrote: >>> On 03/03/2015 09:36 AM, Petr Spacek wrote: >>>> On 3.3.2015 09:33, Jan Cholasta wrote: >>>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>>>> Hello all, >>>>>>>>>>>> >>>>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> 1) >>>>>>>>>>> >>>>>>>>>>> " >>>>>>>>>>> * Merge server update commands into the one command >>>>>>>>>>> (ipa-server-upgrade) >>>>>>>>>>> " >>>>>>>>>>> >>>>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>>>> "ipa-server-install >>>>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>>>> upgrade the >>>>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>>>> >>>>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>>>> --uninstall", >>>>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>>>> >>>>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>>>> "ipa-server-install upgrade" >>>>>>>>>>> >>>>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>>>> "ipa-server-upgrade" >>>>>>>>>> >>>>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>>>> us have >>>>>>>>>> independent sets of options, based on what you want to do. >>>>>>>>>> >>>>>>>>>>> 2) >>>>>>>>>>> >>>>>>>>>>> " >>>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>>> version does >>>>>>>>>>> not match >>>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>>> " >>>>>>>>>>> >>>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>>> should be run >>>>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>>>> update by >>>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>>> problems on its >>>>>>>>>>> own. >>>>>>>>>> >>>>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>>>> even if it >>>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>>> >>>>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>>>> be re-running the installer and we should aim to do just that. 
We kind >>>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>>>>> configuration is correct rather than to save a second or two. >>>>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>>>> >>>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>>> They are causing us grief in API versioning and I don't see why it >>>>>>> would be any different here. >>>>>> However you must keep the version because of schema and data upgrade, so >>>>>> why not to execute update as one batch instead of doing config upgrade >>>>>> all the time, and then data upgrade only if required. >>>>> >>>>> Because there is no exact mapping between version number and what features >>>>> are >>>>> actually available. A state file is tons better than a single version number. >>>>> >>>>>> >>>>>> Some configuration upgrades, like adding new DNS related services, >>>>>> requires new schema, how we can handle this? >>>>> >>>>> This does not sound right. Could you be more specific? >>>>> >>>>>> Running schema upgrade every time? >>>>>>> >>>>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>>>> >>>>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>>>> was only one or two occurences of such bug in the past 3 years (I >>>>>>> remember sshd_config), so I don't think you have a strong case here. >>>>>> Ok >>>>>>> >>>>>>>> The user will need to change it manually, but after each restart, >>>>>>>> upgrade will change the value back into IPA required configuration which >>>>>>>> will not work. >>>>>>> >>>>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>>>> to be dumb like this. >>>>>>> >>>>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>>>> faster then checking each state if was executed. >>>>>>> >>>>>>> How faster is that, like, few milliseconds? Are you seriously >>>>>>> considering this the right optimization in a process that is >>>>>>> magnitudes slower? >>>>>> Ok the speed is not so important, but I still do not like the idea of >>>>>> executing the code which is not needed to be executed, because I know >>>>>> the version is the same as was before last restart, so nothing changed. >>>>> >>>>> Weren't "clever" optimizations like this what got us into this whole >>>>> refactoring bussiness in the first place? >>>> >>>> I very much agree with Honza. We should always start with something >>>> stupidly-simply and enhance it later, when it is clear if it is really >>>> necessary. >>>> >>>> Do not over-engineer it from the very beginning. >>> >>> I completely agree with starting stupid and simply and improving in time. >>> However, are we sure that what Honza proposed is the simple and stupid way? >>> >>> Doing config upgrade only when needed and thus not depending on the efficiency >>> and idempotency of the config upgraders seems to me as *the* stupid and simple >>> way for upgrade refactoring. 
>> >> Maybe I'm missing something but >> >> if not state.get('dns_forward_zones_supported'): >> # upgrade to forward zones here >> >> seems less error-prone than >> >> if version < 4.0: >> # upgrade to forward zones here >> # make me a sandwich >> # consult local oracle to guess that other features where backported >> # to currently running code >> > > Exactly! > Question is, what is the magical state.get('dns_forward_zones_supported') command :-) Is this the sysrestore State Store or is there any complicated schema check and LDAP searches for DNS master zones with forwarders? From jcholast at redhat.com Tue Mar 3 11:15:07 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 12:15:07 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F59627.6010105@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> <54F58737.8090108@redhat.com> <54F587AE.3090305@redhat.com> <54F59627.6010105@redhat.com> Message-ID: <54F597BB.7080709@redhat.com> Dne 3.3.2015 v 12:08 Martin Kosek napsal(a): > On 03/03/2015 11:06 AM, Jan Cholasta wrote: >> Dne 3.3.2015 v 11:04 Petr Spacek napsal(a): >>> On 3.3.2015 10:58, Martin Kosek wrote: >>>> On 03/03/2015 09:36 AM, Petr Spacek wrote: >>>>> On 3.3.2015 09:33, Jan Cholasta wrote: >>>>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>>>>> Hello all, >>>>>>>>>>>>> >>>>>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> 1) >>>>>>>>>>>> >>>>>>>>>>>> " >>>>>>>>>>>> * Merge server update commands into the one command >>>>>>>>>>>> (ipa-server-upgrade) >>>>>>>>>>>> " >>>>>>>>>>>> >>>>>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>>>>> "ipa-server-install >>>>>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>>>>> upgrade the >>>>>>>>>>>> server. Maybe we should bring some consistency here and have one of: >>>>>>>>>>>> >>>>>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>>>>> --uninstall", >>>>>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>>>>> >>>>>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>>>>> "ipa-server-install upgrade" >>>>>>>>>>>> >>>>>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>>>>> "ipa-server-upgrade" >>>>>>>>>>> >>>>>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>>>>> us have >>>>>>>>>>> independent sets of options, based on what you want to do. 
>>>>>>>>>>> >>>>>>>>>>>> 2) >>>>>>>>>>>> >>>>>>>>>>>> " >>>>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>>>> version does >>>>>>>>>>>> not match >>>>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>>>> " >>>>>>>>>>>> >>>>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>>>> should be run >>>>>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>>>>> update by >>>>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>>>> problems on its >>>>>>>>>>>> own. >>>>>>>>>>> >>>>>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>>>>> even if it >>>>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>>>> >>>>>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>>>>>> configuration is correct rather than to save a second or two. >>>>>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>>>>> >>>>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>>>> They are causing us grief in API versioning and I don't see why it >>>>>>>> would be any different here. >>>>>>> However you must keep the version because of schema and data upgrade, so >>>>>>> why not to execute update as one batch instead of doing config upgrade >>>>>>> all the time, and then data upgrade only if required. >>>>>> >>>>>> Because there is no exact mapping between version number and what features >>>>>> are >>>>>> actually available. A state file is tons better than a single version number. >>>>>> >>>>>>> >>>>>>> Some configuration upgrades, like adding new DNS related services, >>>>>>> requires new schema, how we can handle this? >>>>>> >>>>>> This does not sound right. Could you be more specific? >>>>>> >>>>>>> Running schema upgrade every time? >>>>>>>> >>>>>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>>>>> >>>>>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>>>>> was only one or two occurences of such bug in the past 3 years (I >>>>>>>> remember sshd_config), so I don't think you have a strong case here. >>>>>>> Ok >>>>>>>> >>>>>>>>> The user will need to change it manually, but after each restart, >>>>>>>>> upgrade will change the value back into IPA required configuration which >>>>>>>>> will not work. >>>>>>>> >>>>>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>>>>> to be dumb like this. >>>>>>>> >>>>>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>>>>> faster then checking each state if was executed. >>>>>>>> >>>>>>>> How faster is that, like, few milliseconds? Are you seriously >>>>>>>> considering this the right optimization in a process that is >>>>>>>> magnitudes slower? >>>>>>> Ok the speed is not so important, but I still do not like the idea of >>>>>>> executing the code which is not needed to be executed, because I know >>>>>>> the version is the same as was before last restart, so nothing changed. 
>>>>>> >>>>>> Weren't "clever" optimizations like this what got us into this whole >>>>>> refactoring bussiness in the first place? >>>>> >>>>> I very much agree with Honza. We should always start with something >>>>> stupidly-simply and enhance it later, when it is clear if it is really >>>>> necessary. >>>>> >>>>> Do not over-engineer it from the very beginning. >>>> >>>> I completely agree with starting stupid and simply and improving in time. >>>> However, are we sure that what Honza proposed is the simple and stupid way? >>>> >>>> Doing config upgrade only when needed and thus not depending on the efficiency >>>> and idempotency of the config upgraders seems to me as *the* stupid and simple >>>> way for upgrade refactoring. >>> >>> Maybe I'm missing something but >>> >>> if not state.get('dns_forward_zones_supported'): >>> # upgrade to forward zones here >>> >>> seems less error-prone than >>> >>> if version < 4.0: >>> # upgrade to forward zones here >>> # make me a sandwich >>> # consult local oracle to guess that other features where backported >>> # to currently running code >>> >> >> Exactly! >> > > Question is, what is the magical > > state.get('dns_forward_zones_supported') > > command :-) Is this the sysrestore State Store or is there any complicated > schema check and LDAP searches for DNS master zones with forwarders? This is about local configuration strictly, so it's state store check. -- Jan Cholasta From mbasti at redhat.com Tue Mar 3 11:15:24 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 12:15:24 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F58737.8090108@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> <54F58737.8090108@redhat.com> Message-ID: <54F597CC.4010806@redhat.com> On 03/03/15 11:04, Petr Spacek wrote: > On 3.3.2015 10:58, Martin Kosek wrote: >> On 03/03/2015 09:36 AM, Petr Spacek wrote: >>> On 3.3.2015 09:33, Jan Cholasta wrote: >>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>>> Hello all, >>>>>>>>>>> >>>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>>> >>>>>>>>>> 1) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Merge server update commands into the one command >>>>>>>>>> (ipa-server-upgrade) >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>>> "ipa-server-install >>>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>>> upgrade the >>>>>>>>>> server. 
Maybe we should bring some consistency here and have one of: >>>>>>>>>> >>>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>>> --uninstall", >>>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>>> >>>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>>> "ipa-server-install upgrade" >>>>>>>>>> >>>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>>> "ipa-server-upgrade" >>>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>>> us have >>>>>>>>> independent sets of options, based on what you want to do. >>>>>>>>> >>>>>>>>>> 2) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>> version does >>>>>>>>>> not match >>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>> should be run >>>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>>> update by >>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>> problems on its >>>>>>>>>> own. >>>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>>> even if it >>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>>>> configuration is correct rather than to save a second or two. >>>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>> They are causing us grief in API versioning and I don't see why it >>>>>> would be any different here. >>>>> However you must keep the version because of schema and data upgrade, so >>>>> why not to execute update as one batch instead of doing config upgrade >>>>> all the time, and then data upgrade only if required. >>>> Because there is no exact mapping between version number and what features are >>>> actually available. A state file is tons better than a single version number. >>>> >>>>> Some configuration upgrades, like adding new DNS related services, >>>>> requires new schema, how we can handle this? >>>> This does not sound right. Could you be more specific? >>>> >>>>> Running schema upgrade every time? >>>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>>> was only one or two occurences of such bug in the past 3 years (I >>>>>> remember sshd_config), so I don't think you have a strong case here. >>>>> Ok >>>>>>> The user will need to change it manually, but after each restart, >>>>>>> upgrade will change the value back into IPA required configuration which >>>>>>> will not work. >>>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>>> to be dumb like this. >>>>>> >>>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>>> faster then checking each state if was executed. >>>>>> How faster is that, like, few milliseconds? 
Are you seriously >>>>>> considering this the right optimization in a process that is >>>>>> magnitudes slower? >>>>> Ok the speed is not so important, but I still do not like the idea of >>>>> executing the code which is not needed to be executed, because I know >>>>> the version is the same as was before last restart, so nothing changed. >>>> Weren't "clever" optimizations like this what got us into this whole >>>> refactoring bussiness in the first place? >>> I very much agree with Honza. We should always start with something >>> stupidly-simply and enhance it later, when it is clear if it is really necessary. >>> >>> Do not over-engineer it from the very beginning. >> I completely agree with starting stupid and simply and improving in time. >> However, are we sure that what Honza proposed is the simple and stupid way? >> >> Doing config upgrade only when needed and thus not depending on the efficiency >> and idempotency of the config upgraders seems to me as *the* stupid and simple >> way for upgrade refactoring. > Maybe I'm missing something but > > if not state.get('dns_forward_zones_supported'): > # upgrade to forward zones here > > seems less error-prone than > > if version < 4.0: > # upgrade to forward zones here > # make me a sandwich > # consult local oracle to guess that other features where backported > # to currently running code > forward_zones is data upgrade not configuration upgrade. We will still use sysupgrade values in upgrade plugins, I just propose to execute configuration plugins only if version increase, not all the time. If upgrade will be executed, it depends on plugins (sysupgrade state, etc..) -- Martin Basti From mkosek at redhat.com Tue Mar 3 11:17:35 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 12:17:35 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F58783.30408@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F5727D.7020005@redhat.com> <54F585D4.7010003@redhat.com> <54F58783.30408@redhat.com> Message-ID: <54F5984F.7010506@redhat.com> On 03/03/2015 11:05 AM, Jan Cholasta wrote: > Dne 3.3.2015 v 10:58 Martin Kosek napsal(a): >> On 03/03/2015 09:36 AM, Petr Spacek wrote: >>> On 3.3.2015 09:33, Jan Cholasta wrote: >>>> Dne 3.3.2015 v 09:06 Martin Basti napsal(a): >>>>> On 03/03/15 07:31, Jan Cholasta wrote: >>>>>> Dne 2.3.2015 v 13:51 Martin Basti napsal(a): >>>>>>> On 02/03/15 13:12, Jan Cholasta wrote: >>>>>>>> Dne 2.3.2015 v 12:23 Martin Kosek napsal(a): >>>>>>>>> On 03/02/2015 07:49 AM, Jan Cholasta wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Dne 24.2.2015 v 19:10 Martin Basti napsal(a): >>>>>>>>>>> Hello all, >>>>>>>>>>> >>>>>>>>>>> please read the design page, any objections/suggestions appreciated >>>>>>>>>>> http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> 1) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Merge server update commands into the one command >>>>>>>>>> (ipa-server-upgrade) >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> So there is "ipa-server-install" to install the server, >>>>>>>>>> "ipa-server-install >>>>>>>>>> --uninstall" to uninstall the server and "ipa-server-upgrade" to >>>>>>>>>> upgrade the >>>>>>>>>> server. 
Maybe we should bring some consistency here and have one of: >>>>>>>>>> >>>>>>>>>> a) "ipa-server-install [--install]", "ipa-server-install >>>>>>>>>> --uninstall", >>>>>>>>>> "ipa-server-install --upgrade" >>>>>>>>>> >>>>>>>>>> b) "ipa-server-install [install]", "ipa-server-install uninstall", >>>>>>>>>> "ipa-server-install upgrade" >>>>>>>>>> >>>>>>>>>> c) "ipa-server-install", "ipa-server-uninstall", >>>>>>>>>> "ipa-server-upgrade" >>>>>>>>> >>>>>>>>> Long term, I think we want C. Besides other advantages, it will let >>>>>>>>> us have >>>>>>>>> independent sets of options, based on what you want to do. >>>>>>>>> >>>>>>>>>> 2) >>>>>>>>>> >>>>>>>>>> " >>>>>>>>>> * Prevent to run IPA service, if code version and configuration >>>>>>>>>> version does >>>>>>>>>> not match >>>>>>>>>> * ipactl should execute ipa-server-upgrade if needed >>>>>>>>>> " >>>>>>>>>> >>>>>>>>>> There should be no configuration version, configuration update >>>>>>>>>> should be run >>>>>>>>>> always. It's fast and hence does not need to be optimized like data >>>>>>>>>> update by >>>>>>>>>> using a monolithic version number, which brings more than a few >>>>>>>>>> problems on its >>>>>>>>>> own. >>>>>>>>> >>>>>>>>> I do not agree in this section. Why would you like to run it always, >>>>>>>>> even if it >>>>>>>>> was fast? No run is still faster than fast run. >>>>>>>> >>>>>>>> In the ideal case the installer would be idempotent and upgrade would >>>>>>>> be re-running the installer and we should aim to do just that. We kind >>>>>>>> of do that already, but there is a lot of code duplication in >>>>>>>> installers and ipa-upgradeconfig (I would like to fix that when >>>>>>>> refactoring installers). IMO it's better to always make 100% sure the >>>>>>>> configuration is correct rather than to save a second or two. >>>>>>> I doesn't like this idea, if user wants to fix something, the one should >>>>>>> use --skip-version-check option, and the IPA upgrade will be executed. >>>>>> >>>>>> Well, what I don't like is dealing with meaningless version numbers. >>>>>> They are causing us grief in API versioning and I don't see why it >>>>>> would be any different here. >>>>> However you must keep the version because of schema and data upgrade, so >>>>> why not to execute update as one batch instead of doing config upgrade >>>>> all the time, and then data upgrade only if required. >>>> >>>> Because there is no exact mapping between version number and what features are >>>> actually available. A state file is tons better than a single version number. >>>> >>>>> >>>>> Some configuration upgrades, like adding new DNS related services, >>>>> requires new schema, how we can handle this? >>>> >>>> This does not sound right. Could you be more specific? >>>> >>>>> Running schema upgrade every time? >>>>>> >>>>>>> What if a service changes in a way, the IPA configuration will not work? >>>>>> >>>>>> Then it's a bug and needs to be fixed, like any other bug. IIRC there >>>>>> was only one or two occurences of such bug in the past 3 years (I >>>>>> remember sshd_config), so I don't think you have a strong case here. >>>>> Ok >>>>>> >>>>>>> The user will need to change it manually, but after each restart, >>>>>>> upgrade will change the value back into IPA required configuration which >>>>>>> will not work. >>>>>> >>>>>> Says who? It's our code, we can do whatever we want, it doesn't have >>>>>> to be dumb like this. 
>>>>>> >>>>>>> Yes, we have upgrade state file, but then the comparing of one value is >>>>>>> faster then checking each state if was executed. >>>>>> >>>>>> How faster is that, like, few milliseconds? Are you seriously >>>>>> considering this the right optimization in a process that is >>>>>> magnitudes slower? >>>>> Ok the speed is not so important, but I still do not like the idea of >>>>> executing the code which is not needed to be executed, because I know >>>>> the version is the same as was before last restart, so nothing changed. >>>> >>>> Weren't "clever" optimizations like this what got us into this whole >>>> refactoring bussiness in the first place? >>> >>> I very much agree with Honza. We should always start with something >>> stupidly-simply and enhance it later, when it is clear if it is really >>> necessary. >>> >>> Do not over-engineer it from the very beginning. >> >> I completely agree with starting stupid and simply and improving in time. >> However, are we sure that what Honza proposed is the simple and stupid way? >> >> Doing config upgrade only when needed and thus not depending on the efficiency >> and idempotency of the config upgraders seems to me as *the* stupid and simple >> way for upgrade refactoring. > > How exactly is adding another logic layer on top of the configuration update > more simple and stupid? Maybe it isn't. I must admit I am getting little confused with this thread. If you envision the (config) upgrades to work differently than in the design page, can you please summarize your proposal maybe even in a separate thread so that we can assess it? Maybe it makes more sense, but right now it is difficult to follow the big picture in this thread. Thanks, Martin From rcritten at redhat.com Tue Mar 3 13:32:16 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 03 Mar 2015 08:32:16 -0500 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54F569A9.6010609@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> <54F47E15.8060700@redhat.com> <54F4818E.1060408@redhat.com> <54F569A9.6010609@redhat.com> Message-ID: <54F5B7E0.7000200@redhat.com> Martin Babinsky wrote: > On 03/02/2015 04:28 PM, Rob Crittenden wrote: >> Petr Vobornik wrote: >>>>>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 >>> >>> this patch seems to be a bit forgotten. >>> >>> It works, looks fine. >>> >>> One minor issue: trailing whitespaces in the man page. >>> >>> I also wonder if it shouldn't be used in other tools which call kinit >>> with keytab: >>> * ipa-client-automount:434 >>> * ipa-client-install:2591 (this usage should be fine since it's used for >>> server installation) >>> * dcerpc.py:545 >>> * rpcserver.py: 971, 981 (armor for web ui forms base auth) >>> >>> Most importantly the ipa-client-automount because it's called from >>> ipa-client-install (if location is specified) and therefore it might >>> fail during client installation. >>> >>> Or also, kinit call with admin creadentials worked for the user but I >>> wonder if it was just a coincidence and may break under slightly >>> different but similar conditions. >> >> I think that's a fine idea. 
In fact there is already a function that >> could be extended, kinit_hostprincipal(). >> >> rob >> > > So in principle we could add multiple TGT retries to > "kinit_hostprincipal()" and then plug this function to all the places > Petr mentioned in order to provide this functionality each time TGT is > requested using keytab. > > Do I understand it correctly? > Honestly I think I'd only do the retries on client installation. I don't know that the other uses would really benefit or need this. But this is an opportunity to consolidate some code, so I guess the approach I'd take is to add an option to kinit_hostprincipal of retries=0 so that only a single kinit is done. The client installers would pass in some value. This change is quite a bit more invasive but it's also early in the release cycle so the risk will be spread out. rob From simo at redhat.com Tue Mar 3 14:16:06 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 03 Mar 2015 09:16:06 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F4A3D2.50207@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> Message-ID: <1425392166.13900.18.camel@willson.usersys.redhat.com> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: > On 02/03/15 18:28, Martin Kosek wrote: > > On 03/02/2015 06:12 PM, Martin Basti wrote: > >> On 02/03/15 15:43, Rob Crittenden wrote: > >>> Martin Basti wrote: > > ... > >>> But you haven't explained any case why LDAPI would fail. If LDAPI fails > >>> then you've got more serious problems that I'm not sure binding as DM is > >>> going to solve. > >>> > >>> The only case where DM would be handy IMHO is either some worst case > >>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you > >>> want to allow running LDAP updates as non-root. > >> I don't know cases when LDAPI would failed, except the case LDAPI is > >> misconfigured by user, or disabled by user. > > Wasn't LDAPI needed for the DM password less upgrade so that upgrader could > > simply bind as root with EXTERNAL auth? > We can do upgrade in both way, using LDAPI or using DM password, > preferred is LDAPI. > Question is, what is the use case for using DM password instead of LDAPI > during upgrade. There is no use case for using the DM password. > >> It is not big effort to keep both DM binding and LDAPI in code. A user can > >> always found som unexpected use case for LDAP update with DM password. > >> > >>>>> On ipactl, would it be overkill if there is a tty to prompt the user to > >>>>> upgrade? In a non-container world it might be surprising to have an > >>>>> upgrade happen esp since upgrades take a while. > >>>> In non-container enviroment, we can still use upgrade during RPM > >>>> transaction. > >>>> > >>>> So you suggest not to do update automaticaly, just write Error the IPA > >>>> upgrade is required? > >>> People do all sorts of strange things. Installing the packages with > >>> --no-script isn't in the range of impossible. A prompt, and I'm not > >>> saying it's a great idea, is 2 lines of code. > >>> > >>> I guess it just makes me nervous. > >> So lets summarize this: > >> * DO upgrade if possible during RPM transaction > > Umm, I thought we want to get rid of running upgrade during RPM transaction. 
It > > is extremely difficult to debug upgrade stuck during RPM transaction, it also > > makes RPM upgrade run longer than needed. It also makes admins nervous when > > their rpm upgrade is suddenly waiting right before the end. I even see the > > fingers slowly reaching to CTRL+C combo... (You can see the consequences) > People are used to have IPA upgraded and ready after RPM upgrade. > They may be shocked if IPA services will be in shutdown state after RPM > transaction. This is true, stopping IPA and requiring manual intervention is not ok. > I have no more objections. > > > > >> * ipactl will NOT run upgrade, just print Error: 'please upgrade ....' > >> * User has to run ipa-server-upgrade manually > >> > >> Does I understand it correctly? > >>>>> With --skip-version-check what sorts of problems can we foresee? I > >>>>> assume a big warning will be added to at least the man page, if not > >>>>> the cli? > >>>> For this big warning everywhere. > >>>> The main problem is try to run older IPA with newer data. In containers > >>>> the problem is to run different platform specific versions which differ > >>>> in functionality/bugfixes etc.. > >>> Ok, pretty much the things I was thinking as well. A scary warning > >>> definitely seems warranted. > >>> > >>>>> Where does platform come from? I'm wondering how Debian will handle this. > >>>> platform is derived from ipaplatform file which is used with the > >>>> particular build. Debian should have own platform file. > >>> Ok, I'd add that detail to the design. > >>> > >>> rob > >> > > -- Simo Sorce * Red Hat, Inc * New York From mbabinsk at redhat.com Tue Mar 3 14:24:16 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Tue, 03 Mar 2015 15:24:16 +0100 Subject: [Freeipa-devel] [PATCH 0001] ipa-client-install: attempt to get host TGT several times before aborting client installation In-Reply-To: <54F5B7E0.7000200@redhat.com> References: <54B3FA19.6000903@redhat.com> <54B4D5B4.3050301@redhat.com> <54B4DB7A.4080307@redhat.com> <54B53E68.60000@redhat.com> <54B690EE.3070604@redhat.com> <54B69EDD.2000800@redhat.com> <54DE4DD8.7060206@redhat.com> <54F47E15.8060700@redhat.com> <54F4818E.1060408@redhat.com> <54F569A9.6010609@redhat.com> <54F5B7E0.7000200@redhat.com> Message-ID: <54F5C410.1000003@redhat.com> On 03/03/2015 02:32 PM, Rob Crittenden wrote: > Martin Babinsky wrote: >> On 03/02/2015 04:28 PM, Rob Crittenden wrote: >>> Petr Vobornik wrote: >>>>>>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 >>>> >>>> this patch seems to be a bit forgotten. >>>> >>>> It works, looks fine. >>>> >>>> One minor issue: trailing whitespaces in the man page. >>>> >>>> I also wonder if it shouldn't be used in other tools which call kinit >>>> with keytab: >>>> * ipa-client-automount:434 >>>> * ipa-client-install:2591 (this usage should be fine since it's used for >>>> server installation) >>>> * dcerpc.py:545 >>>> * rpcserver.py: 971, 981 (armor for web ui forms base auth) >>>> >>>> Most importantly the ipa-client-automount because it's called from >>>> ipa-client-install (if location is specified) and therefore it might >>>> fail during client installation. >>>> >>>> Or also, kinit call with admin creadentials worked for the user but I >>>> wonder if it was just a coincidence and may break under slightly >>>> different but similar conditions. >>> >>> I think that's a fine idea. In fact there is already a function that >>> could be extended, kinit_hostprincipal(). 
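A standalone sketch of the retry idea discussed in this subthread; the real change would extend kinit_hostprincipal() in ipapython, so the function name, parameters and defaults below are assumptions rather than the existing API:

    import subprocess
    import time

    def kinit_from_keytab(principal, keytab, ccache, retries=0, delay=5):
        """Get a TGT from a keytab, optionally retrying a few times.
        Retries help right after enrollment, when the KDC may not yet
        know about the freshly created host entry."""
        cmd = ['kinit', '-k', '-t', keytab, principal]
        env = {'KRB5CCNAME': ccache, 'PATH': '/usr/bin:/bin'}
        for attempt in range(retries + 1):
            if subprocess.call(cmd, env=env) == 0:
                return
            if attempt < retries:
                time.sleep(delay)
        raise RuntimeError('kinit for %s failed after %d attempt(s)'
                           % (principal, retries + 1))

    # the client installer would pass e.g. retries=4, every other caller
    # keeps the current single-attempt behaviour (retries=0)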
>>> >>> rob >>> >> >> So in principle we could add multiple TGT retries to >> "kinit_hostprincipal()" and then plug this function to all the places >> Petr mentioned in order to provide this functionality each time TGT is >> requested using keytab. >> >> Do I understand it correctly? >> > > Honestly I think I'd only do the retries on client installation. I > don't know that the other uses would really benefit or need this. > > But this is an opportunity to consolidate some code, so I guess the > approach I'd take is to add an option to kinit_hostprincipal of > retries=0 so that only a single kinit is done. The client installers > would pass in some value. > > This change is quite a bit more invasive but it's also early in the > release cycle so the risk will be spread out. > > rob > Ok I will try to implement these changes and submit them as separate patchset. -- Martin^3 Babinsky From pspacek at redhat.com Tue Mar 3 14:25:16 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 03 Mar 2015 15:25:16 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5868D.9080500@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54F407EA.9040101@redhat.com> <54F44844.8070106@redhat.com> <54F45395.4020209@redhat.com> <54F45CE5.1000108@redhat.com> <54F5552C.9040508@redhat.com> <54F56B97.3080709@redhat.com> <54F571DA.6060801@redhat.com> <54F57718.2010607@redhat.com> <54F5851E.3040500@redhat.com> <54F5868D.9080500@redhat.com> Message-ID: <54F5C44C.2020608@redhat.com> On 3.3.2015 11:01, Jan Cholasta wrote: > I would very much prefer to do it the other way around, so that most bugs in > the code are caught early, instead of hidden behind the version comparison. +1 -- Petr^2 Spacek From mkosek at redhat.com Tue Mar 3 14:33:46 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 15:33:46 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <1425392166.13900.18.camel@willson.usersys.redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> Message-ID: <54F5C64A.7010601@redhat.com> On 03/03/2015 03:16 PM, Simo Sorce wrote: > On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >> On 02/03/15 18:28, Martin Kosek wrote: >>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>> Martin Basti wrote: >>> ... >>>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails >>>>> then you've got more serious problems that I'm not sure binding as DM is >>>>> going to solve. >>>>> >>>>> The only case where DM would be handy IMHO is either some worst case >>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >>>>> want to allow running LDAP updates as non-root. >>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>> misconfigured by user, or disabled by user. >>> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could >>> simply bind as root with EXTERNAL auth? >> We can do upgrade in both way, using LDAPI or using DM password, >> preferred is LDAPI. >> Question is, what is the use case for using DM password instead of LDAPI >> during upgrade. 
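The LDAPI path being discussed here amounts to a SASL EXTERNAL bind over the directory server's UNIX socket (autobind maps root to Directory Manager), so no password has to be asked for or stored. A minimal python-ldap sketch; the socket path is an example and depends on the DS instance name:

    import ldap
    import ldap.sasl

    LDAPI_URI = 'ldapi://%2fvar%2frun%2fslapd-EXAMPLE-COM.socket'

    conn = ldap.initialize(LDAPI_URI)
    # running as root, autobind maps the connection to cn=Directory Manager
    conn.sasl_interactive_bind_s('', ldap.sasl.external())
    print(conn.whoami_s())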
> > There is no use case for using the DM password. +1, so we will only use LDAPI and ditch DM password options and querying that we now have with ipa-ldap-updater? > >>>> It is not big effort to keep both DM binding and LDAPI in code. A user can >>>> always found som unexpected use case for LDAP update with DM password. >>>> >>>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>>>>> upgrade? In a non-container world it might be surprising to have an >>>>>>> upgrade happen esp since upgrades take a while. >>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>> transaction. >>>>>> >>>>>> So you suggest not to do update automaticaly, just write Error the IPA >>>>>> upgrade is required? >>>>> People do all sorts of strange things. Installing the packages with >>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>> saying it's a great idea, is 2 lines of code. >>>>> >>>>> I guess it just makes me nervous. >>>> So lets summarize this: >>>> * DO upgrade if possible during RPM transaction >>> Umm, I thought we want to get rid of running upgrade during RPM transaction. It >>> is extremely difficult to debug upgrade stuck during RPM transaction, it also >>> makes RPM upgrade run longer than needed. It also makes admins nervous when >>> their rpm upgrade is suddenly waiting right before the end. I even see the >>> fingers slowly reaching to CTRL+C combo... (You can see the consequences) >> People are used to have IPA upgraded and ready after RPM upgrade. >> They may be shocked if IPA services will be in shutdown state after RPM >> transaction. > > This is true, stopping IPA and requiring manual intervention is not ok. What is the plan then? Keep upgrades done during RPM transaction? Note that RPM transaction was already stuck several times because IPA, or rather DS, deadlocked. We also need to address https://fedorahosted.org/freeipa/ticket/3849. The original plan was to do the upgrade during ipactl start, this would fix this ticket. Alternatively, should we remove the upgrade from RPM scriptlet and only call asynchronous "systemctl restart ipa.service" that would trigger the upgrade in separate process and log results in ipa.service? Martin From mbasti at redhat.com Tue Mar 3 14:40:30 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 15:40:30 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5C64A.7010601@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> Message-ID: <54F5C7DE.9020701@redhat.com> On 03/03/15 15:33, Martin Kosek wrote: > On 03/03/2015 03:16 PM, Simo Sorce wrote: >> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>> On 02/03/15 18:28, Martin Kosek wrote: >>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>> Martin Basti wrote: >>>> ... >>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails >>>>>> then you've got more serious problems that I'm not sure binding as DM is >>>>>> going to solve. 
>>>>>> >>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >>>>>> want to allow running LDAP updates as non-root. >>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>> misconfigured by user, or disabled by user. >>>> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could >>>> simply bind as root with EXTERNAL auth? >>> We can do upgrade in both way, using LDAPI or using DM password, >>> preferred is LDAPI. >>> Question is, what is the use case for using DM password instead of LDAPI >>> during upgrade. >> There is no use case for using the DM password. > +1, so we will only use LDAPI and ditch DM password options and querying that > we now have with ipa-ldap-updater? > >>>>> It is not big effort to keep both DM binding and LDAPI in code. A user can >>>>> always found som unexpected use case for LDAP update with DM password. >>>>> >>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>>>>>> upgrade? In a non-container world it might be surprising to have an >>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>> transaction. >>>>>>> >>>>>>> So you suggest not to do update automaticaly, just write Error the IPA >>>>>>> upgrade is required? >>>>>> People do all sorts of strange things. Installing the packages with >>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>> saying it's a great idea, is 2 lines of code. >>>>>> >>>>>> I guess it just makes me nervous. >>>>> So lets summarize this: >>>>> * DO upgrade if possible during RPM transaction >>>> Umm, I thought we want to get rid of running upgrade during RPM transaction. It >>>> is extremely difficult to debug upgrade stuck during RPM transaction, it also >>>> makes RPM upgrade run longer than needed. It also makes admins nervous when >>>> their rpm upgrade is suddenly waiting right before the end. I even see the >>>> fingers slowly reaching to CTRL+C combo... (You can see the consequences) >>> People are used to have IPA upgraded and ready after RPM upgrade. >>> They may be shocked if IPA services will be in shutdown state after RPM >>> transaction. >> This is true, stopping IPA and requiring manual intervention is not ok. > What is the plan then? Keep upgrades done during RPM transaction? Note that RPM > transaction was already stuck several times because IPA, or rather DS, deadlocked. > > We also need to address https://fedorahosted.org/freeipa/ticket/3849. The > original plan was to do the upgrade during ipactl start, this would fix this > ticket. Alternatively, should we remove the upgrade from RPM scriptlet and only > call asynchronous "systemctl restart ipa.service" that would trigger the > upgrade in separate process and log results in ipa.service? > > Martin The plan is do upgrade during RPM transaction if possible. 
If not then ipactl start, will show warning for user to do manual upgrade (Rob wanted it in this way, not doing auto upgrade by ipactl) So the fedup case is: RPM upgrade failed, ipactl start will detect version mismatch, show error and prompt user to run ipa-server-upgrade -- Martin Basti From simo at redhat.com Tue Mar 3 14:49:26 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 03 Mar 2015 09:49:26 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5C64A.7010601@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> Message-ID: <1425394166.13900.22.camel@willson.usersys.redhat.com> On Tue, 2015-03-03 at 15:33 +0100, Martin Kosek wrote: > On 03/03/2015 03:16 PM, Simo Sorce wrote: > > On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: > >> On 02/03/15 18:28, Martin Kosek wrote: > >>> On 03/02/2015 06:12 PM, Martin Basti wrote: > >>>> On 02/03/15 15:43, Rob Crittenden wrote: > >>>>> Martin Basti wrote: > >>> ... > >>>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails > >>>>> then you've got more serious problems that I'm not sure binding as DM is > >>>>> going to solve. > >>>>> > >>>>> The only case where DM would be handy IMHO is either some worst case > >>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you > >>>>> want to allow running LDAP updates as non-root. > >>>> I don't know cases when LDAPI would failed, except the case LDAPI is > >>>> misconfigured by user, or disabled by user. > >>> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could > >>> simply bind as root with EXTERNAL auth? > >> We can do upgrade in both way, using LDAPI or using DM password, > >> preferred is LDAPI. > >> Question is, what is the use case for using DM password instead of LDAPI > >> during upgrade. > > > > There is no use case for using the DM password. > > +1, so we will only use LDAPI and ditch DM password options and querying that > we now have with ipa-ldap-updater? > > > > >>>> It is not big effort to keep both DM binding and LDAPI in code. A user can > >>>> always found som unexpected use case for LDAP update with DM password. > >>>> > >>>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to > >>>>>>> upgrade? In a non-container world it might be surprising to have an > >>>>>>> upgrade happen esp since upgrades take a while. > >>>>>> In non-container enviroment, we can still use upgrade during RPM > >>>>>> transaction. > >>>>>> > >>>>>> So you suggest not to do update automaticaly, just write Error the IPA > >>>>>> upgrade is required? > >>>>> People do all sorts of strange things. Installing the packages with > >>>>> --no-script isn't in the range of impossible. A prompt, and I'm not > >>>>> saying it's a great idea, is 2 lines of code. > >>>>> > >>>>> I guess it just makes me nervous. > >>>> So lets summarize this: > >>>> * DO upgrade if possible during RPM transaction > >>> Umm, I thought we want to get rid of running upgrade during RPM transaction. It > >>> is extremely difficult to debug upgrade stuck during RPM transaction, it also > >>> makes RPM upgrade run longer than needed. 
It also makes admins nervous when > >>> their rpm upgrade is suddenly waiting right before the end. I even see the > >>> fingers slowly reaching to CTRL+C combo... (You can see the consequences) > >> People are used to have IPA upgraded and ready after RPM upgrade. > >> They may be shocked if IPA services will be in shutdown state after RPM > >> transaction. > > > > This is true, stopping IPA and requiring manual intervention is not ok. > > What is the plan then? Keep upgrades done during RPM transaction? Note that RPM > transaction was already stuck several times because IPA, or rather DS, deadlocked. > > We also need to address https://fedorahosted.org/freeipa/ticket/3849. The > original plan was to do the upgrade during ipactl start, this would fix this > ticket. Alternatively, should we remove the upgrade from RPM scriptlet and only > call asynchronous "systemctl restart ipa.service" that would trigger the > upgrade in separate process and log results in ipa.service? It's a hard problem. First of all DS should never deadlock, we shouldn't change all our process because one component has a bug If DS deadlocks, it will do so whether it is run in RPM or not, we need to add some way to terminate the upgrade ourselves if it lasts too long I guess. The problem with an asynchronous restart is ... when do you do that ? What triggers it ? Simo. -- Simo Sorce * Red Hat, Inc * New York From jcholast at redhat.com Tue Mar 3 14:49:42 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 15:49:42 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances Message-ID: <54F5CA06.4040609@redhat.com> Hi, the attached patches provide an attempt to fix . Patch 401 serves as an example and modifies ipa-advise to use its own API instance for Advice plugins. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-399-ipalib-Allow-multiple-API-instances.patch Type: text/x-patch Size: 17374 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-400-ipalib-Move-plugin-package-setup-to-ipalib-specific-.patch Type: text/x-patch Size: 2800 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-401-advise-Add-separate-API-object-for-ipa-advise.patch Type: text/x-patch Size: 10546 bytes Desc: not available URL: From simo at redhat.com Tue Mar 3 14:52:26 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 03 Mar 2015 09:52:26 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5C7DE.9020701@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> Message-ID: <1425394346.13900.24.camel@willson.usersys.redhat.com> On Tue, 2015-03-03 at 15:40 +0100, Martin Basti wrote: > On 03/03/15 15:33, Martin Kosek wrote: > > On 03/03/2015 03:16 PM, Simo Sorce wrote: > >> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: > >>> On 02/03/15 18:28, Martin Kosek wrote: > >>>> On 03/02/2015 06:12 PM, Martin Basti wrote: > >>>>> On 02/03/15 15:43, Rob Crittenden wrote: > >>>>>> Martin Basti wrote: > >>>> ... > >>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails > >>>>>> then you've got more serious problems that I'm not sure binding as DM is > >>>>>> going to solve. > >>>>>> > >>>>>> The only case where DM would be handy IMHO is either some worst case > >>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you > >>>>>> want to allow running LDAP updates as non-root. > >>>>> I don't know cases when LDAPI would failed, except the case LDAPI is > >>>>> misconfigured by user, or disabled by user. > >>>> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could > >>>> simply bind as root with EXTERNAL auth? > >>> We can do upgrade in both way, using LDAPI or using DM password, > >>> preferred is LDAPI. > >>> Question is, what is the use case for using DM password instead of LDAPI > >>> during upgrade. > >> There is no use case for using the DM password. > > +1, so we will only use LDAPI and ditch DM password options and querying that > > we now have with ipa-ldap-updater? > > > >>>>> It is not big effort to keep both DM binding and LDAPI in code. A user can > >>>>> always found som unexpected use case for LDAP update with DM password. > >>>>> > >>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to > >>>>>>>> upgrade? In a non-container world it might be surprising to have an > >>>>>>>> upgrade happen esp since upgrades take a while. > >>>>>>> In non-container enviroment, we can still use upgrade during RPM > >>>>>>> transaction. > >>>>>>> > >>>>>>> So you suggest not to do update automaticaly, just write Error the IPA > >>>>>>> upgrade is required? > >>>>>> People do all sorts of strange things. Installing the packages with > >>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not > >>>>>> saying it's a great idea, is 2 lines of code. > >>>>>> > >>>>>> I guess it just makes me nervous. > >>>>> So lets summarize this: > >>>>> * DO upgrade if possible during RPM transaction > >>>> Umm, I thought we want to get rid of running upgrade during RPM transaction. It > >>>> is extremely difficult to debug upgrade stuck during RPM transaction, it also > >>>> makes RPM upgrade run longer than needed. 
It also makes admins nervous when > >>>> their rpm upgrade is suddenly waiting right before the end. I even see the > >>>> fingers slowly reaching to CTRL+C combo... (You can see the consequences) > >>> People are used to have IPA upgraded and ready after RPM upgrade. > >>> They may be shocked if IPA services will be in shutdown state after RPM > >>> transaction. > >> This is true, stopping IPA and requiring manual intervention is not ok. > > What is the plan then? Keep upgrades done during RPM transaction? Note that RPM > > transaction was already stuck several times because IPA, or rather DS, deadlocked. > > > > We also need to address https://fedorahosted.org/freeipa/ticket/3849. The > > original plan was to do the upgrade during ipactl start, this would fix this > > ticket. Alternatively, should we remove the upgrade from RPM scriptlet and only > > call asynchronous "systemctl restart ipa.service" that would trigger the > > upgrade in separate process and log results in ipa.service? > > > > Martin > The plan is do upgrade during RPM transaction if possible. If not then > ipactl start, will show warning for user to do manual upgrade (Rob > wanted it in this way, not doing auto upgrade by ipactl) > > So the fedup case is: RPM upgrade failed, ipactl start will detect > version mismatch, show error and prompt user to run ipa-server-upgrade I think this is wrong, sorry. It means unattended installs will just fail to restart and require manual intervention. At the very least this process needs to be conditional, and upgrade needs to be done automatically. If the admin insist he can set something in default.conf to block autoupgrades or something. Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Tue Mar 3 14:55:26 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 15:55:26 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <1425394166.13900.22.camel@willson.usersys.redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <1425394166.13900.22.camel@willson.usersys.redhat.com> Message-ID: <54F5CB5E.9010405@redhat.com> On 03/03/2015 03:49 PM, Simo Sorce wrote: > On Tue, 2015-03-03 at 15:33 +0100, Martin Kosek wrote: >> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>> Martin Basti wrote: >>>>> ... >>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI fails >>>>>>> then you've got more serious problems that I'm not sure binding as DM is >>>>>>> going to solve. >>>>>>> >>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or if you >>>>>>> want to allow running LDAP updates as non-root. >>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>> misconfigured by user, or disabled by user. >>>>> Wasn't LDAPI needed for the DM password less upgrade so that upgrader could >>>>> simply bind as root with EXTERNAL auth? 
>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>> preferred is LDAPI. >>>> Question is, what is the use case for using DM password instead of LDAPI >>>> during upgrade. >>> >>> There is no use case for using the DM password. >> >> +1, so we will only use LDAPI and ditch DM password options and querying that >> we now have with ipa-ldap-updater? >> >>> >>>>>> It is not big effort to keep both DM binding and LDAPI in code. A user can >>>>>> always found som unexpected use case for LDAP update with DM password. >>>>>> >>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the user to >>>>>>>>> upgrade? In a non-container world it might be surprising to have an >>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>> transaction. >>>>>>>> >>>>>>>> So you suggest not to do update automaticaly, just write Error the IPA >>>>>>>> upgrade is required? >>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>> >>>>>>> I guess it just makes me nervous. >>>>>> So lets summarize this: >>>>>> * DO upgrade if possible during RPM transaction >>>>> Umm, I thought we want to get rid of running upgrade during RPM transaction. It >>>>> is extremely difficult to debug upgrade stuck during RPM transaction, it also >>>>> makes RPM upgrade run longer than needed. It also makes admins nervous when >>>>> their rpm upgrade is suddenly waiting right before the end. I even see the >>>>> fingers slowly reaching to CTRL+C combo... (You can see the consequences) >>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>> They may be shocked if IPA services will be in shutdown state after RPM >>>> transaction. >>> >>> This is true, stopping IPA and requiring manual intervention is not ok. >> >> What is the plan then? Keep upgrades done during RPM transaction? Note that RPM >> transaction was already stuck several times because IPA, or rather DS, deadlocked. >> >> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >> original plan was to do the upgrade during ipactl start, this would fix this >> ticket. Alternatively, should we remove the upgrade from RPM scriptlet and only >> call asynchronous "systemctl restart ipa.service" that would trigger the >> upgrade in separate process and log results in ipa.service? > > It's a hard problem. > First of all DS should never deadlock, we shouldn't change all our > process because one component has a bug > If DS deadlocks, it will do so whether it is run in RPM or not, we need > to add some way to terminate the upgrade ourselves if it lasts too long > I guess. > > The problem with an asynchronous restart is ... when do you do that ? > What triggers it ? systemctl restart ipa.service --no-block :-) If there is no better place for the upgrade outside of RPM transaction, I am fine, as long as ipactl writes the warning for FedUp users, as Martin mentioned, to make sure they do not run un-upgraded IPA. 
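To make the ipactl part concrete, the check could be as small as comparing the packaged version with whatever the last successful upgrade recorded, and refusing to start with a pointer to ipa-server-upgrade. A rough sketch; the two file locations are illustrative only, since where the values really live (state file, LDAP, VENDOR_VERSION) is exactly what is being discussed in this thread:

    import sys

    CODE_VERSION_FILE = '/usr/share/ipa/version'      # shipped by the package
    DATA_VERSION_FILE = '/var/lib/ipa/data.version'   # written after upgrade

    def read_version(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except IOError:
            return None

    def check_upgrade_needed():
        code = read_version(CODE_VERSION_FILE)
        data = read_version(DATA_VERSION_FILE)
        if code != data:
            sys.exit("IPA data version %s does not match installed version %s.\n"
                     "Run 'ipa-server-upgrade' before starting IPA services."
                     % (data, code))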
Martin From mkosek at redhat.com Tue Mar 3 15:01:10 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 16:01:10 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F5CA06.4040609@redhat.com> References: <54F5CA06.4040609@redhat.com> Message-ID: <54F5CCB6.5000109@redhat.com> On 03/03/2015 03:49 PM, Jan Cholasta wrote: > Hi, > > the attached patches provide an attempt to fix > . > > Patch 401 serves as an example and modifies ipa-advise to use its own API > instance for Advice plugins. > > Honza Thanks. At least patches 399 and 400 look reasonable short for 4.2. So with these patches, could we also get rid of temporary_ldap2_connection we have in ipa-replica-install? Petr3 may have other examples he met in the past... Martin From rcritten at redhat.com Tue Mar 3 15:04:08 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 03 Mar 2015 10:04:08 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5C7DE.9020701@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> Message-ID: <54F5CD68.2050801@redhat.com> Martin Basti wrote: > On 03/03/15 15:33, Martin Kosek wrote: >> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>> Martin Basti wrote: >>>>> ... >>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI >>>>>>> fails >>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>> as DM is >>>>>>> going to solve. >>>>>>> >>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>> if you >>>>>>> want to allow running LDAP updates as non-root. >>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>> misconfigured by user, or disabled by user. >>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>> upgrader could >>>>> simply bind as root with EXTERNAL auth? >>>> We can do upgrade in both way, using LDAPI or using DM password, >>>> preferred is LDAPI. >>>> Question is, what is the use case for using DM password instead of >>>> LDAPI >>>> during upgrade. >>> There is no use case for using the DM password. >> +1, so we will only use LDAPI and ditch DM password options and >> querying that >> we now have with ipa-ldap-updater? >> >>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>> user can >>>>>> always found som unexpected use case for LDAP update with DM >>>>>> password. >>>>>> >>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>> user to >>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>> have an >>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>> transaction. 
>>>>>>>> >>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>> the IPA >>>>>>>> upgrade is required? >>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>> >>>>>>> I guess it just makes me nervous. >>>>>> So lets summarize this: >>>>>> * DO upgrade if possible during RPM transaction >>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>> transaction. It >>>>> is extremely difficult to debug upgrade stuck during RPM >>>>> transaction, it also >>>>> makes RPM upgrade run longer than needed. It also makes admins >>>>> nervous when >>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>> see the >>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>> consequences) >>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>> They may be shocked if IPA services will be in shutdown state after RPM >>>> transaction. >>> This is true, stopping IPA and requiring manual intervention is not ok. >> What is the plan then? Keep upgrades done during RPM transaction? Note >> that RPM >> transaction was already stuck several times because IPA, or rather DS, >> deadlocked. >> >> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >> original plan was to do the upgrade during ipactl start, this would >> fix this >> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >> and only >> call asynchronous "systemctl restart ipa.service" that would trigger the >> upgrade in separate process and log results in ipa.service? >> >> Martin > The plan is do upgrade during RPM transaction if possible. If not then > ipactl start, will show warning for user to do manual upgrade (Rob > wanted it in this way, not doing auto upgrade by ipactl) Only if there is a tty which means no asking during package update, which I thought was the idea. I just think it is rather unexpected to update a package during a restart. > So the fedup case is: RPM upgrade failed, ipactl start will detect > version mismatch, show error and prompt user to run ipa-server-upgrade I'm beginning to have my own doubts about version, recognizing that there isn't exactly another obvious solution. Running the updates every time ipactl is run isn't great. The updates are not fast by any stretch, 29s on one VM, and we need to log whenever an update is done. My ipaupgrade log is 48M from 20 updates. How many times does one run ipactl restart when diagnosing a problem? My biggest concern with version is who keeps count and where? This is particularly problematic in packaged servers where changes are made without rebasing (Fedora and RHEL). Somewhere the version would need to be bumped with each release? Or only when updates are added? Or only when someone remembers? It just seems fragile and prone to human error unless you have some automatic version incrementor that takes this into consideration. If fallible version or slow updates are the only option then I'd have to go with slow updates if only to avoid a lot of support issues. And I really hate the idea of updates during service restart. 
rob From tbabej at redhat.com Tue Mar 3 15:04:41 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 03 Mar 2015 16:04:41 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F5CCB6.5000109@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> Message-ID: <54F5CD89.8070202@redhat.com> On 03/03/2015 04:01 PM, Martin Kosek wrote: > On 03/03/2015 03:49 PM, Jan Cholasta wrote: >> Hi, >> >> the attached patches provide an attempt to fix >> . >> >> Patch 401 serves as an example and modifies ipa-advise to use its own API >> instance for Advice plugins. >> >> Honza > Thanks. At least patches 399 and 400 look reasonable short for 4.2. > > So with these patches, could we also get rid of temporary_ldap2_connection we > have in ipa-replica-install? Petr3 may have other examples he met in the past... > > Martin > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel 401 seems reasonable enough to me too, the bulk of the code is mostly just moving the code around and renaming variables. Plus we have a very extensive (100%) coverage for the advise tool, so I wouldn't exclude it from the patchset. Tomas From jcholast at redhat.com Tue Mar 3 15:09:58 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 03 Mar 2015 16:09:58 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F5CD89.8070202@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> <54F5CD89.8070202@redhat.com> Message-ID: <54F5CEC6.2090603@redhat.com> Dne 3.3.2015 v 16:04 Tomas Babej napsal(a): > > On 03/03/2015 04:01 PM, Martin Kosek wrote: >> On 03/03/2015 03:49 PM, Jan Cholasta wrote: >>> Hi, >>> >>> the attached patches provide an attempt to fix >>> . >>> >>> Patch 401 serves as an example and modifies ipa-advise to use its own >>> API >>> instance for Advice plugins. >>> >>> Honza >> Thanks. At least patches 399 and 400 look reasonable short for 4.2. >> >> So with these patches, could we also get rid of >> temporary_ldap2_connection we >> have in ipa-replica-install? Petr3 may have other examples he met in >> the past... I think we can. Shall I prepare a patch? >> >> Martin > > 401 seems reasonable enough to me too, the bulk of the code is mostly > just moving the code around and renaming variables. Right. > > Plus we have a very extensive (100%) coverage for the advise tool, so I > wouldn't exclude it from the patchset. 
+1 > > Tomas -- Jan Cholasta From mkosek at redhat.com Tue Mar 3 15:10:40 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 16:10:40 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5CD68.2050801@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> Message-ID: <54F5CEF0.5040506@redhat.com> On 03/03/2015 04:04 PM, Rob Crittenden wrote: > Martin Basti wrote: >> On 03/03/15 15:33, Martin Kosek wrote: >>> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>>> Martin Basti wrote: >>>>>> ... >>>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI >>>>>>>> fails >>>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>>> as DM is >>>>>>>> going to solve. >>>>>>>> >>>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>>> if you >>>>>>>> want to allow running LDAP updates as non-root. >>>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>>> misconfigured by user, or disabled by user. >>>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>>> upgrader could >>>>>> simply bind as root with EXTERNAL auth? >>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>>> preferred is LDAPI. >>>>> Question is, what is the use case for using DM password instead of >>>>> LDAPI >>>>> during upgrade. >>>> There is no use case for using the DM password. >>> +1, so we will only use LDAPI and ditch DM password options and >>> querying that >>> we now have with ipa-ldap-updater? >>> >>>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>>> user can >>>>>>> always found som unexpected use case for LDAP update with DM >>>>>>> password. >>>>>>> >>>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>>> user to >>>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>>> have an >>>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>>> transaction. >>>>>>>>> >>>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>>> the IPA >>>>>>>>> upgrade is required? >>>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>>> >>>>>>>> I guess it just makes me nervous. >>>>>>> So lets summarize this: >>>>>>> * DO upgrade if possible during RPM transaction >>>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>>> transaction. It >>>>>> is extremely difficult to debug upgrade stuck during RPM >>>>>> transaction, it also >>>>>> makes RPM upgrade run longer than needed. 
It also makes admins >>>>>> nervous when >>>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>>> see the >>>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>>> consequences) >>>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>>> They may be shocked if IPA services will be in shutdown state after RPM >>>>> transaction. >>>> This is true, stopping IPA and requiring manual intervention is not ok. >>> What is the plan then? Keep upgrades done during RPM transaction? Note >>> that RPM >>> transaction was already stuck several times because IPA, or rather DS, >>> deadlocked. >>> >>> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >>> original plan was to do the upgrade during ipactl start, this would >>> fix this >>> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >>> and only >>> call asynchronous "systemctl restart ipa.service" that would trigger the >>> upgrade in separate process and log results in ipa.service? >>> >>> Martin >> The plan is do upgrade during RPM transaction if possible. If not then >> ipactl start, will show warning for user to do manual upgrade (Rob >> wanted it in this way, not doing auto upgrade by ipactl) > > Only if there is a tty which means no asking during package update, > which I thought was the idea. I just think it is rather unexpected to > update a package during a restart. > >> So the fedup case is: RPM upgrade failed, ipactl start will detect >> version mismatch, show error and prompt user to run ipa-server-upgrade > > I'm beginning to have my own doubts about version, recognizing that > there isn't exactly another obvious solution. Running the updates every > time ipactl is run isn't great. The updates are not fast by any stretch, > 29s on one VM, and we need to log whenever an update is done. My > ipaupgrade log is 48M from 20 updates. How many times does one run > ipactl restart when diagnosing a problem? > > My biggest concern with version is who keeps count and where? This is > particularly problematic in packaged servers where changes are made > without rebasing (Fedora and RHEL). Somewhere the version would need to > be bumped with each release? Or only when updates are added? Or only > when someone remembers? It just seems fragile and prone to human error > unless you have some automatic version incrementor that takes this into > consideration. > > If fallible version or slow updates are the only option then I'd have to > go with slow updates if only to avoid a lot of support issues. And I > really hate the idea of updates during service restart. > > rob Storing the version is something we just have to do, we need it for missing upgrade detection in FedUp case. We also need for container use case, to make sure that we can check whether we have the right version of bits (container image) and data (mounted volume in a container) - to avoid running woth old bits (and support issues). I see the version generated during RPM/DEB build, maybe even the version we already fill to VENDOR_VERSION? Then no manual version bump is needed when a new downstream patch is added. 
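A sketch of how the recorded value could be tied to the build rather than to a hand-maintained counter: at the end of a successful ipa-server-upgrade run, write the packaged VENDOR_VERSION (which already changes with every downstream patch release) into the stored location. The state path below is an assumption; it could just as well be an LDAP attribute:

    from ipapython import version

    # VENDOR_VERSION is filled in at build time, so downstream patches
    # bump it automatically and nobody has to keep count by hand
    PACKAGED_VERSION = version.VENDOR_VERSION

    DATA_VERSION_FILE = '/var/lib/ipa/sysupgrade/data.version'   # illustrative

    def record_successful_upgrade():
        with open(DATA_VERSION_FILE, 'w') as f:
            f.write(PACKAGED_VERSION + '\n')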
Martin From simo at redhat.com Tue Mar 3 15:11:29 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 03 Mar 2015 10:11:29 -0500 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5CD68.2050801@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> Message-ID: <1425395489.13900.28.camel@willson.usersys.redhat.com> On Tue, 2015-03-03 at 10:04 -0500, Rob Crittenden wrote: > Martin Basti wrote: > > On 03/03/15 15:33, Martin Kosek wrote: > >> On 03/03/2015 03:16 PM, Simo Sorce wrote: > >>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: > >>>> On 02/03/15 18:28, Martin Kosek wrote: > >>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: > >>>>>> On 02/03/15 15:43, Rob Crittenden wrote: > >>>>>>> Martin Basti wrote: > >>>>> ... > >>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI > >>>>>>> fails > >>>>>>> then you've got more serious problems that I'm not sure binding > >>>>>>> as DM is > >>>>>>> going to solve. > >>>>>>> > >>>>>>> The only case where DM would be handy IMHO is either some worst case > >>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or > >>>>>>> if you > >>>>>>> want to allow running LDAP updates as non-root. > >>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is > >>>>>> misconfigured by user, or disabled by user. > >>>>> Wasn't LDAPI needed for the DM password less upgrade so that > >>>>> upgrader could > >>>>> simply bind as root with EXTERNAL auth? > >>>> We can do upgrade in both way, using LDAPI or using DM password, > >>>> preferred is LDAPI. > >>>> Question is, what is the use case for using DM password instead of > >>>> LDAPI > >>>> during upgrade. > >>> There is no use case for using the DM password. > >> +1, so we will only use LDAPI and ditch DM password options and > >> querying that > >> we now have with ipa-ldap-updater? > >> > >>>>>> It is not big effort to keep both DM binding and LDAPI in code. A > >>>>>> user can > >>>>>> always found som unexpected use case for LDAP update with DM > >>>>>> password. > >>>>>> > >>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the > >>>>>>>>> user to > >>>>>>>>> upgrade? In a non-container world it might be surprising to > >>>>>>>>> have an > >>>>>>>>> upgrade happen esp since upgrades take a while. > >>>>>>>> In non-container enviroment, we can still use upgrade during RPM > >>>>>>>> transaction. > >>>>>>>> > >>>>>>>> So you suggest not to do update automaticaly, just write Error > >>>>>>>> the IPA > >>>>>>>> upgrade is required? > >>>>>>> People do all sorts of strange things. Installing the packages with > >>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not > >>>>>>> saying it's a great idea, is 2 lines of code. > >>>>>>> > >>>>>>> I guess it just makes me nervous. > >>>>>> So lets summarize this: > >>>>>> * DO upgrade if possible during RPM transaction > >>>>> Umm, I thought we want to get rid of running upgrade during RPM > >>>>> transaction. 
It > >>>>> is extremely difficult to debug upgrade stuck during RPM > >>>>> transaction, it also > >>>>> makes RPM upgrade run longer than needed. It also makes admins > >>>>> nervous when > >>>>> their rpm upgrade is suddenly waiting right before the end. I even > >>>>> see the > >>>>> fingers slowly reaching to CTRL+C combo... (You can see the > >>>>> consequences) > >>>> People are used to have IPA upgraded and ready after RPM upgrade. > >>>> They may be shocked if IPA services will be in shutdown state after RPM > >>>> transaction. > >>> This is true, stopping IPA and requiring manual intervention is not ok. > >> What is the plan then? Keep upgrades done during RPM transaction? Note > >> that RPM > >> transaction was already stuck several times because IPA, or rather DS, > >> deadlocked. > >> > >> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The > >> original plan was to do the upgrade during ipactl start, this would > >> fix this > >> ticket. Alternatively, should we remove the upgrade from RPM scriptlet > >> and only > >> call asynchronous "systemctl restart ipa.service" that would trigger the > >> upgrade in separate process and log results in ipa.service? > >> > >> Martin > > The plan is do upgrade during RPM transaction if possible. If not then > > ipactl start, will show warning for user to do manual upgrade (Rob > > wanted it in this way, not doing auto upgrade by ipactl) > > Only if there is a tty which means no asking during package update, > which I thought was the idea. I just think it is rather unexpected to > update a package during a restart. > > > So the fedup case is: RPM upgrade failed, ipactl start will detect > > version mismatch, show error and prompt user to run ipa-server-upgrade > > I'm beginning to have my own doubts about version, recognizing that > there isn't exactly another obvious solution. Running the updates every > time ipactl is run isn't great. The updates are not fast by any stretch, > 29s on one VM, and we need to log whenever an update is done. My > ipaupgrade log is 48M from 20 updates. How many times does one run > ipactl restart when diagnosing a problem? > > My biggest concern with version is who keeps count and where? This is > particularly problematic in packaged servers where changes are made > without rebasing (Fedora and RHEL). Somewhere the version would need to > be bumped with each release? Or only when updates are added? Or only > when someone remembers? It just seems fragile and prone to human error > unless you have some automatic version incrementor that takes this into > consideration. > > If fallible version or slow updates are the only option then I'd have to > go with slow updates if only to avoid a lot of support issues. And I > really hate the idea of updates during service restart. Yet updates at start are the only option with container like setups. And if you think about it, why is it bad ? Between updates at start and updates during a rpm transaction ... I am not sure which is worst. At least if an update fails somehow you do not risk causing issues to the whole package upgrade process. And because services are restarted after package upgrades, the upgrades defacto happens immediately after (or during, but not within) package upgrades. 
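One way to make the "check at start, upgrade only what is missing" approach concrete is to track individual features in a state file instead of one monolithic number. A rough sketch; the path and the step objects with .feature / .feature_version attributes are hypothetical, not the existing sysupgrade code:

    import json
    import os

    STATE_FILE = '/var/lib/ipa/sysupgrade/features.json'   # illustrative path

    def _load_state():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return json.load(f)
        return {}

    def pending_steps(registered_steps):
        """Return only the upgrade steps whose feature marker is missing
        or older than what the installed code provides."""
        state = _load_state()
        return [step for step in registered_steps
                if state.get(step.feature, 0) < step.feature_version]

    def mark_done(step):
        state = _load_state()
        state[step.feature] = step.feature_version
        with open(STATE_FILE, 'w') as f:
            json.dump(state, f)

    # ipactl start would only warn (or upgrade) when pending_steps() returns
    # a non-empty list, instead of comparing a single version number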
I think Jan's idea of marking features in the state file and being able to run some upgrades only if some features/feature-version are missing makes it easier to just "check" at ipactl start and really upgrade at restart only if one of the checks reveals a missing upgrade (and do only that part of the upgrade, although that will probably introduce a dependency graph problem :). Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Tue Mar 3 15:11:48 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 16:11:48 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F5CEC6.2090603@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> <54F5CD89.8070202@redhat.com> <54F5CEC6.2090603@redhat.com> Message-ID: <54F5CF34.7040609@redhat.com> On 03/03/2015 04:09 PM, Jan Cholasta wrote: > Dne 3.3.2015 v 16:04 Tomas Babej napsal(a): >> >> On 03/03/2015 04:01 PM, Martin Kosek wrote: >>> On 03/03/2015 03:49 PM, Jan Cholasta wrote: >>>> Hi, >>>> >>>> the attached patches provide an attempt to fix >>>> . >>>> >>>> Patch 401 serves as an example and modifies ipa-advise to use its own >>>> API >>>> instance for Advice plugins. >>>> >>>> Honza >>> Thanks. At least patches 399 and 400 look reasonable short for 4.2. >>> >>> So with these patches, could we also get rid of >>> temporary_ldap2_connection we >>> have in ipa-replica-install? Petr3 may have other examples he met in >>> the past... > > I think we can. Shall I prepare a patch? If it is reasonable simple, I would go for it. It would be another selling point for your patches. > >>> >>> Martin >> >> 401 seems reasonable enough to me too, the bulk of the code is mostly >> just moving the code around and renaming variables. > > Right. > >> >> Plus we have a very extensive (100%) coverage for the advise tool, so I >> wouldn't exclude it from the patchset. > > +1 Martin From mbasti at redhat.com Tue Mar 3 15:28:50 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 16:28:50 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5CEF0.5040506@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> <54F5CEF0.5040506@redhat.com> Message-ID: <54F5D332.8090007@redhat.com> On 03/03/15 16:10, Martin Kosek wrote: > On 03/03/2015 04:04 PM, Rob Crittenden wrote: >> Martin Basti wrote: >>> On 03/03/15 15:33, Martin Kosek wrote: >>>> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>>>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>>>> Martin Basti wrote: >>>>>>> ... >>>>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI >>>>>>>>> fails >>>>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>>>> as DM is >>>>>>>>> going to solve. 
>>>>>>>>> >>>>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>>>> if you >>>>>>>>> want to allow running LDAP updates as non-root. >>>>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>>>> misconfigured by user, or disabled by user. >>>>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>>>> upgrader could >>>>>>> simply bind as root with EXTERNAL auth? >>>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>>>> preferred is LDAPI. >>>>>> Question is, what is the use case for using DM password instead of >>>>>> LDAPI >>>>>> during upgrade. >>>>> There is no use case for using the DM password. >>>> +1, so we will only use LDAPI and ditch DM password options and >>>> querying that >>>> we now have with ipa-ldap-updater? >>>> >>>>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>>>> user can >>>>>>>> always found som unexpected use case for LDAP update with DM >>>>>>>> password. >>>>>>>> >>>>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>>>> user to >>>>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>>>> have an >>>>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>>>> transaction. >>>>>>>>>> >>>>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>>>> the IPA >>>>>>>>>> upgrade is required? >>>>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>>>> >>>>>>>>> I guess it just makes me nervous. >>>>>>>> So lets summarize this: >>>>>>>> * DO upgrade if possible during RPM transaction >>>>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>>>> transaction. It >>>>>>> is extremely difficult to debug upgrade stuck during RPM >>>>>>> transaction, it also >>>>>>> makes RPM upgrade run longer than needed. It also makes admins >>>>>>> nervous when >>>>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>>>> see the >>>>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>>>> consequences) >>>>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>>>> They may be shocked if IPA services will be in shutdown state after RPM >>>>>> transaction. >>>>> This is true, stopping IPA and requiring manual intervention is not ok. >>>> What is the plan then? Keep upgrades done during RPM transaction? Note >>>> that RPM >>>> transaction was already stuck several times because IPA, or rather DS, >>>> deadlocked. >>>> >>>> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >>>> original plan was to do the upgrade during ipactl start, this would >>>> fix this >>>> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >>>> and only >>>> call asynchronous "systemctl restart ipa.service" that would trigger the >>>> upgrade in separate process and log results in ipa.service? >>>> >>>> Martin >>> The plan is do upgrade during RPM transaction if possible. 
If not then >>> ipactl start, will show warning for user to do manual upgrade (Rob >>> wanted it in this way, not doing auto upgrade by ipactl) >> Only if there is a tty which means no asking during package update, >> which I thought was the idea. I just think it is rather unexpected to >> update a package during a restart. >> >>> So the fedup case is: RPM upgrade failed, ipactl start will detect >>> version mismatch, show error and prompt user to run ipa-server-upgrade >> I'm beginning to have my own doubts about version, recognizing that >> there isn't exactly another obvious solution. Running the updates every >> time ipactl is run isn't great. The updates are not fast by any stretch, >> 29s on one VM, and we need to log whenever an update is done. My >> ipaupgrade log is 48M from 20 updates. How many times does one run >> ipactl restart when diagnosing a problem? >> >> My biggest concern with version is who keeps count and where? This is >> particularly problematic in packaged servers where changes are made >> without rebasing (Fedora and RHEL). Somewhere the version would need to >> be bumped with each release? Or only when updates are added? Or only >> when someone remembers? It just seems fragile and prone to human error >> unless you have some automatic version incrementor that takes this into >> consideration. >> >> If fallible version or slow updates are the only option then I'd have to >> go with slow updates if only to avoid a lot of support issues. And I >> really hate the idea of updates during service restart. >> >> rob > Storing the version is something we just have to do, we need it for missing > upgrade detection in FedUp case. We also need for container use case, to make > sure that we can check whether we have the right version of bits (container > image) and data (mounted volume in a container) - to avoid running woth old > bits (and support issues). > > I see the version generated during RPM/DEB build, maybe even the version we > already fill to VENDOR_VERSION? Then no manual version bump is needed when a > new downstream patch is added. Yes I plan to use VENDOR_VERSION. Martin^2 > > Martin -- Martin Basti From mbasti at redhat.com Tue Mar 3 15:43:46 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 16:43:46 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <1425395489.13900.28.camel@willson.usersys.redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> <1425395489.13900.28.camel@willson.usersys.redhat.com> Message-ID: <54F5D6B2.2070408@redhat.com> On 03/03/15 16:11, Simo Sorce wrote: > On Tue, 2015-03-03 at 10:04 -0500, Rob Crittenden wrote: >> Martin Basti wrote: >>> On 03/03/15 15:33, Martin Kosek wrote: >>>> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>>>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>>>> Martin Basti wrote: >>>>>>> ... >>>>>>>>> But you haven't explained any case why LDAPI would fail. 
If LDAPI >>>>>>>>> fails >>>>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>>>> as DM is >>>>>>>>> going to solve. >>>>>>>>> >>>>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>>>> if you >>>>>>>>> want to allow running LDAP updates as non-root. >>>>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>>>> misconfigured by user, or disabled by user. >>>>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>>>> upgrader could >>>>>>> simply bind as root with EXTERNAL auth? >>>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>>>> preferred is LDAPI. >>>>>> Question is, what is the use case for using DM password instead of >>>>>> LDAPI >>>>>> during upgrade. >>>>> There is no use case for using the DM password. >>>> +1, so we will only use LDAPI and ditch DM password options and >>>> querying that >>>> we now have with ipa-ldap-updater? >>>> >>>>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>>>> user can >>>>>>>> always found som unexpected use case for LDAP update with DM >>>>>>>> password. >>>>>>>> >>>>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>>>> user to >>>>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>>>> have an >>>>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>>>> transaction. >>>>>>>>>> >>>>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>>>> the IPA >>>>>>>>>> upgrade is required? >>>>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>>>> >>>>>>>>> I guess it just makes me nervous. >>>>>>>> So lets summarize this: >>>>>>>> * DO upgrade if possible during RPM transaction >>>>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>>>> transaction. It >>>>>>> is extremely difficult to debug upgrade stuck during RPM >>>>>>> transaction, it also >>>>>>> makes RPM upgrade run longer than needed. It also makes admins >>>>>>> nervous when >>>>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>>>> see the >>>>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>>>> consequences) >>>>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>>>> They may be shocked if IPA services will be in shutdown state after RPM >>>>>> transaction. >>>>> This is true, stopping IPA and requiring manual intervention is not ok. >>>> What is the plan then? Keep upgrades done during RPM transaction? Note >>>> that RPM >>>> transaction was already stuck several times because IPA, or rather DS, >>>> deadlocked. >>>> >>>> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >>>> original plan was to do the upgrade during ipactl start, this would >>>> fix this >>>> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >>>> and only >>>> call asynchronous "systemctl restart ipa.service" that would trigger the >>>> upgrade in separate process and log results in ipa.service? >>>> >>>> Martin >>> The plan is do upgrade during RPM transaction if possible. 
If not then >>> ipactl start, will show warning for user to do manual upgrade (Rob >>> wanted it in this way, not doing auto upgrade by ipactl) >> Only if there is a tty which means no asking during package update, >> which I thought was the idea. I just think it is rather unexpected to >> update a package during a restart. >> >>> So the fedup case is: RPM upgrade failed, ipactl start will detect >>> version mismatch, show error and prompt user to run ipa-server-upgrade >> I'm beginning to have my own doubts about version, recognizing that >> there isn't exactly another obvious solution. Running the updates every >> time ipactl is run isn't great. The updates are not fast by any stretch, >> 29s on one VM, and we need to log whenever an update is done. My >> ipaupgrade log is 48M from 20 updates. How many times does one run >> ipactl restart when diagnosing a problem? >> >> My biggest concern with version is who keeps count and where? This is >> particularly problematic in packaged servers where changes are made >> without rebasing (Fedora and RHEL). Somewhere the version would need to >> be bumped with each release? Or only when updates are added? Or only >> when someone remembers? It just seems fragile and prone to human error >> unless you have some automatic version incrementor that takes this into >> consideration. >> >> If fallible version or slow updates are the only option then I'd have to >> go with slow updates if only to avoid a lot of support issues. And I >> really hate the idea of updates during service restart. I do not like idea of auto upgrading during (re)start as well, it is not expected by user. At least, upgrade takes 5-10 minutes, user may not to know the upgrade is happening. We have 3 use cases: * We can run update during RPM transaction --> just run upgrade as IPA do now * We can not run update during transaction(fedup, --no-script) --> user should run ipa-server-upgrade * Containers --> user should run ipa-server-upgrade > Yet updates at start are the only option with container like setups. > > And if you think about it, why is it bad ? > > Between updates at start and updates during a rpm transaction ... I am > not sure which is worst. > > At least if an update fails somehow you do not risk causing issues to > the whole package upgrade process. > > And because services are restarted after package upgrades, the upgrades > defacto happens immediately after (or during, but not within) package > upgrades. > > I think Jan's idea of marking features in the state file and being able > to run some upgrades only if some features/feature-version are missing > makes it easier to just "check" at ipactl start and really upgrade at > restart only if one of the checks reveals a missing upgrade (and do only > that part of the upgrade, although that will probably introduce a > dependency graph problem :). > > Simo. 
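(A rough sketch of that feature-marker idea, purely illustrative: the INI layout, section name and step names are invented, the sysupgrade.state location is only assumed, Python 2 matches the code base of the time, and the dependency/ordering problem mentioned above is deliberately left out.)

# Sketch of per-feature upgrade markers, for illustration only.
from ConfigParser import SafeConfigParser

STATE_FILE = '/var/lib/ipa/sysupgrade/sysupgrade.state'  # assumed location
SECTION = 'features'


def upgrade_dns_schema():        # placeholder upgrade steps
    pass


def upgrade_ca_profiles():
    pass


# feature name -> (version this code provides, step that brings data up to it)
UPGRADE_STEPS = {
    'dns_schema': (2, upgrade_dns_schema),
    'ca_profiles': (1, upgrade_ca_profiles),
}


def run_missing_upgrades():
    """Cheap check at every start; real work only for missing markers."""
    state = SafeConfigParser()
    state.read(STATE_FILE)
    if not state.has_section(SECTION):
        state.add_section(SECTION)

    ran_something = False
    for feature, (wanted, step) in UPGRADE_STEPS.items():
        have = (state.getint(SECTION, feature)
                if state.has_option(SECTION, feature) else 0)
        if have < wanted:
            step()                                   # only this piece runs
            state.set(SECTION, feature, str(wanted))
            ran_something = True

    if ran_something:
        with open(STATE_FILE, 'w') as f:             # persist the new markers
            state.write(f)
    return ran_something

With all markers present, ipactl start would pay only the cost of reading one small file; an actual upgrade run would happen only when some marker is missing or outdated.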
> -- Martin Basti From mkosek at redhat.com Tue Mar 3 15:47:51 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 03 Mar 2015 16:47:51 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5D6B2.2070408@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> <1425395489.13900.28.camel@willson.usersys.redhat.com> <54F5D6B2.2070408@redhat.com> Message-ID: <54F5D7A7.8000005@redhat.com> On 03/03/2015 04:43 PM, Martin Basti wrote: > On 03/03/15 16:11, Simo Sorce wrote: >> On Tue, 2015-03-03 at 10:04 -0500, Rob Crittenden wrote: >>> Martin Basti wrote: >>>> On 03/03/15 15:33, Martin Kosek wrote: >>>>> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>>>>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>>>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>>>>> Martin Basti wrote: >>>>>>>> ... >>>>>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI >>>>>>>>>> fails >>>>>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>>>>> as DM is >>>>>>>>>> going to solve. >>>>>>>>>> >>>>>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>>>>> if you >>>>>>>>>> want to allow running LDAP updates as non-root. >>>>>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>>>>> misconfigured by user, or disabled by user. >>>>>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>>>>> upgrader could >>>>>>>> simply bind as root with EXTERNAL auth? >>>>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>>>>> preferred is LDAPI. >>>>>>> Question is, what is the use case for using DM password instead of >>>>>>> LDAPI >>>>>>> during upgrade. >>>>>> There is no use case for using the DM password. >>>>> +1, so we will only use LDAPI and ditch DM password options and >>>>> querying that >>>>> we now have with ipa-ldap-updater? >>>>> >>>>>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>>>>> user can >>>>>>>>> always found som unexpected use case for LDAP update with DM >>>>>>>>> password. >>>>>>>>> >>>>>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>>>>> user to >>>>>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>>>>> have an >>>>>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>>>>> transaction. >>>>>>>>>>> >>>>>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>>>>> the IPA >>>>>>>>>>> upgrade is required? >>>>>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>>>>> >>>>>>>>>> I guess it just makes me nervous. 
>>>>>>>>> So lets summarize this: >>>>>>>>> * DO upgrade if possible during RPM transaction >>>>>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>>>>> transaction. It >>>>>>>> is extremely difficult to debug upgrade stuck during RPM >>>>>>>> transaction, it also >>>>>>>> makes RPM upgrade run longer than needed. It also makes admins >>>>>>>> nervous when >>>>>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>>>>> see the >>>>>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>>>>> consequences) >>>>>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>>>>> They may be shocked if IPA services will be in shutdown state after RPM >>>>>>> transaction. >>>>>> This is true, stopping IPA and requiring manual intervention is not ok. >>>>> What is the plan then? Keep upgrades done during RPM transaction? Note >>>>> that RPM >>>>> transaction was already stuck several times because IPA, or rather DS, >>>>> deadlocked. >>>>> >>>>> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >>>>> original plan was to do the upgrade during ipactl start, this would >>>>> fix this >>>>> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >>>>> and only >>>>> call asynchronous "systemctl restart ipa.service" that would trigger the >>>>> upgrade in separate process and log results in ipa.service? >>>>> >>>>> Martin >>>> The plan is do upgrade during RPM transaction if possible. If not then >>>> ipactl start, will show warning for user to do manual upgrade (Rob >>>> wanted it in this way, not doing auto upgrade by ipactl) >>> Only if there is a tty which means no asking during package update, >>> which I thought was the idea. I just think it is rather unexpected to >>> update a package during a restart. >>> >>>> So the fedup case is: RPM upgrade failed, ipactl start will detect >>>> version mismatch, show error and prompt user to run ipa-server-upgrade >>> I'm beginning to have my own doubts about version, recognizing that >>> there isn't exactly another obvious solution. Running the updates every >>> time ipactl is run isn't great. The updates are not fast by any stretch, >>> 29s on one VM, and we need to log whenever an update is done. My >>> ipaupgrade log is 48M from 20 updates. How many times does one run >>> ipactl restart when diagnosing a problem? >>> >>> My biggest concern with version is who keeps count and where? This is >>> particularly problematic in packaged servers where changes are made >>> without rebasing (Fedora and RHEL). Somewhere the version would need to >>> be bumped with each release? Or only when updates are added? Or only >>> when someone remembers? It just seems fragile and prone to human error >>> unless you have some automatic version incrementor that takes this into >>> consideration. >>> >>> If fallible version or slow updates are the only option then I'd have to >>> go with slow updates if only to avoid a lot of support issues. And I >>> really hate the idea of updates during service restart. > I do not like idea of auto upgrading during (re)start as well, it is not > expected by user. > At least, upgrade takes 5-10 minutes, user may not to know the upgrade is > happening. > > We have 3 use cases: > * We can run update during RPM transaction --> just run upgrade as IPA do now Ok. We can add more better logging or text output, as Petr2 suggested. 
> * We can not run update during transaction(fedup, --no-script) --> user should > run ipa-server-upgrade > * Containers --> user should run ipa-server-upgrade Right. And error out in ipactl in these cases, so that user does not run container/VM with data/bits version mismatch. If we are all set, please update the design. Martin From mbasti at redhat.com Tue Mar 3 15:53:27 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 03 Mar 2015 16:53:27 +0100 Subject: [Freeipa-devel] IPA Server upgrade 4.2 design In-Reply-To: <54F5D7A7.8000005@redhat.com> References: <54ECBE7E.7040108@redhat.com> <54EDF53D.8020307@redhat.com> <54EDFD04.5020603@redhat.com> <54EEEB2A.4030108@redhat.com> <54EF21D6.8000301@redhat.com> <54F0C96F.8060907@redhat.com> <54F45068.9030306@redhat.com> <54F47727.9030203@redhat.com> <54F49A03.8030704@redhat.com> <54F49DD8.4090801@redhat.com> <54F4A3D2.50207@redhat.com> <1425392166.13900.18.camel@willson.usersys.redhat.com> <54F5C64A.7010601@redhat.com> <54F5C7DE.9020701@redhat.com> <54F5CD68.2050801@redhat.com> <1425395489.13900.28.camel@willson.usersys.redhat.com> <54F5D6B2.2070408@redhat.com> <54F5D7A7.8000005@redhat.com> Message-ID: <54F5D8F7.2090202@redhat.com> On 03/03/15 16:47, Martin Kosek wrote: > On 03/03/2015 04:43 PM, Martin Basti wrote: >> On 03/03/15 16:11, Simo Sorce wrote: >>> On Tue, 2015-03-03 at 10:04 -0500, Rob Crittenden wrote: >>>> Martin Basti wrote: >>>>> On 03/03/15 15:33, Martin Kosek wrote: >>>>>> On 03/03/2015 03:16 PM, Simo Sorce wrote: >>>>>>> On Mon, 2015-03-02 at 18:54 +0100, Martin Basti wrote: >>>>>>>> On 02/03/15 18:28, Martin Kosek wrote: >>>>>>>>> On 03/02/2015 06:12 PM, Martin Basti wrote: >>>>>>>>>> On 02/03/15 15:43, Rob Crittenden wrote: >>>>>>>>>>> Martin Basti wrote: >>>>>>>>> ... >>>>>>>>>>> But you haven't explained any case why LDAPI would fail. If LDAPI >>>>>>>>>>> fails >>>>>>>>>>> then you've got more serious problems that I'm not sure binding >>>>>>>>>>> as DM is >>>>>>>>>>> going to solve. >>>>>>>>>>> >>>>>>>>>>> The only case where DM would be handy IMHO is either some worst case >>>>>>>>>>> scenario upgrade where 389-ds is up but not binding to LDAPI or >>>>>>>>>>> if you >>>>>>>>>>> want to allow running LDAP updates as non-root. >>>>>>>>>> I don't know cases when LDAPI would failed, except the case LDAPI is >>>>>>>>>> misconfigured by user, or disabled by user. >>>>>>>>> Wasn't LDAPI needed for the DM password less upgrade so that >>>>>>>>> upgrader could >>>>>>>>> simply bind as root with EXTERNAL auth? >>>>>>>> We can do upgrade in both way, using LDAPI or using DM password, >>>>>>>> preferred is LDAPI. >>>>>>>> Question is, what is the use case for using DM password instead of >>>>>>>> LDAPI >>>>>>>> during upgrade. >>>>>>> There is no use case for using the DM password. >>>>>> +1, so we will only use LDAPI and ditch DM password options and >>>>>> querying that >>>>>> we now have with ipa-ldap-updater? >>>>>> >>>>>>>>>> It is not big effort to keep both DM binding and LDAPI in code. A >>>>>>>>>> user can >>>>>>>>>> always found som unexpected use case for LDAP update with DM >>>>>>>>>> password. >>>>>>>>>> >>>>>>>>>>>>> On ipactl, would it be overkill if there is a tty to prompt the >>>>>>>>>>>>> user to >>>>>>>>>>>>> upgrade? In a non-container world it might be surprising to >>>>>>>>>>>>> have an >>>>>>>>>>>>> upgrade happen esp since upgrades take a while. >>>>>>>>>>>> In non-container enviroment, we can still use upgrade during RPM >>>>>>>>>>>> transaction. 
>>>>>>>>>>>> >>>>>>>>>>>> So you suggest not to do update automaticaly, just write Error >>>>>>>>>>>> the IPA >>>>>>>>>>>> upgrade is required? >>>>>>>>>>> People do all sorts of strange things. Installing the packages with >>>>>>>>>>> --no-script isn't in the range of impossible. A prompt, and I'm not >>>>>>>>>>> saying it's a great idea, is 2 lines of code. >>>>>>>>>>> >>>>>>>>>>> I guess it just makes me nervous. >>>>>>>>>> So lets summarize this: >>>>>>>>>> * DO upgrade if possible during RPM transaction >>>>>>>>> Umm, I thought we want to get rid of running upgrade during RPM >>>>>>>>> transaction. It >>>>>>>>> is extremely difficult to debug upgrade stuck during RPM >>>>>>>>> transaction, it also >>>>>>>>> makes RPM upgrade run longer than needed. It also makes admins >>>>>>>>> nervous when >>>>>>>>> their rpm upgrade is suddenly waiting right before the end. I even >>>>>>>>> see the >>>>>>>>> fingers slowly reaching to CTRL+C combo... (You can see the >>>>>>>>> consequences) >>>>>>>> People are used to have IPA upgraded and ready after RPM upgrade. >>>>>>>> They may be shocked if IPA services will be in shutdown state after RPM >>>>>>>> transaction. >>>>>>> This is true, stopping IPA and requiring manual intervention is not ok. >>>>>> What is the plan then? Keep upgrades done during RPM transaction? Note >>>>>> that RPM >>>>>> transaction was already stuck several times because IPA, or rather DS, >>>>>> deadlocked. >>>>>> >>>>>> We also need to address https://fedorahosted.org/freeipa/ticket/3849. The >>>>>> original plan was to do the upgrade during ipactl start, this would >>>>>> fix this >>>>>> ticket. Alternatively, should we remove the upgrade from RPM scriptlet >>>>>> and only >>>>>> call asynchronous "systemctl restart ipa.service" that would trigger the >>>>>> upgrade in separate process and log results in ipa.service? >>>>>> >>>>>> Martin >>>>> The plan is do upgrade during RPM transaction if possible. If not then >>>>> ipactl start, will show warning for user to do manual upgrade (Rob >>>>> wanted it in this way, not doing auto upgrade by ipactl) >>>> Only if there is a tty which means no asking during package update, >>>> which I thought was the idea. I just think it is rather unexpected to >>>> update a package during a restart. >>>> >>>>> So the fedup case is: RPM upgrade failed, ipactl start will detect >>>>> version mismatch, show error and prompt user to run ipa-server-upgrade >>>> I'm beginning to have my own doubts about version, recognizing that >>>> there isn't exactly another obvious solution. Running the updates every >>>> time ipactl is run isn't great. The updates are not fast by any stretch, >>>> 29s on one VM, and we need to log whenever an update is done. My >>>> ipaupgrade log is 48M from 20 updates. How many times does one run >>>> ipactl restart when diagnosing a problem? >>>> >>>> My biggest concern with version is who keeps count and where? This is >>>> particularly problematic in packaged servers where changes are made >>>> without rebasing (Fedora and RHEL). Somewhere the version would need to >>>> be bumped with each release? Or only when updates are added? Or only >>>> when someone remembers? It just seems fragile and prone to human error >>>> unless you have some automatic version incrementor that takes this into >>>> consideration. >>>> >>>> If fallible version or slow updates are the only option then I'd have to >>>> go with slow updates if only to avoid a lot of support issues. And I >>>> really hate the idea of updates during service restart. 
>> I do not like idea of auto upgrading during (re)start as well, it is not >> expected by user. >> At least, upgrade takes 5-10 minutes, user may not to know the upgrade is >> happening. >> >> We have 3 use cases: >> * We can run update during RPM transaction --> just run upgrade as IPA do now > Ok. We can add more better logging or text output, as Petr2 suggested. > >> * We can not run update during transaction(fedup, --no-script) --> user should >> run ipa-server-upgrade >> * Containers --> user should run ipa-server-upgrade > Right. And error out in ipactl in these cases, so that user does not run > container/VM with data/bits version mismatch. > > If we are all set, please update the design. > > Martin I can't update design yet, this depends, if we will use just RPM vendor versions, or we will use Honza's proposal with update configuration every restart using sysupgrade.state (except LDAP data, there we still need version), we need to choose first. -- Martin Basti From jcholast at redhat.com Wed Mar 4 10:13:27 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 04 Mar 2015 11:13:27 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F5CF34.7040609@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> <54F5CD89.8070202@redhat.com> <54F5CEC6.2090603@redhat.com> <54F5CF34.7040609@redhat.com> Message-ID: <54F6DAC7.4010206@redhat.com> Dne 3.3.2015 v 16:11 Martin Kosek napsal(a): > On 03/03/2015 04:09 PM, Jan Cholasta wrote: >> Dne 3.3.2015 v 16:04 Tomas Babej napsal(a): >>> >>> On 03/03/2015 04:01 PM, Martin Kosek wrote: >>>> On 03/03/2015 03:49 PM, Jan Cholasta wrote: >>>>> Hi, >>>>> >>>>> the attached patches provide an attempt to fix >>>>> . >>>>> >>>>> Patch 401 serves as an example and modifies ipa-advise to use its own >>>>> API >>>>> instance for Advice plugins. >>>>> >>>>> Honza >>>> Thanks. At least patches 399 and 400 look reasonable short for 4.2. >>>> >>>> So with these patches, could we also get rid of >>>> temporary_ldap2_connection we >>>> have in ipa-replica-install? Petr3 may have other examples he met in >>>> the past... >> >> I think we can. Shall I prepare a patch? > > If it is reasonable simple, I would go for it. It would be another selling > point for your patches. Done. -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-402-ldap2-Use-self-API-instance-instead-of-ipalib.api.patch Type: text/x-patch Size: 3969 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-jcholast-403-replica-install-Use-different-API-instance-for-the-r.patch Type: text/x-patch Size: 20018 bytes Desc: not available URL: From mkosek at redhat.com Wed Mar 4 10:55:33 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 04 Mar 2015 11:55:33 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F6DAC7.4010206@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> <54F5CD89.8070202@redhat.com> <54F5CEC6.2090603@redhat.com> <54F5CF34.7040609@redhat.com> <54F6DAC7.4010206@redhat.com> Message-ID: <54F6E4A5.6020008@redhat.com> On 03/04/2015 11:13 AM, Jan Cholasta wrote: > Dne 3.3.2015 v 16:11 Martin Kosek napsal(a): >> On 03/03/2015 04:09 PM, Jan Cholasta wrote: >>> Dne 3.3.2015 v 16:04 Tomas Babej napsal(a): >>>> >>>> On 03/03/2015 04:01 PM, Martin Kosek wrote: >>>>> On 03/03/2015 03:49 PM, Jan Cholasta wrote: >>>>>> Hi, >>>>>> >>>>>> the attached patches provide an attempt to fix >>>>>> . >>>>>> >>>>>> Patch 401 serves as an example and modifies ipa-advise to use its own >>>>>> API >>>>>> instance for Advice plugins. >>>>>> >>>>>> Honza >>>>> Thanks. At least patches 399 and 400 look reasonable short for 4.2. >>>>> >>>>> So with these patches, could we also get rid of >>>>> temporary_ldap2_connection we >>>>> have in ipa-replica-install? Petr3 may have other examples he met in >>>>> the past... >>> >>> I think we can. Shall I prepare a patch? >> >> If it is reasonable simple, I would go for it. It would be another selling >> point for your patches. > > Done. > Thanks, this looks great! It proves the point with the separate API object. LGTM, I will let Tomas to continue with standard review then. Martin From dkupka at redhat.com Wed Mar 4 13:11:00 2015 From: dkupka at redhat.com (David Kupka) Date: Wed, 04 Mar 2015 14:11:00 +0100 Subject: [Freeipa-devel] [PATCH] 0040 Add realm name to backup header file. Message-ID: <54F70464.5030803@redhat.com> https://fedorahosted.org/freeipa/ticket/4896 -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0040-Add-realm-name-to-backup-header-file.patch Type: text/x-patch Size: 2885 bytes Desc: not available URL: From abokovoy at redhat.com Wed Mar 4 13:33:54 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Wed, 4 Mar 2015 15:33:54 +0200 Subject: [Freeipa-devel] [PATCHES 0197-0198] Fix uniqueness plugins upgrade In-Reply-To: <54EDCF53.7050900@redhat.com> References: <54EDCF53.7050900@redhat.com> Message-ID: <20150304133354.GZ25455@redhat.com> On Wed, 25 Feb 2015, Martin Basti wrote: > Modifications: > * All plugins are migrated into new configuration style. > * I left attribute uniqueness plugin disabled, cn=uid > uniqueness,cn=plugins,cn=config is checking the same attribute. > * POST_UPDATE plugin for uid removed, I moved it to update file. Is it okay > Alexander? I haven't found reason why we need to do it in update plugin. > > Thierry, I touched configuration of plugins, which user lifecycle requires, > can you take look if I it does not break anything? > > Patches attached. ACK. 
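(For readers who have not seen the two configuration styles: the sketch below shows roughly what moving cn=uid uniqueness,cn=plugins,cn=config to the named new-style attributes of the 389-ds attribute uniqueness plugin can look like. It is an illustration only; the LDAPI socket path, the subtree DN and the assumption that the old positional nsslapd-pluginarg* values are still present are placeholders, not the content of the reviewed patches.)

# Illustration of the "new configuration style" for the 389-ds attribute
# uniqueness plugin. URI, subtree DN and old attribute presence are assumed.
import ldap
import ldap.sasl

conn = ldap.initialize('ldapi://%2fvar%2frun%2fslapd-EXAMPLE-TEST.socket')
conn.sasl_interactive_bind_s('', ldap.sasl.external())   # root over LDAPI

dn = 'cn=uid uniqueness,cn=plugins,cn=config'
mods = [
    # drop the old positional arguments ...
    (ldap.MOD_DELETE, 'nsslapd-pluginarg0', None),
    (ldap.MOD_DELETE, 'nsslapd-pluginarg1', None),
    # ... in favour of the named, new-style attributes
    (ldap.MOD_REPLACE, 'uniqueness-attribute-name', 'uid'),
    (ldap.MOD_REPLACE, 'uniqueness-subtrees', 'cn=accounts,dc=example,dc=test'),
]
conn.modify_s(dn, mods)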
-- / Alexander Bokovoy From abokovoy at redhat.com Wed Mar 4 14:17:55 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Wed, 4 Mar 2015 16:17:55 +0200 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150302174507.GK3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> Message-ID: <20150304141755.GB25455@redhat.com> On Mon, 02 Mar 2015, Sumit Bose wrote: >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >@@ -49,6 +49,220 @@ > > #define MAX(a,b) (((a)>(b))?(a):(b)) > #define SSSD_DOMAIN_SEPARATOR '@' >+#define MAX_BUF (1024*1024*1024) >+ >+ >+ >+static int get_buffer(size_t *_buf_len, char **_buf) >+{ >+ long pw_max; >+ long gr_max; >+ size_t buf_len; >+ char *buf; >+ >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); >+ >+ if (pw_max == -1 && gr_max == -1) { >+ buf_len = 16384; >+ } else { >+ buf_len = MAX(pw_max, gr_max); >+ } Here you'd get buf_len equal to 1024 by default on Linux which is too low for our use case. I think it would be beneficial to add one more MAX(buf_len, 16384): - if (pw_max == -1 && gr_max == -1) { - buf_len = 16384; - } else { - buf_len = MAX(pw_max, gr_max); - } + buf_len = MAX(16384, MAX(pw_max, gr_max)); with MAX(MAX(),..) you also get rid of if() statement as resulting rvalue would be guaranteed to be positive. The rest is going along the common lines but would it be better to allocate memory once per LDAP client request rather than always ask for it per each NSS call? You can guarantee a sequential use of the buffer within the LDAP client request processing so there is no problem with locks but having this memory re-allocated on subsequent getpwnam()/getpwuid()/... calls within the same request processing seems suboptimal to me. -- / Alexander Bokovoy From thozza at redhat.com Wed Mar 4 14:26:31 2015 From: thozza at redhat.com (Tomas Hozza) Date: Wed, 04 Mar 2015 15:26:31 +0100 Subject: [Freeipa-devel] [PATCH 0316] Fix crash triggered by zone objects with unexpected DN In-Reply-To: <54EC8455.3040907@redhat.com> References: <54905091.5010201@redhat.com> <54E45D2A.6050907@redhat.com> <54EC8455.3040907@redhat.com> Message-ID: <54F71617.7040207@redhat.com> On 02/24/2015 03:01 PM, Petr Spacek wrote: > Hello, > > On 18.2.2015 10:36, Tomas Hozza wrote: > > On 12/16/2014 04:32 PM, Petr Spacek wrote: > >> Hello, > >> > >> Fix crash triggered by zone objects with unexpected DN. > >> > >> https://fedorahosted.org/bind-dyndb-ldap/ticket/148 > >> > > NACK. > > > > The patch seems to make no difference when using the reproducer from ticket 148 > > > > 18-Feb-2015 10:34:09.067 running > > 18-Feb-2015 10:34:09.139 ldap_helper.c:4876: INSIST(task == inst->task) failed, back trace > > 18-Feb-2015 10:34:09.139 #0 0x555555587a80 in ?? > > 18-Feb-2015 10:34:09.139 #1 0x7ffff620781a in ?? > > 18-Feb-2015 10:34:09.139 #2 0x7ffff20b00b2 in ?? > > 18-Feb-2015 10:34:09.140 #3 0x7ffff1e7ccf9 in ?? > > 18-Feb-2015 10:34:09.140 #4 0x7ffff1e7d992 in ?? > > 18-Feb-2015 10:34:09.140 #5 0x7ffff20a7f3b in ?? > > 18-Feb-2015 10:34:09.140 #6 0x7ffff5dda52a in ?? > > 18-Feb-2015 10:34:09.140 #7 0x7ffff508d79d in ?? 
> > 18-Feb-2015 10:34:09.140 exiting (due to assertion failure) > > > > Program received signal SIGABRT, Aborted. > > [Switching to Thread 0x7fffea7cd700 (LWP 1719)] > > 0x00007ffff4fc18c7 in __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55 > > 55 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig); > > Missing separate debuginfos, use: debuginfo-install cyrus-sasl-gssapi-2.1.26-19.fc21.x86_64 cyrus-sasl-lib-2.1.26-19.fc21.x86_64 cyrus-sasl-md5-2.1.26-19.fc21.x86_64 cyrus-sasl-plain-2.1.26-19.fc21.x86_64 gssproxy-0.3.1-4.fc21.x86_64 keyutils-libs-1.5.9-4.fc21.x86_64 libattr-2.4.47-9.fc21.x86_64 libdb-5.3.28-9.fc21.x86_64 libgcc-4.9.2-6.fc21.x86_64 libselinux-2.3-5.fc21.x86_64 nspr-4.10.8-1.fc21.x86_64 nss-3.17.4-1.fc21.x86_64 nss-softokn-freebl-3.17.4-1.fc21.x86_64 nss-util-3.17.4-1.fc21.x86_64 pcre-8.35-8.fc21.x86_64 sssd-client-1.12.3-4.fc21.x86_64 xz-libs-5.1.2-14alpha.fc21.x86_64 > > (gdb) bt > > #0 0x00007ffff4fc18c7 in __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55 > > #1 0x00007ffff4fc352a in __GI_abort () at abort.c:89 > > #2 0x0000555555587c29 in assertion_failed (file=, line=, type=, cond=) at ./main.c:220 > > #3 0x00007ffff620781a in isc_assertion_failed (file=file at entry=0x7ffff20bad2a "ldap_helper.c", line=line at entry=4876, type=type at entry=isc_assertiontype_insist, > > cond=cond at entry=0x7ffff20baf04 "task == inst->task") at assertions.c:57 > > #4 0x00007ffff20b00b2 in syncrepl_update (chgtype=1, entry=0x7ffff0125590, inst=0x7ffff7fa3160) at ldap_helper.c:4876 > > #5 ldap_sync_search_entry (ls=, msg=, entryUUID=, phase=LDAP_SYNC_CAPI_ADD) at ldap_helper.c:5031 > > #6 0x00007ffff1e7ccf9 in ldap_sync_search_entry (ls=ls at entry=0x7fffe40008c0, res=0x7fffe4003870) at ldap_sync.c:228 > > #7 0x00007ffff1e7d992 in ldap_sync_init (ls=0x7fffe40008c0, mode=mode at entry=3) at ldap_sync.c:792 > > #8 0x00007ffff20a7f3b in ldap_syncrepl_watcher (arg=0x7ffff7fa3160) at ldap_helper.c:5247 > > #9 0x00007ffff5dda52a in start_thread (arg=0x7fffea7cd700) at pthread_create.c:310 > > #10 0x00007ffff508d79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 > > Thank you for catching this! I was using slightly different test which > triggered the new code but by using different code path. > > This new version should be more robust. Please re-test it, thank you! > ACK for version 2. No crash during testing.... ;) Regards, -- Tomas Hozza Software Engineer - EMEA ENG Developer Experience PGP: 1D9F3C2D Red Hat Inc. http://cz.redhat.com From tbordaz at redhat.com Wed Mar 4 14:54:02 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Wed, 04 Mar 2015 15:54:02 +0100 Subject: [Freeipa-devel] [PATCH] 0006 Limit deadlocks between DS plugin DNA and slapi-nis Message-ID: <54F71C8A.90709@redhat.com> https://fedorahosted.org/freeipa/ticket/4927 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-Limit-deadlocks-between-DS-plugin-DNA-and-slapi-nis.patch Type: text/x-patch Size: 2838 bytes Desc: not available URL: From mbasti at redhat.com Wed Mar 4 15:17:28 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 04 Mar 2015 16:17:28 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records Message-ID: <54F72208.5090208@redhat.com> Ticket: https://fedorahosted.org/freeipa/ticket/4930 0200: 4.1, master Fixes traceback, which was raised if LDAP contained a record that was marked as unsupported. Now unsupported records are shown, if LDAP contains them. 0200: 4.1, master Records marked as unsupported will not show options for editing parts. 0202: only master Removes NSEC3PARAM record from record types. NSEC3PARAM can contain only zone, value is allowed only in idnszone objectclass, so do not confuse users. -- Martin Basti From mbasti at redhat.com Wed Mar 4 15:35:06 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 04 Mar 2015 16:35:06 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records In-Reply-To: <54F72208.5090208@redhat.com> References: <54F72208.5090208@redhat.com> Message-ID: <54F7262A.4010709@redhat.com> On 04/03/15 16:17, Martin Basti wrote: > Ticket: https://fedorahosted.org/freeipa/ticket/4930 > > 0200: 4.1, master > Fixes traceback, which was raised if LDAP contained a record that was > marked as unsupported. > Now unsupported records are shown, if LDAP contains them. > > 0200: 4.1, master > Records marked as unsupported will not show options for editing parts. > > 0202: only master > Removes NSEC3PARAM record from record types. NSEC3PARAM can contain > only zone, value is allowed only in idnszone objectclass, so do not > confuse users. > .... and patches attached :-) -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0200-DNS-fix-do-not-traceback-if-unsupported-records-are-.patch Type: text/x-patch Size: 4931 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0201-DNS-fix-do-not-show-part-options-for-unsupported-rec.patch Type: text/x-patch Size: 1160 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0202-DNS-remove-NSEC3PARAM-from-records.patch Type: text/x-patch Size: 9786 bytes Desc: not available URL: From sbose at redhat.com Wed Mar 4 17:14:53 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 4 Mar 2015 18:14:53 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150304141755.GB25455@redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> Message-ID: <20150304171453.GS3271@p.redhat.com> On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: > On Mon, 02 Mar 2015, Sumit Bose wrote: > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > >index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 > >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > >@@ -49,6 +49,220 @@ > > > >#define MAX(a,b) (((a)>(b))?(a):(b)) > >#define SSSD_DOMAIN_SEPARATOR '@' > >+#define MAX_BUF (1024*1024*1024) > >+ > >+ > >+ > >+static int get_buffer(size_t *_buf_len, char **_buf) > >+{ > >+ long pw_max; > >+ long gr_max; > >+ size_t buf_len; > >+ char *buf; > >+ > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); > >+ > >+ if (pw_max == -1 && gr_max == -1) { > >+ buf_len = 16384; > >+ } else { > >+ buf_len = MAX(pw_max, gr_max); > >+ } > Here you'd get buf_len equal to 1024 by default on Linux which is too > low for our use case. I think it would be beneficial to add one more > MAX(buf_len, 16384): > - if (pw_max == -1 && gr_max == -1) { > - buf_len = 16384; > - } else { > - buf_len = MAX(pw_max, gr_max); > - } > + buf_len = MAX(16384, MAX(pw_max, gr_max)); > > with MAX(MAX(),..) you also get rid of if() statement as resulting > rvalue would be guaranteed to be positive. done > > The rest is going along the common lines but would it be better to > allocate memory once per LDAP client request rather than always ask for > it per each NSS call? You can guarantee a sequential use of the buffer > within the LDAP client request processing so there is no problem with > locks but having this memory re-allocated on subsequent > getpwnam()/getpwuid()/... calls within the same request processing seems > suboptimal to me. ok, makes sense, I moved get_buffer() back to the callers. New version attached. bye, Sumit > > -- > / Alexander Bokovoy -------------- next part -------------- From 0b4e302866f734b93176d9104bd78a2e55702c40 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:29:00 +0100 Subject: [PATCH 134/136] Add configure check for cwrap libraries Currently only nss-wrapper is checked, checks for other crwap libraries can be added e.g. 
as AM_CHECK_WRAPPER(uid_wrapper, HAVE_UID_WRAPPER) --- daemons/configure.ac | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/daemons/configure.ac b/daemons/configure.ac index 97cd25115f371e9e549d209401df9325c7e112c1..7c979fe2d0b91e9d71fe4ca5a50ad78a4de79298 100644 --- a/daemons/configure.ac +++ b/daemons/configure.ac @@ -236,6 +236,30 @@ PKG_CHECK_EXISTS(cmocka, ) AM_CONDITIONAL([HAVE_CMOCKA], [test x$have_cmocka = xyes]) +dnl A macro to check presence of a cwrap (http://cwrap.org) wrapper on the system +dnl Usage: +dnl AM_CHECK_WRAPPER(name, conditional) +dnl If the cwrap library is found, sets the HAVE_$name conditional +AC_DEFUN([AM_CHECK_WRAPPER], +[ + FOUND_WRAPPER=0 + + AC_MSG_CHECKING([for $1]) + PKG_CHECK_EXISTS([$1], + [ + AC_MSG_RESULT([yes]) + FOUND_WRAPPER=1 + ], + [ + AC_MSG_RESULT([no]) + AC_MSG_WARN([cwrap library $1 not found, some tests will not run]) + ]) + + AM_CONDITIONAL($2, [ test x$FOUND_WRAPPER = x1]) +]) + +AM_CHECK_WRAPPER(nss_wrapper, HAVE_NSS_WRAPPER) + dnl -- dirsrv is needed for the extdom unit tests -- PKG_CHECK_MODULES([DIRSRV], [dirsrv >= 1.3.0]) dnl -- sss_idmap is needed by the extdom exop -- -- 2.1.0 -------------- next part -------------- From 79fe3779aeabc007365b183abc1474ab2bba6a7b Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:33:39 +0100 Subject: [PATCH 135/136] extdom: handle ERANGE return code for getXXYYY_r() calls The getXXYYY_r() calls require a buffer to store the variable data of the passwd and group structs. If the provided buffer is too small ERANGE is returned and the caller can try with a larger buffer again. Cmocka/cwrap based unit-tests for get*_r_wrapper() are added. Resolves https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 31 ++- .../ipa-extdom-extop/ipa_extdom.h | 9 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 226 +++++++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 274 +++++++++++++++------ .../ipa-extdom-extop/test_data/group | 2 + .../ipa-extdom-extop/test_data/passwd | 2 + .../ipa-extdom-extop/test_data/test_setup.sh | 3 + 7 files changed, 469 insertions(+), 78 deletions(-) create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index 0008476796f5b20f62f2c32e7b291b787fa7a6fc..a1679812ef3c5de8c6e18433cbb991a99ad0b6c8 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -35,9 +35,20 @@ libipa_extdom_extop_la_LIBADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +TESTS = +check_PROGRAMS = + if HAVE_CHECK -TESTS = extdom_tests -check_PROGRAMS = extdom_tests +TESTS += extdom_tests +check_PROGRAMS += extdom_tests +endif + +if HAVE_CMOCKA +if HAVE_NSS_WRAPPER +TESTS_ENVIRONMENT = . 
./test_data/test_setup.sh; +TESTS += extdom_cmocka_tests +check_PROGRAMS += extdom_cmocka_tests +endif endif extdom_tests_SOURCES = \ @@ -55,6 +66,22 @@ extdom_tests_LDADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +extdom_cmocka_tests_SOURCES = \ + ipa_extdom_cmocka_tests.c \ + ipa_extdom_common.c \ + $(NULL) +extdom_cmocka_tests_CFLAGS = $(CMOCKA_CFLAGS) +extdom_cmocka_tests_LDFLAGS = \ + -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ + $(NULL) +extdom_cmocka_tests_LDADD = \ + $(CMOCKA_LIBS) \ + $(LDAP_LIBS) \ + $(DIRSRV_LIBS) \ + $(SSSNSSIDMAP_LIBS) \ + $(NULL) + + appdir = $(IPA_DATA_DIR) app_DATA = \ ipa-extdom-extop-conf.ldif \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 56ca5009b1aa427f6c059b78ac392c768e461e2e..40bf933920fdd2ca19e5ef195aaa8fb820446cc5 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -174,4 +174,13 @@ int check_request(struct extdom_req *req, enum extdom_version version); int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, struct berval **berval); int pack_response(struct extdom_res *res, struct berval **ret_val); +int get_buffer(size_t *_buf_len, char **_buf); +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len); +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c new file mode 100644 index 0000000000000000000000000000000000000000..be736dd9c5af4d0b632f1dbc55033fdf738bad46 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -0,0 +1,226 @@ +/* + Authors: + Sumit Bose + + Copyright (C) 2015 Red Hat + + Extdom tests + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . 
+*/ + +#include +#include +#include +#include +#include + +#include +#include + + +#include "ipa_extdom.h" + +#define MAX_BUF (1024*1024*1024) + +void test_getpwnam_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(MAX_BUF, "non_exisiting_user", &pwd, &buf, + &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwnam_r_wrapper(MAX_BUF, "user", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(MAX_BUF, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 4000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(1024, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); + free(buf); +} + +void test_getpwuid_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(MAX_BUF, 99999, &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwuid_r_wrapper(MAX_BUF, 12345, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(MAX_BUF, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 4000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(1024, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); + free(buf); +} + +void test_getgrnam_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(MAX_BUF, "non_exisiting_group", &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrnam_r_wrapper(MAX_BUF, "group", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + 
assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(MAX_BUF, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(1024, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); + free(buf); +} + +void test_getgrgid_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(MAX_BUF, 99999, &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrgid_r_wrapper(MAX_BUF, 11111, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(MAX_BUF, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(1024, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, ENOMEM); + free(buf); +} + +int main(int argc, const char *argv[]) +{ + const UnitTest tests[] = { + unit_test(test_getpwnam_r_wrapper), + unit_test(test_getpwuid_r_wrapper), + unit_test(test_getgrnam_r_wrapper), + unit_test(test_getgrgid_r_wrapper), + }; + + return run_tests(tests); +} diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..57117398a934348eb6e532ef45102d4d13861e86 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,6 +49,192 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' +#define MAX_BUF (1024*1024*1024) + + + +int get_buffer(size_t *_buf_len, char **_buf) +{ + long pw_max; + long gr_max; + size_t buf_len; + char *buf; + + pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); + gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); + + buf_len = MAX(16384, MAX(pw_max, gr_max)); + + buf = malloc(sizeof(char) * buf_len); + if (buf == NULL) { + return LDAP_OPERATIONS_ERROR; + } + + *_buf_len = buf_len; + *_buf = buf; + + return LDAP_SUCCESS; +} + +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getpwnam_r(name, pwd, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = 
realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getpwuid_r(uid, pwd, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getgrnam_r(name, grp, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + char *tmp_buf; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getgrgid_r(gid, grp, buf, buf_len, &result)) == ERANGE) { + buf_len *= 2; + if (buf_len > buf_max) { + ret = ENOMEM; + goto done; + } + + tmp_buf = realloc(buf, buf_len); + if (tmp_buf == NULL) { + ret = ENOMEM; + goto done; + } + + buf = tmp_buf; + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} int parse_request_data(struct berval *req_val, struct extdom_req **_req) { @@ -191,33 +377,6 @@ int check_request(struct extdom_req *req, enum extdom_version version) return LDAP_SUCCESS; } -static int get_buffer(size_t *_buf_len, char **_buf) -{ - long pw_max; - long gr_max; - size_t buf_len; - char *buf; - - pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); - gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); - - if (pw_max == -1 && gr_max == -1) { - buf_len = 16384; - } else { - buf_len = MAX(pw_max, gr_max); - } - - buf = malloc(sizeof(char) * buf_len); - if (buf == NULL) { - return LDAP_OPERATIONS_ERROR; - } - - *_buf_len = buf_len; - *_buf = buf; - - return LDAP_SUCCESS; -} - static int get_user_grouplist(const char *name, gid_t gid, size_t *_ngroups, gid_t **_groups ) { @@ -323,7 +482,6 @@ static int pack_ber_user(enum response_types response_type, size_t buf_len; char *buf = NULL; struct group grp; - struct group *grp_result; size_t c; char *locat; char *short_user_name = NULL; @@ -375,15 +533,11 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r(groups[c], &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = 
LDAP_NO_SUCH_OBJECT; - goto done; - } ret = ber_printf(ber, "s", grp.gr_name); if (ret == -1) { @@ -542,7 +696,6 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -568,15 +721,11 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r(uid, &pwd, buf, buf_len, &pwd_result); + ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); @@ -610,7 +759,6 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, { int ret; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -635,15 +783,11 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r(gid, &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); @@ -676,9 +820,7 @@ static int handle_sid_request(enum request_types request_type, const char *sid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *domain_name = NULL; char *fq_name = NULL; char *object_name = NULL; @@ -724,17 +866,12 @@ static int handle_sid_request(enum request_types request_type, const char *sid, switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID @@ -755,17 +892,12 @@ static int handle_sid_request(enum request_types request_type, const char *sid, pwd.pw_shell, kv_list, berval); break; case SSS_ID_TYPE_GID: - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID @@ -806,9 +938,7 @@ static int handle_name_request(enum request_types request_type, int ret; char *fq_name = NULL; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -842,15 +972,8 @@ static int handle_name_request(enum request_types request_type, goto done; } - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); - if (ret != 0) { - /* according to the man page there are a couple of error codes - * which can indicate that the 
user was not found. To be on the - * safe side we fail back to the group lookup on all errors. */ - pwd_result = NULL; - } - - if (pwd_result != NULL) { + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID @@ -869,17 +992,16 @@ static int handle_name_request(enum request_types request_type, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); } else { /* no user entry found */ - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + /* according to the getpwnam() man page there are a couple of + * error codes which can indicate that the user was not found. To + * be on the safe side we fail back to the group lookup on all + * errors. */ + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group new file mode 100644 index 0000000000000000000000000000000000000000..8d1b012871b21cc9d5ffdba2168f35ef3e8a5f81 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group @@ -0,0 +1,2 @@ +group:x:11111:member0001,member0002 +group_big:x:22222:member0001,member0002,member0003,member0004,member0005,member0006,member0007,member0008,member0009,member0010,member0011,member0012,member0013,member0014,member0015,member0016,member0017,member0018,member0019,member0020,member0021,member0022,member0023,member0024,member0025,member0026,member0027,member0028,member0029,member0030,member0031,member0032,member0033,member0034,member0035,member0036,member0037,member0038,member0039,member0040,member0041,member0042,member0043,member0044,member0045,member0046,member0047,member0048,member0049,member0050,member0051,member0052,member0053,member0054,member0055,member0056,member0057,member0058,member0059,member0060,member0061,member0062,member0063,member0064,member0065,member0066,member0067,member0068,member0069,member0070,member0071,member0072,member0073,member0074,member0075,member0076,member0077,member0078,member0079,member0080,member0081,member0082,member0083,member0084,member0085,member0086,member0087,member0088,member0089,member0090,member0091,member0092,member0093,member0094,member0095,member0096,member0097,member0098,member0099,member0100,member0101,member0102,member0103,member0104,member0105,member0106,member0107,member0108,member0109,member0110,member0111,member0112,member0113,member0114,member0115,member0116,member0117,member0118,member0119,member0120,member0121,member0122,member0123,member0124,member0125,member0126,member0127,member0128,member0129,member0130,member0131,member0132,member0133,member0134,member0135,member0136,member0137,member0138,member0139,member0140,member0141,member0142,member0143,member0144,member0145,member0146,member0147,member0148,member0149,member0150,member0151,member0152,member0153,member0154,member0155,member0156,member0157,member0158,member0159,member0160,member0161,member0162,member0163,member0164,member0165,member0166,member0167,member0168,member0169,member0170,member0171,member0172,member0173,member0174,member0175,member0176,member0177,member0178
,member0179,member0180,member0181,member0182,member0183,member0184,member0185,member0186,member0187,member0188,member0189,member0190,member0191,member0192,member0193,member0194,member0195,member0196,member0197,member0198,member0199,member0200,member0201,member0202,member0203,member0204,member0205,member0206,member0207,member0208,member0209,member0210,member0211,member0212,member0213,member0214,member0215,member0216,member0217,member0218,member0219,member0220,member0221,member0222,member0223,member0224,member0225,member0226,member0227,member0228,member0229,member0230,member0231,member0232,member0233,member0234,member0235,member0236,member0237,member0238,member0239,member0240,member0241,member0242,member0243,member0244,member0245,member0246,member0247,member0248,member0249,member0250,member0251,member0252,member0253,member0254,member0255,member0256,member0257,member0258,member0259,member0260,member0261,member0262,member0263,member0264,member0265,member0266,member0267,member0268,member0269,member0270,member0271,member0272,member0273,member0274,member0275,member0276,member0277,member0278,member0279,member0280,member0281,member0282,member0283,member0284,member0285,member0286,member0287,member0288,member0289,member0290,member0291,member0292,member0293,member0294,member0295,member0296,member0297,member0298,member0299,member0300,member0301,member0302,member0303,member0304,member0305,member0306,member0307,member0308,member0309,member0310,member0311,member0312,member0313,member0314,member0315,member0316,member0317,member0318,member0319,member0320,member0321,member0322,member0323,member0324,member0325,member0326,member0327,member0328,member0329,member0330,member0331,member0332,member0333,member0334,member0335,member0336,member0337,member0338,member0339,member0340,member0341,member0342,member0343,member0344,member0345,member0346,member0347,member0348,member0349,member0350,member0351,member0352,member0353,member0354,member0355,member0356,member0357,member0358,member0359,member0360,member0361,member0362,member0363,member0364,member0365,member0366,member0367,member0368,member0369,member0370,member0371,member0372,member0373,member0374,member0375,member0376,member0377,member0378,member0379,member0380,member0381,member0382,member0383,member0384,member0385,member0386,member0387,member0388,member0389,member0390,member0391,member0392,member0393,member0394,member0395,member0396,member0397,member0398,member0399,member0400,member0401,member0402,member0403,member0404,member0405,member0406,member0407,member0408,member0409,member0410,member0411,member0412,member0413,member0414,member0415,member0416,member0417,member0418,member0419,member0420,member0421,member0422,member0423,member0424,member0425,member0426,member0427,member0428,member0429,member0430,member0431,member0432,member0433,member0434,member0435,member0436,member0437,member0438,member0439,member0440,member0441,member0442,member0443,member0444,member0445,member0446,member0447,member0448,member0449,member0450,member0451,member0452,member0453,member0454,member0455,member0456,member0457,member0458,member0459,member0460,member0461,member0462,member0463,member0464,member0465,member0466,member0467,member0468,member0469,member0470,member0471,member0472,member0473,member0474,member0475,member0476,member0477,member0478,member0479,member0480,member0481,member0482,member0483,member0484,member0485,member0486,member0487,member0488,member0489,member0490,member0491,member0492,member0493,member0494,member0495,member0496,member0497,member0498,member0499,member0500,member0501,m
ember0502,member0503,member0504,member0505,member0506,member0507,member0508,member0509,member0510,member0511,member0512,member0513,member0514,member0515,member0516,member0517,member0518,member0519,member0520,member0521,member0522,member0523,member0524,member0525,member0526,member0527,member0528,member0529,member0530,member0531,member0532,member0533,member0534,member0535,member0536,member0537,member0538,member0539,member0540,member0541,member0542,member0543,member0544,member0545,member0546,member0547,member0548,member0549,member0550,member0551,member0552,member0553,member0554,member0555,member0556,member0557,member0558,member0559,member0560,member0561,member0562,member0563,member0564,member0565,member0566,member0567,member0568,member0569,member0570,member0571,member0572,member0573,member0574,member0575,member0576,member0577,member0578,member0579,member0580,member0581,member0582,member0583,member0584,member0585,member0586,member0587,member0588,member0589,member0590,member0591,member0592,member0593,member0594,member0595,member0596,member0597,member0598,member0599,member0600,member0601,member0602,member0603,member0604,member0605,member0606,member0607,member0608,member0609,member0610,member0611,member0612,member0613,member0614,member0615,member0616,member0617,member0618,member0619,member0620,member0621,member0622,member0623,member0624,member0625,member0626,member0627,member0628,member0629,member0630,member0631,member0632,member0633,member0634,member0635,member0636,member0637,member0638,member0639,member0640,member0641,member0642,member0643,member0644,member0645,member0646,member0647,member0648,member0649,member0650,member0651,member0652,member0653,member0654,member0655,member0656,member0657,member0658,member0659,member0660,member0661,member0662,member0663,member0664,member0665,member0666,member0667,member0668,member0669,member0670,member0671,member0672,member0673,member0674,member0675,member0676,member0677,member0678,member0679,member0680,member0681,member0682,member0683,member0684,member0685,member0686,member0687,member0688,member0689,member0690,member0691,member0692,member0693,member0694,member0695,member0696,member0697,member0698,member0699,member0700,member0701,member0702,member0703,member0704,member0705,member0706,member0707,member0708,member0709,member0710,member0711,member0712,member0713,member0714,member0715,member0716,member0717,member0718,member0719,member0720,member0721,member0722,member0723,member0724,member0725,member0726,member0727,member0728,member0729,member0730,member0731,member0732,member0733,member0734,member0735,member0736,member0737,member0738,member0739,member0740,member0741,member0742,member0743,member0744,member0745,member0746,member0747,member0748,member0749,member0750,member0751,member0752,member0753,member0754,member0755,member0756,member0757,member0758,member0759,member0760,member0761,member0762,member0763,member0764,member0765,member0766,member0767,member0768,member0769,member0770,member0771,member0772,member0773,member0774,member0775,member0776,member0777,member0778,member0779,member0780,member0781,member0782,member0783,member0784,member0785,member0786,member0787,member0788,member0789,member0790,member0791,member0792,member0793,member0794,member0795,member0796,member0797,member0798,member0799,member0800,member0801,member0802,member0803,member0804,member0805,member0806,member0807,member0808,member0809,member0810,member0811,member0812,member0813,member0814,member0815,member0816,member0817,member0818,member0819,member0820,member0821,member0822,member0823,member0824,mem
ber0825,member0826,member0827,member0828,member0829,member0830,member0831,member0832,member0833,member0834,member0835,member0836,member0837,member0838,member0839,member0840,member0841,member0842,member0843,member0844,member0845,member0846,member0847,member0848,member0849,member0850,member0851,member0852,member0853,member0854,member0855,member0856,member0857,member0858,member0859,member0860,member0861,member0862,member0863,member0864,member0865,member0866,member0867,member0868,member0869,member0870,member0871,member0872,member0873,member0874,member0875,member0876,member0877,member0878,member0879,member0880,member0881,member0882,member0883,member0884,member0885,member0886,member0887,member0888,member0889,member0890,member0891,member0892,member0893,member0894,member0895,member0896,member0897,member0898,member0899,member0900,member0901,member0902,member0903,member0904,member0905,member0906,member0907,member0908,member0909,member0910,member0911,member0912,member0913,member0914,member0915,member0916,member0917,member0918,member0919,member0920,member0921,member0922,member0923,member0924,member0925,member0926,member0927,member0928,member0929,member0930,member0931,member0932,member0933,member0934,member0935,member0936,member0937,member0938,member0939,member0940,member0941,member0942,member0943,member0944,member0945,member0946,member0947,member0948,member0949,member0950,member0951,member0952,member0953,member0954,member0955,member0956,member0957,member0958,member0959,member0960,member0961,member0962,member0963,member0964,member0965,member0966,member0967,member0968,member0969,member0970,member0971,member0972,member0973,member0974,member0975,member0976,member0977,member0978,member0979,member0980,member0981,member0982,member0983,member0984,member0985,member0986,member0987,member0988,member0989,member0990,member0991,member0992,member0993,member0994,member0995,member0996,member0997,member0998,member0999,member1000,member1001,member1002,member1003,member1004,member1005,member1006,member1007,member1008,member1009,member1010,member1011,member1012,member1013,member1014,member1015,member1016,member1017,member1018,member1019,member1020,member1021,member1022,member1023,member1024,member1025,member1026,member1027,member1028,member1029,member1030,member1031,member1032,member1033,member1034,member1035,member1036,member1037,member1038,member1039,member1040,member1041,member1042,member1043,member1044,member1045,member1046,member1047,member1048,member1049,member1050,member1051,member1052,member1053,member1054,member1055,member1056,member1057,member1058,member1059,member1060,member1061,member1062,member1063,member1064,member1065,member1066,member1067,member1068,member1069,member1070,member1071,member1072,member1073,member1074,member1075,member1076,member1077,member1078,member1079,member1080,member1081,member1082,member1083,member1084,member1085,member1086,member1087,member1088,member1089,member1090,member1091,member1092,member1093,member1094,member1095,member1096,member1097,member1098,member1099,member1100,member1101,member1102,member1103,member1104,member1105,member1106,member1107,member1108,member1109,member1110,member1111,member1112,member1113,member1114,member1115,member1116,member1117,member1118,member1119,member1120,member1121,member1122,member1123,member1124,member1125,member1126,member1127,member1128,member1129,member1130,member1131,member1132,member1133,member1134,member1135,member1136,member1137,member1138,member1139,member1140,member1141,member1142,member1143,member1144,member1145,member1146,member1147,membe
r1148,member1149,member1150,member1151,member1152,member1153,member1154,member1155,member1156,member1157,member1158,member1159,member1160,member1161,member1162,member1163,member1164,member1165,member1166,member1167,member1168,member1169,member1170,member1171,member1172,member1173,member1174,member1175,member1176,member1177,member1178,member1179,member1180,member1181,member1182,member1183,member1184,member1185,member1186,member1187,member1188,member1189,member1190,member1191,member1192,member1193,member1194,member1195,member1196,member1197,member1198,member1199,member1200,member1201,member1202,member1203,member1204,member1205,member1206,member1207,member1208,member1209,member1210,member1211,member1212,member1213,member1214,member1215,member1216,member1217,member1218,member1219,member1220,member1221,member1222,member1223,member1224,member1225,member1226,member1227,member1228,member1229,member1230,member1231,member1232,member1233,member1234,member1235,member1236,member1237,member1238,member1239,member1240,member1241,member1242,member1243,member1244,member1245,member1246,member1247,member1248,member1249,member1250,member1251,member1252,member1253,member1254,member1255,member1256,member1257,member1258,member1259,member1260,member1261,member1262,member1263,member1264,member1265,member1266,member1267,member1268,member1269,member1270,member1271,member1272,member1273,member1274,member1275,member1276,member1277,member1278,member1279,member1280,member1281,member1282,member1283,member1284,member1285,member1286,member1287,member1288,member1289,member1290,member1291,member1292,member1293,member1294,member1295,member1296,member1297,member1298,member1299,member1300,member1301,member1302,member1303,member1304,member1305,member1306,member1307,member1308,member1309,member1310,member1311,member1312,member1313,member1314,member1315,member1316,member1317,member1318,member1319,member1320,member1321,member1322,member1323,member1324,member1325,member1326,member1327,member1328,member1329,member1330,member1331,member1332,member1333,member1334,member1335,member1336,member1337,member1338,member1339,member1340,member1341,member1342,member1343,member1344,member1345,member1346,member1347,member1348,member1349,member1350,member1351,member1352,member1353,member1354,member1355,member1356,member1357,member1358,member1359,member1360,member1361,member1362,member1363,member1364,member1365,member1366,member1367,member1368,member1369,member1370,member1371,member1372,member1373,member1374,member1375,member1376,member1377,member1378,member1379,member1380,member1381,member1382,member1383,member1384,member1385,member1386,member1387,member1388,member1389,member1390,member1391,member1392,member1393,member1394,member1395,member1396,member1397,member1398,member1399,member1400,member1401,member1402,member1403,member1404,member1405,member1406,member1407,member1408,member1409,member1410,member1411,member1412,member1413,member1414,member1415,member1416,member1417,member1418,member1419,member1420,member1421,member1422,member1423,member1424,member1425,member1426,member1427,member1428,member1429,member1430,member1431,member1432,member1433,member1434,member1435,member1436,member1437,member1438,member1439,member1440,member1441,member1442,member1443,member1444,member1445,member1446,member1447,member1448,member1449,member1450,member1451,member1452,member1453,member1454,member1455,member1456,member1457,member1458,member1459,member1460,member1461,member1462,member1463,member1464,member1465,member1466,member1467,member1468,member1469,member1470,member1
471,member1472,member1473,member1474,member1475,member1476,member1477,member1478,member1479,member1480,member1481,member1482,member1483,member1484,member1485,member1486,member1487,member1488,member1489,member1490,member1491,member1492,member1493,member1494,member1495,member1496,member1497,member1498,member1499,member1500,member1501,member1502,member1503,member1504,member1505,member1506,member1507,member1508,member1509,member1510,member1511,member1512,member1513,member1514,member1515,member1516,member1517,member1518,member1519,member1520,member1521,member1522,member1523,member1524,member1525,member1526,member1527,member1528,member1529,member1530,member1531,member1532,member1533,member1534,member1535,member1536,member1537,member1538,member1539,member1540,member1541,member1542,member1543,member1544,member1545,member1546,member1547,member1548,member1549,member1550,member1551,member1552,member1553,member1554,member1555,member1556,member1557,member1558,member1559,member1560,member1561,member1562,member1563,member1564,member1565,member1566,member1567,member1568,member1569,member1570,member1571,member1572,member1573,member1574,member1575,member1576,member1577,member1578,member1579,member1580,member1581,member1582,member1583,member1584,member1585,member1586,member1587,member1588,member1589,member1590,member1591,member1592,member1593,member1594,member1595,member1596,member1597,member1598,member1599,member1600,member1601,member1602,member1603,member1604,member1605,member1606,member1607,member1608,member1609,member1610,member1611,member1612,member1613,member1614,member1615,member1616,member1617,member1618,member1619,member1620,member1621,member1622,member1623,member1624,member1625,member1626,member1627,member1628,member1629,member1630,member1631,member1632,member1633,member1634,member1635,member1636,member1637,member1638,member1639,member1640,member1641,member1642,member1643,member1644,member1645,member1646,member1647,member1648,member1649,member1650,member1651,member1652,member1653,member1654,member1655,member1656,member1657,member1658,member1659,member1660,member1661,member1662,member1663,member1664,member1665,member1666,member1667,member1668,member1669,member1670,member1671,member1672,member1673,member1674,member1675,member1676,member1677,member1678,member1679,member1680,member1681,member1682,member1683,member1684,member1685,member1686,member1687,member1688,member1689,member1690,member1691,member1692,member1693,member1694,member1695,member1696,member1697,member1698,member1699,member1700,member1701,member1702,member1703,member1704,member1705,member1706,member1707,member1708,member1709,member1710,member1711,member1712,member1713,member1714,member1715,member1716,member1717,member1718,member1719,member1720,member1721,member1722,member1723,member1724,member1725,member1726,member1727,member1728,member1729,member1730,member1731,member1732,member1733,member1734,member1735,member1736,member1737,member1738,member1739,member1740,member1741,member1742,member1743,member1744,member1745,member1746,member1747,member1748,member1749,member1750,member1751,member1752,member1753,member1754,member1755,member1756,member1757,member1758,member1759,member1760,member1761,member1762,member1763,member1764,member1765,member1766,member1767,member1768,member1769,member1770,member1771,member1772,member1773,member1774,member1775,member1776,member1777,member1778,member1779,member1780,member1781,member1782,member1783,member1784,member1785,member1786,member1787,member1788,member1789,member1790,member1791,member1792,member1793,member179
4,member1795,member1796,member1797,member1798,member1799,member1800,member1801,member1802,member1803,member1804,member1805,member1806,member1807,member1808,member1809,member1810,member1811,member1812,member1813,member1814,member1815,member1816,member1817,member1818,member1819,member1820,member1821,member1822,member1823,member1824,member1825,member1826,member1827,member1828,member1829,member1830,member1831,member1832,member1833,member1834,member1835,member1836,member1837,member1838,member1839,member1840,member1841,member1842,member1843,member1844,member1845,member1846,member1847,member1848,member1849,member1850,member1851,member1852,member1853,member1854,member1855,member1856,member1857,member1858,member1859,member1860,member1861,member1862,member1863,member1864,member1865,member1866,member1867,member1868,member1869,member1870,member1871,member1872,member1873,member1874,member1875,member1876,member1877,member1878,member1879,member1880,member1881,member1882,member1883,member1884,member1885,member1886,member1887,member1888,member1889,member1890,member1891,member1892,member1893,member1894,member1895,member1896,member1897,member1898,member1899,member1900,member1901,member1902,member1903,member1904,member1905,member1906,member1907,member1908,member1909,member1910,member1911,member1912,member1913,member1914,member1915,member1916,member1917,member1918,member1919,member1920,member1921,member1922,member1923,member1924,member1925,member1926,member1927,member1928,member1929,member1930,member1931,member1932,member1933,member1934,member1935,member1936,member1937,member1938,member1939,member1940,member1941,member1942,member1943,member1944,member1945,member1946,member1947,member1948,member1949,member1950,member1951,member1952,member1953,member1954,member1955,member1956,member1957,member1958,member1959,member1960,member1961,member1962,member1963,member1964,member1965,member1966,member1967,member1968,member1969,member1970,member1971,member1972,member1973,member1974,member1975,member1976,member1977,member1978,member1979,member1980,member1981,member1982,member1983,member1984,member1985,member1986,member1987,member1988,member1989,member1990,member1991,member1992,member1993,member1994,member1995,member1996,member1997,member1998,member1999,member2000, diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd new file mode 100644 index 0000000000000000000000000000000000000000..971e9bdb8a5d43d915ce0adc42ac29f2f95ade52 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd @@ -0,0 +1,2 @@ +user:x:12345:23456:gecos:/home/user:/bin/shell 
+user_big:x:12346:23457:gecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosg
ecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosg
ecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosg
ecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosg
ecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosg
ecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecos:/home/user_big:/bin/shell diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh new file mode 100644 index 0000000000000000000000000000000000000000..ad839f340efe989a91cd6902f59c9a41483f68e0 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh @@ -0,0 +1,3 @@ +export LD_PRELOAD=$(pkg-config --libs nss_wrapper) +export NSS_WRAPPER_PASSWD=./test_data/passwd +export NSS_WRAPPER_GROUP=./test_data/group -- 2.1.0 -------------- next part -------------- From e514203d40d605fc5c576eac78cb385c9392edb8 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Mar 2015 10:59:34 +0100 Subject: [PATCH 136/136] extdom: make nss buffer configurable The get*_r_wrapper() calls expect a maximum buffer size to avoid memory shortage if too many threads try to allocate buffers e.g. for large groups. With this patch this size can be configured by setting ipaExtdomMaxNssBufSize in the plugin config object cn=ipa_extdom_extop,cn=plugins,cn=config. 
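For illustration (the following snippet is not part of the patch itself): with the attribute and config entry named above, the cap could be changed with an ldapmodify input along these lines, where the value is an arbitrary example (256 MiB) and, matching the 128*1024*1024 default used by the patch, is taken to be a size in bytes.

    dn: cn=ipa_extdom_extop,cn=plugins,cn=config
    changetype: modify
    replace: ipaExtdomMaxNssBufSize
    ipaExtdomMaxNssBufSize: 268435456

The plugin only reads the setting in its init function, so a directory server restart is presumably needed before a new value takes effect.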
Related to https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_common.c | 59 ++++++++++++++-------- .../ipa-extdom-extop/ipa_extdom_extop.c | 10 ++++ 3 files changed, 48 insertions(+), 22 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 40bf933920fdd2ca19e5ef195aaa8fb820446cc5..d4c851169ddadc869a59c53075f9fc7f33321085 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -150,6 +150,7 @@ struct extdom_res { struct ipa_extdom_ctx { Slapi_ComponentId *plugin_id; char *base_dn; + size_t max_nss_buf_size; }; struct domain_info { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 57117398a934348eb6e532ef45102d4d13861e86..448710993f551298d3a4cdcc19371b8432773478 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,9 +49,6 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' -#define MAX_BUF (1024*1024*1024) - - int get_buffer(size_t *_buf_len, char **_buf) { @@ -468,7 +465,8 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(enum response_types response_type, +static int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, const char *domain_name, const char *user_name, uid_t uid, gid_t gid, const char *gecos, const char *homedir, @@ -533,7 +531,8 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, + groups[c], &grp, &buf, &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -691,7 +690,8 @@ static int pack_ber_name(const char *domain_name, const char *name, return LDAP_SUCCESS; } -static int handle_uid_request(enum request_types request_type, uid_t uid, +static int handle_uid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { int ret; @@ -721,7 +721,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); + ret = getpwuid_r_wrapper(ctx->max_nss_buf_size, uid, &pwd, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -740,7 +741,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? 
RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -754,7 +756,8 @@ done: return ret; } -static int handle_gid_request(enum request_types request_type, gid_t gid, +static int handle_gid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { int ret; @@ -783,7 +786,8 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, gid, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -815,7 +819,8 @@ done: return ret; } -static int handle_sid_request(enum request_types request_type, const char *sid, +static int handle_sid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *sid, struct berval **berval) { int ret; @@ -866,7 +871,8 @@ static int handle_sid_request(enum request_types request_type, const char *sid, switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -885,14 +891,16 @@ static int handle_sid_request(enum request_types request_type, const char *sid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); break; case SSS_ID_TYPE_GID: - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -931,7 +939,8 @@ done: return ret; } -static int handle_name_request(enum request_types request_type, +static int handle_name_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) { @@ -972,7 +981,8 @@ static int handle_name_request(enum request_types request_type, goto done; } - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); @@ -986,7 +996,8 @@ static int handle_name_request(enum request_types request_type, goto done; } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -996,7 +1007,8 @@ static int handle_name_request(enum request_types request_type, * error codes which can indicate that the user was not found. To * be on the safe side we fail back to the group lookup on all * errors. 
*/ - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { ret = LDAP_NO_SUCH_OBJECT; goto done; @@ -1038,20 +1050,23 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(req->request_type, req->data.posix_uid.uid, + ret = handle_uid_request(ctx, req->request_type, + req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(req->request_type, req->data.posix_gid.gid, + ret = handle_gid_request(ctx, req->request_type, + req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); break; case INP_NAME: - ret = handle_name_request(req->request_type, req->data.name.object_name, + ret = handle_name_request(ctx, req->request_type, + req->data.name.object_name, req->data.name.domain_name, berval); break; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index aa66c145bc6cf2b77fdfe37be18da67588dc0439..e53f968db040a37fbd6a193f87b3671eeabda89d 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -40,6 +40,8 @@ #include "ipa_extdom.h" #include "util.h" +#define DEFAULT_MAX_NSS_BUFFER (128*1024*1024) + Slapi_PluginDesc ipa_extdom_plugin_desc = { IPA_EXTDOM_FEATURE_DESC, "FreeIPA project", @@ -185,6 +187,14 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) goto done; } + ctx->max_nss_buf_size = slapi_entry_attr_get_uint(e, + "ipaExtdomMaxNssBufSize"); + if (ctx->max_nss_buf_size == 0) { + ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; + } + LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); + + ret = 0; done: if (ret) { -- 2.1.0 From sbose at redhat.com Wed Mar 4 17:35:22 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 4 Mar 2015 18:35:22 +0100 Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context Message-ID: <20150304173522.GT3271@p.redhat.com> Hi, this patch series improves error reporting of the extdom plugin especially on the client side. Currently there is only SSSD ticket https://fedorahosted.org/sssd/ticket/2463 . Shall I create a corresponding FreeIPA ticket as well? In the third patch I already added a handful of new error messages. Suggestions for more messages are welcome. 
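To make the intended flow easier to follow, here is a condensed sketch (it is not part of the patches themselves, and the lookup_failed condition is only a stand-in for any of the real failure paths): a handler records a diagnostic on the request context and returns an LDAP error code, and the extended operation entry point forwards whichever message was recorded when it sends the result.

    /* in a handler such as handle_uid_request() */
    if (lookup_failed) {
        set_err_msg(req, "Failed to lookup SID by UID");
        ret = LDAP_OPERATIONS_ERROR;
        goto done;
    }

    /* in ipa_extdom_extop(), before sending the result */
    if (req->err_msg != NULL) {
        err_msg = req->err_msg;
    }
    slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL);

Because set_err_msg() refuses to overwrite an already stored message, the first (most specific) error recorded for a request is the one that reaches the client.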
bye, Sumit -------------- next part -------------- From 2e8e4abb7e79d44f0ce0560daeb7696d9641a684 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:52:10 +0100 Subject: [PATCH 137/139] extdom: add err_msg member to request context --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 5 ++++- 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index d4c851169ddadc869a59c53075f9fc7f33321085..421f6c6ea625aba2db7e9ffc84115b3647673699 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -116,6 +116,7 @@ struct extdom_req { gid_t gid; } posix_gid; } data; + char *err_msg; }; struct extdom_res { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 448710993f551298d3a4cdcc19371b8432773478..27c1313cb1f6f614b0c74992d4a768c3051a86ae 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -360,6 +360,7 @@ void free_req_data(struct extdom_req *req) break; } + free(req->err_msg); free(req); } diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index e53f968db040a37fbd6a193f87b3671eeabda89d..a70ed20f1816a7e00385edae8a81dd5dad9e9362 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -145,11 +145,14 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) rc = LDAP_SUCCESS; done: - free_req_data(req); + if (req->err_msg != NULL) { + err_msg = req->err_msg; + } if (err_msg != NULL) { LOG("%s", err_msg); } slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); + free_req_data(req); return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; } -- 2.1.0 -------------- next part -------------- From 80406c884eddeb2ebf35bd12a7ed1a8ddd4af2fe Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:53:06 +0100 Subject: [PATCH 138/139] extdom: add add_err_msg() with test --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 43 ++++++++++++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 23 ++++++++++++ 3 files changed, 67 insertions(+) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 421f6c6ea625aba2db7e9ffc84115b3647673699..0d5d55d2fb0ece95466b0225b145a4edeef18efa 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,4 +185,5 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index be736dd9c5af4d0b632f1dbc55033fdf738bad46..0ca67030bf667ecd559443842cda166354ce54ce 100644 --- 
a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,6 +213,47 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +void extdom_req_setup(void **state) +{ + struct extdom_req *req; + + req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(req); + + *state = req; +} + +void extdom_req_teardown(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + + free_req_data(req); +} + +void test_set_err_msg(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + assert_null(req->err_msg); + + set_err_msg(NULL, NULL); + assert_null(req->err_msg); + + set_err_msg(req, NULL); + assert_null(req->err_msg); + + set_err_msg(req, "Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); + + set_err_msg(req, "2nd Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -220,6 +261,8 @@ int main(int argc, const char *argv[]) unit_test(test_getpwuid_r_wrapper), unit_test(test_getgrnam_r_wrapper), unit_test(test_getgrgid_r_wrapper), + unit_test_setup_teardown(test_set_err_msg, + extdom_req_setup, extdom_req_teardown), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 27c1313cb1f6f614b0c74992d4a768c3051a86ae..ce3884805bd3f8276d29d066c2e54897561d0f99 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -233,6 +233,29 @@ done: return ret; } +void set_err_msg(struct extdom_req *req, const char *format, ...) +{ + int ret; + va_list ap; + + if (req == NULL) { + return; + } + + if (format == NULL || req->err_msg != NULL) { + /* Do not override an existing error message. */ + return; + } + va_start(ap, format); + + ret = vasprintf(&req->err_msg, format, ap); + if (ret == -1) { + req->err_msg = strdup("vasprintf failed.\n"); + } + + va_end(ap); +} + int parse_request_data(struct berval *req_val, struct extdom_req **_req) { BerElement *ber = NULL; -- 2.1.0 -------------- next part -------------- From 131d5fe646d149057220abf6e0480059af8bf427 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 13:37:50 +0100 Subject: [PATCH 139/139] extdom: add selected error messages --- .../ipa-extdom-extop/ipa_extdom_common.c | 51 ++++++++++++++++------ 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index ce3884805bd3f8276d29d066c2e54897561d0f99..214d6fb23bf21cddac648c0f6b804858d518dcf0 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -304,26 +304,34 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) * } */ + req = calloc(sizeof(struct extdom_req), 1); + if (req == NULL) { + /* Since we return req even in the case of an error we make sure is is + * always safe to call free_req_data() on the returned data. 
*/ + *_req = NULL; + return LDAP_OPERATIONS_ERROR; + } + + *_req = req; + if (req_val == NULL || req_val->bv_val == NULL || req_val->bv_len == 0) { + set_err_msg(req, "Missing request data"); return LDAP_PROTOCOL_ERROR; } ber = ber_init(req_val); if (ber == NULL) { + set_err_msg(req, "Cannot initialize BER struct"); return LDAP_PROTOCOL_ERROR; } tag = ber_scanf(ber, "{ee", &input_type, &request_type); if (tag == LBER_ERROR) { ber_free(ber, 1); + set_err_msg(req, "Cannot read input and request type"); return LDAP_PROTOCOL_ERROR; } - req = calloc(sizeof(struct extdom_req), 1); - if (req == NULL) { - return LDAP_OPERATIONS_ERROR; - } - req->input_type = input_type; req->request_type = request_type; @@ -347,17 +355,15 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) break; default: ber_free(ber, 1); - free(req); + set_err_msg(req, "Unknown input type"); return LDAP_PROTOCOL_ERROR; } ber_free(ber, 1); if (tag == LBER_ERROR) { - free(req); + set_err_msg(req, "Failed to decode BER data"); return LDAP_PROTOCOL_ERROR; } - *_req = req; - return LDAP_SUCCESS; } @@ -715,6 +721,7 @@ static int pack_ber_name(const char *domain_name, const char *name, } static int handle_uid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { @@ -738,6 +745,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by UID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -756,6 +764,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -781,6 +790,7 @@ done: } static int handle_gid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { @@ -803,6 +813,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by GID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -821,6 +832,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -844,6 +856,7 @@ done: } static int handle_sid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *sid, struct berval **berval) { @@ -864,6 +877,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup name by SID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -871,6 +885,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, sep = strchr(fq_name, SSSD_DOMAIN_SEPARATOR); if (sep == NULL) { + set_err_msg(req, "Failed to split fully qualified name"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -878,6 +893,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, object_name = strndup(fq_name, (sep - fq_name)); domain_name = strdup(sep + 1); if (object_name == NULL || domain_name == NULL) { 
+ set_err_msg(req, "Missing name or domain"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -906,6 +922,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed ot read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -934,6 +951,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -964,6 +982,7 @@ done: } static int handle_name_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) @@ -982,6 +1001,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, domain_name); if (ret == -1) { ret = LDAP_OPERATIONS_ERROR; + set_err_msg(req, "Failed to create fully qualified name"); fq_name = NULL; /* content is undefined according to asprintf(3) */ goto done; @@ -993,6 +1013,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by name"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1012,6 +1033,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -1045,6 +1067,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to read original data"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1074,27 +1097,29 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(ctx, req->request_type, + ret = handle_uid_request(ctx, req, req->request_type, req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(ctx, req->request_type, + ret = handle_gid_request(ctx, req, req->request_type, req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req, req->request_type, req->data.sid, + berval); break; case INP_NAME: - ret = handle_name_request(ctx, req->request_type, + ret = handle_name_request(ctx, req, req->request_type, req->data.name.object_name, req->data.name.domain_name, berval); break; default: + set_err_msg(req, "Unknown input type"); ret = LDAP_PROTOCOL_ERROR; goto done; } -- 2.1.0 From sbose at redhat.com Wed Mar 4 17:42:05 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 4 Mar 2015 18:42:05 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka Message-ID: <20150304174205.GU3271@p.redhat.com> Hi, this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 which converts the check-based tests of the extdom plugin to cmocka. 
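For readers who only know one of the two frameworks, the conversion is mostly mechanical. Abbreviated from the patch below (declarations and setup omitted), a check test such as

    START_TEST(test_decode)
    {
        ret = parse_request_data(&req_val, &req);
        fail_unless(ret == LDAP_SUCCESS, "parse_request_data() failed.");
        fail_unless(req->input_type == INP_SID,
                    "parse_request_data() returned unexpected input type");
    }
    END_TEST

becomes a plain function with explicit cmocka asserts

    void test_decode(void **state)
    {
        ret = parse_request_data(&req_val, &req);
        assert_int_equal(ret, LDAP_SUCCESS);
        assert_int_equal(req->input_type, INP_SID);
    }

and the suite/tcase boilerplate is replaced by a list of unit_test()/unit_test_setup_teardown() entries handed to run_tests() in main(), so the converted tests and the existing cmocka/nss_wrapper tests end up in a single extdom_cmocka_tests binary.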
bye, Sumit -------------- next part -------------- From e11c525d27ab19abbb16e12195a2ea9eb6421c80 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 9 Feb 2015 18:12:01 +0100 Subject: [PATCH] extdom: migrate check-based test to cmocka Besides moving the existing tests to cmocka two new tests are added which were missing from the old tests. Related to https://fedorahosted.org/freeipa/ticket/4922 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 20 -- .../ipa-extdom-extop/ipa_extdom.h | 14 ++ .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 156 +++++++++++++++- .../ipa-extdom-extop/ipa_extdom_common.c | 28 +-- .../ipa-extdom-extop/ipa_extdom_tests.c | 203 --------------------- 5 files changed, 176 insertions(+), 245 deletions(-) delete mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index a1679812ef3c5de8c6e18433cbb991a99ad0b6c8..9c2fa1c6a5f95ba06b33c0a5b560939863a88f0e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -38,11 +38,6 @@ libipa_extdom_extop_la_LIBADD = \ TESTS = check_PROGRAMS = -if HAVE_CHECK -TESTS += extdom_tests -check_PROGRAMS += extdom_tests -endif - if HAVE_CMOCKA if HAVE_NSS_WRAPPER TESTS_ENVIRONMENT = . ./test_data/test_setup.sh; @@ -51,21 +46,6 @@ check_PROGRAMS += extdom_cmocka_tests endif endif -extdom_tests_SOURCES = \ - ipa_extdom_tests.c \ - ipa_extdom_common.c \ - $(NULL) -extdom_tests_CFLAGS = $(CHECK_CFLAGS) -extdom_tests_LDFLAGS = \ - -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ - $(NULL) -extdom_tests_LDADD = \ - $(CHECK_LIBS) \ - $(LDAP_LIBS) \ - $(DIRSRV_LIBS) \ - $(SSSNSSIDMAP_LIBS) \ - $(NULL) - extdom_cmocka_tests_SOURCES = \ ipa_extdom_cmocka_tests.c \ ipa_extdom_common.c \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 0d5d55d2fb0ece95466b0225b145a4edeef18efa..65dd43ea35726db6231386a0fcbba9be1bd71412 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,5 +185,19 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +int pack_ber_sid(const char *sid, struct berval **berval); +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval); +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval); +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval); void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index 0ca67030bf667ecd559443842cda166354ce54ce..87cb79aea38a901058de745fe986426479aa34b6 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ 
b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,30 +213,46 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +struct test_data { + struct extdom_req *req; + struct ipa_extdom_ctx *ctx; +}; + void extdom_req_setup(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = calloc(sizeof(struct extdom_req), 1); - assert_non_null(req); + test_data = calloc(sizeof(struct test_data), 1); + assert_non_null(test_data); - *state = req; + test_data->req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(test_data->req); + + test_data->ctx = calloc(sizeof(struct ipa_extdom_ctx), 1); + assert_non_null(test_data->req); + + *state = test_data; } void extdom_req_teardown(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = (struct extdom_req *) *state; + test_data = (struct test_data *) *state; - free_req_data(req); + free_req_data(test_data->req); + free(test_data->ctx); + free(test_data); } void test_set_err_msg(void **state) { struct extdom_req *req; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + req = test_data->req; - req = (struct extdom_req *) *state; assert_null(req->err_msg); set_err_msg(NULL, NULL); @@ -254,6 +270,127 @@ void test_set_err_msg(void **state) assert_string_equal(req->err_msg, "Test [ABCD][1234]."); } +#define TEST_SID "S-1-2-3-4" +#define TEST_DOMAIN_NAME "DOMAIN" + +char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ + 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74}; +char res_uid[] = {0x30, 0x1c, 0x0a, 0x01, 0x03, 0x30, 0x17, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74, 0x02, 0x02, 0x30, 0x39, 0x02, 0x03, 0x00, 0xd4, 0x31}; +char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ + 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ + 0xd4, 0x31}; + +void test_encode(void **state) +{ + int ret; + struct berval *resp_val; + struct ipa_extdom_ctx *ctx; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + ctx = test_data->ctx; + + ctx->max_nss_buf_size = (128*1024*1024); + + ret = pack_ber_sid(TEST_SID, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_sid), resp_val->bv_len); + assert_memory_equal(res_sid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_name(TEST_DOMAIN_NAME, "test", &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_nam), resp_val->bv_len); + assert_memory_equal(res_nam, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_user(ctx, RESP_USER, TEST_DOMAIN_NAME, "test", 12345, 54321, + NULL, NULL, NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_uid), resp_val->bv_len); + assert_memory_equal(res_uid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_group(RESP_GROUP, TEST_DOMAIN_NAME, "test_group", 54321, + NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_gid), resp_val->bv_len); + assert_memory_equal(res_gid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); +} + +char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ + 0x53, 0x2d, 
0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ + 0x74, 0x65, 0x73, 0x74}; +char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ + 0x30, 0x39}; +char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ + 0x00, 0xd4, 0x31}; + +void test_decode(void **state) +{ + struct berval req_val; + struct extdom_req *req; + int ret; + + req_val.bv_val = req_sid; + req_val.bv_len = sizeof(req_sid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_SID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.sid, "S-1-2-3-4"); + free_req_data(req); + + req_val.bv_val = req_nam; + req_val.bv_len = sizeof(req_nam); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_NAME); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.name.domain_name, "DOMAIN"); + assert_string_equal(req->data.name.object_name, "test"); + free_req_data(req); + + req_val.bv_val = req_uid; + req_val.bv_len = sizeof(req_uid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_UID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_uid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_uid.uid, 12345); + free_req_data(req); + + req_val.bv_val = req_gid; + req_val.bv_len = sizeof(req_gid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_GID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_gid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_gid.gid, 54321); + free_req_data(req); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -263,6 +400,9 @@ int main(int argc, const char *argv[]) unit_test(test_getgrgid_r_wrapper), unit_test_setup_teardown(test_set_err_msg, extdom_req_setup, extdom_req_teardown), + unit_test_setup_teardown(test_encode, + extdom_req_setup, extdom_req_teardown), + unit_test(test_decode), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 214d6fb23bf21cddac648c0f6b804858d518dcf0..62e1e8aac35876581401127a45716ec80c70c282 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -468,7 +468,7 @@ static int add_kv_list(BerElement *ber, struct sss_nss_kv *kv_list) return LDAP_SUCCESS; } -static int pack_ber_sid(const char *sid, struct berval **berval) +int pack_ber_sid(const char *sid, struct berval **berval) { BerElement *ber = NULL; int ret; @@ -495,13 +495,13 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(struct ipa_extdom_ctx *ctx, - enum response_types response_type, - const char *domain_name, const char *user_name, - uid_t uid, gid_t gid, - const char *gecos, const char *homedir, - const char *shell, struct 
sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -610,10 +610,10 @@ done: return ret; } -static int pack_ber_group(enum response_types response_type, - const char *domain_name, const char *group_name, - gid_t gid, char **members, struct sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -694,8 +694,8 @@ done: return ret; } -static int pack_ber_name(const char *domain_name, const char *name, - struct berval **berval) +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval) { BerElement *ber = NULL; int ret; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c deleted file mode 100644 index 1467e256619f827310408d558d48c580118d9a32..0000000000000000000000000000000000000000 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c +++ /dev/null @@ -1,203 +0,0 @@ -/** BEGIN COPYRIGHT BLOCK - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation, either version 3 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see . - * - * Additional permission under GPLv3 section 7: - * - * In the following paragraph, "GPL" means the GNU General Public - * License, version 3 or any later version, and "Non-GPL Code" means - * code that is governed neither by the GPL nor a license - * compatible with the GPL. - * - * You may link the code of this Program with Non-GPL Code and convey - * linked combinations including the two, provided that such Non-GPL - * Code only links to the code of this Program through those well - * defined interfaces identified in the file named EXCEPTION found in - * the source code files (the "Approved Interfaces"). The files of - * Non-GPL Code may instantiate templates or use macros or inline - * functions from the Approved Interfaces without causing the resulting - * work to be covered by the GPL. Only the copyright holders of this - * Program may make changes or additions to the list of Approved - * Interfaces. - * - * Authors: - * Sumit Bose - * - * Copyright (C) 2011 Red Hat, Inc. - * All rights reserved. 
- * END COPYRIGHT BLOCK **/ - -#include - -#include "ipa_extdom.h" -#include "util.h" - -char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ - 0x53, 0x2d, 0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ - 0x74, 0x65, 0x73, 0x74}; -char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ - 0x30, 0x39}; -char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ - 0x00, 0xd4, 0x31}; - -char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ - 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74}; -char res_uid[] = {0x30, 0x17, 0x0a, 0x01, 0x03, 0x30, 0x12, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74, 0x02, 0x02, 0x30, 0x39}; -char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ - 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ - 0xd4, 0x31}; - -#define TEST_SID "S-1-2-3-4" -#define TEST_DOMAIN_NAME "DOMAIN" - -START_TEST(test_encode) -{ - int ret; - struct extdom_res res; - struct berval *resp_val; - - res.response_type = RESP_SID; - res.data.sid = TEST_SID; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_sid) == resp_val->bv_len && - memcmp(res_sid, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER blob."); - ber_bvfree(resp_val); - - res.response_type = RESP_NAME; - res.data.name.domain_name = TEST_DOMAIN_NAME; - res.data.name.object_name = "test"; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_nam) == resp_val->bv_len && - memcmp(res_nam, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER blob."); - ber_bvfree(resp_val); -} -END_TEST - -START_TEST(test_decode) -{ - struct berval req_val; - struct extdom_req *req; - int ret; - - req_val.bv_val = req_sid; - req_val.bv_len = sizeof(req_sid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, "parse_request_data() failed."); - fail_unless(req->input_type == INP_SID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.sid, "S-1-2-3-4") == 0, - "parse_request_data() returned unexpected sid"); - free_req_data(req); - - req_val.bv_val = req_nam; - req_val.bv_len = sizeof(req_nam); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_NAME, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.name.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(strcmp(req->data.name.object_name, "test") == 0, - "parse_request_data() returned unexpected object name"); - 
free_req_data(req); - - req_val.bv_val = req_uid; - req_val.bv_len = sizeof(req_uid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_UID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_uid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_uid.uid == 12345, - "parse_request_data() returned unexpected uid [%d]", - req->data.posix_uid.uid); - free_req_data(req); - - req_val.bv_val = req_gid; - req_val.bv_len = sizeof(req_gid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_GID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_gid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_gid.gid == 54321, - "parse_request_data() returned unexpected gid [%d]", - req->data.posix_gid.gid); - free_req_data(req); -} -END_TEST - -Suite * ipa_extdom_suite(void) -{ - Suite *s = suite_create("IPA extdom"); - - TCase *tc_core = tcase_create("Core"); - tcase_add_test(tc_core, test_decode); - tcase_add_test(tc_core, test_encode); - /* TODO: add test for create_response() */ - suite_add_tcase(s, tc_core); - - return s; -} - -int main(void) -{ - int number_failed; - - Suite *s = ipa_extdom_suite (); - SRunner *sr = srunner_create (s); - srunner_run_all (sr, CK_VERBOSE); - number_failed = srunner_ntests_failed (sr); - srunner_free (sr); - - return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE; -} -- 2.1.0 From sbose at redhat.com Wed Mar 4 17:47:47 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 4 Mar 2015 18:47:47 +0100 Subject: [Freeipa-devel] [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client Message-ID: <20150304174747.GV3271@p.redhat.com> Hi, with this patch the extdom plugin will properly indicate to a client if the search object does not exist instead of returning a generic error. This is important for the client to act accordingly and improve debugging possibilities. 
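To sketch what this allows on the client side (a generic libldap example, not SSSD's actual code; ld is assumed to be the connection handle and result the response message of the extended operation), the caller can now tell "the object really does not exist" apart from "the server failed":

    int rc, err;
    char *diag = NULL;

    rc = ldap_parse_result(ld, result, &err, NULL, &diag, NULL, NULL, 0);
    if (rc == LDAP_SUCCESS && err == LDAP_NO_SUCH_OBJECT) {
        /* the requested user/group/SID is simply unknown -> clean miss */
    } else if (rc != LDAP_SUCCESS || err != LDAP_SUCCESS) {
        /* a real failure -> log the diagnostic message (see the err_msg series) */
    }
    ldap_memfree(diag);

Before this change both cases came back as LDAP_OPERATIONS_ERROR with a generic message, so the two situations were indistinguishable for the caller.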
bye, Sumit -------------- next part -------------- From 3895fa21524efc3a22bfb36b1a9aa34277b8dd46 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 13:39:04 +0100 Subject: [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index a70ed20f1816a7e00385edae8a81dd5dad9e9362..a040f2beba073d856053429face2f464347b2524 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -123,8 +123,12 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) ret = handle_request(ctx, req, &ret_val); if (ret != LDAP_SUCCESS) { - rc = LDAP_OPERATIONS_ERROR; - err_msg = "Failed to handle the request.\n"; + if (ret == LDAP_NO_SUCH_OBJECT) { + rc = LDAP_NO_SUCH_OBJECT; + } else { + rc = LDAP_OPERATIONS_ERROR; + err_msg = "Failed to handle the request.\n"; + } goto done; } -- 2.1.0 From sbose at redhat.com Wed Mar 4 17:51:22 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 4 Mar 2015 18:51:22 +0100 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak Message-ID: <20150304175122.GW3271@p.redhat.com> Hi, while running 389ds with valgrind to see if my other patches introduced a memory leak I found an older one which is fixed by this patch. bye, Sumit -------------- next part -------------- From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 17:53:08 +0100 Subject: [PATCH] extdom: fix memory leak --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + 1 file changed, 1 insertion(+) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -156,6 +156,7 @@ done: LOG("%s", err_msg); } slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); + ber_bvfree(ret_val); free_req_data(req); return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; } -- 2.1.0 From dkupka at redhat.com Wed Mar 4 17:52:40 2015 From: dkupka at redhat.com (David Kupka) Date: Wed, 04 Mar 2015 18:52:40 +0100 Subject: [Freeipa-devel] [PATCH] 0040 Add realm name to backup header file. In-Reply-To: <54F70464.5030803@redhat.com> References: <54F70464.5030803@redhat.com> Message-ID: <54F74668.1030108@redhat.com> On 03/04/2015 02:11 PM, David Kupka wrote: > https://fedorahosted.org/freeipa/ticket/4896 > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel > Honza proposed different approach. We can extract default.conf and use it in to create api object with the right values. I've implemented it and it works too additionally we don't need to change the header content for now. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-dkupka-0040-2-Restore-default.conf-and-use-it-to-build-API.patch Type: text/x-patch Size: 6224 bytes Desc: not available URL: From mbasti at redhat.com Wed Mar 4 18:04:14 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 04 Mar 2015 19:04:14 +0100 Subject: [Freeipa-devel] IPA 4.2 server upgrade refactoring - summary Message-ID: <54F7491E.9020405@redhat.com> Summary extracted from thread "[Freeipa-devel] IPA Server upgrade 4.2 design" Design page: http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring * ipa-server-upgrade will not allow to use DM password, only LDAPI will be used for upgrade * upgrade files will be executed in alphabetical order, updater will not require number in update file name. But we should still keep the numbering in new upgrade files. * LDAP updates will be applied per file, in order specified in file (from top to bottom) * new directive in update files *"plugin:"* will execute update plugins (renamed form "update-plugin" to "plugin") * option "--skip-version-check" will override version check in ipactl and ipa-server-upgrade commands (was --force before, but this collides with existing --force option in ipactl) * huge warning, "this may broke everything", in help, man, or CLI for --skip-version-check option * ipa-upgradeconfig will be removed * ipa-ldap-updater will be changed to not allow overall update. It stays as util for execute particular update files. How and when execute upgrades (after discussion with Honza) -- not updated in design page yet A) ipactl*: A.1) compare build platform and platform from last upgrade/installation (based on used ipaplatform file) A.1.i) if platform mismatch, raise error and prevent to start services A.2) version of LDAP data(+schema included) compared to current version (VENDOR_VERSION will be used) A.2.i) if version of LDAP data is newer than version of build, raise error and prevent services to start A.2.ii) if version of LDAP data is older than version of build, upgrade is required A.2.iii) if versions are the same, continue A.3) check if services requires update (this should be available after installer refactoring)** A.3.i) if any service requires configuration upgrade, upgrade is required A.3.ii) if any service raises an error about wrong configuration (which cannot be automatically fixed and requires manual fix by user), raise error and prevent to start services A.4.i) if upgrade is needed, ipactl will prevent to start services, and promt user to run ipa-server-upgrade manually (ipactl will not execute upgrade itself) A.4.ii) otherwise start services B) ipa-server-upgrade* B.0) services should be in shutdown state, if not, stop services (services will be started during upgrade on demand, then stopped) B.1) compare build platform and platform from last upgrade/installation (based on used ipaplatform file) B.1.i) if platform mismatch, raise error stop upgrade B.2) check version of LDAP data B.2.i) if LDAP data version is newer than build version, raise error stop upgrade B.2.ii) if LDAP data version is the same as build version, skip schema and LDAP data upgrade B.2.iii) if LDAP data version is older than build version --> data upgrade required B.3) Check if services require upgrade, detect errors as in A.3) (?? 
this step may not be there)** B.4) if data upgrade required, upgrade schema, then upgrade data, if successful store current build version as data version B.5) Run service upgrade (if needed?)** B.6) if upgrade is successful, inform user that the one can now start IPA (upgrade will not start IPA after it is done) * with --skip-version-check option, ipactl will start services, ipa-server-upgrade will upgrade everything ** services will handle local configuration upgrade by themselves. They will not use data version to decide if upgrade is required (TODO implementation details, Honza wants it in this way - sharing code with installers) Upgrade in different enviroments: 1) Upgrade during RPM transaction (as we do now) -- if it is possible, upgrade will be executed during RPM transaction, service will be started after upgrade (+ add messages "IPA is currently upgrading, please wait") 2) Upgrade cannot be executed during RPM transaction (fedup, --no-script, containers) -- IPA will not start if update is required, user have to run upgrade manually, using ipa-server-upgrade command (+ log/print errors that there is upgrade required) Martin^2 -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From rcritten at redhat.com Wed Mar 4 18:41:18 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Wed, 04 Mar 2015 13:41:18 -0500 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <54F24908.2080308@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> Message-ID: <54F751CE.1000202@redhat.com> Nathan Kinder wrote: > > > On 02/28/2015 01:13 PM, Nathan Kinder wrote: >> >> >> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>> Nathan Kinder wrote: >>>> >>>> >>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>> >>>>> >>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>> Nathan Kinder wrote: >>>>>>> >>>>>>> >>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>> Nathan Kinder wrote: >>>>>>>>> >>>>>>>>> >>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>> ipa-client-install when syncing time from the NTP server. Now that we >>>>>>>>>>> use ntpd to perform the time sync, the client install can end up hanging >>>>>>>>>>> forever when the server is not reachable (firewall issues, etc.). These >>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>> >>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>> >>>>>>>>>>> 2 - Implement a timeout capability that is used when we run ntpd to >>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>> >>>>>>>>>>> The one potentially contentious issue is that this introduces a new >>>>>>>>>>> dependency on python-subprocess32 to allow us to have timeout support >>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I don't see it >>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged there. 
>>>>>>>>>>> >>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> -NGK >>>>>>>>>> >>>>>>>>>> Thanks for Patches. For the second patch, I would really prefer to avoid new >>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. Maybe we could use >>>>>>>>>> some workaround instead, as in: >>>>>>>>>> >>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>> >>>>>>>>> I don't like having to add an additional dependency either, but the >>>>>>>>> alternative seems more risky. Utilizing the subprocess32 module (which >>>>>>>>> is really just a backport of the normal subprocess module from Python >>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding some sort of >>>>>>>>> a thread that has to kill the spawned subprocess seems more risky (see >>>>>>>>> the discussion about a race condition in the stackoverflow thread >>>>>>>>> above). That said, I'm sure the thread/poll method can be made to work >>>>>>>>> if you and others feel strongly that this is a better approach than >>>>>>>>> adding a new dependency. >>>>>>>> >>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>> >>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>> existence (or it's possible that I forgot about it). Thanks for the >>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>> >>>>>>> Do you think that there is value in leaving the timeout capability >>>>>>> centrally in ipautil.run()? We only need it for the call to 'ntpd' >>>>>>> right now, but there might be a need for using a timeout for other >>>>>>> commands in the future. The alternative is to just modify >>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() alone. >>>>>> >>>>>> I think it would require a lot of research. One of the programs spawned >>>>>> by this is pkicreate which could take quite some time, and spawning a >>>>>> clone in particular. >>>>>> >>>>>> It is definitely an interesting idea but I think it is safest for now to >>>>>> limit it to just NTP for now. >>>>> >>>>> What I meant was that we would have an optional keyword "timeout" >>>>> parameter to ipautil.run() that defaults to None, just like my >>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>> subprocess.Popen() to run the specified command just like we do today. >>>>> We would only actually pass the timeout parameter to ipautil.run() in >>>>> synconce_ntp() for now, so no other commands would have a timeout in >>>>> effect. The capability would be available for other commands this way >>>>> though. >>>>> >>>>> Let me propose a patch with this implementation, and if you don't like >>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>> synconce_ntp(). >>>> >>>> An updated patch 0002 is attached that uses the approach mentioned above. >>> >>> Looks good. Not to nitpick to death but... >>> >>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>> "/usr/bin/timeout" and reference that instead? It's for portability. >> >> Sure. I was wondering if we should do something around a full path. >> >>> >>> And a question. I'm impatient. Should there be a notice that it will >>> timeout after n seconds somewhere so people like me don't ^C after 2 >>> seconds? Or is that just overkill and I need to learn patience? >> >> Probably both. :) There's always going to be someone out there who will >> do ctrl-C, so I think printing out a notice is a good idea. I'll add this. 
>> >>> >>> Stylistically, should we prefer p.returncode is 15 or p.returncode == 15? >> >> After some reading, it seems that '==' should be used. Small integers >> work with 'is', but '==' is the consistent way that equality of integers >> should be checked. I'll modify this. > > Another updated patch 0002 is attached that addresses Rob's review comments. > > Thanks, > -NGK > LGTM. Does someone else have time to test this? I also don't know if there is a policy on placement of new items in paths.py. Things are all over the place and some have BIN_ prefix and some don't. rob From nkinder at redhat.com Wed Mar 4 18:56:22 2015 From: nkinder at redhat.com (Nathan Kinder) Date: Wed, 04 Mar 2015 10:56:22 -0800 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <54F751CE.1000202@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> Message-ID: <54F75556.6020803@redhat.com> On 03/04/2015 10:41 AM, Rob Crittenden wrote: > Nathan Kinder wrote: >> >> >> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>> >>> >>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>> Nathan Kinder wrote: >>>>> >>>>> >>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>> >>>>>> >>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>> Nathan Kinder wrote: >>>>>>>> >>>>>>>> >>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>> Nathan Kinder wrote: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. Now that we >>>>>>>>>>>> use ntpd to perform the time sync, the client install can end up hanging >>>>>>>>>>>> forever when the server is not reachable (firewall issues, etc.). These >>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>> >>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>> >>>>>>>>>>>> 2 - Implement a timeout capability that is used when we run ntpd to >>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>> >>>>>>>>>>>> The one potentially contentious issue is that this introduces a new >>>>>>>>>>>> dependency on python-subprocess32 to allow us to have timeout support >>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I don't see it >>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged there. >>>>>>>>>>>> >>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> -NGK >>>>>>>>>>> >>>>>>>>>>> Thanks for Patches. For the second patch, I would really prefer to avoid new >>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. Maybe we could use >>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>> >>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>> >>>>>>>>>> I don't like having to add an additional dependency either, but the >>>>>>>>>> alternative seems more risky. 
Utilizing the subprocess32 module (which >>>>>>>>>> is really just a backport of the normal subprocess module from Python >>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding some sort of >>>>>>>>>> a thread that has to kill the spawned subprocess seems more risky (see >>>>>>>>>> the discussion about a race condition in the stackoverflow thread >>>>>>>>>> above). That said, I'm sure the thread/poll method can be made to work >>>>>>>>>> if you and others feel strongly that this is a better approach than >>>>>>>>>> adding a new dependency. >>>>>>>>> >>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>> >>>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>>> existence (or it's possible that I forgot about it). Thanks for the >>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>> >>>>>>>> Do you think that there is value in leaving the timeout capability >>>>>>>> centrally in ipautil.run()? We only need it for the call to 'ntpd' >>>>>>>> right now, but there might be a need for using a timeout for other >>>>>>>> commands in the future. The alternative is to just modify >>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() alone. >>>>>>> >>>>>>> I think it would require a lot of research. One of the programs spawned >>>>>>> by this is pkicreate which could take quite some time, and spawning a >>>>>>> clone in particular. >>>>>>> >>>>>>> It is definitely an interesting idea but I think it is safest for now to >>>>>>> limit it to just NTP for now. >>>>>> >>>>>> What I meant was that we would have an optional keyword "timeout" >>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>>> subprocess.Popen() to run the specified command just like we do today. >>>>>> We would only actually pass the timeout parameter to ipautil.run() in >>>>>> synconce_ntp() for now, so no other commands would have a timeout in >>>>>> effect. The capability would be available for other commands this way >>>>>> though. >>>>>> >>>>>> Let me propose a patch with this implementation, and if you don't like >>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>> synconce_ntp(). >>>>> >>>>> An updated patch 0002 is attached that uses the approach mentioned above. >>>> >>>> Looks good. Not to nitpick to death but... >>>> >>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>> "/usr/bin/timeout" and reference that instead? It's for portability. >>> >>> Sure. I was wondering if we should do something around a full path. >>> >>>> >>>> And a question. I'm impatient. Should there be a notice that it will >>>> timeout after n seconds somewhere so people like me don't ^C after 2 >>>> seconds? Or is that just overkill and I need to learn patience? >>> >>> Probably both. :) There's always going to be someone out there who will >>> do ctrl-C, so I think printing out a notice is a good idea. I'll add this. >>> >>>> >>>> Stylistically, should we prefer p.returncode is 15 or p.returncode == 15? >>> >>> After some reading, it seems that '==' should be used. Small integers >>> work with 'is', but '==' is the consistent way that equality of integers >>> should be checked. I'll modify this. >> >> Another updated patch 0002 is attached that addresses Rob's review comments. >> >> Thanks, >> -NGK >> > > LGTM. Does someone else have time to test this? 
> > I also don't know if there is a policy on placement of new items in > paths.py. Things are all over the place and some have BIN_ prefix and > some don't. Yeah, I noticed this too. It didn't look like there was any organization. -NGK > > rob > From mbasti at redhat.com Wed Mar 4 18:58:50 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 04 Mar 2015 19:58:50 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <54F75556.6020803@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> Message-ID: <54F755EA.1000100@redhat.com> On 04/03/15 19:56, Nathan Kinder wrote: > > On 03/04/2015 10:41 AM, Rob Crittenden wrote: >> Nathan Kinder wrote: >>> >>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>> >>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>> Nathan Kinder wrote: >>>>>> >>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>> >>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>> Nathan Kinder wrote: >>>>>>>>> >>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>> >>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. Now that we >>>>>>>>>>>>> use ntpd to perform the time sync, the client install can end up hanging >>>>>>>>>>>>> forever when the server is not reachable (firewall issues, etc.). These >>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>> >>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>> >>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we run ntpd to >>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>> >>>>>>>>>>>>> The one potentially contentious issue is that this introduces a new >>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have timeout support >>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I don't see it >>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged there. >>>>>>>>>>>>> >>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks, >>>>>>>>>>>>> -NGK >>>>>>>>>>>> Thanks for Patches. For the second patch, I would really prefer to avoid new >>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. Maybe we could use >>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>> >>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>> I don't like having to add an additional dependency either, but the >>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 module (which >>>>>>>>>>> is really just a backport of the normal subprocess module from Python >>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding some sort of >>>>>>>>>>> a thread that has to kill the spawned subprocess seems more risky (see >>>>>>>>>>> the discussion about a race condition in the stackoverflow thread >>>>>>>>>>> above). 
That said, I'm sure the thread/poll method can be made to work >>>>>>>>>>> if you and others feel strongly that this is a better approach than >>>>>>>>>>> adding a new dependency. >>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>>>> existence (or it's possible that I forgot about it). Thanks for the >>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>> >>>>>>>>> Do you think that there is value in leaving the timeout capability >>>>>>>>> centrally in ipautil.run()? We only need it for the call to 'ntpd' >>>>>>>>> right now, but there might be a need for using a timeout for other >>>>>>>>> commands in the future. The alternative is to just modify >>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() alone. >>>>>>>> I think it would require a lot of research. One of the programs spawned >>>>>>>> by this is pkicreate which could take quite some time, and spawning a >>>>>>>> clone in particular. >>>>>>>> >>>>>>>> It is definitely an interesting idea but I think it is safest for now to >>>>>>>> limit it to just NTP for now. >>>>>>> What I meant was that we would have an optional keyword "timeout" >>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>>>> subprocess.Popen() to run the specified command just like we do today. >>>>>>> We would only actually pass the timeout parameter to ipautil.run() in >>>>>>> synconce_ntp() for now, so no other commands would have a timeout in >>>>>>> effect. The capability would be available for other commands this way >>>>>>> though. >>>>>>> >>>>>>> Let me propose a patch with this implementation, and if you don't like >>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>> synconce_ntp(). >>>>>> An updated patch 0002 is attached that uses the approach mentioned above. >>>>> Looks good. Not to nitpick to death but... >>>>> >>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>> "/usr/bin/timeout" and reference that instead? It's for portability. >>>> Sure. I was wondering if we should do something around a full path. >>>> >>>>> And a question. I'm impatient. Should there be a notice that it will >>>>> timeout after n seconds somewhere so people like me don't ^C after 2 >>>>> seconds? Or is that just overkill and I need to learn patience? >>>> Probably both. :) There's always going to be someone out there who will >>>> do ctrl-C, so I think printing out a notice is a good idea. I'll add this. >>>> >>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode == 15? >>>> After some reading, it seems that '==' should be used. Small integers >>>> work with 'is', but '==' is the consistent way that equality of integers >>>> should be checked. I'll modify this. >>> Another updated patch 0002 is attached that addresses Rob's review comments. >>> >>> Thanks, >>> -NGK >>> >> LGTM. Does someone else have time to test this? >> >> I also don't know if there is a policy on placement of new items in >> paths.py. Things are all over the place and some have BIN_ prefix and >> some don't. > Yeah, I noticed this too. It didn't look like there was any organization. 
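A short aside on the "p.returncode is 15" question raised above: identity comparison only appears to work because CPython happens to cache small integers (roughly -5 through 256), so '==' is indeed the operator to use when comparing exit codes. A quick interpreter session (CPython behaviour, not part of the patch) shows the difference:

    >>> x = 256        # small ints are cached by CPython
    >>> x is 256
    True
    >>> y = 257
    >>> y is 257       # identity check against a different 257 object
    False
    >>> y == 257       # value check -- what exit-code comparisons want
    True
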
> > -NGK >> rob >> > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel paths are (should be) ordered alphabetically by value, not by variable name. I see there are last 2 lines ordered incorrectly, but please try to keep order as I wrote above. Martin^2 -- Martin Basti From nkinder at redhat.com Wed Mar 4 19:25:29 2015 From: nkinder at redhat.com (Nathan Kinder) Date: Wed, 04 Mar 2015 11:25:29 -0800 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <54F755EA.1000100@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> Message-ID: <54F75C29.4010105@redhat.com> On 03/04/2015 10:58 AM, Martin Basti wrote: > On 04/03/15 19:56, Nathan Kinder wrote: >> >> On 03/04/2015 10:41 AM, Rob Crittenden wrote: >>> Nathan Kinder wrote: >>>> >>>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>>> >>>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>>> Nathan Kinder wrote: >>>>>>> >>>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>>> >>>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>>> Nathan Kinder wrote: >>>>>>>>>> >>>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>> >>>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. >>>>>>>>>>>>>> Now that we >>>>>>>>>>>>>> use ntpd to perform the time sync, the client install can >>>>>>>>>>>>>> end up hanging >>>>>>>>>>>>>> forever when the server is not reachable (firewall issues, >>>>>>>>>>>>>> etc.). These >>>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>>> >>>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>>> >>>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we >>>>>>>>>>>>>> run ntpd to >>>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The one potentially contentious issue is that this >>>>>>>>>>>>>> introduces a new >>>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have >>>>>>>>>>>>>> timeout support >>>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I >>>>>>>>>>>>>> don't see it >>>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged >>>>>>>>>>>>>> there. >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>> -NGK >>>>>>>>>>>>> Thanks for Patches. For the second patch, I would really >>>>>>>>>>>>> prefer to avoid new >>>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. 
>>>>>>>>>>>>> Maybe we could use >>>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>>> >>>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>>>> >>>>>>>>>>>> I don't like having to add an additional dependency either, >>>>>>>>>>>> but the >>>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 >>>>>>>>>>>> module (which >>>>>>>>>>>> is really just a backport of the normal subprocess module >>>>>>>>>>>> from Python >>>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding >>>>>>>>>>>> some sort of >>>>>>>>>>>> a thread that has to kill the spawned subprocess seems more >>>>>>>>>>>> risky (see >>>>>>>>>>>> the discussion about a race condition in the stackoverflow >>>>>>>>>>>> thread >>>>>>>>>>>> above). That said, I'm sure the thread/poll method can be >>>>>>>>>>>> made to work >>>>>>>>>>>> if you and others feel strongly that this is a better >>>>>>>>>>>> approach than >>>>>>>>>>>> adding a new dependency. >>>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>>>>> existence (or it's possible that I forgot about it). Thanks >>>>>>>>>> for the >>>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>>> >>>>>>>>>> Do you think that there is value in leaving the timeout >>>>>>>>>> capability >>>>>>>>>> centrally in ipautil.run()? We only need it for the call to >>>>>>>>>> 'ntpd' >>>>>>>>>> right now, but there might be a need for using a timeout for >>>>>>>>>> other >>>>>>>>>> commands in the future. The alternative is to just modify >>>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() >>>>>>>>>> alone. >>>>>>>>> I think it would require a lot of research. One of the programs >>>>>>>>> spawned >>>>>>>>> by this is pkicreate which could take quite some time, and >>>>>>>>> spawning a >>>>>>>>> clone in particular. >>>>>>>>> >>>>>>>>> It is definitely an interesting idea but I think it is safest >>>>>>>>> for now to >>>>>>>>> limit it to just NTP for now. >>>>>>>> What I meant was that we would have an optional keyword "timeout" >>>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>>>>> subprocess.Popen() to run the specified command just like we do >>>>>>>> today. >>>>>>>> We would only actually pass the timeout parameter to >>>>>>>> ipautil.run() in >>>>>>>> synconce_ntp() for now, so no other commands would have a >>>>>>>> timeout in >>>>>>>> effect. The capability would be available for other commands >>>>>>>> this way >>>>>>>> though. >>>>>>>> >>>>>>>> Let me propose a patch with this implementation, and if you >>>>>>>> don't like >>>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>>> synconce_ntp(). >>>>>>> An updated patch 0002 is attached that uses the approach >>>>>>> mentioned above. >>>>>> Looks good. Not to nitpick to death but... >>>>>> >>>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>>> "/usr/bin/timeout" and reference that instead? It's for portability. >>>>> Sure. I was wondering if we should do something around a full path. >>>>> >>>>>> And a question. I'm impatient. Should there be a notice that it will >>>>>> timeout after n seconds somewhere so people like me don't ^C after 2 >>>>>> seconds? Or is that just overkill and I need to learn patience? >>>>> Probably both. 
:) There's always going to be someone out there who >>>>> will >>>>> do ctrl-C, so I think printing out a notice is a good idea. I'll >>>>> add this. >>>>> >>>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode >>>>>> == 15? >>>>> After some reading, it seems that '==' should be used. Small integers >>>>> work with 'is', but '==' is the consistent way that equality of >>>>> integers >>>>> should be checked. I'll modify this. >>>> Another updated patch 0002 is attached that addresses Rob's review >>>> comments. >>>> >>>> Thanks, >>>> -NGK >>>> >>> LGTM. Does someone else have time to test this? >>> >>> I also don't know if there is a policy on placement of new items in >>> paths.py. Things are all over the place and some have BIN_ prefix and >>> some don't. >> Yeah, I noticed this too. It didn't look like there was any >> organization. >> >> -NGK >>> rob >>> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel > paths are (should be) ordered alphabetically by value, not by variable > name. > I see there are last 2 lines ordered incorrectly, but please try to keep > order as I wrote above. OK. A new patch is attached that puts the path to 'timeout' in the proper location. Fixing up the order of other paths is unrelated, and should be handled in a separate patch. -NGK > > Martin^2 > -------------- next part -------------- A non-text attachment was scrubbed... Name: 0002-Timeout-when-performing-time-sync-during-client-inst.patch Type: text/x-patch Size: 4080 bytes Desc: not available URL: From abokovoy at redhat.com Thu Mar 5 06:28:28 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Mar 2015 08:28:28 +0200 Subject: [Freeipa-devel] [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client In-Reply-To: <20150304174747.GV3271@p.redhat.com> References: <20150304174747.GV3271@p.redhat.com> Message-ID: <20150305062828.GJ25455@redhat.com> On Wed, 04 Mar 2015, Sumit Bose wrote: >Hi, > >with this patch the extdom plugin will properly indicate to a client if >the search object does not exist instead of returning a generic error. >This is important for the client to act accordingly and improve >debugging possibilities. > >bye, >Sumit >From 3895fa21524efc3a22bfb36b1a9aa34277b8dd46 Mon Sep 17 00:00:00 2001 >From: Sumit Bose >Date: Wed, 4 Mar 2015 13:39:04 +0100 >Subject: [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client > >--- > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 8 ++++++-- > 1 file changed, 6 insertions(+), 2 deletions(-) > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >index a70ed20f1816a7e00385edae8a81dd5dad9e9362..a040f2beba073d856053429face2f464347b2524 100644 >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >@@ -123,8 +123,12 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) > > ret = handle_request(ctx, req, &ret_val); > if (ret != LDAP_SUCCESS) { >- rc = LDAP_OPERATIONS_ERROR; >- err_msg = "Failed to handle the request.\n"; >+ if (ret == LDAP_NO_SUCH_OBJECT) { >+ rc = LDAP_NO_SUCH_OBJECT; >+ } else { >+ rc = LDAP_OPERATIONS_ERROR; >+ err_msg = "Failed to handle the request.\n"; >+ } > goto done; > } > >-- >2.1.0 > ACK. 
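To illustrate why the distinct result code helps callers (the real consumer here is SSSD, but the effect is the same for any LDAP client): with python-ldap, for instance, a lookup helper can now tell "no such object" apart from a generic server-side failure. The helper name and the conn/extreq objects below are hypothetical and only sketch the exception mapping; building the actual extdom extended request is not shown.

    import ldap

    def lookup_via_extdom(conn, extreq):
        # conn: a bound ldap.LDAPObject; extreq: a prepared extended
        # request for the extdom plugin (construction omitted here).
        try:
            return conn.extop_s(extreq)
        except ldap.NO_SUCH_OBJECT:
            # With this patch the plugin reports a missing user/group
            # explicitly instead of a generic error.
            return None
        except ldap.OPERATIONS_ERROR:
            # Everything else is still a real failure worth surfacing.
            raise
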
-- / Alexander Bokovoy From abokovoy at redhat.com Thu Mar 5 06:34:38 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Mar 2015 08:34:38 +0200 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <20150304175122.GW3271@p.redhat.com> References: <20150304175122.GW3271@p.redhat.com> Message-ID: <20150305063438.GK25455@redhat.com> On Wed, 04 Mar 2015, Sumit Bose wrote: >Hi, > >while running 389ds with valgrind to see if my other patches introduced >a memory leak I found an older one which is fixed by this patch. > >bye, >Sumit >From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 >From: Sumit Bose >Date: Wed, 4 Mar 2015 17:53:08 +0100 >Subject: [PATCH] extdom: fix memory leak > >--- > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + > 1 file changed, 1 insertion(+) > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >index a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f 100644 >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >@@ -156,6 +156,7 @@ done: > LOG("%s", err_msg); > } > slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >+ ber_bvfree(ret_val); > free_req_data(req); > return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; > } I can see in 389-ds code that it actually tries to remove the value in the end of extended operation handling: slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, &pb->pb_op->o_isroot); rc = plugin_call_exop_plugins( pb, extoid ); if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { lderr = LDAP_PROTOCOL_ERROR; /* no plugin handled the op */ errmsg = "unsupported extended operation"; } else { errmsg = NULL; lderr = rc; } send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); } free_and_return: if (extoid) slapi_ch_free((void **)&extoid); if (extval.bv_val) slapi_ch_free((void **)&extval.bv_val); return; -- / Alexander Bokovoy From nkinder at redhat.com Thu Mar 5 06:48:37 2015 From: nkinder at redhat.com (Nathan Kinder) Date: Wed, 04 Mar 2015 22:48:37 -0800 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <20150305063438.GK25455@redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> Message-ID: <54F7FC45.5050009@redhat.com> On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: > On Wed, 04 Mar 2015, Sumit Bose wrote: >> Hi, >> >> while running 389ds with valgrind to see if my other patches introduced >> a memory leak I found an older one which is fixed by this patch. 
>> >> bye, >> Sumit > >> From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 >> From: Sumit Bose >> Date: Wed, 4 Mar 2015 17:53:08 +0100 >> Subject: [PATCH] extdom: fix memory leak >> >> --- >> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + >> 1 file changed, 1 insertion(+) >> >> diff --git >> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> index >> a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f >> 100644 >> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> @@ -156,6 +156,7 @@ done: >> LOG("%s", err_msg); >> } >> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >> + ber_bvfree(ret_val); >> free_req_data(req); >> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; >> } > I can see in 389-ds code that it actually tries to remove the value in > the end of extended operation handling: This below code snippet is freeing the extended operation request value (SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). If you look at check_and_send_extended_result() in the 389-ds code, you'll see where the extended operation return value is sent, and it doesn't perform a free. It is up to the plug-in to perform the free. A good example of this in the 389-ds code is in the passwd_modify_extop() function. Sumit's code looks good to me. ACK. -NGK > slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); > slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); > slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, &pb->pb_op->o_isroot); > > rc = plugin_call_exop_plugins( pb, extoid ); > > if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { > if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { > lderr = LDAP_PROTOCOL_ERROR; /* no plugin > handled the op */ > errmsg = "unsupported extended operation"; > } else { > errmsg = NULL; > lderr = rc; > } > send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); > } > free_and_return: > if (extoid) > slapi_ch_free((void **)&extoid); > if (extval.bv_val) > slapi_ch_free((void **)&extval.bv_val); > return; > > From abokovoy at redhat.com Thu Mar 5 07:00:12 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Mar 2015 09:00:12 +0200 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <54F7FC45.5050009@redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> <54F7FC45.5050009@redhat.com> Message-ID: <20150305070012.GL25455@redhat.com> On Wed, 04 Mar 2015, Nathan Kinder wrote: > > >On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: >> On Wed, 04 Mar 2015, Sumit Bose wrote: >>> Hi, >>> >>> while running 389ds with valgrind to see if my other patches introduced >>> a memory leak I found an older one which is fixed by this patch. 
>>> >>> bye, >>> Sumit >> >>> From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 >>> From: Sumit Bose >>> Date: Wed, 4 Mar 2015 17:53:08 +0100 >>> Subject: [PATCH] extdom: fix memory leak >>> >>> --- >>> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + >>> 1 file changed, 1 insertion(+) >>> >>> diff --git >>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> index >>> a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f >>> 100644 >>> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> @@ -156,6 +156,7 @@ done: >>> LOG("%s", err_msg); >>> } >>> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >>> + ber_bvfree(ret_val); >>> free_req_data(req); >>> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; >>> } >> I can see in 389-ds code that it actually tries to remove the value in >> the end of extended operation handling: > >This below code snippet is freeing the extended operation request value >(SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). > >If you look at check_and_send_extended_result() in the 389-ds code, >you'll see where the extended operation return value is sent, and it >doesn't perform a free. It is up to the plug-in to perform the free. A >good example of this in the 389-ds code is in the passwd_modify_extop() >function. > > >Sumit's code looks good to me. ACK. Argh. Sorry for confusion of RET vs REQ. Morning, I need coffee! ACK. > >-NGK > >> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); >> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); >> slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, &pb->pb_op->o_isroot); >> >> rc = plugin_call_exop_plugins( pb, extoid ); >> >> if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { >> if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { >> lderr = LDAP_PROTOCOL_ERROR; /* no plugin >> handled the op */ >> errmsg = "unsupported extended operation"; >> } else { >> errmsg = NULL; >> lderr = rc; >> } >> send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); >> } >> free_and_return: >> if (extoid) >> slapi_ch_free((void **)&extoid); >> if (extval.bv_val) >> slapi_ch_free((void **)&extval.bv_val); >> return; >> >> -- / Alexander Bokovoy From pvoborni at redhat.com Thu Mar 5 07:54:01 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 05 Mar 2015 08:54:01 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150227205039.GB2327@mail.corp.redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> Message-ID: <54F80B99.6030104@redhat.com> On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: > ehlo, > > Please review attached patches and fix freeipa in fedora 22 ASAP. > > I think the most critical is 1st patch > > sh$ git grep "SSSDConfig" | grep import > install/tools/ipa-upgradeconfig:import SSSDConfig > ipa-client/ipa-install/ipa-client-automount:import SSSDConfig > ipa-client/ipa-install/ipa-client-install: import SSSDConfig > > BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) > but it was not explicitely required. > > The latest python3 changes in sssd (fedora 22) is just a result of negligent > packaging of freeipa. > > LS > Fedora 22 was amended. 
Patch 1: ACK Patch 2: ACK Patch3: the package name is libsss_nss_idmap-python not python-libsss_nss_idmap which already is required in adtrust package -- Petr Vobornik From sbose at redhat.com Thu Mar 5 08:16:36 2015 From: sbose at redhat.com (Sumit Bose) Date: Thu, 5 Mar 2015 09:16:36 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150304171453.GS3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> Message-ID: <20150305081636.GX3271@p.redhat.com> On Wed, Mar 04, 2015 at 06:14:53PM +0100, Sumit Bose wrote: > On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: > > On Mon, 02 Mar 2015, Sumit Bose wrote: > > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > >index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 > > >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > >@@ -49,6 +49,220 @@ > > > > > >#define MAX(a,b) (((a)>(b))?(a):(b)) > > >#define SSSD_DOMAIN_SEPARATOR '@' > > >+#define MAX_BUF (1024*1024*1024) > > >+ > > >+ > > >+ > > >+static int get_buffer(size_t *_buf_len, char **_buf) > > >+{ > > >+ long pw_max; > > >+ long gr_max; > > >+ size_t buf_len; > > >+ char *buf; > > >+ > > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); > > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); > > >+ > > >+ if (pw_max == -1 && gr_max == -1) { > > >+ buf_len = 16384; > > >+ } else { > > >+ buf_len = MAX(pw_max, gr_max); > > >+ } > > Here you'd get buf_len equal to 1024 by default on Linux which is too > > low for our use case. I think it would be beneficial to add one more > > MAX(buf_len, 16384): > > - if (pw_max == -1 && gr_max == -1) { > > - buf_len = 16384; > > - } else { > > - buf_len = MAX(pw_max, gr_max); > > - } > > + buf_len = MAX(16384, MAX(pw_max, gr_max)); > > > > with MAX(MAX(),..) you also get rid of if() statement as resulting > > rvalue would be guaranteed to be positive. > > done > > > > > The rest is going along the common lines but would it be better to > > allocate memory once per LDAP client request rather than always ask for > > it per each NSS call? You can guarantee a sequential use of the buffer > > within the LDAP client request processing so there is no problem with > > locks but having this memory re-allocated on subsequent > > getpwnam()/getpwuid()/... calls within the same request processing seems > > suboptimal to me. > > ok, makes sense, I moved get_buffer() back to the callers. > > New version attached. Please ignore this patch, I will send a revised version soon. bye, Sumit > > bye, > Sumit > > > > > -- > > / Alexander Bokovoy From pvoborni at redhat.com Thu Mar 5 10:00:48 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 05 Mar 2015 11:00:48 +0100 Subject: [Freeipa-devel] [PATCH] 0039 Try continue ipa-client-automount even if nsslapd-minssf > 0. In-Reply-To: <54F0C522.9050209@redhat.com> References: <54EEEE11.60305@redhat.com> <54EF0960.4070608@redhat.com> <54EF25C7.5040001@redhat.com> <54EF3394.9010502@redhat.com> <54F06F49.7090004@redhat.com> <54F070A2.9090706@redhat.com> <54F07300.7070708@redhat.com> <54F0C522.9050209@redhat.com> Message-ID: <54F82950.5050208@redhat.com> On 02/27/2015 08:27 PM, Rob Crittenden wrote: > David Kupka wrote: > > Hmm, interesting. 
Yeah, I suppose trying to get a host ticket would be > good defensive programming. > > ACK on the new patch from me too. > > rob Pushed to: master: aa745b31d3762121bb0df1432cb2a48d1d15fd2a ipa-4-1: 0344f246c294d5dcdf19ec4dd851de48a55e6274 -- Petr Vobornik From tbabej at redhat.com Thu Mar 5 10:10:38 2015 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 05 Mar 2015 11:10:38 +0100 Subject: [Freeipa-devel] [PATCHES 399-401] Allow multiple API instances In-Reply-To: <54F6E4A5.6020008@redhat.com> References: <54F5CA06.4040609@redhat.com> <54F5CCB6.5000109@redhat.com> <54F5CD89.8070202@redhat.com> <54F5CEC6.2090603@redhat.com> <54F5CF34.7040609@redhat.com> <54F6DAC7.4010206@redhat.com> <54F6E4A5.6020008@redhat.com> Message-ID: <54F82B9E.9070503@redhat.com> On 03/04/2015 11:55 AM, Martin Kosek wrote: > On 03/04/2015 11:13 AM, Jan Cholasta wrote: >> Dne 3.3.2015 v 16:11 Martin Kosek napsal(a): >>> On 03/03/2015 04:09 PM, Jan Cholasta wrote: >>>> Dne 3.3.2015 v 16:04 Tomas Babej napsal(a): >>>>> On 03/03/2015 04:01 PM, Martin Kosek wrote: >>>>>> On 03/03/2015 03:49 PM, Jan Cholasta wrote: >>>>>>> Hi, >>>>>>> >>>>>>> the attached patches provide an attempt to fix >>>>>>> . >>>>>>> >>>>>>> Patch 401 serves as an example and modifies ipa-advise to use its own >>>>>>> API >>>>>>> instance for Advice plugins. >>>>>>> >>>>>>> Honza >>>>>> Thanks. At least patches 399 and 400 look reasonable short for 4.2. >>>>>> >>>>>> So with these patches, could we also get rid of >>>>>> temporary_ldap2_connection we >>>>>> have in ipa-replica-install? Petr3 may have other examples he met in >>>>>> the past... >>>> I think we can. Shall I prepare a patch? >>> If it is reasonable simple, I would go for it. It would be another selling >>> point for your patches. >> Done. >> > Thanks, this looks great! It proves the point with the separate API object. > LGTM, I will let Tomas to continue with standard review then. > > Martin Codewise looks good to me. I tested the server and replica installation, which went well. And of course, our ipa-advise tests detected no breakage, hence it's a ACK. Pushed to master: 8713c5a6953e92f72d9ea7aad40588c284011025 Tomas From lslebodn at redhat.com Thu Mar 5 10:23:29 2015 From: lslebodn at redhat.com (Lukas Slebodnik) Date: Thu, 5 Mar 2015 11:23:29 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <54F80B99.6030104@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> Message-ID: <20150305102328.GA18226@mail.corp.redhat.com> On (05/03/15 08:54), Petr Vobornik wrote: >On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>ehlo, >> >>Please review attached patches and fix freeipa in fedora 22 ASAP. >> >>I think the most critical is 1st patch >> >>sh$ git grep "SSSDConfig" | grep import >>install/tools/ipa-upgradeconfig:import SSSDConfig >>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >> >>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>but it was not explicitely required. >> >>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>packaging of freeipa. >> >>LS >> > >Fedora 22 was amended. 
> >Patch 1: ACK > >Patch 2: ACK > >Patch3: >the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >which already is required in adtrust package In sssd upstream we decided to rename package libsss_nss_idmap-python to python-libsss_nss_idmap according to new rpm python guidelines. The python3 version has alredy correct name. We will rename package in downstream with next major release (1.13). Of course it we will add "Provides: libsss_nss_idmap-python". We can push 3rd patch later or I can update 3rd patch. What do you prefer? Than you very much for review. LS From sbose at redhat.com Thu Mar 5 10:33:53 2015 From: sbose at redhat.com (Sumit Bose) Date: Thu, 5 Mar 2015 11:33:53 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150305081636.GX3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> <20150305081636.GX3271@p.redhat.com> Message-ID: <20150305103353.GA3271@p.redhat.com> On Thu, Mar 05, 2015 at 09:16:36AM +0100, Sumit Bose wrote: > On Wed, Mar 04, 2015 at 06:14:53PM +0100, Sumit Bose wrote: > > On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: > > > On Mon, 02 Mar 2015, Sumit Bose wrote: > > > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > > >index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 > > > >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > > >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c > > > >@@ -49,6 +49,220 @@ > > > > > > > >#define MAX(a,b) (((a)>(b))?(a):(b)) > > > >#define SSSD_DOMAIN_SEPARATOR '@' > > > >+#define MAX_BUF (1024*1024*1024) > > > >+ > > > >+ > > > >+ > > > >+static int get_buffer(size_t *_buf_len, char **_buf) > > > >+{ > > > >+ long pw_max; > > > >+ long gr_max; > > > >+ size_t buf_len; > > > >+ char *buf; > > > >+ > > > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); > > > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); > > > >+ > > > >+ if (pw_max == -1 && gr_max == -1) { > > > >+ buf_len = 16384; > > > >+ } else { > > > >+ buf_len = MAX(pw_max, gr_max); > > > >+ } > > > Here you'd get buf_len equal to 1024 by default on Linux which is too > > > low for our use case. I think it would be beneficial to add one more > > > MAX(buf_len, 16384): > > > - if (pw_max == -1 && gr_max == -1) { > > > - buf_len = 16384; > > > - } else { > > > - buf_len = MAX(pw_max, gr_max); > > > - } > > > + buf_len = MAX(16384, MAX(pw_max, gr_max)); > > > > > > with MAX(MAX(),..) you also get rid of if() statement as resulting > > > rvalue would be guaranteed to be positive. > > > > done > > > > > > > > The rest is going along the common lines but would it be better to > > > allocate memory once per LDAP client request rather than always ask for > > > it per each NSS call? You can guarantee a sequential use of the buffer > > > within the LDAP client request processing so there is no problem with > > > locks but having this memory re-allocated on subsequent > > > getpwnam()/getpwuid()/... calls within the same request processing seems > > > suboptimal to me. > > > > ok, makes sense, I moved get_buffer() back to the callers. > > > > New version attached. > > Please ignore this patch, I will send a revised version soon. 
Please find attached a revised version which properly reports missing objects and out-of-memory cases and makes sure buf and buf_len are in sync. bye, Sumit > > bye, > Sumit > > > > > bye, > > Sumit > > > > > > > > -- > > > / Alexander Bokovoy > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel -------------- next part -------------- From 0b4e302866f734b93176d9104bd78a2e55702c40 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:29:00 +0100 Subject: [PATCH 134/136] Add configure check for cwrap libraries Currently only nss-wrapper is checked, checks for other crwap libraries can be added e.g. as AM_CHECK_WRAPPER(uid_wrapper, HAVE_UID_WRAPPER) --- daemons/configure.ac | 24 ++++++++++++++++++++++++ 1 file changed, 24 insertions(+) diff --git a/daemons/configure.ac b/daemons/configure.ac index 97cd25115f371e9e549d209401df9325c7e112c1..7c979fe2d0b91e9d71fe4ca5a50ad78a4de79298 100644 --- a/daemons/configure.ac +++ b/daemons/configure.ac @@ -236,6 +236,30 @@ PKG_CHECK_EXISTS(cmocka, ) AM_CONDITIONAL([HAVE_CMOCKA], [test x$have_cmocka = xyes]) +dnl A macro to check presence of a cwrap (http://cwrap.org) wrapper on the system +dnl Usage: +dnl AM_CHECK_WRAPPER(name, conditional) +dnl If the cwrap library is found, sets the HAVE_$name conditional +AC_DEFUN([AM_CHECK_WRAPPER], +[ + FOUND_WRAPPER=0 + + AC_MSG_CHECKING([for $1]) + PKG_CHECK_EXISTS([$1], + [ + AC_MSG_RESULT([yes]) + FOUND_WRAPPER=1 + ], + [ + AC_MSG_RESULT([no]) + AC_MSG_WARN([cwrap library $1 not found, some tests will not run]) + ]) + + AM_CONDITIONAL($2, [ test x$FOUND_WRAPPER = x1]) +]) + +AM_CHECK_WRAPPER(nss_wrapper, HAVE_NSS_WRAPPER) + dnl -- dirsrv is needed for the extdom unit tests -- PKG_CHECK_MODULES([DIRSRV], [dirsrv >= 1.3.0]) dnl -- sss_idmap is needed by the extdom exop -- -- 2.1.0 -------------- next part -------------- From 0a5614b12446b69ea8b77a827ce2c7627f0b1ca4 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Tue, 24 Feb 2015 15:33:39 +0100 Subject: [PATCH 135/136] extdom: handle ERANGE return code for getXXYYY_r() calls The getXXYYY_r() calls require a buffer to store the variable data of the passwd and group structs. If the provided buffer is too small ERANGE is returned and the caller can try with a larger buffer again. Cmocka/cwrap based unit-tests for get*_r_wrapper() are added. 
Resolves https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 31 ++- .../ipa-extdom-extop/ipa_extdom.h | 9 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 226 +++++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 309 +++++++++++++++------ .../ipa-extdom-extop/test_data/group | 2 + .../ipa-extdom-extop/test_data/passwd | 2 + .../ipa-extdom-extop/test_data/test_setup.sh | 3 + 7 files changed, 498 insertions(+), 84 deletions(-) create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd create mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index 0008476796f5b20f62f2c32e7b291b787fa7a6fc..a1679812ef3c5de8c6e18433cbb991a99ad0b6c8 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -35,9 +35,20 @@ libipa_extdom_extop_la_LIBADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +TESTS = +check_PROGRAMS = + if HAVE_CHECK -TESTS = extdom_tests -check_PROGRAMS = extdom_tests +TESTS += extdom_tests +check_PROGRAMS += extdom_tests +endif + +if HAVE_CMOCKA +if HAVE_NSS_WRAPPER +TESTS_ENVIRONMENT = . ./test_data/test_setup.sh; +TESTS += extdom_cmocka_tests +check_PROGRAMS += extdom_cmocka_tests +endif endif extdom_tests_SOURCES = \ @@ -55,6 +66,22 @@ extdom_tests_LDADD = \ $(SSSNSSIDMAP_LIBS) \ $(NULL) +extdom_cmocka_tests_SOURCES = \ + ipa_extdom_cmocka_tests.c \ + ipa_extdom_common.c \ + $(NULL) +extdom_cmocka_tests_CFLAGS = $(CMOCKA_CFLAGS) +extdom_cmocka_tests_LDFLAGS = \ + -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ + $(NULL) +extdom_cmocka_tests_LDADD = \ + $(CMOCKA_LIBS) \ + $(LDAP_LIBS) \ + $(DIRSRV_LIBS) \ + $(SSSNSSIDMAP_LIBS) \ + $(NULL) + + appdir = $(IPA_DATA_DIR) app_DATA = \ ipa-extdom-extop-conf.ldif \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 56ca5009b1aa427f6c059b78ac392c768e461e2e..40bf933920fdd2ca19e5ef195aaa8fb820446cc5 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -174,4 +174,13 @@ int check_request(struct extdom_req *req, enum extdom_version version); int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, struct berval **berval); int pack_response(struct extdom_res *res, struct berval **ret_val); +int get_buffer(size_t *_buf_len, char **_buf); +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len); +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len); +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c new file mode 100644 index 0000000000000000000000000000000000000000..d5bacd7e8c9dc0a71eea70162406c7e5b67384ad --- /dev/null +++ 
b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -0,0 +1,226 @@ +/* + Authors: + Sumit Bose + + Copyright (C) 2015 Red Hat + + Extdom tests + + This program is free software; you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation; either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . +*/ + +#include +#include +#include +#include +#include + +#include +#include + + +#include "ipa_extdom.h" + +#define MAX_BUF (1024*1024*1024) + +void test_getpwnam_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(MAX_BUF, "non_exisiting_user", &pwd, &buf, + &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwnam_r_wrapper(MAX_BUF, "user", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(MAX_BUF, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 4000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwnam_r_wrapper(1024, "user_big", &pwd, &buf, &buf_len); + assert_int_equal(ret, ERANGE); + free(buf); +} + +void test_getpwuid_r_wrapper(void **state) +{ + int ret; + struct passwd pwd; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(MAX_BUF, 99999, &pwd, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getpwuid_r_wrapper(MAX_BUF, 12345, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12345); + assert_int_equal(pwd.pw_gid, 23456); + assert_string_equal(pwd.pw_gecos, "gecos"); + assert_string_equal(pwd.pw_dir, "/home/user"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(MAX_BUF, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(pwd.pw_name, "user_big"); + assert_string_equal(pwd.pw_passwd, "x"); + assert_int_equal(pwd.pw_uid, 12346); + assert_int_equal(pwd.pw_gid, 23457); + assert_int_equal(strlen(pwd.pw_gecos), 4000 * strlen("gecos")); + assert_string_equal(pwd.pw_dir, "/home/user_big"); + assert_string_equal(pwd.pw_shell, "/bin/shell"); + free(buf); + + ret 
= get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getpwuid_r_wrapper(1024, 12346, &pwd, &buf, &buf_len); + assert_int_equal(ret, ERANGE); + free(buf); +} + +void test_getgrnam_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(MAX_BUF, "non_exisiting_group", &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrnam_r_wrapper(MAX_BUF, "group", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(MAX_BUF, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrnam_r_wrapper(1024, "group_big", &grp, &buf, &buf_len); + assert_int_equal(ret, ERANGE); + free(buf); +} + +void test_getgrgid_r_wrapper(void **state) +{ + int ret; + struct group grp; + char *buf; + size_t buf_len; + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(MAX_BUF, 99999, &grp, &buf, &buf_len); + assert_int_equal(ret, ENOENT); + + ret = getgrgid_r_wrapper(MAX_BUF, 11111, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 11111); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + assert_null(grp.gr_mem[2]); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(MAX_BUF, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, 0); + assert_string_equal(grp.gr_name, "group_big"); + assert_string_equal(grp.gr_passwd, "x"); + assert_int_equal(grp.gr_gid, 22222); + assert_string_equal(grp.gr_mem[0], "member0001"); + assert_string_equal(grp.gr_mem[1], "member0002"); + free(buf); + + ret = get_buffer(&buf_len, &buf); + assert_int_equal(ret, 0); + + ret = getgrgid_r_wrapper(1024, 22222, &grp, &buf, &buf_len); + assert_int_equal(ret, ERANGE); + free(buf); +} + +int main(int argc, const char *argv[]) +{ + const UnitTest tests[] = { + unit_test(test_getpwnam_r_wrapper), + unit_test(test_getpwuid_r_wrapper), + unit_test(test_getgrnam_r_wrapper), + unit_test(test_getgrgid_r_wrapper), + }; + + return run_tests(tests); +} diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..cbe336963ffbafadd5a7b8029a65fafe506f75e8 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,6 +49,188 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' +#define MAX_BUF (1024*1024*1024) + + + +int get_buffer(size_t *_buf_len, char **_buf) +{ + long pw_max; + long gr_max; + size_t buf_len; + char 
*buf; + + pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); + gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); + + buf_len = MAX(16384, MAX(pw_max, gr_max)); + + buf = malloc(sizeof(char) * buf_len); + if (buf == NULL) { + return LDAP_OPERATIONS_ERROR; + } + + *_buf_len = buf_len; + *_buf = buf; + + return LDAP_SUCCESS; +} + +static int inc_buffer(size_t buf_max, size_t *_buf_len, char **_buf) +{ + size_t tmp_len; + char *tmp_buf; + + tmp_buf = *_buf; + tmp_len = *_buf_len; + + tmp_len *= 2; + if (tmp_len > buf_max) { + return ERANGE; + } + + tmp_buf = realloc(tmp_buf, tmp_len); + if (tmp_buf == NULL) { + return ENOMEM; + } + + *_buf_len = tmp_len; + *_buf = tmp_buf; + + return 0; +} + +int getpwnam_r_wrapper(size_t buf_max, const char *name, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getpwnam_r(name, pwd, buf, buf_len, &result)) == ERANGE) { + ret = inc_buffer(buf_max, &buf_len, &buf); + if (ret != 0) { + if (ret == ERANGE) { + LOG("Buffer too small, increase ipaExtdomMaxNssBufSize.\n"); + } + goto done; + } + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getpwuid_r_wrapper(size_t buf_max, uid_t uid, + struct passwd *pwd, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + size_t buf_len = 0; + int ret; + struct passwd *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getpwuid_r(uid, pwd, buf, buf_len, &result)) == ERANGE) { + ret = inc_buffer(buf_max, &buf_len, &buf); + if (ret != 0) { + if (ret == ERANGE) { + LOG("Buffer too small, increase ipaExtdomMaxNssBufSize.\n"); + } + goto done; + } + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getgrnam_r_wrapper(size_t buf_max, const char *name, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getgrnam_r(name, grp, buf, buf_len, &result)) == ERANGE) { + ret = inc_buffer(buf_max, &buf_len, &buf); + if (ret != 0) { + if (ret == ERANGE) { + LOG("Buffer too small, increase ipaExtdomMaxNssBufSize.\n"); + } + goto done; + } + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} + +int getgrgid_r_wrapper(size_t buf_max, gid_t gid, + struct group *grp, char **_buf, size_t *_buf_len) +{ + char *buf = NULL; + size_t buf_len = 0; + int ret; + struct group *result = NULL; + + buf = *_buf; + buf_len = *_buf_len; + + while (buf != NULL + && (ret = getgrgid_r(gid, grp, buf, buf_len, &result)) == ERANGE) { + ret = inc_buffer(buf_max, &buf_len, &buf); + if (ret != 0) { + if (ret == ERANGE) { + LOG("Buffer too small, increase ipaExtdomMaxNssBufSize.\n"); + } + goto done; + } + } + + if (ret == 0 && result == NULL) { + ret = ENOENT; + } + +done: + *_buf = buf; + *_buf_len = buf_len; + + return ret; +} int parse_request_data(struct berval *req_val, struct extdom_req **_req) { @@ -191,33 +373,6 @@ int check_request(struct extdom_req *req, enum extdom_version version) return LDAP_SUCCESS; } -static int get_buffer(size_t *_buf_len, char **_buf) -{ - long pw_max; - long gr_max; - size_t buf_len; - char *buf; - - pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); - gr_max = 
sysconf(_SC_GETGR_R_SIZE_MAX); - - if (pw_max == -1 && gr_max == -1) { - buf_len = 16384; - } else { - buf_len = MAX(pw_max, gr_max); - } - - buf = malloc(sizeof(char) * buf_len); - if (buf == NULL) { - return LDAP_OPERATIONS_ERROR; - } - - *_buf_len = buf_len; - *_buf = buf; - - return LDAP_SUCCESS; -} - static int get_user_grouplist(const char *name, gid_t gid, size_t *_ngroups, gid_t **_groups ) { @@ -323,7 +478,6 @@ static int pack_ber_user(enum response_types response_type, size_t buf_len; char *buf = NULL; struct group grp; - struct group *grp_result; size_t c; char *locat; char *short_user_name = NULL; @@ -375,13 +529,13 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r(groups[c], &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } @@ -542,7 +696,6 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -568,13 +721,13 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r(uid, &pwd, buf, buf_len, &pwd_result); + ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } @@ -610,7 +763,6 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, { int ret; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -635,13 +787,13 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r(gid, &grp, buf, buf_len, &grp_result); + ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } @@ -676,9 +828,7 @@ static int handle_sid_request(enum request_types request_type, const char *sid, { int ret; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *domain_name = NULL; char *fq_name = NULL; char *object_name = NULL; @@ -724,14 +874,13 @@ static int handle_sid_request(enum request_types request_type, const char *sid, switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - - if (pwd_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } @@ -755,14 +904,13 @@ static int handle_sid_request(enum request_types request_type, const char *sid, pwd.pw_shell, kv_list, berval); 
break; case SSS_ID_TYPE_GID: - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } @@ -806,9 +954,7 @@ static int handle_name_request(enum request_types request_type, int ret; char *fq_name = NULL; struct passwd pwd; - struct passwd *pwd_result = NULL; struct group grp; - struct group *grp_result = NULL; char *sid_str = NULL; enum sss_id_type id_type; size_t buf_len; @@ -842,15 +988,8 @@ static int handle_name_request(enum request_types request_type, goto done; } - ret = getpwnam_r(fq_name, &pwd, buf, buf_len, &pwd_result); - if (ret != 0) { - /* according to the man page there are a couple of error codes - * which can indicate that the user was not found. To be on the - * safe side we fail back to the group lookup on all errors. */ - pwd_result = NULL; - } - - if (pwd_result != NULL) { + ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID @@ -868,15 +1007,21 @@ static int handle_name_request(enum request_types request_type, domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); + } else if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + goto done; } else { /* no user entry found */ - ret = getgrnam_r(fq_name, &grp, buf, buf_len, &grp_result); + /* according to the getpwnam() man page there are a couple of + * error codes which can indicate that the user was not found. To + * be on the safe side we fail back to the group lookup on all + * errors. 
*/ + ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); if (ret != 0) { - ret = LDAP_NO_SUCH_OBJECT; - goto done; - } - - if (grp_result == NULL) { - ret = LDAP_NO_SUCH_OBJECT; + if (ret == ENOMEM || ret == ERANGE) { + ret = LDAP_OPERATIONS_ERROR; + } else { + ret = LDAP_NO_SUCH_OBJECT; + } goto done; } diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group new file mode 100644 index 0000000000000000000000000000000000000000..8d1b012871b21cc9d5ffdba2168f35ef3e8a5f81 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/group @@ -0,0 +1,2 @@ +group:x:11111:member0001,member0002 +group_big:x:22222:member0001,member0002,member0003,member0004,member0005,member0006,member0007,member0008,member0009,member0010,member0011,member0012,member0013,member0014,member0015,member0016,member0017,member0018,member0019,member0020,member0021,member0022,member0023,member0024,member0025,member0026,member0027,member0028,member0029,member0030,member0031,member0032,member0033,member0034,member0035,member0036,member0037,member0038,member0039,member0040,member0041,member0042,member0043,member0044,member0045,member0046,member0047,member0048,member0049,member0050,member0051,member0052,member0053,member0054,member0055,member0056,member0057,member0058,member0059,member0060,member0061,member0062,member0063,member0064,member0065,member0066,member0067,member0068,member0069,member0070,member0071,member0072,member0073,member0074,member0075,member0076,member0077,member0078,member0079,member0080,member0081,member0082,member0083,member0084,member0085,member0086,member0087,member0088,member0089,member0090,member0091,member0092,member0093,member0094,member0095,member0096,member0097,member0098,member0099,member0100,member0101,member0102,member0103,member0104,member0105,member0106,member0107,member0108,member0109,member0110,member0111,member0112,member0113,member0114,member0115,member0116,member0117,member0118,member0119,member0120,member0121,member0122,member0123,member0124,member0125,member0126,member0127,member0128,member0129,member0130,member0131,member0132,member0133,member0134,member0135,member0136,member0137,member0138,member0139,member0140,member0141,member0142,member0143,member0144,member0145,member0146,member0147,member0148,member0149,member0150,member0151,member0152,member0153,member0154,member0155,member0156,member0157,member0158,member0159,member0160,member0161,member0162,member0163,member0164,member0165,member0166,member0167,member0168,member0169,member0170,member0171,member0172,member0173,member0174,member0175,member0176,member0177,member0178,member0179,member0180,member0181,member0182,member0183,member0184,member0185,member0186,member0187,member0188,member0189,member0190,member0191,member0192,member0193,member0194,member0195,member0196,member0197,member0198,member0199,member0200,member0201,member0202,member0203,member0204,member0205,member0206,member0207,member0208,member0209,member0210,member0211,member0212,member0213,member0214,member0215,member0216,member0217,member0218,member0219,member0220,member0221,member0222,member0223,member0224,member0225,member0226,member0227,member0228,member0229,member0230,member0231,member0232,member0233,member0234,member0235,member0236,member0237,member0238,member0239,member0240,member0241,member0242,member0243,member0244,member0245,member0246,member0247,member0248,member0249,member0250,member0251,member0252,member0253,member0254,member0255,member0256,member0257,member0258,mem
ber0259,member0260,member0261,member0262,member0263,member0264,member0265,member0266,member0267,member0268,member0269,member0270,member0271,member0272,member0273,member0274,member0275,member0276,member0277,member0278,member0279,member0280,member0281,member0282,member0283,member0284,member0285,member0286,member0287,member0288,member0289,member0290,member0291,member0292,member0293,member0294,member0295,member0296,member0297,member0298,member0299,member0300,member0301,member0302,member0303,member0304,member0305,member0306,member0307,member0308,member0309,member0310,member0311,member0312,member0313,member0314,member0315,member0316,member0317,member0318,member0319,member0320,member0321,member0322,member0323,member0324,member0325,member0326,member0327,member0328,member0329,member0330,member0331,member0332,member0333,member0334,member0335,member0336,member0337,member0338,member0339,member0340,member0341,member0342,member0343,member0344,member0345,member0346,member0347,member0348,member0349,member0350,member0351,member0352,member0353,member0354,member0355,member0356,member0357,member0358,member0359,member0360,member0361,member0362,member0363,member0364,member0365,member0366,member0367,member0368,member0369,member0370,member0371,member0372,member0373,member0374,member0375,member0376,member0377,member0378,member0379,member0380,member0381,member0382,member0383,member0384,member0385,member0386,member0387,member0388,member0389,member0390,member0391,member0392,member0393,member0394,member0395,member0396,member0397,member0398,member0399,member0400,member0401,member0402,member0403,member0404,member0405,member0406,member0407,member0408,member0409,member0410,member0411,member0412,member0413,member0414,member0415,member0416,member0417,member0418,member0419,member0420,member0421,member0422,member0423,member0424,member0425,member0426,member0427,member0428,member0429,member0430,member0431,member0432,member0433,member0434,member0435,member0436,member0437,member0438,member0439,member0440,member0441,member0442,member0443,member0444,member0445,member0446,member0447,member0448,member0449,member0450,member0451,member0452,member0453,member0454,member0455,member0456,member0457,member0458,member0459,member0460,member0461,member0462,member0463,member0464,member0465,member0466,member0467,member0468,member0469,member0470,member0471,member0472,member0473,member0474,member0475,member0476,member0477,member0478,member0479,member0480,member0481,member0482,member0483,member0484,member0485,member0486,member0487,member0488,member0489,member0490,member0491,member0492,member0493,member0494,member0495,member0496,member0497,member0498,member0499,member0500,member0501,member0502,member0503,member0504,member0505,member0506,member0507,member0508,member0509,member0510,member0511,member0512,member0513,member0514,member0515,member0516,member0517,member0518,member0519,member0520,member0521,member0522,member0523,member0524,member0525,member0526,member0527,member0528,member0529,member0530,member0531,member0532,member0533,member0534,member0535,member0536,member0537,member0538,member0539,member0540,member0541,member0542,member0543,member0544,member0545,member0546,member0547,member0548,member0549,member0550,member0551,member0552,member0553,member0554,member0555,member0556,member0557,member0558,member0559,member0560,member0561,member0562,member0563,member0564,member0565,member0566,member0567,member0568,member0569,member0570,member0571,member0572,member0573,member0574,member0575,member0576,member0577,member0578,member0579,member0580,member0581,membe
r0582,member0583,member0584,member0585,member0586,member0587,member0588,member0589,member0590,member0591,member0592,member0593,member0594,member0595,member0596,member0597,member0598,member0599,member0600,member0601,member0602,member0603,member0604,member0605,member0606,member0607,member0608,member0609,member0610,member0611,member0612,member0613,member0614,member0615,member0616,member0617,member0618,member0619,member0620,member0621,member0622,member0623,member0624,member0625,member0626,member0627,member0628,member0629,member0630,member0631,member0632,member0633,member0634,member0635,member0636,member0637,member0638,member0639,member0640,member0641,member0642,member0643,member0644,member0645,member0646,member0647,member0648,member0649,member0650,member0651,member0652,member0653,member0654,member0655,member0656,member0657,member0658,member0659,member0660,member0661,member0662,member0663,member0664,member0665,member0666,member0667,member0668,member0669,member0670,member0671,member0672,member0673,member0674,member0675,member0676,member0677,member0678,member0679,member0680,member0681,member0682,member0683,member0684,member0685,member0686,member0687,member0688,member0689,member0690,member0691,member0692,member0693,member0694,member0695,member0696,member0697,member0698,member0699,member0700,member0701,member0702,member0703,member0704,member0705,member0706,member0707,member0708,member0709,member0710,member0711,member0712,member0713,member0714,member0715,member0716,member0717,member0718,member0719,member0720,member0721,member0722,member0723,member0724,member0725,member0726,member0727,member0728,member0729,member0730,member0731,member0732,member0733,member0734,member0735,member0736,member0737,member0738,member0739,member0740,member0741,member0742,member0743,member0744,member0745,member0746,member0747,member0748,member0749,member0750,member0751,member0752,member0753,member0754,member0755,member0756,member0757,member0758,member0759,member0760,member0761,member0762,member0763,member0764,member0765,member0766,member0767,member0768,member0769,member0770,member0771,member0772,member0773,member0774,member0775,member0776,member0777,member0778,member0779,member0780,member0781,member0782,member0783,member0784,member0785,member0786,member0787,member0788,member0789,member0790,member0791,member0792,member0793,member0794,member0795,member0796,member0797,member0798,member0799,member0800,member0801,member0802,member0803,member0804,member0805,member0806,member0807,member0808,member0809,member0810,member0811,member0812,member0813,member0814,member0815,member0816,member0817,member0818,member0819,member0820,member0821,member0822,member0823,member0824,member0825,member0826,member0827,member0828,member0829,member0830,member0831,member0832,member0833,member0834,member0835,member0836,member0837,member0838,member0839,member0840,member0841,member0842,member0843,member0844,member0845,member0846,member0847,member0848,member0849,member0850,member0851,member0852,member0853,member0854,member0855,member0856,member0857,member0858,member0859,member0860,member0861,member0862,member0863,member0864,member0865,member0866,member0867,member0868,member0869,member0870,member0871,member0872,member0873,member0874,member0875,member0876,member0877,member0878,member0879,member0880,member0881,member0882,member0883,member0884,member0885,member0886,member0887,member0888,member0889,member0890,member0891,member0892,member0893,member0894,member0895,member0896,member0897,member0898,member0899,member0900,member0901,member0902,member0903,member0904,member0
905,member0906,member0907,member0908,member0909,member0910,member0911,member0912,member0913,member0914,member0915,member0916,member0917,member0918,member0919,member0920,member0921,member0922,member0923,member0924,member0925,member0926,member0927,member0928,member0929,member0930,member0931,member0932,member0933,member0934,member0935,member0936,member0937,member0938,member0939,member0940,member0941,member0942,member0943,member0944,member0945,member0946,member0947,member0948,member0949,member0950,member0951,member0952,member0953,member0954,member0955,member0956,member0957,member0958,member0959,member0960,member0961,member0962,member0963,member0964,member0965,member0966,member0967,member0968,member0969,member0970,member0971,member0972,member0973,member0974,member0975,member0976,member0977,member0978,member0979,member0980,member0981,member0982,member0983,member0984,member0985,member0986,member0987,member0988,member0989,member0990,member0991,member0992,member0993,member0994,member0995,member0996,member0997,member0998,member0999,member1000,member1001,member1002,member1003,member1004,member1005,member1006,member1007,member1008,member1009,member1010,member1011,member1012,member1013,member1014,member1015,member1016,member1017,member1018,member1019,member1020,member1021,member1022,member1023,member1024,member1025,member1026,member1027,member1028,member1029,member1030,member1031,member1032,member1033,member1034,member1035,member1036,member1037,member1038,member1039,member1040,member1041,member1042,member1043,member1044,member1045,member1046,member1047,member1048,member1049,member1050,member1051,member1052,member1053,member1054,member1055,member1056,member1057,member1058,member1059,member1060,member1061,member1062,member1063,member1064,member1065,member1066,member1067,member1068,member1069,member1070,member1071,member1072,member1073,member1074,member1075,member1076,member1077,member1078,member1079,member1080,member1081,member1082,member1083,member1084,member1085,member1086,member1087,member1088,member1089,member1090,member1091,member1092,member1093,member1094,member1095,member1096,member1097,member1098,member1099,member1100,member1101,member1102,member1103,member1104,member1105,member1106,member1107,member1108,member1109,member1110,member1111,member1112,member1113,member1114,member1115,member1116,member1117,member1118,member1119,member1120,member1121,member1122,member1123,member1124,member1125,member1126,member1127,member1128,member1129,member1130,member1131,member1132,member1133,member1134,member1135,member1136,member1137,member1138,member1139,member1140,member1141,member1142,member1143,member1144,member1145,member1146,member1147,member1148,member1149,member1150,member1151,member1152,member1153,member1154,member1155,member1156,member1157,member1158,member1159,member1160,member1161,member1162,member1163,member1164,member1165,member1166,member1167,member1168,member1169,member1170,member1171,member1172,member1173,member1174,member1175,member1176,member1177,member1178,member1179,member1180,member1181,member1182,member1183,member1184,member1185,member1186,member1187,member1188,member1189,member1190,member1191,member1192,member1193,member1194,member1195,member1196,member1197,member1198,member1199,member1200,member1201,member1202,member1203,member1204,member1205,member1206,member1207,member1208,member1209,member1210,member1211,member1212,member1213,member1214,member1215,member1216,member1217,member1218,member1219,member1220,member1221,member1222,member1223,member1224,member1225,member1226,member1227,member122
8,member1229,member1230,member1231,member1232,member1233,member1234,member1235,member1236,member1237,member1238,member1239,member1240,member1241,member1242,member1243,member1244,member1245,member1246,member1247,member1248,member1249,member1250,member1251,member1252,member1253,member1254,member1255,member1256,member1257,member1258,member1259,member1260,member1261,member1262,member1263,member1264,member1265,member1266,member1267,member1268,member1269,member1270,member1271,member1272,member1273,member1274,member1275,member1276,member1277,member1278,member1279,member1280,member1281,member1282,member1283,member1284,member1285,member1286,member1287,member1288,member1289,member1290,member1291,member1292,member1293,member1294,member1295,member1296,member1297,member1298,member1299,member1300,member1301,member1302,member1303,member1304,member1305,member1306,member1307,member1308,member1309,member1310,member1311,member1312,member1313,member1314,member1315,member1316,member1317,member1318,member1319,member1320,member1321,member1322,member1323,member1324,member1325,member1326,member1327,member1328,member1329,member1330,member1331,member1332,member1333,member1334,member1335,member1336,member1337,member1338,member1339,member1340,member1341,member1342,member1343,member1344,member1345,member1346,member1347,member1348,member1349,member1350,member1351,member1352,member1353,member1354,member1355,member1356,member1357,member1358,member1359,member1360,member1361,member1362,member1363,member1364,member1365,member1366,member1367,member1368,member1369,member1370,member1371,member1372,member1373,member1374,member1375,member1376,member1377,member1378,member1379,member1380,member1381,member1382,member1383,member1384,member1385,member1386,member1387,member1388,member1389,member1390,member1391,member1392,member1393,member1394,member1395,member1396,member1397,member1398,member1399,member1400,member1401,member1402,member1403,member1404,member1405,member1406,member1407,member1408,member1409,member1410,member1411,member1412,member1413,member1414,member1415,member1416,member1417,member1418,member1419,member1420,member1421,member1422,member1423,member1424,member1425,member1426,member1427,member1428,member1429,member1430,member1431,member1432,member1433,member1434,member1435,member1436,member1437,member1438,member1439,member1440,member1441,member1442,member1443,member1444,member1445,member1446,member1447,member1448,member1449,member1450,member1451,member1452,member1453,member1454,member1455,member1456,member1457,member1458,member1459,member1460,member1461,member1462,member1463,member1464,member1465,member1466,member1467,member1468,member1469,member1470,member1471,member1472,member1473,member1474,member1475,member1476,member1477,member1478,member1479,member1480,member1481,member1482,member1483,member1484,member1485,member1486,member1487,member1488,member1489,member1490,member1491,member1492,member1493,member1494,member1495,member1496,member1497,member1498,member1499,member1500,member1501,member1502,member1503,member1504,member1505,member1506,member1507,member1508,member1509,member1510,member1511,member1512,member1513,member1514,member1515,member1516,member1517,member1518,member1519,member1520,member1521,member1522,member1523,member1524,member1525,member1526,member1527,member1528,member1529,member1530,member1531,member1532,member1533,member1534,member1535,member1536,member1537,member1538,member1539,member1540,member1541,member1542,member1543,member1544,member1545,member1546,member1547,member1548,member1549,member1550,member1551,
member1552,member1553,member1554,member1555,member1556,member1557,member1558,member1559,member1560,member1561,member1562,member1563,member1564,member1565,member1566,member1567,member1568,member1569,member1570,member1571,member1572,member1573,member1574,member1575,member1576,member1577,member1578,member1579,member1580,member1581,member1582,member1583,member1584,member1585,member1586,member1587,member1588,member1589,member1590,member1591,member1592,member1593,member1594,member1595,member1596,member1597,member1598,member1599,member1600,member1601,member1602,member1603,member1604,member1605,member1606,member1607,member1608,member1609,member1610,member1611,member1612,member1613,member1614,member1615,member1616,member1617,member1618,member1619,member1620,member1621,member1622,member1623,member1624,member1625,member1626,member1627,member1628,member1629,member1630,member1631,member1632,member1633,member1634,member1635,member1636,member1637,member1638,member1639,member1640,member1641,member1642,member1643,member1644,member1645,member1646,member1647,member1648,member1649,member1650,member1651,member1652,member1653,member1654,member1655,member1656,member1657,member1658,member1659,member1660,member1661,member1662,member1663,member1664,member1665,member1666,member1667,member1668,member1669,member1670,member1671,member1672,member1673,member1674,member1675,member1676,member1677,member1678,member1679,member1680,member1681,member1682,member1683,member1684,member1685,member1686,member1687,member1688,member1689,member1690,member1691,member1692,member1693,member1694,member1695,member1696,member1697,member1698,member1699,member1700,member1701,member1702,member1703,member1704,member1705,member1706,member1707,member1708,member1709,member1710,member1711,member1712,member1713,member1714,member1715,member1716,member1717,member1718,member1719,member1720,member1721,member1722,member1723,member1724,member1725,member1726,member1727,member1728,member1729,member1730,member1731,member1732,member1733,member1734,member1735,member1736,member1737,member1738,member1739,member1740,member1741,member1742,member1743,member1744,member1745,member1746,member1747,member1748,member1749,member1750,member1751,member1752,member1753,member1754,member1755,member1756,member1757,member1758,member1759,member1760,member1761,member1762,member1763,member1764,member1765,member1766,member1767,member1768,member1769,member1770,member1771,member1772,member1773,member1774,member1775,member1776,member1777,member1778,member1779,member1780,member1781,member1782,member1783,member1784,member1785,member1786,member1787,member1788,member1789,member1790,member1791,member1792,member1793,member1794,member1795,member1796,member1797,member1798,member1799,member1800,member1801,member1802,member1803,member1804,member1805,member1806,member1807,member1808,member1809,member1810,member1811,member1812,member1813,member1814,member1815,member1816,member1817,member1818,member1819,member1820,member1821,member1822,member1823,member1824,member1825,member1826,member1827,member1828,member1829,member1830,member1831,member1832,member1833,member1834,member1835,member1836,member1837,member1838,member1839,member1840,member1841,member1842,member1843,member1844,member1845,member1846,member1847,member1848,member1849,member1850,member1851,member1852,member1853,member1854,member1855,member1856,member1857,member1858,member1859,member1860,member1861,member1862,member1863,member1864,member1865,member1866,member1867,member1868,member1869,member1870,member1871,member1872,member1873,member1874,me
mber1875,member1876,member1877,member1878,member1879,member1880,member1881,member1882,member1883,member1884,member1885,member1886,member1887,member1888,member1889,member1890,member1891,member1892,member1893,member1894,member1895,member1896,member1897,member1898,member1899,member1900,member1901,member1902,member1903,member1904,member1905,member1906,member1907,member1908,member1909,member1910,member1911,member1912,member1913,member1914,member1915,member1916,member1917,member1918,member1919,member1920,member1921,member1922,member1923,member1924,member1925,member1926,member1927,member1928,member1929,member1930,member1931,member1932,member1933,member1934,member1935,member1936,member1937,member1938,member1939,member1940,member1941,member1942,member1943,member1944,member1945,member1946,member1947,member1948,member1949,member1950,member1951,member1952,member1953,member1954,member1955,member1956,member1957,member1958,member1959,member1960,member1961,member1962,member1963,member1964,member1965,member1966,member1967,member1968,member1969,member1970,member1971,member1972,member1973,member1974,member1975,member1976,member1977,member1978,member1979,member1980,member1981,member1982,member1983,member1984,member1985,member1986,member1987,member1988,member1989,member1990,member1991,member1992,member1993,member1994,member1995,member1996,member1997,member1998,member1999,member2000, diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd new file mode 100644 index 0000000000000000000000000000000000000000..971e9bdb8a5d43d915ce0adc42ac29f2f95ade52 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/passwd @@ -0,0 +1,2 @@ +user:x:12345:23456:gecos:/home/user:/bin/shell +user_big:x:12346:23457:gecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosge
cosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecosgecos:/home/user_big:/bin/shell diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh new file mode 100644 index 0000000000000000000000000000000000000000..ad839f340efe989a91cd6902f59c9a41483f68e0 --- /dev/null +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/test_data/test_setup.sh @@ -0,0 +1,3 @@ +export LD_PRELOAD=$(pkg-config --libs nss_wrapper) +export NSS_WRAPPER_PASSWD=./test_data/passwd +export NSS_WRAPPER_GROUP=./test_data/group -- 2.1.0 -------------- next part -------------- From 5048f230deb4ff93b04a459ed7dd6216233ee0d8 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Mar 2015 10:59:34 +0100 Subject: [PATCH 136/136] extdom: make nss buffer configurable The get*_r_wrapper() calls expect a maximum buffer size to avoid memory shortage if too many threads try to allocate buffers e.g. for large groups. With this patch this size can be configured by setting ipaExtdomMaxNssBufSize in the plugin config object cn=ipa_extdom_extop,cn=plugins,cn=config. Related to https://fedorahosted.org/freeipa/ticket/4908 --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_common.c | 59 ++++++++++++++-------- .../ipa-extdom-extop/ipa_extdom_extop.c | 10 ++++ 3 files changed, 48 insertions(+), 22 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 40bf933920fdd2ca19e5ef195aaa8fb820446cc5..d4c851169ddadc869a59c53075f9fc7f33321085 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -150,6 +150,7 @@ struct extdom_res { struct ipa_extdom_ctx { Slapi_ComponentId *plugin_id; char *base_dn; + size_t max_nss_buf_size; }; struct domain_info { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index cbe336963ffbafadd5a7b8029a65fafe506f75e8..47bcb179f04e08c64d92f55809b84f2d59622344 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -49,9 +49,6 @@ #define MAX(a,b) (((a)>(b))?(a):(b)) #define SSSD_DOMAIN_SEPARATOR '@' -#define MAX_BUF (1024*1024*1024) - - int get_buffer(size_t *_buf_len, char **_buf) { @@ -464,7 +461,8 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(enum response_types response_type, +static int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, const char *domain_name, const char *user_name, uid_t uid, gid_t gid, const char *gecos, const char *homedir, @@ -529,7 +527,8 @@ static int pack_ber_user(enum response_types response_type, } for (c = 0; c < ngroups; c++) { - ret = getgrgid_r_wrapper(MAX_BUF, groups[c], &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, + groups[c], &grp, &buf, &buf_len); if (ret != 0) { if (ret == ENOMEM 
|| ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -691,7 +690,8 @@ static int pack_ber_name(const char *domain_name, const char *name, return LDAP_SUCCESS; } -static int handle_uid_request(enum request_types request_type, uid_t uid, +static int handle_uid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { int ret; @@ -721,7 +721,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getpwuid_r_wrapper(MAX_BUF, uid, &pwd, &buf, &buf_len); + ret = getpwuid_r_wrapper(ctx->max_nss_buf_size, uid, &pwd, &buf, + &buf_len); if (ret != 0) { if (ret == ENOMEM || ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -744,7 +745,8 @@ static int handle_uid_request(enum request_types request_type, uid_t uid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -758,7 +760,8 @@ done: return ret; } -static int handle_gid_request(enum request_types request_type, gid_t gid, +static int handle_gid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { int ret; @@ -787,7 +790,8 @@ static int handle_gid_request(enum request_types request_type, gid_t gid, ret = pack_ber_sid(sid_str, berval); } else { - ret = getgrgid_r_wrapper(MAX_BUF, gid, &grp, &buf, &buf_len); + ret = getgrgid_r_wrapper(ctx->max_nss_buf_size, gid, &grp, &buf, + &buf_len); if (ret != 0) { if (ret == ENOMEM || ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -823,7 +827,8 @@ done: return ret; } -static int handle_sid_request(enum request_types request_type, const char *sid, +static int handle_sid_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *sid, struct berval **berval) { int ret; @@ -874,7 +879,8 @@ static int handle_sid_request(enum request_types request_type, const char *sid, switch(id_type) { case SSS_ID_TYPE_UID: case SSS_ID_TYPE_BOTH: - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret != 0) { if (ret == ENOMEM || ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -897,14 +903,16 @@ static int handle_sid_request(enum request_types request_type, const char *sid, } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? 
RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, pwd.pw_shell, kv_list, berval); break; case SSS_ID_TYPE_GID: - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { if (ret == ENOMEM || ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -947,7 +955,8 @@ done: return ret; } -static int handle_name_request(enum request_types request_type, +static int handle_name_request(struct ipa_extdom_ctx *ctx, + enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) { @@ -988,7 +997,8 @@ static int handle_name_request(enum request_types request_type, goto done; } - ret = getpwnam_r_wrapper(MAX_BUF, fq_name, &pwd, &buf, &buf_len); + ret = getpwnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &pwd, &buf, + &buf_len); if (ret == 0) { if (request_type == REQ_FULL_WITH_GROUPS) { ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); @@ -1002,7 +1012,8 @@ static int handle_name_request(enum request_types request_type, goto done; } } - ret = pack_ber_user((request_type == REQ_FULL ? RESP_USER + ret = pack_ber_user(ctx, + (request_type == REQ_FULL ? RESP_USER : RESP_USER_GROUPLIST), domain_name, pwd.pw_name, pwd.pw_uid, pwd.pw_gid, pwd.pw_gecos, pwd.pw_dir, @@ -1015,7 +1026,8 @@ static int handle_name_request(enum request_types request_type, * error codes which can indicate that the user was not found. To * be on the safe side we fail back to the group lookup on all * errors. */ - ret = getgrnam_r_wrapper(MAX_BUF, fq_name, &grp, &buf, &buf_len); + ret = getgrnam_r_wrapper(ctx->max_nss_buf_size, fq_name, &grp, &buf, + &buf_len); if (ret != 0) { if (ret == ENOMEM || ret == ERANGE) { ret = LDAP_OPERATIONS_ERROR; @@ -1061,20 +1073,23 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(req->request_type, req->data.posix_uid.uid, + ret = handle_uid_request(ctx, req->request_type, + req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(req->request_type, req->data.posix_gid.gid, + ret = handle_gid_request(ctx, req->request_type, + req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); break; case INP_NAME: - ret = handle_name_request(req->request_type, req->data.name.object_name, + ret = handle_name_request(ctx, req->request_type, + req->data.name.object_name, req->data.name.domain_name, berval); break; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index aa66c145bc6cf2b77fdfe37be18da67588dc0439..e53f968db040a37fbd6a193f87b3671eeabda89d 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -40,6 +40,8 @@ #include "ipa_extdom.h" #include "util.h" +#define DEFAULT_MAX_NSS_BUFFER (128*1024*1024) + Slapi_PluginDesc ipa_extdom_plugin_desc = { IPA_EXTDOM_FEATURE_DESC, "FreeIPA project", @@ -185,6 +187,14 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) goto done; } + ctx->max_nss_buf_size = slapi_entry_attr_get_uint(e, + "ipaExtdomMaxNssBufSize"); + if 
(ctx->max_nss_buf_size == 0) { + ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; + } + LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); + + ret = 0; done: if (ret) { -- 2.1.0 From tbabej at redhat.com Thu Mar 5 11:44:06 2015 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 05 Mar 2015 12:44:06 +0100 Subject: [Freeipa-devel] [PATCHES 0197-0198] Fix uniqueness plugins upgrade In-Reply-To: <20150304133354.GZ25455@redhat.com> References: <54EDCF53.7050900@redhat.com> <20150304133354.GZ25455@redhat.com> Message-ID: <54F84186.2060608@redhat.com> On 03/04/2015 02:33 PM, Alexander Bokovoy wrote: > On Wed, 25 Feb 2015, Martin Basti wrote: >> Modifications: >> * All plugins are migrated into new configuration style. >> * I left attribute uniqueness plugin disabled, cn=uid >> uniqueness,cn=plugins,cn=config is checking the same attribute. >> * POST_UPDATE plugin for uid removed, I moved it to update file. Is >> it okay Alexander? I haven't found reason why we need to do it in >> update plugin. >> >> Thierry, I touched configuration of plugins, which user lifecycle >> requires, can you take look if I it does not break anything? >> >> Patches attached. > ACK. Pushed to master: 52b7101c1148618d5c8e2ec25576cc7ad3e9b7bb From dkupka at redhat.com Thu Mar 5 12:06:16 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 05 Mar 2015 13:06:16 +0100 Subject: [Freeipa-devel] [PATCH] 0040 Add realm name to backup header file. In-Reply-To: <54F74668.1030108@redhat.com> References: <54F70464.5030803@redhat.com> <54F74668.1030108@redhat.com> Message-ID: <54F846B8.9090407@redhat.com> On 03/04/2015 06:52 PM, David Kupka wrote: > On 03/04/2015 02:11 PM, David Kupka wrote: >> https://fedorahosted.org/freeipa/ticket/4896 >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel >> > > Honza proposed different approach. We can extract default.conf and use > it in to create api object with the right values. > I've implemented it and it works too additionally we don't need to > change the header content for now. > > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel > Patch updated after offline NACK from Honza. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0040-3-Restore-default.conf-and-use-it-to-build-API.patch Type: text/x-patch Size: 6300 bytes Desc: not available URL: From mbabinsk at redhat.com Thu Mar 5 12:11:27 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Thu, 05 Mar 2015 13:11:27 +0100 Subject: [Freeipa-devel] [PATCH 0014] emit a more helpful error messages when CA configuration fails Message-ID: <54F847EF.2080608@redhat.com> https://fedorahosted.org/freeipa/ticket/4900 -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0014-Emit-a-more-helpful-error-messages-when-CA-configura.patch Type: text/x-patch Size: 2821 bytes Desc: not available URL: From jcholast at redhat.com Thu Mar 5 12:18:45 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 05 Mar 2015 13:18:45 +0100 Subject: [Freeipa-devel] [PATCH] 0040 Add realm name to backup header file. 
In-Reply-To: <54F846B8.9090407@redhat.com> References: <54F70464.5030803@redhat.com> <54F74668.1030108@redhat.com> <54F846B8.9090407@redhat.com> Message-ID: <54F849A5.30207@redhat.com> Hi, Dne 5.3.2015 v 13:06 David Kupka napsal(a): > On 03/04/2015 06:52 PM, David Kupka wrote: >> On 03/04/2015 02:11 PM, David Kupka wrote: >>> https://fedorahosted.org/freeipa/ticket/4896 >>> >>> >>> _______________________________________________ >>> Freeipa-devel mailing list >>> Freeipa-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>> >> >> Honza proposed different approach. We can extract default.conf and use >> it in to create api object with the right values. >> I've implemented it and it works too additionally we don't need to >> change the header content for now. For the record, my concern was that the original approach was too invasive for a minor release. >> >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel >> > > Patch updated after offline NACK from Honza. > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel > Thanks, ACK! Pushed to: master: 4a20115ce8a3d90afec827d356edecc7834a0684 ipa-4-1: 253f9adae7968af8df8aab0ae2441d26112deb2b Honza -- Jan Cholasta From abokovoy at redhat.com Thu Mar 5 12:31:44 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 5 Mar 2015 14:31:44 +0200 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150305103353.GA3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> <20150305081636.GX3271@p.redhat.com> <20150305103353.GA3271@p.redhat.com> Message-ID: <20150305123144.GP25455@redhat.com> On Thu, 05 Mar 2015, Sumit Bose wrote: >From 0b4e302866f734b93176d9104bd78a2e55702c40 Mon Sep 17 00:00:00 2001 >From: Sumit Bose >Date: Tue, 24 Feb 2015 15:29:00 +0100 >Subject: [PATCH 134/136] Add configure check for cwrap libraries > >Currently only nss-wrapper is checked, checks for other crwap libraries >can be added e.g. 
as > >AM_CHECK_WRAPPER(uid_wrapper, HAVE_UID_WRAPPER) >--- > daemons/configure.ac | 24 ++++++++++++++++++++++++ > 1 file changed, 24 insertions(+) > >diff --git a/daemons/configure.ac b/daemons/configure.ac >index 97cd25115f371e9e549d209401df9325c7e112c1..7c979fe2d0b91e9d71fe4ca5a50ad78a4de79298 100644 >--- a/daemons/configure.ac >+++ b/daemons/configure.ac >@@ -236,6 +236,30 @@ PKG_CHECK_EXISTS(cmocka, > ) > AM_CONDITIONAL([HAVE_CMOCKA], [test x$have_cmocka = xyes]) > >+dnl A macro to check presence of a cwrap (http://cwrap.org) wrapper on the system >+dnl Usage: >+dnl AM_CHECK_WRAPPER(name, conditional) >+dnl If the cwrap library is found, sets the HAVE_$name conditional >+AC_DEFUN([AM_CHECK_WRAPPER], >+[ >+ FOUND_WRAPPER=0 >+ >+ AC_MSG_CHECKING([for $1]) >+ PKG_CHECK_EXISTS([$1], >+ [ >+ AC_MSG_RESULT([yes]) >+ FOUND_WRAPPER=1 >+ ], >+ [ >+ AC_MSG_RESULT([no]) >+ AC_MSG_WARN([cwrap library $1 not found, some tests will not run]) >+ ]) >+ >+ AM_CONDITIONAL($2, [ test x$FOUND_WRAPPER = x1]) >+]) >+ >+AM_CHECK_WRAPPER(nss_wrapper, HAVE_NSS_WRAPPER) >+ > dnl -- dirsrv is needed for the extdom unit tests -- > PKG_CHECK_MODULES([DIRSRV], [dirsrv >= 1.3.0]) > dnl -- sss_idmap is needed by the extdom exop -- ACK, though this sets a precedent for defining our own macro. -- / Alexander Bokovoy From lryznaro at redhat.com Thu Mar 5 06:29:37 2015 From: lryznaro at redhat.com (Lenka Ryznarova) Date: Thu, 05 Mar 2015 07:29:37 +0100 Subject: [Freeipa-devel] [PATCH] 0001 Test Objectclass of postdetach group Message-ID: <1425536977.15996.1.camel@dhcp130-146.brq.redhat.com> Patch related to ticket https://fedorahosted.org/freeipa/ticket/4909 Lenka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-lryznaro-0001-Test-Objectclass-of-postdetach-group.patch Type: text/x-patch Size: 3346 bytes Desc: not available URL: From jcholast at redhat.com Thu Mar 5 13:35:47 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 05 Mar 2015 14:35:47 +0100 Subject: [Freeipa-devel] [PATCH] 0006 Limit deadlocks between DS plugin DNA and slapi-nis In-Reply-To: <54F71C8A.90709@redhat.com> References: <54F71C8A.90709@redhat.com> Message-ID: <54F85BB3.4020205@redhat.com> Hi Thierry, Dne 4.3.2015 v 15:54 thierry bordaz napsal(a): > https://fedorahosted.org/freeipa/ticket/4927 Thanks, ACK. Added ticket URL to commit message and pushed to: master: 6e00f7318230781debd9952c6f2a3d924f35688a ipa-4-1: 5c3611481a5e0a4974ee368c60b8ef9ca34ea38a Honza -- Jan Cholasta From pspacek at redhat.com Thu Mar 5 13:45:25 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 14:45:25 +0100 Subject: [Freeipa-devel] [PATCH 0195] Fix memory leaks in ipapkcs11helper module In-Reply-To: <54EF435E.5070402@redhat.com> References: <54EDCC47.40400@redhat.com> <54EF0C4D.8010700@redhat.com> <54EF435E.5070402@redhat.com> Message-ID: <54F85DF5.8000608@redhat.com> On 26.2.2015 17:01, Martin Basti wrote: > On 26/02/15 13:06, Petr Spacek wrote: >> Hello Martin, >> >> thank you for patch! This NACK is only aesthetic :-) >> >> On 25.2.2015 14:21, Martin Basti wrote: >>> if (!check_return_value(rv, "import_wrapped_key: key unwrapping")) { >>> + error = 1; >>> + goto final; >>> + } >> >> This exact sequence is repeated many times in the code. >> >> I would prefer a C macro like this: >> #define GOTO_FAIL \ >> do { \ >> error = 1; \ >> goto final; \ >> } while(0) >> >> This allows more dense code like: >> if (!test) >> GOTO_FAIL; >> >> and does not have the risk of missing error = 1 somewhere. 
>> >>> +final: >>> if (pkey != NULL) >>> EVP_PKEY_free(pkey); >>> + if (label != NULL) PyMem_Free(label); >>> + if (error){ >>> + return NULL; >>> + } >>> return ret; >>> } >> Apparently, this is inconsistent with itself. >> >> Please pick one style and use it, e.g. >> if (label != NULL) >> PyMem_Free(label) >> >> ... and do not add curly braces when unnecessary. >> >> If you want, we can try running $ indent on current sources and committing >> changes separately so you do not have to make changes like this by hand. >> > Thanks. Updated patch attached. ACK, it works for me. -- Petr^2 Spacek From pspacek at redhat.com Thu Mar 5 13:45:35 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 14:45:35 +0100 Subject: [Freeipa-devel] [PATCH 0190] DNSSEC: add support for CKM_RSA_PKCS_OAEP mechanism In-Reply-To: <54EF42DC.7090301@redhat.com> References: <54DB54DE.4030606@redhat.com> <54EF07EF.6060300@redhat.com> <54EF42DC.7090301@redhat.com> Message-ID: <54F85DFF.6040000@redhat.com> On 26.2.2015 16:59, Martin Basti wrote: > On 26/02/15 12:47, Petr Spacek wrote: >> On 11.2.2015 14:10, Martin Basti wrote: >>> https://fedorahosted.org/freeipa/ticket/4657#comment:13 >>> >>> Patch attached. >>> >>> -- >>> Martin Basti >>> >>> >>> freeipa-mbasti-0190-DNSSEC-add-support-for-CKM_RSA_PKCS_OAEP-mechanism.patch >>> >>> >>> From 4d698a5adaa94eb854c75bd9bcaf3093f31a11e5 Mon Sep 17 00:00:00 2001 >>> From: Martin Basti >>> Date: Wed, 11 Feb 2015 14:05:46 +0100 >>> Subject: [PATCH] DNSSEC add support for CKM_RSA_PKCS_OAEP mechanism >>> >>> Ticket: https://fedorahosted.org/freeipa/ticket/4657#comment:13 >>> --- >>> ipapython/ipap11helper/p11helper.c | 72 >>> ++++++++++++++++++++++++++++++++++++-- >>> 1 file changed, 69 insertions(+), 3 deletions(-) >>> >>> diff --git a/ipapython/ipap11helper/p11helper.c >>> b/ipapython/ipap11helper/p11helper.c >>> index >>> 4e0f262057b377124793f1e3091a8c9df4794164..c638bbe849f1bbddc8004bd1c4cccc1128b1c9e7 >>> 100644 >>> --- a/ipapython/ipap11helper/p11helper.c >>> +++ b/ipapython/ipap11helper/p11helper.c >>> @@ -53,6 +53,22 @@ >>> // TODO >>> #define CKA_COPYABLE (0x0017) >>> +#define CKG_MGF1_SHA1 (0x00000001) >>> + >>> +#define CKZ_DATA_SPECIFIED (0x00000001) >>> + >>> +struct ck_rsa_pkcs_oaep_params { >>> + CK_MECHANISM_TYPE hash_alg; >>> + unsigned long mgf; >>> + unsigned long source; >>> + void *source_data; >>> + unsigned long source_data_len; >>> +}; >>> + >>> +typedef struct ck_rsa_pkcs_oaep_params CK_RSA_PKCS_OAEP_PARAMS; >>> +typedef struct ck_rsa_pkcs_oaep_params *CK_RSA_PKCS_OAEP_PARAMS_PTR; >>> + >>> + >>> CK_BBOOL true = CK_TRUE; >>> CK_BBOOL false = CK_FALSE; >>> @@ -118,6 +134,17 @@ CK_BBOOL* bool; >>> } PyObj2Bool_mapping_t; >>> /** >>> + * Constants >>> + */ >>> +static const CK_RSA_PKCS_OAEP_PARAMS CONST_RSA_PKCS_OAEP_PARAMS = { >>> + .hash_alg = CKM_SHA_1, >>> + .mgf = CKG_MGF1_SHA1, >>> + .source = CKZ_DATA_SPECIFIED, >>> + .source_data = NULL, >>> + .source_data_len = 0 >>> +}; >>> + >>> +/** >>> * ipap11helper Exceptions >>> */ >>> static PyObject *ipap11helperException; //parent class for all exceptions >>> @@ -1359,17 +1386,36 @@ P11_Helper_export_wrapped_key(P11_Helper* self, >>> PyObject *args, PyObject *kwds) >>> CK_BYTE_PTR wrapped_key = NULL; >>> CK_ULONG wrapped_key_len = 0; >>> CK_MECHANISM wrapping_mech = { CKM_RSA_PKCS, NULL, 0 }; >>> - CK_MECHANISM_TYPE wrapping_mech_type = CKM_RSA_PKCS; >>> /* currently we don't support parameter in mechanism */ >>> static char *kwlist[] = { "key", "wrapping_key", "wrapping_mech", >>> NULL }; 
>>> //TODO check long overflow >>> //TODO export method >>> if (!PyArg_ParseTupleAndKeywords(args, kwds, "kkk|", kwlist, >>> &object_key, >>> - &object_wrapping_key, &wrapping_mech_type)) { >>> + &object_wrapping_key, &wrapping_mech.mechanism)) { >>> return NULL; >>> } >>> - wrapping_mech.mechanism = wrapping_mech_type; >>> + >>> + // fill mech parameters >>> + switch(wrapping_mech.mechanism){ >>> + case CKM_RSA_PKCS: >>> + case CKM_AES_KEY_WRAP: >>> + case CKM_AES_KEY_WRAP_PAD: >>> + //default params >>> + break; >>> + >>> + case CKM_RSA_PKCS_OAEP: >>> + /* Use the same configuration as openSSL >>> + * https://www.openssl.org/docs/crypto/RSA_public_encrypt.html >>> + */ >>> + wrapping_mech.pParameter = (void*) &CONST_RSA_PKCS_OAEP_PARAMS; >>> + wrapping_mech.ulParameterLen = >>> sizeof(CONST_RSA_PKCS_OAEP_PARAMS); >>> + break; >>> + >>> + default: >>> + PyErr_SetString(ipap11helperError, "Unsupported wrapping >>> mechanism"); >>> + return NULL; >>> + } >>> rv = self->p11->C_WrapKey(self->session, &wrapping_mech, >>> object_wrapping_key, object_key, NULL, &wrapped_key_len); >>> @@ -1452,6 +1498,26 @@ P11_Helper_import_wrapped_secret_key(P11_Helper* >>> self, PyObject *args, >>> return NULL; >>> } >>> + switch(wrapping_mech.mechanism){ >>> + case CKM_RSA_PKCS: >>> + case CKM_AES_KEY_WRAP: >>> + case CKM_AES_KEY_WRAP_PAD: >>> + //default params >>> + break; >> NACK. This switch is duplicate of the previous one. Please split it into an >> auxiliary function and call it twice. >> >> Thank you! >> > Thanks. Updated patch attached. ACK, it works for me. -- Petr^2 Spacek From pspacek at redhat.com Thu Mar 5 13:45:43 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 14:45:43 +0100 Subject: [Freeipa-devel] [PATCH 0194] Remove unused method to export secret key from ipapkcs11helper module In-Reply-To: <54EDCCF9.1070407@redhat.com> References: <54EDCCF9.1070407@redhat.com> Message-ID: <54F85E07.2020208@redhat.com> On 25.2.2015 14:24, Martin Basti wrote: > The method never been used, and never will be, because we do not want to > export secrets. > > Ticket: https://fedorahosted.org/freeipa/ticket/4657 > > Patch attached (may require mbasti-0195, mbasti-0190) ACK, it works for me. -- Petr^2 Spacek From pspacek at redhat.com Thu Mar 5 13:50:59 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 14:50:59 +0100 Subject: [Freeipa-devel] [PATCH 0023-0025] p11helper improvements Message-ID: <54F85F43.50702@redhat.com> Hello, please review this patch set. It should be applied on top of your previous p11helper patch set. Thank you! -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pspacek-0023-p11helper-standardize-indentation-and-other-visual-a.patch Type: text/x-patch Size: 100208 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pspacek-0024-p11helper-use-sizeof-instead-of-magic-constants.patch Type: text/x-patch Size: 3460 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-pspacek-0025-p11helper-clarify-error-message.patch Type: text/x-patch Size: 1150 bytes Desc: not available URL: From pspacek at redhat.com Thu Mar 5 13:57:43 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 14:57:43 +0100 Subject: [Freeipa-devel] [PATCH 0316] Fix crash triggered by zone objects with unexpected DN In-Reply-To: <54F71617.7040207@redhat.com> References: <54905091.5010201@redhat.com> <54E45D2A.6050907@redhat.com> <54EC8455.3040907@redhat.com> <54F71617.7040207@redhat.com> Message-ID: <54F860D7.2090102@redhat.com> On 4.3.2015 15:26, Tomas Hozza wrote: > On 02/24/2015 03:01 PM, Petr Spacek wrote: >> > Hello, >> > >> > On 18.2.2015 10:36, Tomas Hozza wrote: >>> > > On 12/16/2014 04:32 PM, Petr Spacek wrote: >>>> > >> Hello, >>>> > >> >>>> > >> Fix crash triggered by zone objects with unexpected DN. >>>> > >> >>>> > >> https://fedorahosted.org/bind-dyndb-ldap/ticket/148 >>>> > >> >>> > > NACK. >>> > > >>> > > The patch seems to make no difference when using the reproducer from ticket 148 >>> > > >>> > > 18-Feb-2015 10:34:09.067 running >>> > > 18-Feb-2015 10:34:09.139 ldap_helper.c:4876: INSIST(task == inst->task) failed, back trace >>> > > 18-Feb-2015 10:34:09.139 #0 0x555555587a80 in ?? >>> > > 18-Feb-2015 10:34:09.139 #1 0x7ffff620781a in ?? >>> > > 18-Feb-2015 10:34:09.139 #2 0x7ffff20b00b2 in ?? >>> > > 18-Feb-2015 10:34:09.140 #3 0x7ffff1e7ccf9 in ?? >>> > > 18-Feb-2015 10:34:09.140 #4 0x7ffff1e7d992 in ?? >>> > > 18-Feb-2015 10:34:09.140 #5 0x7ffff20a7f3b in ?? >>> > > 18-Feb-2015 10:34:09.140 #6 0x7ffff5dda52a in ?? >>> > > 18-Feb-2015 10:34:09.140 #7 0x7ffff508d79d in ?? >>> > > 18-Feb-2015 10:34:09.140 exiting (due to assertion failure) >>> > > >>> > > Program received signal SIGABRT, Aborted. >>> > > [Switching to Thread 0x7fffea7cd700 (LWP 1719)] >>> > > 0x00007ffff4fc18c7 in __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55 >>> > > 55 return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig); >>> > > Missing separate debuginfos, use: debuginfo-install cyrus-sasl-gssapi-2.1.26-19.fc21.x86_64 cyrus-sasl-lib-2.1.26-19.fc21.x86_64 cyrus-sasl-md5-2.1.26-19.fc21.x86_64 cyrus-sasl-plain-2.1.26-19.fc21.x86_64 gssproxy-0.3.1-4.fc21.x86_64 keyutils-libs-1.5.9-4.fc21.x86_64 libattr-2.4.47-9.fc21.x86_64 libdb-5.3.28-9.fc21.x86_64 libgcc-4.9.2-6.fc21.x86_64 libselinux-2.3-5.fc21.x86_64 nspr-4.10.8-1.fc21.x86_64 nss-3.17.4-1.fc21.x86_64 nss-softokn-freebl-3.17.4-1.fc21.x86_64 nss-util-3.17.4-1.fc21.x86_64 pcre-8.35-8.fc21.x86_64 sssd-client-1.12.3-4.fc21.x86_64 xz-libs-5.1.2-14alpha.fc21.x86_64 >>> > > (gdb) bt >>> > > #0 0x00007ffff4fc18c7 in __GI_raise (sig=sig at entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55 >>> > > #1 0x00007ffff4fc352a in __GI_abort () at abort.c:89 >>> > > #2 0x0000555555587c29 in assertion_failed (file=, line=, type=, cond=) at ./main.c:220 >>> > > #3 0x00007ffff620781a in isc_assertion_failed (file=file at entry=0x7ffff20bad2a "ldap_helper.c", line=line at entry=4876, type=type at entry=isc_assertiontype_insist, >>> > > cond=cond at entry=0x7ffff20baf04 "task == inst->task") at assertions.c:57 >>> > > #4 0x00007ffff20b00b2 in syncrepl_update (chgtype=1, entry=0x7ffff0125590, inst=0x7ffff7fa3160) at ldap_helper.c:4876 >>> > > #5 ldap_sync_search_entry (ls=, msg=, entryUUID=, phase=LDAP_SYNC_CAPI_ADD) at ldap_helper.c:5031 >>> > > #6 0x00007ffff1e7ccf9 in ldap_sync_search_entry (ls=ls at entry=0x7fffe40008c0, res=0x7fffe4003870) at ldap_sync.c:228 >>> > > #7 0x00007ffff1e7d992 in ldap_sync_init 
(ls=0x7fffe40008c0, mode=mode at entry=3) at ldap_sync.c:792 >>> > > #8 0x00007ffff20a7f3b in ldap_syncrepl_watcher (arg=0x7ffff7fa3160) at ldap_helper.c:5247 >>> > > #9 0x00007ffff5dda52a in start_thread (arg=0x7fffea7cd700) at pthread_create.c:310 >>> > > #10 0x00007ffff508d79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109 >> > >> > Thank you for catching this! I was using slightly different test which >> > triggered the new code but by using different code path. >> > >> > This new version should be more robust. Please re-test it, thank you! >> > > ACK for version 2. Thank, pushed to master: 9d2160ead48d64b6943cb4f0e7ec0feddd82dbc5 -- Petr^2 Spacek From pspacek at redhat.com Thu Mar 5 14:37:40 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 05 Mar 2015 15:37:40 +0100 Subject: [Freeipa-devel] [PATCH 0023-0025] p11helper improvements In-Reply-To: <54F85F43.50702@redhat.com> References: <54F85F43.50702@redhat.com> Message-ID: <54F86A34.5000008@redhat.com> On 5.3.2015 14:50, Petr Spacek wrote: > Hello, > > please review this patch set. It should be applied on top of your previous > p11helper patch set. > > Thank you! Reviewer requested reworded version of the error message, here it is. -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pspacek-0025-2-p11helper-clarify-error-message.patch Type: text/x-patch Size: 1228 bytes Desc: not available URL: From mbasti at redhat.com Thu Mar 5 15:10:47 2015 From: mbasti at redhat.com (Martin Basti) Date: Thu, 05 Mar 2015 16:10:47 +0100 Subject: [Freeipa-devel] [PATCH 0023-0025] p11helper improvements In-Reply-To: <54F86A34.5000008@redhat.com> References: <54F85F43.50702@redhat.com> <54F86A34.5000008@redhat.com> Message-ID: <54F871F7.2010107@redhat.com> On 05/03/15 15:37, Petr Spacek wrote: > On 5.3.2015 14:50, Petr Spacek wrote: >> Hello, >> >> please review this patch set. It should be applied on top of your previous >> p11helper patch set. >> >> Thank you! > Reviewer requested reworded version of the error message, here it is. Thank you for patches. Required patches: mbasti-190-2, mbasti-195-2, mbasti-196 0023: ACK 0024: ACK 0025-2: ACK Martin^2 -- Martin Basti From pvoborni at redhat.com Thu Mar 5 15:20:40 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 05 Mar 2015 16:20:40 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150305102328.GA18226@mail.corp.redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> Message-ID: <54F87448.4050703@redhat.com> On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: > On (05/03/15 08:54), Petr Vobornik wrote: >> On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>> ehlo, >>> >>> Please review attached patches and fix freeipa in fedora 22 ASAP. >>> >>> I think the most critical is 1st patch >>> >>> sh$ git grep "SSSDConfig" | grep import >>> install/tools/ipa-upgradeconfig:import SSSDConfig >>> ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>> ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>> >>> BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>> but it was not explicitely required. >>> >>> The latest python3 changes in sssd (fedora 22) is just a result of negligent >>> packaging of freeipa. >>> >>> LS >>> >> >> Fedora 22 was amended. 
>> >> Patch 1: ACK >> >> Patch 2: ACK >> >> Patch3: >> the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >> which already is required in adtrust package > In sssd upstream we decided to rename package libsss_nss_idmap-python to > python-libsss_nss_idmap according to new rpm python guidelines. > The python3 version has alredy correct name. > > We will rename package in downstream with next major release (1.13). > Of course it we will add "Provides: libsss_nss_idmap-python". > > We can push 3rd patch later or I can update 3rd patch. > What do you prefer? > > Than you very much for review. > > LS > Patch 3 should be updated to not forget the remaining change in ipa-python package. It then should be updated downstream and master when 1.13 is released in Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required by master. -- Petr Vobornik From corey.kovacs at gmail.com Fri Mar 6 02:54:46 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Thu, 5 Mar 2015 19:54:46 -0700 Subject: [Freeipa-devel] UI plugins Message-ID: After reading the extending freeipa training document I was able successfully add us to meet attributes and add/modify them using the cli which was pretty cool. Now that I got the cli out of the way I want to add the fields to the ui. Because of the similarities between what I want to do and the example given in the docs I just followed along and changed variables where it made sense to do so. I cannot however get the new field to show up. The Apache logs don't show any errors but they do show the plugin being read as the page (user details) is loaded. After searching around I found a document which attempts to explain the process but it assumes some knowledge held by the reader which I don't possess. It looks like I supposed to create some sort of index loader which loads all of the modules which are part of the plugin. I can't seem to find any good documents telling the whole process or least non that I can make sense of. It would help also to understand how to debug such a thing. I running version 3.3 on rhel 7. Any help or pointers to more documentation would be greatly appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkosek at redhat.com Fri Mar 6 07:14:56 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Mar 2015 08:14:56 +0100 Subject: [Freeipa-devel] UI plugins In-Reply-To: References: Message-ID: <54F953F0.5040103@redhat.com> On 03/06/2015 03:54 AM, Corey Kovacs wrote: > After reading the extending freeipa training document I was able successfully > add us to meet attributes and add/modify them using the cli which was pretty > cool. Now that I got the cli out of the way I want to add the fields to the ui. > Because of the similarities between what I want to do and the example given in > the docs I just followed along and changed variables where it made sense to do > so. I cannot however get the new field to show up. The Apache logs don't show > any errors but they do show the plugin being read as the page (user details) is > loaded. After searching around I found a document which attempts to explain the > process but it assumes some knowledge held by the reader which I don't possess. > > It looks like I supposed to create some sort of index loader which loads all > of the modules which are part of the plugin. I can't seem to find any good > documents telling the whole process or least non that I can make sense of. 
> > It would help also to understand how to debug such a thing. I will let Petr (CCed) to help here. > I running version 3.3 on rhel 7. Any help or pointers to more documentation > would be greatly appreciated. BTW, seeing version - note that RHEL-7.1 was released yesterday, you may want to consider upgrading. If you did not see the FreeIPA public demo for example, you will be surprised how awesome the UI is in FreeIPA 4.1 that is shipped in RHEL-7.1 :-) Martin From corey.kovacs at gmail.com Fri Mar 6 07:27:24 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Fri, 6 Mar 2015 00:27:24 -0700 Subject: [Freeipa-devel] UI plugins In-Reply-To: <54F953F0.5040103@redhat.com> References: <54F953F0.5040103@redhat.com> Message-ID: That's a pretty big version jump. Must be a good reason. I'll take a look. Thanks. On Mar 6, 2015 12:15 AM, "Martin Kosek" wrote: > On 03/06/2015 03:54 AM, Corey Kovacs wrote: > >> After reading the extending freeipa training document I was able >> successfully >> add us to meet attributes and add/modify them using the cli which was >> pretty >> cool. Now that I got the cli out of the way I want to add the fields to >> the ui. >> Because of the similarities between what I want to do and the example >> given in >> the docs I just followed along and changed variables where it made sense >> to do >> so. I cannot however get the new field to show up. The Apache logs don't >> show >> any errors but they do show the plugin being read as the page (user >> details) is >> loaded. After searching around I found a document which attempts to >> explain the >> process but it assumes some knowledge held by the reader which I don't >> possess. >> >> It looks like I supposed to create some sort of index loader which loads >> all >> of the modules which are part of the plugin. I can't seem to find any good >> documents telling the whole process or least non that I can make sense of. >> >> It would help also to understand how to debug such a thing. >> > > I will let Petr (CCed) to help here. > > I running version 3.3 on rhel 7. Any help or pointers to more >> documentation >> would be greatly appreciated. >> > > BTW, seeing version - note that RHEL-7.1 was released yesterday, you may > want to consider upgrading. If you did not see the FreeIPA public demo for > example, you will be surprised how awesome the UI is in FreeIPA 4.1 that is > shipped in RHEL-7.1 :-) > > Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcholast at redhat.com Fri Mar 6 08:53:54 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 06 Mar 2015 09:53:54 +0100 Subject: [Freeipa-devel] [PATCH] Password vault In-Reply-To: <54EBEB55.6010306@redhat.com> References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com> Message-ID: <54F96B22.9050507@redhat.com> Hi Endi, Dne 24.2.2015 v 04:09 Endi Sukma Dewata napsal(a): > On 2/16/2015 2:50 AM, Endi Sukma Dewata wrote: >> Hi, >> >> Attached are the updated patches for the password vault, and some new >> ones (please disregard previous patch submissions). Please give them a >> try. Thanks. > > New patches attached replacing all previous vault patches. They include > the new escrow functionality, changes to the asymmetric vault, and some > cleanups. Thanks. > Patch 353: 1) Please follow PEP8 in new code. 
The pep8 tool reports these errors in existing files: ./ipalib/constants.py:98:80: E501 line too long (84 > 79 characters) ./ipalib/plugins/baseldap.py:1527:80: E501 line too long (81 > 79 characters) ./ipalib/plugins/user.py:915:80: E501 line too long (80 > 79 characters) as well as many errors in the files this patch adds. 2) Pylint reports the following error: ipatests/test_xmlrpc/test_vault_plugin.py:153: [E0102(function-redefined), test_vault] class already defined line 27) 3) The container_vault config option should be renamed to container_vaultcontainer, as it is used in the vaultcontainer plugin, not the vault plugin. 4) The vault object should be child of the vaultcontainer object. Not only is this correct from the object model perspective, but it would also make all the container_id hacks go away. 5) When specifying param flags, use set literals. This is especially wrong, because it's not a tuple, but a string in parentheses: + flags=('virtual_attribute'), 6) The `container` param of vault should actually be an option in vault_* commands. Also it should be renamed to `container_id`, for consistency with vaultcontainer. 7) The `vault_id` param of vault should have "no_option" in flags, since it is output-only. 8) Don't translate docstrings where not necessary: + def get_dn(self, *keys, **options): + __doc__ = _(""" + Generates vault DN from vault ID. + """) Only plugin modules and classes should have translated docstrings. 9) This looks wrong in vault.get_dn() and vaultcontainer.get_dn(): + name = None + if keys: + name = keys[0] Primary key of the object should always be set, so the if statement should not be there. Also, primary key of any given object is always last in "keys", so use keys[-1] instead of keys[0]. 10) Use "self.api" instead of "api" to access the API in plugins. 11) No clever optimizations like this please: + # vault DN cannot be the container base DN + if len(dn) == len(api.Object.vaultcontainer.base_dn): + raise ValueError('Invalid vault DN: %s' % dn) Compare the DNs by value instead. 12) vault.split_id() is not used anywhere. 13) Bytes is not base64-encoded data: + Bytes('data?', + cli_name='data', + doc=_('Base-64 encoded binary data to archive'), + ), It is base64-encoded in the CLI, but on the API level it is not. The doc should say just "Binary data to archive". 14) Use File instead of Str for input files: + Str('in?', + cli_name='in', + doc=_('File containing data to archive'), + ), 15) Use MutuallyExclusiveError instead of ValidationError when there are mutually exclusive options specified. 16) You do way too much stuff in vault_add.forward(). Only code that must be done on the client needs to be there, i.e. handling of the "data", "text" and "in" options. The vault_archive call must be in vault_add.execute(), otherwise a) we will be making 2 RPC calls from the client and b) it won't be called at all when api.env.in_server is True. 17) Why are vaultcontainer objects automatically created in vault_add? If you have to automatically create them, you also have to automatically delete them when the command fails. But that's a hassle, so I would just not create them automatically. 18) Why are vaultcontainer objects automatically created in vault_find? This is just plain wrong and has to be removed, now. 19) What is the reason behind all the json stuff in vault_transport_cert? vault_transport_cert.__json__() is exactly the same as Command.__json__() and hence redundant. 
20) Are vault_transport_cert, vault_archive and vault_retrieve supposed to be runnable by users? If not, add "NO_CLI = True" to the class definition. 21) vault_archive is not a retrieve operation, it should be based on LDAPUpdate instead of LDAPRetrieve. Or Command actually, since it does not do anything with LDAP. The same applies to vault_retrieve. 22) vault_archive will break with binary data that is not UTF-8 encoded text. This is where it occurs: + vault_data[u'data'] = unicode(data) Generally, don't use unicode() on str values and str() on unicode values directly, always use .decode() and .encode(). 23) Since vault containers are nested, the vaultcontainer object should be child of itself. There is no support for nested objects in the framework, but it shouldn't be too hard to do anyway. 24) Instead of: + while len(dn) > len(self.base_dn): + + rdn = dn[0] ... + dn = DN(*dn[1:]) you can do: + for rdn in dn[:-len(self.base_dn)]: ... 25) Why are parent vaultcontainer objects automatically created in vaultcontainer_add? 26) Instead of the delete_entry refactoring in baseldap and vaultcontainer_add, you can put this in vaultcontainer_add's pre_callback: try: ldap.get_entries(dn, scope=ldap.SCOPE_ONELEVEL, attrs_list=[]) except errors.NotFound: pass else: if not options.get('force', False): raise errors.NotAllowedOnNonLeaf() 27) Why are parent vaultcontainer objects automatically created in vaultcontainer_find? 28) The vault and vaultcontainer plugins seem to be pretty similar, I think it would make sense to put common stuff in a base class and inherit vault and vaultcontainer from that. More later. Honza -- Jan Cholasta From tbabej at redhat.com Fri Mar 6 09:56:11 2015 From: tbabej at redhat.com (Tomas Babej) Date: Fri, 06 Mar 2015 10:56:11 +0100 Subject: [Freeipa-devel] [PATCH 0194] Remove unused method to export secret key from ipapkcs11helper module In-Reply-To: <54F85E07.2020208@redhat.com> References: <54EDCCF9.1070407@redhat.com> <54F85E07.2020208@redhat.com> Message-ID: <54F979BB.1040403@redhat.com> On 03/05/2015 02:45 PM, Petr Spacek wrote: > On 25.2.2015 14:24, Martin Basti wrote: >> The method never been used, and never will be, because we do not want to >> export secrets. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/4657 >> >> Patch attached (may require mbasti-0195, mbasti-0190) > ACK, it works for me. > Pushed to master, ipa-4-1. From tbabej at redhat.com Fri Mar 6 09:56:44 2015 From: tbabej at redhat.com (Tomas Babej) Date: Fri, 06 Mar 2015 10:56:44 +0100 Subject: [Freeipa-devel] [PATCH 0190] DNSSEC: add support for CKM_RSA_PKCS_OAEP mechanism In-Reply-To: <54F85DFF.6040000@redhat.com> References: <54DB54DE.4030606@redhat.com> <54EF07EF.6060300@redhat.com> <54EF42DC.7090301@redhat.com> <54F85DFF.6040000@redhat.com> Message-ID: <54F979DC.4080705@redhat.com> On 03/05/2015 02:45 PM, Petr Spacek wrote: > On 26.2.2015 16:59, Martin Basti wrote: >> On 26/02/15 12:47, Petr Spacek wrote: >>> On 11.2.2015 14:10, Martin Basti wrote: >>>> https://fedorahosted.org/freeipa/ticket/4657#comment:13 >>>> >>>> Patch attached. 
>>>> >>>> -- >>>> Martin Basti >>>> >>>> >>>> freeipa-mbasti-0190-DNSSEC-add-support-for-CKM_RSA_PKCS_OAEP-mechanism.patch >>>> >>>> >>>> From 4d698a5adaa94eb854c75bd9bcaf3093f31a11e5 Mon Sep 17 00:00:00 2001 >>>> From: Martin Basti >>>> Date: Wed, 11 Feb 2015 14:05:46 +0100 >>>> Subject: [PATCH] DNSSEC add support for CKM_RSA_PKCS_OAEP mechanism >>>> >>>> Ticket: https://fedorahosted.org/freeipa/ticket/4657#comment:13 >>>> --- >>>> ipapython/ipap11helper/p11helper.c | 72 >>>> ++++++++++++++++++++++++++++++++++++-- >>>> 1 file changed, 69 insertions(+), 3 deletions(-) >>>> >>>> diff --git a/ipapython/ipap11helper/p11helper.c >>>> b/ipapython/ipap11helper/p11helper.c >>>> index >>>> 4e0f262057b377124793f1e3091a8c9df4794164..c638bbe849f1bbddc8004bd1c4cccc1128b1c9e7 >>>> 100644 >>>> --- a/ipapython/ipap11helper/p11helper.c >>>> +++ b/ipapython/ipap11helper/p11helper.c >>>> @@ -53,6 +53,22 @@ >>>> // TODO >>>> #define CKA_COPYABLE (0x0017) >>>> +#define CKG_MGF1_SHA1 (0x00000001) >>>> + >>>> +#define CKZ_DATA_SPECIFIED (0x00000001) >>>> + >>>> +struct ck_rsa_pkcs_oaep_params { >>>> + CK_MECHANISM_TYPE hash_alg; >>>> + unsigned long mgf; >>>> + unsigned long source; >>>> + void *source_data; >>>> + unsigned long source_data_len; >>>> +}; >>>> + >>>> +typedef struct ck_rsa_pkcs_oaep_params CK_RSA_PKCS_OAEP_PARAMS; >>>> +typedef struct ck_rsa_pkcs_oaep_params *CK_RSA_PKCS_OAEP_PARAMS_PTR; >>>> + >>>> + >>>> CK_BBOOL true = CK_TRUE; >>>> CK_BBOOL false = CK_FALSE; >>>> @@ -118,6 +134,17 @@ CK_BBOOL* bool; >>>> } PyObj2Bool_mapping_t; >>>> /** >>>> + * Constants >>>> + */ >>>> +static const CK_RSA_PKCS_OAEP_PARAMS CONST_RSA_PKCS_OAEP_PARAMS = { >>>> + .hash_alg = CKM_SHA_1, >>>> + .mgf = CKG_MGF1_SHA1, >>>> + .source = CKZ_DATA_SPECIFIED, >>>> + .source_data = NULL, >>>> + .source_data_len = 0 >>>> +}; >>>> + >>>> +/** >>>> * ipap11helper Exceptions >>>> */ >>>> static PyObject *ipap11helperException; //parent class for all exceptions >>>> @@ -1359,17 +1386,36 @@ P11_Helper_export_wrapped_key(P11_Helper* self, >>>> PyObject *args, PyObject *kwds) >>>> CK_BYTE_PTR wrapped_key = NULL; >>>> CK_ULONG wrapped_key_len = 0; >>>> CK_MECHANISM wrapping_mech = { CKM_RSA_PKCS, NULL, 0 }; >>>> - CK_MECHANISM_TYPE wrapping_mech_type = CKM_RSA_PKCS; >>>> /* currently we don't support parameter in mechanism */ >>>> static char *kwlist[] = { "key", "wrapping_key", "wrapping_mech", >>>> NULL }; >>>> //TODO check long overflow >>>> //TODO export method >>>> if (!PyArg_ParseTupleAndKeywords(args, kwds, "kkk|", kwlist, >>>> &object_key, >>>> - &object_wrapping_key, &wrapping_mech_type)) { >>>> + &object_wrapping_key, &wrapping_mech.mechanism)) { >>>> return NULL; >>>> } >>>> - wrapping_mech.mechanism = wrapping_mech_type; >>>> + >>>> + // fill mech parameters >>>> + switch(wrapping_mech.mechanism){ >>>> + case CKM_RSA_PKCS: >>>> + case CKM_AES_KEY_WRAP: >>>> + case CKM_AES_KEY_WRAP_PAD: >>>> + //default params >>>> + break; >>>> + >>>> + case CKM_RSA_PKCS_OAEP: >>>> + /* Use the same configuration as openSSL >>>> + * https://www.openssl.org/docs/crypto/RSA_public_encrypt.html >>>> + */ >>>> + wrapping_mech.pParameter = (void*) &CONST_RSA_PKCS_OAEP_PARAMS; >>>> + wrapping_mech.ulParameterLen = >>>> sizeof(CONST_RSA_PKCS_OAEP_PARAMS); >>>> + break; >>>> + >>>> + default: >>>> + PyErr_SetString(ipap11helperError, "Unsupported wrapping >>>> mechanism"); >>>> + return NULL; >>>> + } >>>> rv = self->p11->C_WrapKey(self->session, &wrapping_mech, >>>> object_wrapping_key, object_key, NULL, &wrapped_key_len); >>>> @@ -1452,6 
+1498,26 @@ P11_Helper_import_wrapped_secret_key(P11_Helper* >>>> self, PyObject *args, >>>> return NULL; >>>> } >>>> + switch(wrapping_mech.mechanism){ >>>> + case CKM_RSA_PKCS: >>>> + case CKM_AES_KEY_WRAP: >>>> + case CKM_AES_KEY_WRAP_PAD: >>>> + //default params >>>> + break; >>> NACK. This switch is duplicate of the previous one. Please split it into an >>> auxiliary function and call it twice. >>> >>> Thank you! >>> >> Thanks. Updated patch attached. Pushed to master, ipa-4-1. > ACK, it works for me. > From tbabej at redhat.com Fri Mar 6 09:57:33 2015 From: tbabej at redhat.com (Tomas Babej) Date: Fri, 06 Mar 2015 10:57:33 +0100 Subject: [Freeipa-devel] [PATCH 0195] Fix memory leaks in ipapkcs11helper module In-Reply-To: <54F85DF5.8000608@redhat.com> References: <54EDCC47.40400@redhat.com> <54EF0C4D.8010700@redhat.com> <54EF435E.5070402@redhat.com> <54F85DF5.8000608@redhat.com> Message-ID: <54F97A0D.30902@redhat.com> On 03/05/2015 02:45 PM, Petr Spacek wrote: > On 26.2.2015 17:01, Martin Basti wrote: >> On 26/02/15 13:06, Petr Spacek wrote: >>> Hello Martin, >>> >>> thank you for patch! This NACK is only aesthetic :-) >>> >>> On 25.2.2015 14:21, Martin Basti wrote: >>>> if (!check_return_value(rv, "import_wrapped_key: key unwrapping")) { >>>> + error = 1; >>>> + goto final; >>>> + } >>> This exact sequence is repeated many times in the code. >>> >>> I would prefer a C macro like this: >>> #define GOTO_FAIL \ >>> do { \ >>> error = 1; \ >>> goto final; \ >>> } while(0) >>> >>> This allows more dense code like: >>> if (!test) >>> GOTO_FAIL; >>> >>> and does not have the risk of missing error = 1 somewhere. >>> >>>> +final: >>>> if (pkey != NULL) >>>> EVP_PKEY_free(pkey); >>>> + if (label != NULL) PyMem_Free(label); >>>> + if (error){ >>>> + return NULL; >>>> + } >>>> return ret; >>>> } >>> Apparently, this is inconsistent with itself. >>> >>> Please pick one style and use it, e.g. >>> if (label != NULL) >>> PyMem_Free(label) >>> >>> ... and do not add curly braces when unnecessary. >>> >>> If you want, we can try running $ indent on current sources and committing >>> changes separately so you do not have to make changes like this by hand. >>> >> Thanks. Updated patch attached. > ACK, it works for me. > Pushed to master, ipa-4-1. From tbabej at redhat.com Fri Mar 6 09:58:55 2015 From: tbabej at redhat.com (Tomas Babej) Date: Fri, 06 Mar 2015 10:58:55 +0100 Subject: [Freeipa-devel] [PATCH 0023-0025] p11helper improvements In-Reply-To: <54F871F7.2010107@redhat.com> References: <54F85F43.50702@redhat.com> <54F86A34.5000008@redhat.com> <54F871F7.2010107@redhat.com> Message-ID: <54F97A5F.2000506@redhat.com> On 03/05/2015 04:10 PM, Martin Basti wrote: > On 05/03/15 15:37, Petr Spacek wrote: >> On 5.3.2015 14:50, Petr Spacek wrote: >>> Hello, >>> >>> please review this patch set. It should be applied on top of your >>> previous >>> p11helper patch set. >>> >>> Thank you! >> Reviewer requested reworded version of the error message, here it is. > Thank you for patches. 
> Required patches: mbasti-190-2, mbasti-195-2, mbasti-196 > > 0023: ACK > 0024: ACK > 0025-2: ACK > > Martin^2 > Pushed to: master: 459f0a8401cf0fa4f7ba119646b797340c0a796c ipa-4-1: 8fefd63152d5f5a28ac6cf51b504a150d8e7b360 From mkosek at redhat.com Fri Mar 6 10:43:07 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Mar 2015 11:43:07 +0100 Subject: [Freeipa-devel] New freeipa-devel footer Message-ID: <54F984BB.2070801@redhat.com> Hi list, Are you also annoyed by freeipa-devel footers and having to delete them with every reply? I was certainly annoyed, and after yesterday bump from other FreeIPA developer I finally updated the freeipa-devel configuration to follow freeipa-users example and set a better footer, with proper "-- " prefix so that MUAs can trim it. See the footer below. If you have any improvements proposals, just tell me. -- Martin Kosek Supervisor, Software Engineering - Identity Management Team Red Hat Inc. From pvoborni at redhat.com Fri Mar 6 10:51:19 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 06 Mar 2015 11:51:19 +0100 Subject: [Freeipa-devel] UI plugins In-Reply-To: References: Message-ID: <54F986A7.8000702@redhat.com> On 03/06/2015 03:54 AM, Corey Kovacs wrote: > After reading the extending freeipa training document I was able > successfully add us to meet attributes and add/modify them using the cli > which was pretty cool. Now that I got the cli out of the way I want to add > the fields to the ui. Because of the similarities between what I want to do > and the example given in the docs I just followed along and changed > variables where it made sense to do so. I cannot however get the new field > to show up. The Apache logs don't show any errors but they do show the > plugin being read as the page (user details) is loaded. After searching > around I found a document which attempts to explain the process but it > assumes some knowledge held by the reader which I don't possess. > > It looks like I supposed to create some sort of index loader which loads > all of the modules which are part of the plugin. I can't seem to find any > good documents telling the whole process or least non that I can make sense > of. > Plugin could be just one file, if the plugin name is "myplugin" it has to be: /usr/share/ipa/ui/js/plugins/myplugin/myplugin.js IPA Web UI uses AMD module format: - https://dojotoolkit.org/documentation/tutorials/1.10/modules/ - http://requirejs.org/docs/whyamd.html#amd I have some example plugins here: - https://pvoborni.fedorapeople.org/plugins/ For your case I would look at: - https://pvoborni.fedorapeople.org/plugins/employeenumber/employeenumber.js Web UI basics and introspection: (this section is little redundant to your question, but it might help others) FreeIPA Web UI is semi declarative. That means that part of the pages could be thrown away, modified or extended by plugins before the page is constructed. To do that, one has to modify specification object of particular module. Here, I would assume that you don't have UI sources(before minification), so all introspection will be done in running web ui. Otherwise it's easier to inspect sources in install/ui/src/freeipa, i.e. checkout ipa-3-3 branch from upstream git. 
List of modules could be find(after authentication) in browse developer tools console in object: window.require.cache or (depends on IPA version) window.dojo.config.cache One can then obtain the module by: var user_module = require('freeipa/user') specification object is usually in 'entity_spec' or '$enity_name_spec' property. user_module.entity_spec UI is internally organized into entities. Entity corresponds to ipalib object. Entity, e.g. user, usually have multiple pages called facets. To get a list of facets: user_module.entity_spec.facets The one with fields has usually a name "details". For users it's the second facet: var details_facet = user_module.entity_spec.facets[1] IF i simplify it a bit, we can say that fields on a page are organized in sections: details_facet.sections Section has fields. A field usually represents an editable element on page, e.g. a textbox with label, validators and other stuff. Example of inspection: https://pvoborni.fedorapeople.org/images/inspect_ui.png Your goal is to pick a section, or create a new one an add a field there. To know what to define, just examine a definition of already existing field and just amend name, label, ... > It would help also to understand how to debug such a thing. > In browser developer tools. There is javascript console (use in the text above), javascript debugger, network tab for inspecting loaded files. For developing RHEL 7 plugin, I would suggest you to install test instance of FreeIPA 3.3 and use "Debugging with source codes" method described in: - https://pvoborni.fedorapeople.org/doc/#!/guide/Debugging Some tips: - if you get a weird dojo loader messegate, you probably have a syntax error in the plugin or you don't return a plugin object or the plugin could not be loaded (bad name) - it's good to use some JavaScript liner - jsl or jshint to catch syntax errors early. > I running version 3.3 on rhel 7. Any help or pointers to more > documentation would be greatly appreciated. > -- Petr Vobornik From jpazdziora at redhat.com Fri Mar 6 10:55:42 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 6 Mar 2015 11:55:42 +0100 Subject: [Freeipa-devel] New freeipa-devel footer In-Reply-To: <54F984BB.2070801@redhat.com> References: <54F984BB.2070801@redhat.com> Message-ID: <20150306105542.GK26395@redhat.com> On Fri, Mar 06, 2015 at 11:43:07AM +0100, Martin Kosek wrote: > > See the footer below. If you have any improvements proposals, just tell me. Given the information about the list actions is in the List-* header of ever email, do you need the > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel bits? I like > Go to http://www.freeipa.org/page/Contribute/Code for more info on how to contribute to the project but having it wrapped to < 80 characters would be even nicer. Or even just Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code might be enough. -- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From mkosek at redhat.com Fri Mar 6 11:01:21 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Mar 2015 12:01:21 +0100 Subject: [Freeipa-devel] New freeipa-devel footer In-Reply-To: <20150306105542.GK26395@redhat.com> References: <54F984BB.2070801@redhat.com> <20150306105542.GK26395@redhat.com> Message-ID: <54F98901.6040408@redhat.com> On 03/06/2015 11:55 AM, Jan Pazdziora wrote: > On Fri, Mar 06, 2015 at 11:43:07AM +0100, Martin Kosek wrote: >> >> See the footer below. 
If you have any improvements proposals, just tell me. > > Given the information about the list actions is in the List-* header > of ever email, do you need the > >> Manage your subscription for the Freeipa-devel mailing list: >> https://www.redhat.com/mailman/listinfo/freeipa-devel > > bits? > > I like > >> Go to http://www.freeipa.org/page/Contribute/Code for more info on how to contribute to the project > > but having it wrapped to < 80 characters would be even nicer. Or even > just > > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code > > might be enough. > Good idea. I fixed that part. As for link to mailman, I see it as an option for people now knowing that something as mail headers exists. But given this is freeipa-*devel* list, we may indeed remove it. I will see if there are any more opinions on that matter. From mbabinsk at redhat.com Fri Mar 6 12:05:11 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Fri, 06 Mar 2015 13:05:11 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0019] changes to the way host TGT is obtained using keytab Message-ID: <54F997F7.2070400@redhat.com> This series of patches for the master/4.1 branch attempts to implement some of the Rob's and Petr Vobornik's ideas which originated from a discussion on this list regarding my original patch fixing https://fedorahosted.org/freeipa/ticket/4808. I suppose that these patches are just a first iteration, we may further discuss if this is the right thing to do. Below is a quote from the original discussion just to get the context: -- Martin^3 Babinsky > Martin Babinsky wrote: >> On 03/02/2015 04:28 PM, Rob Crittenden wrote: >>> Petr Vobornik wrote: >>>>>>>>>> On 01/12/2015 05:45 PM, Martin Babinsky wrote: >>>>>>>>>>> related to ticket https://fedorahosted.org/freeipa/ticket/4808 >>>> >>>> this patch seems to be a bit forgotten. >>>> >>>> It works, looks fine. >>>> >>>> One minor issue: trailing whitespaces in the man page. >>>> >>>> I also wonder if it shouldn't be used in other tools which call kinit >>>> with keytab: >>>> * ipa-client-automount:434 >>>> * ipa-client-install:2591 (this usage should be fine since it's used for >>>> server installation) >>>> * dcerpc.py:545 >>>> * rpcserver.py: 971, 981 (armor for web ui forms base auth) >>>> >>>> Most importantly the ipa-client-automount because it's called from >>>> ipa-client-install (if location is specified) and therefore it might >>>> fail during client installation. >>>> >>>> Or also, kinit call with admin creadentials worked for the user but I >>>> wonder if it was just a coincidence and may break under slightly >>>> different but similar conditions. >>> >>> I think that's a fine idea. In fact there is already a function that >>> could be extended, kinit_hostprincipal(). >>> >>> rob >>> >> >> So in principle we could add multiple TGT retries to >> "kinit_hostprincipal()" and then plug this function to all the places >> Petr mentioned in order to provide this functionality each time TGT is >> requested using keytab. >> >> Do I understand it correctly? >> > > Honestly I think I'd only do the retries on client installation. I > don't know that the other uses would really benefit or need this. > > But this is an opportunity to consolidate some code, so I guess the > approach I'd take is to add an option to kinit_hostprincipal of > retries=0 so that only a single kinit is done. The client installers > would pass in some value. 
> > This change is quite a bit more invasive but it's also early in the > release cycle so the risk will be spread out. > > rob -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0015-1-modifications-to-ipautil.kinit_hostprincipal.patch Type: text/x-patch Size: 3501 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0016-1-ipa-client-install-try-to-get-host-TGT-several-times.patch Type: text/x-patch Size: 6906 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0017-1-ipa-client-automount-use-updated-ipautil.kinit_hostp.patch Type: text/x-patch Size: 1450 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0018-1-rpcserver.py-use-ipautil.kinit_hostprincipal-to-obta.patch Type: text/x-patch Size: 1260 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0019-1-updated-existing-calls-to-ipautil.kinit_hostprincipa.patch Type: text/x-patch Size: 6756 bytes Desc: not available URL: From abokovoy at redhat.com Fri Mar 6 12:08:29 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 6 Mar 2015 14:08:29 +0200 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150305103353.GA3271@p.redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> <20150305081636.GX3271@p.redhat.com> <20150305103353.GA3271@p.redhat.com> Message-ID: <20150306120829.GV25455@redhat.com> On Thu, 05 Mar 2015, Sumit Bose wrote: >On Thu, Mar 05, 2015 at 09:16:36AM +0100, Sumit Bose wrote: >> On Wed, Mar 04, 2015 at 06:14:53PM +0100, Sumit Bose wrote: >> > On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: >> > > On Mon, 02 Mar 2015, Sumit Bose wrote: >> > > >diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >> > > >index 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 100644 >> > > >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >> > > >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >> > > >@@ -49,6 +49,220 @@ >> > > > >> > > >#define MAX(a,b) (((a)>(b))?(a):(b)) >> > > >#define SSSD_DOMAIN_SEPARATOR '@' >> > > >+#define MAX_BUF (1024*1024*1024) >> > > >+ >> > > >+ >> > > >+ >> > > >+static int get_buffer(size_t *_buf_len, char **_buf) >> > > >+{ >> > > >+ long pw_max; >> > > >+ long gr_max; >> > > >+ size_t buf_len; >> > > >+ char *buf; >> > > >+ >> > > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); >> > > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); >> > > >+ >> > > >+ if (pw_max == -1 && gr_max == -1) { >> > > >+ buf_len = 16384; >> > > >+ } else { >> > > >+ buf_len = MAX(pw_max, gr_max); >> > > >+ } >> > > Here you'd get buf_len equal to 1024 by default on Linux which is too >> > > low for our use case. I think it would be beneficial to add one more >> > > MAX(buf_len, 16384): >> > > - if (pw_max == -1 && gr_max == -1) { >> > > - buf_len = 16384; >> > > - } else { >> > > - buf_len = MAX(pw_max, gr_max); >> > > - } >> > > + buf_len = MAX(16384, MAX(pw_max, gr_max)); >> > > >> > > with MAX(MAX(),..) 
you also get rid of if() statement as resulting >> > > rvalue would be guaranteed to be positive. >> > >> > done >> > >> > > >> > > The rest is going along the common lines but would it be better to >> > > allocate memory once per LDAP client request rather than always ask for >> > > it per each NSS call? You can guarantee a sequential use of the buffer >> > > within the LDAP client request processing so there is no problem with >> > > locks but having this memory re-allocated on subsequent >> > > getpwnam()/getpwuid()/... calls within the same request processing seems >> > > suboptimal to me. >> > >> > ok, makes sense, I moved get_buffer() back to the callers. >> > >> > New version attached. >> >> Please ignore this patch, I will send a revised version soon. > >Please find attached a revised version which properly reports missing >objects and out-of-memory cases and makes sure buf and buf_len are in >sync. ACK to patches 0135 and 0136. This concludes the review, thanks! -- / Alexander Bokovoy From pspacek at redhat.com Fri Mar 6 12:30:45 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 06 Mar 2015 13:30:45 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records In-Reply-To: <54F7262A.4010709@redhat.com> References: <54F72208.5090208@redhat.com> <54F7262A.4010709@redhat.com> Message-ID: <54F99DF5.1030408@redhat.com> On 4.3.2015 16:35, Martin Basti wrote: > On 04/03/15 16:17, Martin Basti wrote: >> Ticket: https://fedorahosted.org/freeipa/ticket/4930 >> >> 0200: 4.1, master >> Fixes traceback, which was raised if LDAP contained a record that was marked >> as unsupported. >> Now unsupported records are shown, if LDAP contains them. >> >> 0200: 4.1, master >> Records marked as unsupported will not show options for editing parts. >> >> 0202: only master >> Removes NSEC3PARAM record from record types. NSEC3PARAM can contain only >> zone, value is allowed only in idnszone objectclass, so do not confuse users. >> > > .... and patches attached :-) ACK. It works for me and can be pushed to branches 4.1 and master. -- Petr^2 Spacek From pspacek at redhat.com Fri Mar 6 12:42:43 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 06 Mar 2015 13:42:43 +0100 Subject: [Freeipa-devel] New freeipa-devel footer In-Reply-To: <54F98901.6040408@redhat.com> References: <54F984BB.2070801@redhat.com> <20150306105542.GK26395@redhat.com> <54F98901.6040408@redhat.com> Message-ID: <54F9A0C3.5010205@redhat.com> On 6.3.2015 12:01, Martin Kosek wrote: > On 03/06/2015 11:55 AM, Jan Pazdziora wrote: >> On Fri, Mar 06, 2015 at 11:43:07AM +0100, Martin Kosek wrote: >>> >>> See the footer below. If you have any improvements proposals, just tell me. >> >> Given the information about the list actions is in the List-* header >> of ever email, do you need the >> >>> Manage your subscription for the Freeipa-devel mailing list: >>> https://www.redhat.com/mailman/listinfo/freeipa-devel >> >> bits? >> >> I like >> >>> Go to http://www.freeipa.org/page/Contribute/Code for more info on how to >>> contribute to the project >> >> but having it wrapped to < 80 characters would be even nicer. Or even >> just >> >> Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code >> >> might be enough. >> > > Good idea. I fixed that part. > > As for link to mailman, I see it as an option for people now knowing that > something as mail headers exists. But given this is freeipa-*devel* list, we > may indeed remove it. I will see if there are any more opinions on that matter. 
Personally I would nuke the footer from devel list completely :-) You are already on devel list so what is the point of inviting you to it? :-) -- Petr^2 Spacek From jcholast at redhat.com Fri Mar 6 13:08:03 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 06 Mar 2015 14:08:03 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0019] changes to the way host TGT is obtained using keytab In-Reply-To: <54F997F7.2070400@redhat.com> References: <54F997F7.2070400@redhat.com> Message-ID: <54F9A6B3.4010102@redhat.com> Hi Martin, Dne 6.3.2015 v 13:05 Martin Babinsky napsal(a): > This series of patches for the master/4.1 branch attempts to implement > some of the Rob's and Petr Vobornik's ideas which originated from a > discussion on this list regarding my original patch fixing > https://fedorahosted.org/freeipa/ticket/4808. > > I suppose that these patches are just a first iteration, we may further > discuss if this is the right thing to do. > > Below is a quote from the original discussion just to get the context: 1) Why 5 patches for 2 changes (kinit_hostprincipal instead of exec kinit, ipa-client-install --kinit-attempts)? 2) IMO a for loop would be better than an infinite while loop: for attempt in range(attempts): try: # kinit ... except krbV.Krb5Error as e: # kinit failed ... else: break else: # max attempts reached ... 3) I think it would be nice to support ccache types other than FILE. 4) I would prefer if you kept using the full ccache name returned from kinit_hostprincipal when connecting to LDAP. 5) Given that the ccache path usually ends with "/ccache", I would retain the old way of calling kinit_hostprincipal. You can do something like this to support all of the above: def kinit_hostprincipal(keytab, ccache_file, principal, attempts=1): if os.path.isdir(ccache_file): ccache_file = os.path.join(ccache_file, 'ccache') ... return ccache_file (You don't need to prepend "FILE:", as it is the default ccache type.) Honza -- Jan Cholasta From mkosek at redhat.com Fri Mar 6 13:10:59 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Mar 2015 14:10:59 +0100 Subject: [Freeipa-devel] New freeipa-devel footer In-Reply-To: <54F9A0C3.5010205@redhat.com> References: <54F984BB.2070801@redhat.com> <20150306105542.GK26395@redhat.com> <54F98901.6040408@redhat.com> <54F9A0C3.5010205@redhat.com> Message-ID: <54F9A763.80403@redhat.com> On 03/06/2015 01:42 PM, Petr Spacek wrote: > On 6.3.2015 12:01, Martin Kosek wrote: >> On 03/06/2015 11:55 AM, Jan Pazdziora wrote: >>> On Fri, Mar 06, 2015 at 11:43:07AM +0100, Martin Kosek wrote: >>>> >>>> See the footer below. If you have any improvements proposals, just tell me. >>> >>> Given the information about the list actions is in the List-* header >>> of ever email, do you need the >>> >>>> Manage your subscription for the Freeipa-devel mailing list: >>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>> >>> bits? >>> >>> I like >>> >>>> Go to http://www.freeipa.org/page/Contribute/Code for more info on how to >>>> contribute to the project >>> >>> but having it wrapped to < 80 characters would be even nicer. Or even >>> just >>> >>> Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code >>> >>> might be enough. >>> >> >> Good idea. I fixed that part. >> >> As for link to mailman, I see it as an option for people now knowing that >> something as mail headers exists. But given this is freeipa-*devel* list, we >> may indeed remove it. I will see if there are any more opinions on that matter. 
> > Personally I would nuke the footer from devel list completely :-) You are > already on devel list so what is the point of inviting you to it? :-) > Example: you are googling for something FreeIPA devel related, it gives you mail from archive. Then having convenient link may be useful (if you are lazy googling "freeipa-devel") :-) The http://www.freeipa.org/page/Contribute/Code is especially useful IMO, I want everyone to know how to contribute patches to FreeIPA... Martin From mkosek at redhat.com Fri Mar 6 13:16:39 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 06 Mar 2015 14:16:39 +0100 Subject: [Freeipa-devel] IPA 4.2 server upgrade refactoring - summary In-Reply-To: <54F7491E.9020405@redhat.com> References: <54F7491E.9020405@redhat.com> Message-ID: <54F9A8B7.6010005@redhat.com> On 03/04/2015 07:04 PM, Martin Basti wrote: > Summary extracted from thread "[Freeipa-devel] IPA Server upgrade 4.2 design" > > Design page: http://www.freeipa.org/page/V4/Server_Upgrade_Refactoring > > * ipa-server-upgrade will not allow to use DM password, only LDAPI will be used > for upgrade > * upgrade files will be executed in alphabetical order, updater will not > require number in update file name. But we should still keep the numbering in > new upgrade files. > * LDAP updates will be applied per file, in order specified in file (from top > to bottom) > * new directive in update files *"plugin:"* will execute update > plugins (renamed form "update-plugin" to "plugin") > * option "--skip-version-check" will override version check in ipactl and > ipa-server-upgrade commands (was --force before, but this collides with > existing --force option in ipactl) > * huge warning, "this may broke everything", in help, man, or CLI for > --skip-version-check option > * ipa-upgradeconfig will be removed > * ipa-ldap-updater will be changed to not allow overall update. It stays as > util for execute particular update files. Makes sense to me. Everyone ok with above so that Martin can start working on the changes? 
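To make the proposed "plugin:" directive and the alphabetical ordering of update files concrete, an update file entry could look roughly like the sketch below. The file name, entry contents and plugin name are invented for illustration only; the exact syntax is whatever ipa-ldap-updater ends up accepting once the refactoring lands:

# 50-example.update -- files are applied in alphabetical order,
# entries and directives from top to bottom
dn: cn=example,cn=etc,$SUFFIX
default:objectClass: top
default:objectClass: nsContainer
default:cn: example

# execute a registered update plugin at this point in the file
plugin: update_example_plugin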
> How and when execute upgrades (after discussion with Honza) -- not updated in > design page yet > A) ipactl*: > A.1) compare build platform and platform from last upgrade/installation (based > on used ipaplatform file) > A.1.i) if platform mismatch, raise error and prevent to start services > A.2) version of LDAP data(+schema included) compared to current version > (VENDOR_VERSION will be used) > A.2.i) if version of LDAP data is newer than version of build, raise error and > prevent services to start > A.2.ii) if version of LDAP data is older than version of build, upgrade is required > A.2.iii) if versions are the same, continue > A.3) check if services requires update (this should be available after > installer refactoring)** > A.3.i) if any service requires configuration upgrade, upgrade is required > A.3.ii) if any service raises an error about wrong configuration (which cannot > be automatically fixed and requires manual fix by user), raise error and > prevent to start services > A.4.i) if upgrade is needed, ipactl will prevent to start services, and promt > user to run ipa-server-upgrade manually (ipactl will not execute upgrade itself) > A.4.ii) otherwise start services > > > B) ipa-server-upgrade* > B.0) services should be in shutdown state, if not, stop services (services will > be started during upgrade on demand, then stopped) > B.1) compare build platform and platform from last upgrade/installation (based > on used ipaplatform file) > B.1.i) if platform mismatch, raise error stop upgrade > B.2) check version of LDAP data > B.2.i) if LDAP data version is newer than build version, raise error stop upgrade > B.2.ii) if LDAP data version is the same as build version, skip schema and LDAP > data upgrade > B.2.iii) if LDAP data version is older than build version --> data upgrade required > B.3) Check if services require upgrade, detect errors as in A.3) (?? this step > may not be there)** > B.4) if data upgrade required, upgrade schema, then upgrade data, if successful > store current build version as data version > B.5) Run service upgrade (if needed?)** > B.6) if upgrade is successful, inform user that the one can now start IPA > (upgrade will not start IPA after it is done) > > * with --skip-version-check option, ipactl will start services, > ipa-server-upgrade will upgrade everything > ** services will handle local configuration upgrade by themselves. 
They will > not use data version to decide if upgrade is required (TODO implementation > details, Honza wants it in this way - sharing code with installers) > > > Upgrade in different enviroments: > 1) Upgrade during RPM transaction (as we do now) -- if it is possible, upgrade > will be executed during RPM transaction, service will be started after upgrade > (+ add messages "IPA is currently upgrading, please wait") > 2) Upgrade cannot be executed during RPM transaction (fedup, --no-script, > containers) -- IPA will not start if update is required, user have to run > upgrade manually, using ipa-server-upgrade command (+ log/print errors that > there is upgrade required) > > Martin^2 > > -- > Martin Basti > From pvoborni at redhat.com Fri Mar 6 13:21:50 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 06 Mar 2015 14:21:50 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0019] changes to the way host TGT is obtained using keytab In-Reply-To: <54F997F7.2070400@redhat.com> References: <54F997F7.2070400@redhat.com> Message-ID: <54F9A9EE.2030209@redhat.com> On 03/06/2015 01:05 PM, Martin Babinsky wrote: > This series of patches for the master/4.1 branch attempts to implement > some of the Rob's and Petr Vobornik's ideas which originated from a > discussion on this list regarding my original patch fixing > https://fedorahosted.org/freeipa/ticket/4808. > > I suppose that these patches are just a first iteration, we may further > discuss if this is the right thing to do. > > Below is a quote from the original discussion just to get the context: > The original kinit_hostprincipal had `ccachedir` argument, the new one has `ccache_name`. But the new code still prepends FILE ccache type: old: ccache_file = 'FILE:%s/ccache' % ccachedir new: ccache_file = 'FILE:%s' % ccache_name I would remove the line because I understand the use of 'ccache_name' name as equivalent of KRB5CCNAME and therefore I would expect that the value of this argument would be used to set the environment variable WITHOUT any modification. And mainly, user is limited only to FILE ccache type. I also wonder if os.environ['KRB5CCNAME'] = ccache_file has to be set when ccache is defined by krbV call: ccache = krbV.CCache(name=ccache_file, ... krbV snipped doesn't use it so maybe we can remove it. https://git.fedorahosted.org/cgit/python-krbV.git/tree/krbV-code-snippets.py -- Petr Vobornik From corey.kovacs at gmail.com Fri Mar 6 13:31:39 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Fri, 6 Mar 2015 06:31:39 -0700 Subject: [Freeipa-devel] UI plugins In-Reply-To: <54F986A7.8000702@redhat.com> References: <54F986A7.8000702@redhat.com> Message-ID: I almost forgot to ask. Since you don't point it out, I am assuming (yeah I know) the plugin code methods have not changed from 3.3 to 4.1? That is to day I should be able to use the same techniques? Thanks again! Corey On Fri, Mar 6, 2015 at 3:51 AM, Petr Vobornik wrote: > On 03/06/2015 03:54 AM, Corey Kovacs wrote: > >> After reading the extending freeipa training document I was able >> successfully add us to meet attributes and add/modify them using the cli >> which was pretty cool. Now that I got the cli out of the way I want to add >> the fields to the ui. Because of the similarities between what I want to >> do >> and the example given in the docs I just followed along and changed >> variables where it made sense to do so. I cannot however get the new field >> to show up. 
The Apache logs don't show any errors but they do show the >> plugin being read as the page (user details) is loaded. After searching >> around I found a document which attempts to explain the process but it >> assumes some knowledge held by the reader which I don't possess. >> >> It looks like I supposed to create some sort of index loader which loads >> all of the modules which are part of the plugin. I can't seem to find any >> good documents telling the whole process or least non that I can make >> sense >> of. >> >> > Plugin could be just one file, if the plugin name is "myplugin" it has to > be: > > /usr/share/ipa/ui/js/plugins/myplugin/myplugin.js > > IPA Web UI uses AMD module format: > - https://dojotoolkit.org/documentation/tutorials/1.10/modules/ > - http://requirejs.org/docs/whyamd.html#amd > > I have some example plugins here: > - https://pvoborni.fedorapeople.org/plugins/ > > For your case I would look at: > - https://pvoborni.fedorapeople.org/plugins/employeenumber/ > employeenumber.js > > Web UI basics and introspection: > (this section is little redundant to your question, but it might help > others) > > FreeIPA Web UI is semi declarative. That means that part of the pages > could be thrown away, modified or extended by plugins before the page is > constructed. To do that, one has to modify specification object of > particular module. > > Here, I would assume that you don't have UI sources(before minification), > so all introspection will be done in running web ui. Otherwise it's easier > to inspect sources in install/ui/src/freeipa, i.e. checkout ipa-3-3 branch > from upstream git. > > List of modules could be find(after authentication) in browse developer > tools console in object: > > window.require.cache > or (depends on IPA version) > window.dojo.config.cache > > One can then obtain the module by: > > var user_module = require('freeipa/user') > > specification object is usually in 'entity_spec' or '$enity_name_spec' > property. > > user_module.entity_spec > > UI is internally organized into entities. Entity corresponds to ipalib > object. Entity, e.g. user, usually have multiple pages called facets. To > get a list of facets: > user_module.entity_spec.facets > > The one with fields has usually a name "details". For users it's the > second facet: > var details_facet = user_module.entity_spec.facets[1] > > IF i simplify it a bit, we can say that fields on a page are organized in > sections: > details_facet.sections > > Section has fields. A field usually represents an editable element on > page, e.g. a textbox with label, validators and other stuff. > > Example of inspection: > > https://pvoborni.fedorapeople.org/images/inspect_ui.png > > Your goal is to pick a section, or create a new one an add a field there. > To know what to define, just examine a definition of already existing field > and just amend name, label, ... > > > It would help also to understand how to debug such a thing. >> >> > In browser developer tools. There is javascript console (use in the text > above), javascript debugger, network tab for inspecting loaded files. 
> > For developing RHEL 7 plugin, I would suggest you to install test instance > of FreeIPA 3.3 and use "Debugging with source codes" method described in: > > - https://pvoborni.fedorapeople.org/doc/#!/guide/Debugging > > Some tips: > - if you get a weird dojo loader messegate, you probably have a syntax > error in the plugin or you don't return a plugin object or the plugin could > not be loaded (bad name) > - it's good to use some JavaScript liner - jsl or jshint to catch syntax > errors early. > > > I running version 3.3 on rhel 7. Any help or pointers to more >> documentation would be greatly appreciated. >> >> > -- > Petr Vobornik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbabinsk at redhat.com Fri Mar 6 13:45:17 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Fri, 06 Mar 2015 14:45:17 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0019] changes to the way host TGT is obtained using keytab In-Reply-To: <54F9A6B3.4010102@redhat.com> References: <54F997F7.2070400@redhat.com> <54F9A6B3.4010102@redhat.com> Message-ID: <54F9AF6D.9070309@redhat.com> On 03/06/2015 02:08 PM, Jan Cholasta wrote: > Hi Martin, > > Dne 6.3.2015 v 13:05 Martin Babinsky napsal(a): >> This series of patches for the master/4.1 branch attempts to implement >> some of the Rob's and Petr Vobornik's ideas which originated from a >> discussion on this list regarding my original patch fixing >> https://fedorahosted.org/freeipa/ticket/4808. >> >> I suppose that these patches are just a first iteration, we may further >> discuss if this is the right thing to do. >> >> Below is a quote from the original discussion just to get the context: > > 1) Why 5 patches for 2 changes (kinit_hostprincipal instead of exec > kinit, ipa-client-install --kinit-attempts)? Will squish them to a smaller number (2-3) > 2) IMO a for loop would be better than an infinite while loop: > > for attempt in range(attempts): > try: > # kinit > ... > except krbV.Krb5Error as e: > # kinit failed > ... > else: > break > else: > # max attempts reached > ... > That's true. Infinite loops are tad scary anyway. > 3) I think it would be nice to support ccache types other than FILE. According to Petr Vobornik (see his reply), the user is limited mostly to FILE ccache type, so I don't know if it will make sense to support also other types. > > 4) I would prefer if you kept using the full ccache name returned from > kinit_hostprincipal when connecting to LDAP. > > 5) Given that the ccache path usually ends with "/ccache", I would > retain the old way of calling kinit_hostprincipal. You can do something > like this to support all of the above: > > def kinit_hostprincipal(keytab, ccache_file, principal, attempts=1): > if os.path.isdir(ccache_file): > ccache_file = os.path.join(ccache_file, 'ccache') > ... > return ccache_file > > (You don't need to prepend "FILE:", as it is the default ccache type.) > > Honza > Dumb me didn't realize that 'ccache_file' is a reference to an actual filesystem path and that the filename can be set dynamically depending on path type (directory vs. file). Thank you for your comments. Will update the patches accordingly. -- Martin^3 Babinsky From pvoborni at redhat.com Fri Mar 6 13:58:13 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 06 Mar 2015 14:58:13 +0100 Subject: [Freeipa-devel] UI plugins In-Reply-To: References: <54F986A7.8000702@redhat.com> Message-ID: <54F9B275.4040400@redhat.com> On 03/06/2015 02:31 PM, Corey Kovacs wrote: > I almost forgot to ask. 
Since you don't point it out, I am assuming (yeah I > know) the plugin code methods have not changed from 3.3 to 4.1? That is to > day I should be able to use the same techniques? Same techniques should be applicable, there were no major improvements in plugin support in 4.1. But Web UI doesn't have any stable API so little things could be different. Basically plugins interact with Web UI's core code which is not perfect(easy to break things) but it's powerful and better than nothing. > > Thanks again! > > Corey > > On Fri, Mar 6, 2015 at 3:51 AM, Petr Vobornik wrote: > >> On 03/06/2015 03:54 AM, Corey Kovacs wrote: >> >>> After reading the extending freeipa training document I was able >>> successfully add us to meet attributes and add/modify them using the cli >>> which was pretty cool. Now that I got the cli out of the way I want to add >>> the fields to the ui. Because of the similarities between what I want to >>> do >>> and the example given in the docs I just followed along and changed >>> variables where it made sense to do so. I cannot however get the new field >>> to show up. The Apache logs don't show any errors but they do show the >>> plugin being read as the page (user details) is loaded. After searching >>> around I found a document which attempts to explain the process but it >>> assumes some knowledge held by the reader which I don't possess. >>> >>> It looks like I supposed to create some sort of index loader which loads >>> all of the modules which are part of the plugin. I can't seem to find any >>> good documents telling the whole process or least non that I can make >>> sense >>> of. >>> >>> >> Plugin could be just one file, if the plugin name is "myplugin" it has to >> be: >> >> /usr/share/ipa/ui/js/plugins/myplugin/myplugin.js >> >> IPA Web UI uses AMD module format: >> - https://dojotoolkit.org/documentation/tutorials/1.10/modules/ >> - http://requirejs.org/docs/whyamd.html#amd >> >> I have some example plugins here: >> - https://pvoborni.fedorapeople.org/plugins/ >> >> For your case I would look at: >> - https://pvoborni.fedorapeople.org/plugins/employeenumber/ >> employeenumber.js >> >> Web UI basics and introspection: >> (this section is little redundant to your question, but it might help >> others) >> >> FreeIPA Web UI is semi declarative. That means that part of the pages >> could be thrown away, modified or extended by plugins before the page is >> constructed. To do that, one has to modify specification object of >> particular module. >> >> Here, I would assume that you don't have UI sources(before minification), >> so all introspection will be done in running web ui. Otherwise it's easier >> to inspect sources in install/ui/src/freeipa, i.e. checkout ipa-3-3 branch >> from upstream git. >> >> List of modules could be find(after authentication) in browse developer >> tools console in object: >> >> window.require.cache >> or (depends on IPA version) >> window.dojo.config.cache >> >> One can then obtain the module by: >> >> var user_module = require('freeipa/user') >> >> specification object is usually in 'entity_spec' or '$enity_name_spec' >> property. >> >> user_module.entity_spec >> >> UI is internally organized into entities. Entity corresponds to ipalib >> object. Entity, e.g. user, usually have multiple pages called facets. To >> get a list of facets: >> user_module.entity_spec.facets >> >> The one with fields has usually a name "details". 
For users it's the >> second facet: >> var details_facet = user_module.entity_spec.facets[1] >> >> IF i simplify it a bit, we can say that fields on a page are organized in >> sections: >> details_facet.sections >> >> Section has fields. A field usually represents an editable element on >> page, e.g. a textbox with label, validators and other stuff. >> >> Example of inspection: >> >> https://pvoborni.fedorapeople.org/images/inspect_ui.png >> >> Your goal is to pick a section, or create a new one an add a field there. >> To know what to define, just examine a definition of already existing field >> and just amend name, label, ... >> >> >> It would help also to understand how to debug such a thing. >>> >>> >> In browser developer tools. There is javascript console (use in the text >> above), javascript debugger, network tab for inspecting loaded files. >> >> For developing RHEL 7 plugin, I would suggest you to install test instance >> of FreeIPA 3.3 and use "Debugging with source codes" method described in: >> >> - https://pvoborni.fedorapeople.org/doc/#!/guide/Debugging >> >> Some tips: >> - if you get a weird dojo loader messegate, you probably have a syntax >> error in the plugin or you don't return a plugin object or the plugin could >> not be loaded (bad name) >> - it's good to use some JavaScript liner - jsl or jshint to catch syntax >> errors early. >> >> >> I running version 3.3 on rhel 7. Any help or pointers to more >>> documentation would be greatly appreciated. >>> >>> >> -- >> Petr Vobornik >> > -- Petr Vobornik From pvoborni at redhat.com Fri Mar 6 14:03:01 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 06 Mar 2015 15:03:01 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0019] changes to the way host TGT is obtained using keytab In-Reply-To: <54F9AF6D.9070309@redhat.com> References: <54F997F7.2070400@redhat.com> <54F9A6B3.4010102@redhat.com> <54F9AF6D.9070309@redhat.com> Message-ID: <54F9B395.9000708@redhat.com> On 03/06/2015 02:45 PM, Martin Babinsky wrote: > On 03/06/2015 02:08 PM, Jan Cholasta wrote: >> Hi Martin, >> >> Dne 6.3.2015 v 13:05 Martin Babinsky napsal(a): >>> This series of patches for the master/4.1 branch attempts to implement >>> some of the Rob's and Petr Vobornik's ideas which originated from a >>> discussion on this list regarding my original patch fixing >>> https://fedorahosted.org/freeipa/ticket/4808. >>> >>> I suppose that these patches are just a first iteration, we may further >>> discuss if this is the right thing to do. >>> >>> Below is a quote from the original discussion just to get the context: >> >> 3) I think it would be nice to support ccache types other than FILE. > According to Petr Vobornik (see his reply), the user is limited mostly > to FILE ccache type, so I don't know if it will make sense to support > also other types. Actually I agree with Honza. I wanted to say that with your implementation user can't use anything else - are limited to FILE type. 
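Tying together Honza's points 2) and 5) with the FILE-type limitation discussed here, a rough Python sketch of a retrying kinit helper that takes the ccache name as-is might look as follows; it is modeled on the krbV snippets quoted in this thread, and the helper signature, the one-second pause between attempts and the final error are illustrative assumptions rather than the actual patch:

import os
import time
import krbV

def kinit_keytab(principal, keytab, ccache_name, attempts=1):
    # Sketch only: ccache_name is used as given (like KRB5CCNAME); a
    # 'ccache' file name is appended only when an existing directory is
    # passed, and no 'FILE:' prefix is forced on the caller.
    if os.path.isdir(ccache_name):
        ccache_name = os.path.join(ccache_name, 'ccache')

    krbcontext = krbV.default_context()
    ktab = krbV.Keytab(name=keytab, context=krbcontext)
    princ = krbV.Principal(name=principal, context=krbcontext)
    ccache = krbV.CCache(name=ccache_name, context=krbcontext,
                         primary_principal=princ)

    for attempt in range(attempts):
        try:
            ccache.init(princ)
            ccache.init_creds_keytab(keytab=ktab, principal=princ)
        except krbV.Krb5Error:
            time.sleep(1)       # illustrative pause before the next attempt
        else:
            return ccache_name  # full name, reusable e.g. for the LDAP bind
    raise RuntimeError('kinit failed after %d attempts' % attempts)
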
-- Petr Vobornik From lslebodn at redhat.com Fri Mar 6 14:05:05 2015 From: lslebodn at redhat.com (Lukas Slebodnik) Date: Fri, 6 Mar 2015 15:05:05 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <54F87448.4050703@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> Message-ID: <20150306140505.GA2345@mail.corp.redhat.com> On (05/03/15 16:20), Petr Vobornik wrote: >On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>On (05/03/15 08:54), Petr Vobornik wrote: >>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>ehlo, >>>> >>>>Please review attached patches and fix freeipa in fedora 22 ASAP. >>>> >>>>I think the most critical is 1st patch >>>> >>>>sh$ git grep "SSSDConfig" | grep import >>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>> >>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>>>but it was not explicitely required. >>>> >>>>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>>>packaging of freeipa. >>>> >>>>LS >>>> >>> >>>Fedora 22 was amended. >>> >>>Patch 1: ACK >>> >>>Patch 2: ACK >>> >>>Patch3: >>>the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >>>which already is required in adtrust package >>In sssd upstream we decided to rename package libsss_nss_idmap-python to >>python-libsss_nss_idmap according to new rpm python guidelines. >>The python3 version has alredy correct name. >> >>We will rename package in downstream with next major release (1.13). >>Of course it we will add "Provides: libsss_nss_idmap-python". >> >>We can push 3rd patch later or I can update 3rd patch. >>What do you prefer? >> >>Than you very much for review. >> >>LS >> > >Patch 3 should be updated to not forget the remaining change in ipa-python >package. > >It then should be updated downstream and master when 1.13 is released in >Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required >by master. Fixed. BTW Why ther is a pylint comment for some sssd modules I did not kave any pylint problems after removing comment. 
ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 And why are these modules optional (try except) ipalib/plugins/trust.py-31-try: ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 ipalib/plugins/trust.py-33- _murmur_installed = True ipalib/plugins/trust.py-34-except Exception, e: ipalib/plugins/trust.py-35- _murmur_installed = False ipalib/plugins/trust.py-36- ipalib/plugins/trust.py-37-try: ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 ipalib/plugins/trust.py-39- _nss_idmap_installed = True ipalib/plugins/trust.py-40-except Exception, e: ipalib/plugins/trust.py-41- _nss_idmap_installed = False LS -------------- next part -------------- >From 41523fd6ab9ea95644cac1a6cd20386a620a1df5 Mon Sep 17 00:00:00 2001 From: Lukas Slebodnik Date: Fri, 27 Feb 2015 20:40:06 +0100 Subject: [PATCH 1/3] SPEC: Explicitly requires python-sssdconfig Resolves: https://fedorahosted.org/freeipa/ticket/4929 --- freeipa.spec.in | 2 ++ 1 file changed, 2 insertions(+) diff --git a/freeipa.spec.in b/freeipa.spec.in index b186d9fdff31118ea4d929f024f4dc16a75b1d0b..9513f45c6c933a1109390393cb90d68e8c697dc7 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -122,6 +122,7 @@ Requires: mod_auth_kerb >= 5.4-16 Requires: mod_nss >= 1.0.8-26 Requires: python-ldap >= 2.4.15 Requires: python-krbV +Requires: python-sssdconfig Requires: acl Requires: python-pyasn1 Requires: memcached @@ -228,6 +229,7 @@ Requires: wget Requires: libcurl >= 7.21.7-2 Requires: xmlrpc-c >= 1.27.4 Requires: sssd >= 1.12.3 +Requires: python-sssdconfig Requires: certmonger >= 0.76.8 Requires: nss-tools Requires: bind-utils -- 2.3.1 -------------- next part -------------- >From 69c4f4bfc911990dc7ec650c2958aaf716c53ac5 Mon Sep 17 00:00:00 2001 From: Lukas Slebodnik Date: Fri, 27 Feb 2015 20:43:38 +0100 Subject: [PATCH 2/3] SPEC: Require python2 version of sssd bindings Python modules pysss and pysss_murmur was part of package sssd-common. 
Fedora 22 tries to get rid of python2 and therefore these modules were extracted from package sssd-common to separate packages python-sss and python-sss-murmur and python3 version of packages python3-sss python3-sss-murmur git grep "pysss" | grep import ipalib/plugins/trust.py: import pysss_murmur #pylint: disable=F0401 ipaserver/dcerpc.py:import pysss ipaserver/dcerpc.py is pacakged in freeipa-server-trust-ad palib/plugins/trust.py is packaged in freeipa-python Resolves: https://fedorahosted.org/freeipa/ticket/4929 --- freeipa.spec.in | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/freeipa.spec.in b/freeipa.spec.in index 9513f45c6c933a1109390393cb90d68e8c697dc7..7a1ff8b50ef1b462ad14fb2328149c3c2ed2fb38 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -195,6 +195,9 @@ Requires: samba >= %{samba_version} Requires: samba-winbind Requires: libsss_idmap Requires: libsss_nss_idmap-python +%if (0%{?fedora} >= 22) +Requires: python-sss +%endif # We use alternatives to divert winbind_krb5_locator.so plugin to libkrb5 # on the installes where server-trust-ad subpackage is installed because # IPA AD trusts cannot be used at the same time with the locator plugin @@ -288,6 +291,9 @@ Requires: python-qrcode-core >= 5.0.0 Requires: python-pyasn1 Requires: python-dateutil Requires: python-yubico +%if (0%{?fedora} >= 22) +Requires: python-sss-murmur +%endif Requires: wget Requires: dbus-python -- 2.3.1 -------------- next part -------------- >From e906786e5c66cbd8f85ea251aec806d25723ec0b Mon Sep 17 00:00:00 2001 From: Lukas Slebodnik Date: Fri, 27 Feb 2015 21:02:51 +0100 Subject: [PATCH 3/3] SPEC: Add missing requires for python-libsss_nss_idmap git grep "pysss_nss_idmap" | grep import ipalib/plugins/trust.py: import pysss_nss_idmap #pylint: disable=F0401 ipaserver/dcerpc.py:import pysss_nss_idmap ipaserver/dcerpc.py is packaged in freeipa-server-trust-ad palib/plugins/trust.py is packaged in freeipa-python Resolves: https://fedorahosted.org/freeipa/ticket/4929 --- freeipa.spec.in | 1 + 1 file changed, 1 insertion(+) diff --git a/freeipa.spec.in b/freeipa.spec.in index 7a1ff8b50ef1b462ad14fb2328149c3c2ed2fb38..2344f6a435f36a87dcd454ad2fb4a191d316ef80 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -294,6 +294,7 @@ Requires: python-yubico %if (0%{?fedora} >= 22) Requires: python-sss-murmur %endif +Requires: libsss_nss_idmap-python Requires: wget Requires: dbus-python -- 2.3.1 From abokovoy at redhat.com Fri Mar 6 14:13:18 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 6 Mar 2015 16:13:18 +0200 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150306140505.GA2345@mail.corp.redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> Message-ID: <20150306141318.GX25455@redhat.com> On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >On (05/03/15 16:20), Petr Vobornik wrote: >>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>ehlo, >>>>> >>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. 
>>>>> >>>>>I think the most critical is 1st patch >>>>> >>>>>sh$ git grep "SSSDConfig" | grep import >>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>> >>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>>>>but it was not explicitely required. >>>>> >>>>>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>>>>packaging of freeipa. >>>>> >>>>>LS >>>>> >>>> >>>>Fedora 22 was amended. >>>> >>>>Patch 1: ACK >>>> >>>>Patch 2: ACK >>>> >>>>Patch3: >>>>the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >>>>which already is required in adtrust package >>>In sssd upstream we decided to rename package libsss_nss_idmap-python to >>>python-libsss_nss_idmap according to new rpm python guidelines. >>>The python3 version has alredy correct name. >>> >>>We will rename package in downstream with next major release (1.13). >>>Of course it we will add "Provides: libsss_nss_idmap-python". >>> >>>We can push 3rd patch later or I can update 3rd patch. >>>What do you prefer? >>> >>>Than you very much for review. >>> >>>LS >>> >> >>Patch 3 should be updated to not forget the remaining change in ipa-python >>package. >> >>It then should be updated downstream and master when 1.13 is released in >>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required >>by master. > >Fixed. > >BTW Why ther is a pylint comment for some sssd modules >I did not kave any pylint problems after removing comment. > >ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 > > >And why are these modules optional (try except) Because they are needed to properly load in the case trust subpackages are not installed, to generate proper messages to users who will try these commands, like 'ipa trust-add' while the infrastructure is not in place. pylint is dumb for such cases. -- / Alexander Bokovoy From lslebodn at redhat.com Fri Mar 6 14:20:05 2015 From: lslebodn at redhat.com (Lukas Slebodnik) Date: Fri, 6 Mar 2015 15:20:05 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150306141318.GX25455@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> Message-ID: <20150306142005.GB2345@mail.corp.redhat.com> On (06/03/15 16:13), Alexander Bokovoy wrote: >On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>On (05/03/15 16:20), Petr Vobornik wrote: >>>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>ehlo, >>>>>> >>>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>> >>>>>>I think the most critical is 1st patch >>>>>> >>>>>>sh$ git grep "SSSDConfig" | grep import >>>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>> >>>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>>>>>but it was not explicitely required. 
>>>>>> >>>>>>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>>>>>packaging of freeipa. >>>>>> >>>>>>LS >>>>>> >>>>> >>>>>Fedora 22 was amended. >>>>> >>>>>Patch 1: ACK >>>>> >>>>>Patch 2: ACK >>>>> >>>>>Patch3: >>>>>the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >>>>>which already is required in adtrust package >>>>In sssd upstream we decided to rename package libsss_nss_idmap-python to >>>>python-libsss_nss_idmap according to new rpm python guidelines. >>>>The python3 version has alredy correct name. >>>> >>>>We will rename package in downstream with next major release (1.13). >>>>Of course it we will add "Provides: libsss_nss_idmap-python". >>>> >>>>We can push 3rd patch later or I can update 3rd patch. >>>>What do you prefer? >>>> >>>>Than you very much for review. >>>> >>>>LS >>>> >>> >>>Patch 3 should be updated to not forget the remaining change in ipa-python >>>package. >>> >>>It then should be updated downstream and master when 1.13 is released in >>>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required >>>by master. >> >>Fixed. >> >>BTW Why ther is a pylint comment for some sssd modules >>I did not kave any pylint problems after removing comment. >> >>ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >>ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 >> >> >>And why are these modules optional (try except) >Because they are needed to properly load in the case trust subpackages >are not installed, to generate proper messages to users who will try >these commands, like 'ipa trust-add' while the infrastructure is not in >place. > >pylint is dumb for such cases. > Yes but my patches added requires to all necessary packages. How can I get pylint warning? I modified spec file and make-lint was called in "%check" phase. and I did not have any pylint problems in mock. LS From abokovoy at redhat.com Fri Mar 6 14:24:37 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 6 Mar 2015 16:24:37 +0200 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150306142005.GB2345@mail.corp.redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <20150306142005.GB2345@mail.corp.redhat.com> Message-ID: <20150306142437.GY25455@redhat.com> On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >On (06/03/15 16:13), Alexander Bokovoy wrote: >>On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>On (05/03/15 16:20), Petr Vobornik wrote: >>>>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>ehlo, >>>>>>> >>>>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>>> >>>>>>>I think the most critical is 1st patch >>>>>>> >>>>>>>sh$ git grep "SSSDConfig" | grep import >>>>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>> >>>>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>>>>>>but it was not explicitely required. >>>>>>> >>>>>>>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>>>>>>packaging of freeipa. 
>>>>>>> >>>>>>>LS >>>>>>> >>>>>> >>>>>>Fedora 22 was amended. >>>>>> >>>>>>Patch 1: ACK >>>>>> >>>>>>Patch 2: ACK >>>>>> >>>>>>Patch3: >>>>>>the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >>>>>>which already is required in adtrust package >>>>>In sssd upstream we decided to rename package libsss_nss_idmap-python to >>>>>python-libsss_nss_idmap according to new rpm python guidelines. >>>>>The python3 version has alredy correct name. >>>>> >>>>>We will rename package in downstream with next major release (1.13). >>>>>Of course it we will add "Provides: libsss_nss_idmap-python". >>>>> >>>>>We can push 3rd patch later or I can update 3rd patch. >>>>>What do you prefer? >>>>> >>>>>Than you very much for review. >>>>> >>>>>LS >>>>> >>>> >>>>Patch 3 should be updated to not forget the remaining change in ipa-python >>>>package. >>>> >>>>It then should be updated downstream and master when 1.13 is released in >>>>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required >>>>by master. >>> >>>Fixed. >>> >>>BTW Why ther is a pylint comment for some sssd modules >>>I did not kave any pylint problems after removing comment. >>> >>>ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >>>ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 >>> >>> >>>And why are these modules optional (try except) >>Because they are needed to properly load in the case trust subpackages >>are not installed, to generate proper messages to users who will try >>these commands, like 'ipa trust-add' while the infrastructure is not in >>place. >> >>pylint is dumb for such cases. >> >Yes but my patches added requires to all necessary packages. > >How can I get pylint warning? >I modified spec file and make-lint was called in "%check" phase. >and I did not have any pylint problems in mock. If you don't have those packages installed, it will complain. -- / Alexander Bokovoy From corey.kovacs at gmail.com Fri Mar 6 14:26:40 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Fri, 6 Mar 2015 07:26:40 -0700 Subject: [Freeipa-devel] UI plugins In-Reply-To: <54F9B275.4040400@redhat.com> References: <54F986A7.8000702@redhat.com> <54F9B275.4040400@redhat.com> Message-ID: Sounds great. Just wanted to know if I was going to be reinventing my own wheel again. Thanks again. Corey On Mar 6, 2015 6:58 AM, "Petr Vobornik" wrote: > On 03/06/2015 02:31 PM, Corey Kovacs wrote: > >> I almost forgot to ask. Since you don't point it out, I am assuming (yeah >> I >> know) the plugin code methods have not changed from 3.3 to 4.1? That is to >> day I should be able to use the same techniques? >> > > Same techniques should be applicable, there were no major improvements in > plugin support in 4.1. But Web UI doesn't have any stable API so little > things could be different. > > Basically plugins interact with Web UI's core code which is not > perfect(easy to break things) but it's powerful and better than nothing. > > >> Thanks again! >> >> Corey >> >> On Fri, Mar 6, 2015 at 3:51 AM, Petr Vobornik >> wrote: >> >> On 03/06/2015 03:54 AM, Corey Kovacs wrote: >>> >>> After reading the extending freeipa training document I was able >>>> successfully add us to meet attributes and add/modify them using the >>>> cli >>>> which was pretty cool. Now that I got the cli out of the way I want to >>>> add >>>> the fields to the ui. 
Because of the similarities between what I want to >>>> do >>>> and the example given in the docs I just followed along and changed >>>> variables where it made sense to do so. I cannot however get the new >>>> field >>>> to show up. The Apache logs don't show any errors but they do show the >>>> plugin being read as the page (user details) is loaded. After searching >>>> around I found a document which attempts to explain the process but it >>>> assumes some knowledge held by the reader which I don't possess. >>>> >>>> It looks like I supposed to create some sort of index loader which >>>> loads >>>> all of the modules which are part of the plugin. I can't seem to find >>>> any >>>> good documents telling the whole process or least non that I can make >>>> sense >>>> of. >>>> >>>> >>>> Plugin could be just one file, if the plugin name is "myplugin" it has >>> to >>> be: >>> >>> /usr/share/ipa/ui/js/plugins/myplugin/myplugin.js >>> >>> IPA Web UI uses AMD module format: >>> - https://dojotoolkit.org/documentation/tutorials/1.10/modules/ >>> - http://requirejs.org/docs/whyamd.html#amd >>> >>> I have some example plugins here: >>> - https://pvoborni.fedorapeople.org/plugins/ >>> >>> For your case I would look at: >>> - https://pvoborni.fedorapeople.org/plugins/employeenumber/ >>> employeenumber.js >>> >>> Web UI basics and introspection: >>> (this section is little redundant to your question, but it might help >>> others) >>> >>> FreeIPA Web UI is semi declarative. That means that part of the pages >>> could be thrown away, modified or extended by plugins before the page is >>> constructed. To do that, one has to modify specification object of >>> particular module. >>> >>> Here, I would assume that you don't have UI sources(before minification), >>> so all introspection will be done in running web ui. Otherwise it's >>> easier >>> to inspect sources in install/ui/src/freeipa, i.e. checkout ipa-3-3 >>> branch >>> from upstream git. >>> >>> List of modules could be find(after authentication) in browse developer >>> tools console in object: >>> >>> window.require.cache >>> or (depends on IPA version) >>> window.dojo.config.cache >>> >>> One can then obtain the module by: >>> >>> var user_module = require('freeipa/user') >>> >>> specification object is usually in 'entity_spec' or '$enity_name_spec' >>> property. >>> >>> user_module.entity_spec >>> >>> UI is internally organized into entities. Entity corresponds to ipalib >>> object. Entity, e.g. user, usually have multiple pages called facets. To >>> get a list of facets: >>> user_module.entity_spec.facets >>> >>> The one with fields has usually a name "details". For users it's the >>> second facet: >>> var details_facet = user_module.entity_spec.facets[1] >>> >>> IF i simplify it a bit, we can say that fields on a page are organized in >>> sections: >>> details_facet.sections >>> >>> Section has fields. A field usually represents an editable element on >>> page, e.g. a textbox with label, validators and other stuff. >>> >>> Example of inspection: >>> >>> https://pvoborni.fedorapeople.org/images/inspect_ui.png >>> >>> Your goal is to pick a section, or create a new one an add a field there. >>> To know what to define, just examine a definition of already existing >>> field >>> and just amend name, label, ... >>> >>> >>> It would help also to understand how to debug such a thing. >>> >>>> >>>> >>>> In browser developer tools. 
There is javascript console (use in the >>> text >>> above), javascript debugger, network tab for inspecting loaded files. >>> >>> For developing RHEL 7 plugin, I would suggest you to install test >>> instance >>> of FreeIPA 3.3 and use "Debugging with source codes" method described in: >>> >>> - https://pvoborni.fedorapeople.org/doc/#!/guide/Debugging >>> >>> Some tips: >>> - if you get a weird dojo loader messegate, you probably have a syntax >>> error in the plugin or you don't return a plugin object or the plugin >>> could >>> not be loaded (bad name) >>> - it's good to use some JavaScript liner - jsl or jshint to catch syntax >>> errors early. >>> >>> >>> I running version 3.3 on rhel 7. Any help or pointers to more >>> >>>> documentation would be greatly appreciated. >>>> >>>> >>>> -- >>> Petr Vobornik >>> >>> >> > > -- > Petr Vobornik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbasti at redhat.com Fri Mar 6 15:50:17 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 06 Mar 2015 16:50:17 +0100 Subject: [Freeipa-devel] [PATCHES 0204-0207] Server upgrade: Make LDAP data upgrade deterministic Message-ID: <54F9CCB9.4070600@redhat.com> The patchset ensure, the upgrade order will respect ordering of entries in *.update files. Required for: https://fedorahosted.org/freeipa/ticket/4904 Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 Required patch mbasti-0203 Patches attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0204-Server-Upgrade-do-not-sort-updates-by-DN.patch Type: text/x-patch Size: 1576 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0205-Server-Upgrade-Upgrade-one-file-per-time.patch Type: text/x-patch Size: 4115 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0206-Server-Upgrade-Set-modified-to-false-before-each-upd.patch Type: text/x-patch Size: 1202 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0207-Server-Upgrade-Update-entries-in-order-specified-in-.patch Type: text/x-patch Size: 15178 bytes Desc: not available URL: From mbasti at redhat.com Fri Mar 6 15:52:15 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 06 Mar 2015 16:52:15 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade Message-ID: <54F9CD2F.3050900@redhat.com> This upgrade step is not used anymore. Required by: https://fedorahosted.org/freeipa/ticket/4904 Patch attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0203-Server-Upgrade-Remove-unused-PRE_SCHEMA_UPDATE.patch Type: text/x-patch Size: 7330 bytes Desc: not available URL: From mbasti at redhat.com Fri Mar 6 17:00:02 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 06 Mar 2015 18:00:02 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins Message-ID: <54F9DD12.2050008@redhat.com> Upgrade plugins which modify LDAP data directly should not be executed in --test mode. This patch is a workaround, to ensure update with --test option will not modify any LDAP data. https://fedorahosted.org/freeipa/ticket/3448 Patch attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0208-Server-Upgrade-respect-test-option-in-plugins.patch Type: text/x-patch Size: 6662 bytes Desc: not available URL: From tbordaz at redhat.com Fri Mar 6 18:30:27 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Fri, 06 Mar 2015 19:30:27 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <54E5FF07.1080809@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> Message-ID: <54F9F243.5090003@redhat.com> On 02/19/2015 04:19 PM, Martin Basti wrote: > On 19/02/15 13:01, thierry bordaz wrote: >> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>> Hi, >>> >>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>> >>>>>>>> It creates a stageuser plugin with a first function stageuser-add. >>>>>>>> Stage >>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>> >>>>>>>> Thanks >>>>>>>> thierry >>>>>>> >>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; instead >>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>> >>>>>>> The stageuser help (docstring) is copied from the user plugin, and >>>>>>> discusses things like account lockout and disabling users. It >>>>>>> should >>>>>>> rather explain what stageuser itself does. (And I don't very much >>>>>>> like the Note about the interface being badly designed...) >>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>> user" >>>>>>> or "stageuser". >>>>>>> >>>>>>> A lot of the code is copied and pasted over from the users plugin. >>>>>>> Don't do that. Either import things (e.g. validate_nsaccountlock) >>>>>>> from the users plugin, or move the reused code into a shared >>>>>>> module. >>>>>>> >>>>>>> For the `user` object, since so much is the same, it might be >>>>>>> best to >>>>>>> create a common base class for user and stageuser; and similarly >>>>>>> for >>>>>>> the Command plugins. >>>>>>> >>>>>>> The default permissions need different names, and you don't need >>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>> script. >>>>>>> >>>>>> Hello, >>>>>> >>>>>> This modified patch is mainly moving common base class into a >>>>>> new >>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>> accounts. >>>>>> It also creates a better description of what are stage user, how >>>>>> to add a new stage user, updates ACI.txt and separate >>>>>> active/stage >>>>>> user managed permissions. >>>>>> >>>>>> thanks >>>>>> thierry >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Freeipa-devel mailing list >>>>>> Freeipa-devel at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>> >>>>> >>>>> Thanks David for the reviews. 
Here the last patches >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Freeipa-devel mailing list >>>>> Freeipa-devel at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>> >>>> >>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>> lines so >>>> I'm attaching fixed version (and unchanged patch >>>> freeipa-tbordaz-0003-3 >>>> to keep them together). >>>> >>>> The ULC feature is still WIP but these patches look good to me and >>>> don't >>>> break anything as far as I tested. >>>> We should push them now to avoid further rebases. Thierry can then >>>> prepare other patches delivering the rest of ULC functionality. >>> >>> Few comments from just reading the patches: >>> >>> 1) I would name the base class "baseuser", "account" does not >>> necessarily mean user account. >>> >>> 2) This is very wrong: >>> >>> -class user_add(LDAPCreate): >>> +class user_add(user, LDAPCreate): >>> >>> You are creating a plugin which is both an object and an command. >>> >>> 3) This is purely subjective, but I don't like the name >>> "deleteuser", as it has a verb in it. We usually don't do that and >>> IMHO we shouldn't do that. >>> >>> Honza >>> >> >> Thank you for the review. I am attaching the updates patches >> >> >> >> >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel > Hello, > I'm getting errors during make rpms: > > if [ "" != "yes" ]; then \ > ./makeapi --validate; \ > ./makeaci --validate; \ > fi > > /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" > doc is not internationalized > /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" > doc is not internationalized > /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" > doc is not internationalized > 0 commands without doc, 3 commands whose doc is not i18n > Command baseuser_add in ipalib, not in API > Command baseuser_find in ipalib, not in API > Command baseuser_mod in ipalib, not in API > > There are one or more new commands defined. > Update API.txt and increment the minor version in VERSION. > > There are one or more documentation problems. > You must fix these before preceeding > > Issues probably caused by this: > 1) > You should not use the register decorator, if this class is just for > inheritance > @register() > class baseuser_add(LDAPCreate): > > @register() > class baseuser_mod(LDAPUpdate): > > @register() > class baseuser_find(LDAPSearch): > > see dns.py plugin and "DNSZoneBase" and "dnszone" classes > > 2) > there might be an issue with > @register() > class baseuser(LDAPObject): > > the register decorator should not be there, I was warned by Petr^3 to > not use permission in parent class. The same permission should be > specified only in one place (for example user class), (otherwise they > will be generated twice??) I don't know more details about it. > > -- > Martin Basti Hello Martin, Jan, Thanks for your review. I changed the patch so that it does not register baseuser_*. Also increase the minor version because of new command. Finally I moved the managed_permission definition out of the parent baseuser class. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 0001-User-Life-Cycle-Exclude-subtree-for-ipaUniqueID-gene.patch Type: text/x-patch Size: 2585 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0002-User-life-cycle-stageuser-add-verb.patch Type: text/x-patch Size: 60478 bytes Desc: not available URL: From slaz at seznam.cz Mon Mar 9 07:00:53 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmlzbGF2IEzDoXpuacSNa2E=?=) Date: Mon, 09 Mar 2015 08:00:53 +0100 Subject: [Freeipa-devel] Time-based account policies Message-ID: <54FD4525.6000000@seznam.cz> Hi! My name is Stanislav Laznicka and I am a student at Brno University of Technology. As a part of my Master's thesis, I am supposed to design and implement time-based account policies extensions for FreeIPA and SSSD. While going through the code, I noticed time-based access control was implemented in the past, but it was pulled. I would very much be interested to know why that was and what were the problems with that implementation (so that I don't repeat those again). The solution to the time-based account policies as I see it can be divided into two possible directions - having the time of the policies stored as a UTC time (which is what both Active Directory and 389 Directory Server do), or it can be just a time record that would be compared to the local time of each client. Each of the approaches above has its pros and cons. Basically, local time approach is much more flexible when it comes to multiple time zones, however it does not allow the absolute control of access as the UTC time based approach would (or at least, it does not allow it without some further additions). I would therefore also be interested to hear from you about which of these approaches corresponds more to the common use-case of the FreeIPA system. Cheers, Standa L. From dkupka at redhat.com Mon Mar 9 10:56:14 2015 From: dkupka at redhat.com (David Kupka) Date: Mon, 09 Mar 2015 11:56:14 +0100 Subject: [Freeipa-devel] [PATCH 0199] Remove unused disable-betxn.ldif file In-Reply-To: <54EDD211.4080606@redhat.com> References: <54EDD211.4080606@redhat.com> Message-ID: <54FD7C4E.7050205@redhat.com> On 02/25/2015 02:45 PM, Martin Basti wrote: > Hello, > > the file 'disable-betxn.ldif' is not used in code in IPA master branch. > There is 10-enable-betxn.update which is used. > > If I'm right we can remove it. Patch attached. > > Please correct me if the file is needed. > Martin^2 > > > > _______________________________________________ > Freeipa-devel mailing list > Freeipa-devel at redhat.com > https://www.redhat.com/mailman/listinfo/freeipa-devel > IIUC ipa stopped using this file in commit f1f1b4e7f2e9c1838ad7ec76002b78ca0c2a3c46 but it was not removed. Works for me, ACK. -- David Kupka From tbabej at redhat.com Mon Mar 9 11:26:33 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 09 Mar 2015 12:26:33 +0100 Subject: [Freeipa-devel] [PATCHES 306-316] Automated migration tool from Winsync Message-ID: <54FD8369.10803@redhat.com> Hi, this couple of patches provides a initial implementation of the winsync migration tool: https://fedorahosted.org/freeipa/ticket/4524 Some parts could use some polishing, but this is a sound foundation. Tomas -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0306-winsync-migrate-Add-initial-plumbing.patch Type: text/x-patch Size: 5584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-tbabej-0307-winsync-migrate-Add-a-way-to-find-all-winsync-users.patch Type: text/x-patch Size: 2161 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0308-migrate-winsync-Create-user-ID-overrides-in-place-of.patch Type: text/x-patch Size: 2405 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0309-migrate-winsync-Add-option-validation-and-handling.patch Type: text/x-patch Size: 2479 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0310-winsync-migrate-Move-the-api-initalization-and-LDAP-.patch Type: text/x-patch Size: 1929 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0311-dcerpc-Change-logging-level-for-debug-information.patch Type: text/x-patch Size: 1300 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0312-dcerpc-Add-debugging-message-to-failing-kinit-as-htt.patch Type: text/x-patch Size: 867 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0313-winsync-migrate-Require-root-privileges.patch Type: text/x-patch Size: 965 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0314-idviews-Do-not-abort-the-find-show-commands-on-conve.patch Type: text/x-patch Size: 1740 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0315-winsync-migrate-Require-explicit-specification-of-th.patch Type: text/x-patch Size: 3087 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0316-winsync-migrate-Delete-winsync-agreement-prior-to-mi.patch Type: text/x-patch Size: 2808 bytes Desc: not available URL: From mbabinsk at redhat.com Mon Mar 9 12:06:07 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 09 Mar 2015 13:06:07 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <54F997F7.2070400@redhat.com> References: <54F997F7.2070400@redhat.com> Message-ID: <54FD8CAF.7030609@redhat.com> On 03/06/2015 01:05 PM, Martin Babinsky wrote: > This series of patches for the master/4.1 branch attempts to implement > some of the Rob's and Petr Vobornik's ideas which originated from a > discussion on this list regarding my original patch fixing > https://fedorahosted.org/freeipa/ticket/4808. > > I suppose that these patches are just a first iteration, we may further > discuss if this is the right thing to do. > > Below is a quote from the original discussion just to get the context: > > > The next iteration of patches is attached below. Thanks to jcholast and pvoborni for the comments and insights. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0015-2-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch Type: text/x-patch Size: 4114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbabinsk-0016-2-ipa-client-install-try-to-get-host-TGT-several-times.patch Type: text/x-patch Size: 8314 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch Type: text/x-patch Size: 11358 bytes Desc: not available URL: From mbasti at redhat.com Mon Mar 9 12:52:04 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 09 Mar 2015 13:52:04 +0100 Subject: [Freeipa-devel] [PATCH 0209] Fix logically dead code in ipap11helper module Message-ID: <54FD9774.9030209@redhat.com> Patch attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0209-Fix-dead-code-in-ipap11helper-module.patch Type: text/x-patch Size: 1890 bytes Desc: not available URL: From npmccallum at redhat.com Mon Mar 9 13:02:56 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Mon, 09 Mar 2015 09:02:56 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FD4525.6000000@seznam.cz> References: <54FD4525.6000000@seznam.cz> Message-ID: <1425906176.2658.1.camel@redhat.com> On Mon, 2015-03-09 at 08:00 +0100, Stanislav L?zni?ka wrote: > Hi! > > My name is Stanislav Laznicka and I am a student at Brno University > of Technology. As a part of my Master's thesis, I am supposed to > design and > implement time-based account policies extensions for FreeIPA and > SSSD. > > While going through the code, I noticed time-based access control > was implemented in the past, but it was pulled. I would very much be > interested to know why that was and what were the problems with that > implementation (so that I don't repeat those again). > > The solution to the time-based account policies as I see it can be > divided into two possible directions - having the time of the > policies stored as a UTC time (which is what both Active Directory > and 389 Directory Server do), or it can be just a time record that > would be compared to the local time of each client. > > Each of the approaches above has its pros and cons. Basically, local > time approach is much more flexible when it comes to multiple time > zones, however it does not allow the absolute control of access as > the UTC time based approach would (or at least, it does not allow it > without > some further additions). I would therefore also be interested to > hear from you about which of these approaches corresponds more to > the common use-case of the FreeIPA system. I would be deeply worried about the unexpected security issues that could arise if local time was used by default. Nathaniel From tbabej at redhat.com Mon Mar 9 13:46:03 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 09 Mar 2015 14:46:03 +0100 Subject: [Freeipa-devel] [PATCH 0199] Remove unused disable-betxn.ldif file In-Reply-To: <54FD7C4E.7050205@redhat.com> References: <54EDD211.4080606@redhat.com> <54FD7C4E.7050205@redhat.com> Message-ID: <54FDA41B.3000806@redhat.com> On 03/09/2015 11:56 AM, David Kupka wrote: > On 02/25/2015 02:45 PM, Martin Basti wrote: >> Hello, >> >> the file 'disable-betxn.ldif' is not used in code in IPA master branch. >> There is 10-enable-betxn.update which is used. >> >> If I'm right we can remove it. Patch attached. >> >> Please correct me if the file is needed. 
>> Martin^2 >> >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel >> > IIUC ipa stopped using this file in commit > f1f1b4e7f2e9c1838ad7ec76002b78ca0c2a3c46 but it was not removed. > > Works for me, ACK. > Pushed to master: a695f33989c3e6159a12c9341e42c042847cd57b From tbabej at redhat.com Mon Mar 9 13:49:14 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 09 Mar 2015 14:49:14 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <20150306120829.GV25455@redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> <20150305081636.GX3271@p.redhat.com> <20150305103353.GA3271@p.redhat.com> <20150306120829.GV25455@redhat.com> Message-ID: <54FDA4DA.4040409@redhat.com> On 03/06/2015 01:08 PM, Alexander Bokovoy wrote: > On Thu, 05 Mar 2015, Sumit Bose wrote: >> On Thu, Mar 05, 2015 at 09:16:36AM +0100, Sumit Bose wrote: >>> On Wed, Mar 04, 2015 at 06:14:53PM +0100, Sumit Bose wrote: >>> > On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: >>> > > On Mon, 02 Mar 2015, Sumit Bose wrote: >>> > > >diff --git >>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>> > > >index >>> 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 >>> 100644 >>> > > >--- >>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>> > > >+++ >>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>> > > >@@ -49,6 +49,220 @@ >>> > > > >>> > > >#define MAX(a,b) (((a)>(b))?(a):(b)) >>> > > >#define SSSD_DOMAIN_SEPARATOR '@' >>> > > >+#define MAX_BUF (1024*1024*1024) >>> > > >+ >>> > > >+ >>> > > >+ >>> > > >+static int get_buffer(size_t *_buf_len, char **_buf) >>> > > >+{ >>> > > >+ long pw_max; >>> > > >+ long gr_max; >>> > > >+ size_t buf_len; >>> > > >+ char *buf; >>> > > >+ >>> > > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); >>> > > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); >>> > > >+ >>> > > >+ if (pw_max == -1 && gr_max == -1) { >>> > > >+ buf_len = 16384; >>> > > >+ } else { >>> > > >+ buf_len = MAX(pw_max, gr_max); >>> > > >+ } >>> > > Here you'd get buf_len equal to 1024 by default on Linux which >>> is too >>> > > low for our use case. I think it would be beneficial to add one >>> more >>> > > MAX(buf_len, 16384): >>> > > - if (pw_max == -1 && gr_max == -1) { >>> > > - buf_len = 16384; >>> > > - } else { >>> > > - buf_len = MAX(pw_max, gr_max); >>> > > - } >>> > > + buf_len = MAX(16384, MAX(pw_max, gr_max)); >>> > > >>> > > with MAX(MAX(),..) you also get rid of if() statement as resulting >>> > > rvalue would be guaranteed to be positive. >>> > >>> > done >>> > >>> > > >>> > > The rest is going along the common lines but would it be better to >>> > > allocate memory once per LDAP client request rather than always >>> ask for >>> > > it per each NSS call? You can guarantee a sequential use of the >>> buffer >>> > > within the LDAP client request processing so there is no problem >>> with >>> > > locks but having this memory re-allocated on subsequent >>> > > getpwnam()/getpwuid()/... calls within the same request >>> processing seems >>> > > suboptimal to me. >>> > >>> > ok, makes sense, I moved get_buffer() back to the callers. >>> > >>> > New version attached. 
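(For readers skimming the C review above: the buffer-sizing rule under discussion -- start at 16 KiB and only go higher if sysconf() reports a larger hint, because glibc may report -1 or a value as small as 1024 -- can be mocked up in a few lines. The snippet below is Python and purely illustrative; the real implementation is the C get_buffer() in the extdom plugin.)

    import os

    def initial_nss_buffer_size(minimum=16384):
        # Mirror MAX(16384, MAX(pw_max, gr_max)) from the review above:
        # never start below `minimum`, otherwise take the larger of the
        # two sysconf hints, which may be -1 or as small as 1024.
        hints = []
        for name in ("SC_GETPW_R_SIZE_MAX", "SC_GETGR_R_SIZE_MAX"):
            try:
                hints.append(os.sysconf(name))
            except (ValueError, OSError):
                hints.append(-1)
        return max(minimum, *hints)

    print(initial_nss_buffer_size())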
>>> >>> Please ignore this patch, I will send a revised version soon. >> >> Please find attached a revised version which properly reports missing >> objects and out-of-memory cases and makes sure buf and buf_len are in >> sync. > ACK to patches 0135 and 0136. This concludes the review, thanks! > Pushed to master: c15a407cbfaed163a933ab137eed16387efe25d2 From corey.kovacs at gmail.com Mon Mar 9 13:56:19 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Mon, 9 Mar 2015 07:56:19 -0600 Subject: [Freeipa-devel] UI plugins In-Reply-To: References: <54F986A7.8000702@redhat.com> <54F9B275.4040400@redhat.com> Message-ID: Petr, Using the information you provided I was able to get things working in a general sense. I suspect the answer to my next question is going to be "use the source luke..." which is cool but I was hoping documentation might be available which describes adding sections to facets and controlling the order of additional fields. I have added about a dozen and they don't sort the way I'd like them to. Thanks again for your help last week. Corey On Mar 6, 2015 7:26 AM, "Corey Kovacs" wrote: > Sounds great. Just wanted to know if I was going to be reinventing my own > wheel again. > > Thanks again. > > Corey > On Mar 6, 2015 6:58 AM, "Petr Vobornik" wrote: > >> On 03/06/2015 02:31 PM, Corey Kovacs wrote: >> >>> I almost forgot to ask. Since you don't point it out, I am assuming >>> (yeah I >>> know) the plugin code methods have not changed from 3.3 to 4.1? That is >>> to >>> day I should be able to use the same techniques? >>> >> >> Same techniques should be applicable, there were no major improvements in >> plugin support in 4.1. But Web UI doesn't have any stable API so little >> things could be different. >> >> Basically plugins interact with Web UI's core code which is not >> perfect(easy to break things) but it's powerful and better than nothing. >> >> >>> Thanks again! >>> >>> Corey >>> >>> On Fri, Mar 6, 2015 at 3:51 AM, Petr Vobornik >>> wrote: >>> >>> On 03/06/2015 03:54 AM, Corey Kovacs wrote: >>>> >>>> After reading the extending freeipa training document I was able >>>>> successfully add us to meet attributes and add/modify them using the >>>>> cli >>>>> which was pretty cool. Now that I got the cli out of the way I want to >>>>> add >>>>> the fields to the ui. Because of the similarities between what I want >>>>> to >>>>> do >>>>> and the example given in the docs I just followed along and changed >>>>> variables where it made sense to do so. I cannot however get the new >>>>> field >>>>> to show up. The Apache logs don't show any errors but they do show the >>>>> plugin being read as the page (user details) is loaded. After searching >>>>> around I found a document which attempts to explain the process but it >>>>> assumes some knowledge held by the reader which I don't possess. >>>>> >>>>> It looks like I supposed to create some sort of index loader which >>>>> loads >>>>> all of the modules which are part of the plugin. I can't seem to find >>>>> any >>>>> good documents telling the whole process or least non that I can make >>>>> sense >>>>> of. 
>>>>> >>>>> >>>>> Plugin could be just one file, if the plugin name is "myplugin" it >>>> has to >>>> be: >>>> >>>> /usr/share/ipa/ui/js/plugins/myplugin/myplugin.js >>>> >>>> IPA Web UI uses AMD module format: >>>> - https://dojotoolkit.org/documentation/tutorials/1.10/modules/ >>>> - http://requirejs.org/docs/whyamd.html#amd >>>> >>>> I have some example plugins here: >>>> - https://pvoborni.fedorapeople.org/plugins/ >>>> >>>> For your case I would look at: >>>> - https://pvoborni.fedorapeople.org/plugins/employeenumber/ >>>> employeenumber.js >>>> >>>> Web UI basics and introspection: >>>> (this section is little redundant to your question, but it might help >>>> others) >>>> >>>> FreeIPA Web UI is semi declarative. That means that part of the pages >>>> could be thrown away, modified or extended by plugins before the page is >>>> constructed. To do that, one has to modify specification object of >>>> particular module. >>>> >>>> Here, I would assume that you don't have UI sources(before >>>> minification), >>>> so all introspection will be done in running web ui. Otherwise it's >>>> easier >>>> to inspect sources in install/ui/src/freeipa, i.e. checkout ipa-3-3 >>>> branch >>>> from upstream git. >>>> >>>> List of modules could be find(after authentication) in browse developer >>>> tools console in object: >>>> >>>> window.require.cache >>>> or (depends on IPA version) >>>> window.dojo.config.cache >>>> >>>> One can then obtain the module by: >>>> >>>> var user_module = require('freeipa/user') >>>> >>>> specification object is usually in 'entity_spec' or '$enity_name_spec' >>>> property. >>>> >>>> user_module.entity_spec >>>> >>>> UI is internally organized into entities. Entity corresponds to ipalib >>>> object. Entity, e.g. user, usually have multiple pages called facets. To >>>> get a list of facets: >>>> user_module.entity_spec.facets >>>> >>>> The one with fields has usually a name "details". For users it's the >>>> second facet: >>>> var details_facet = user_module.entity_spec.facets[1] >>>> >>>> IF i simplify it a bit, we can say that fields on a page are organized >>>> in >>>> sections: >>>> details_facet.sections >>>> >>>> Section has fields. A field usually represents an editable element on >>>> page, e.g. a textbox with label, validators and other stuff. >>>> >>>> Example of inspection: >>>> >>>> https://pvoborni.fedorapeople.org/images/inspect_ui.png >>>> >>>> Your goal is to pick a section, or create a new one an add a field >>>> there. >>>> To know what to define, just examine a definition of already existing >>>> field >>>> and just amend name, label, ... >>>> >>>> >>>> It would help also to understand how to debug such a thing. >>>> >>>>> >>>>> >>>>> In browser developer tools. There is javascript console (use in the >>>> text >>>> above), javascript debugger, network tab for inspecting loaded files. >>>> >>>> For developing RHEL 7 plugin, I would suggest you to install test >>>> instance >>>> of FreeIPA 3.3 and use "Debugging with source codes" method described >>>> in: >>>> >>>> - https://pvoborni.fedorapeople.org/doc/#!/guide/Debugging >>>> >>>> Some tips: >>>> - if you get a weird dojo loader messegate, you probably have a syntax >>>> error in the plugin or you don't return a plugin object or the plugin >>>> could >>>> not be loaded (bad name) >>>> - it's good to use some JavaScript liner - jsl or jshint to catch syntax >>>> errors early. >>>> >>>> >>>> I running version 3.3 on rhel 7. 
Any help or pointers to more >>>> >>>>> documentation would be greatly appreciated. >>>>> >>>>> >>>>> -- >>>> Petr Vobornik >>>> >>>> >>> >> >> -- >> Petr Vobornik >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbabej at redhat.com Mon Mar 9 14:09:16 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 09 Mar 2015 15:09:16 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records In-Reply-To: <54F99DF5.1030408@redhat.com> References: <54F72208.5090208@redhat.com> <54F7262A.4010709@redhat.com> <54F99DF5.1030408@redhat.com> Message-ID: <54FDA98C.6050509@redhat.com> On 03/06/2015 01:30 PM, Petr Spacek wrote: > On 4.3.2015 16:35, Martin Basti wrote: >> On 04/03/15 16:17, Martin Basti wrote: >>> Ticket: https://fedorahosted.org/freeipa/ticket/4930 >>> >>> 0200: 4.1, master >>> Fixes traceback, which was raised if LDAP contained a record that was marked >>> as unsupported. >>> Now unsupported records are shown, if LDAP contains them. >>> >>> 0200: 4.1, master >>> Records marked as unsupported will not show options for editing parts. >>> >>> 0202: only master >>> Removes NSEC3PARAM record from record types. NSEC3PARAM can contain only >>> zone, value is allowed only in idnszone objectclass, so do not confuse users. >>> >> .... and patches attached :-) > ACK. It works for me and can be pushed to branches 4.1 and master. > Patches require a rebase. From mbasti at redhat.com Mon Mar 9 14:20:16 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 09 Mar 2015 15:20:16 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records In-Reply-To: <54FDA98C.6050509@redhat.com> References: <54F72208.5090208@redhat.com> <54F7262A.4010709@redhat.com> <54F99DF5.1030408@redhat.com> <54FDA98C.6050509@redhat.com> Message-ID: <54FDAC20.3040906@redhat.com> On 09/03/15 15:09, Tomas Babej wrote: > > On 03/06/2015 01:30 PM, Petr Spacek wrote: >> On 4.3.2015 16:35, Martin Basti wrote: >>> On 04/03/15 16:17, Martin Basti wrote: >>>> Ticket: https://fedorahosted.org/freeipa/ticket/4930 >>>> >>>> 0200: 4.1, master >>>> Fixes traceback, which was raised if LDAP contained a record that >>>> was marked >>>> as unsupported. >>>> Now unsupported records are shown, if LDAP contains them. >>>> >>>> 0200: 4.1, master >>>> Records marked as unsupported will not show options for editing parts. >>>> >>>> 0202: only master >>>> Removes NSEC3PARAM record from record types. NSEC3PARAM can contain >>>> only >>>> zone, value is allowed only in idnszone objectclass, so do not >>>> confuse users. >>>> >>> .... and patches attached :-) >> ACK. It works for me and can be pushed to branches 4.1 and master. >> > > Patches require a rebase. Rebased patch 202 for IPA 4-1 branch -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-4-1-mbasti-0202-DNS-remove-NSEC3PARAM-from-records.patch Type: text/x-patch Size: 9782 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0200-DNS-fix-do-not-traceback-if-unsupported-records-are-.patch Type: text/x-patch Size: 4931 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0201-DNS-fix-do-not-show-part-options-for-unsupported-rec.patch Type: text/x-patch Size: 1160 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0202-DNS-remove-NSEC3PARAM-from-records.patch Type: text/x-patch Size: 9786 bytes Desc: not available URL: From tbabej at redhat.com Mon Mar 9 14:23:33 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 09 Mar 2015 15:23:33 +0100 Subject: [Freeipa-devel] [PATCHES 0200-0202] DNS fixes related to unsupported records In-Reply-To: <54FDAC20.3040906@redhat.com> References: <54F72208.5090208@redhat.com> <54F7262A.4010709@redhat.com> <54F99DF5.1030408@redhat.com> <54FDA98C.6050509@redhat.com> <54FDAC20.3040906@redhat.com> Message-ID: <54FDACE5.8030701@redhat.com> On 03/09/2015 03:20 PM, Martin Basti wrote: > On 09/03/15 15:09, Tomas Babej wrote: >> >> On 03/06/2015 01:30 PM, Petr Spacek wrote: >>> On 4.3.2015 16:35, Martin Basti wrote: >>>> On 04/03/15 16:17, Martin Basti wrote: >>>>> Ticket: https://fedorahosted.org/freeipa/ticket/4930 >>>>> >>>>> 0200: 4.1, master >>>>> Fixes traceback, which was raised if LDAP contained a record that >>>>> was marked >>>>> as unsupported. >>>>> Now unsupported records are shown, if LDAP contains them. >>>>> >>>>> 0200: 4.1, master >>>>> Records marked as unsupported will not show options for editing >>>>> parts. >>>>> >>>>> 0202: only master >>>>> Removes NSEC3PARAM record from record types. NSEC3PARAM can >>>>> contain only >>>>> zone, value is allowed only in idnszone objectclass, so do not >>>>> confuse users. >>>>> >>>> .... and patches attached :-) >>> ACK. It works for me and can be pushed to branches 4.1 and master. >>> >> >> Patches require a rebase. > Rebased patch 202 for IPA 4-1 branch > Pushed to master: f26220b9b301b406325c3206c2a7fe0edd6771f0 Pushed to ipa-4-1: 5f191e85e9beedbb40a7ce581069999761863289 From mkosek at redhat.com Mon Mar 9 14:49:23 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 09 Mar 2015 15:49:23 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425906176.2658.1.camel@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> Message-ID: <54FDB2F3.5030005@redhat.com> On 03/09/2015 02:02 PM, Nathaniel McCallum wrote: > On Mon, 2015-03-09 at 08:00 +0100, Stanislav L?zni?ka wrote: >> Hi! >> >> My name is Stanislav Laznicka and I am a student at Brno University >> of Technology. As a part of my Master's thesis, I am supposed to >> design and >> implement time-based account policies extensions for FreeIPA and >> SSSD. >> >> While going through the code, I noticed time-based access control >> was implemented in the past, but it was pulled. I would very much be >> interested to know why that was and what were the problems with that >> implementation (so that I don't repeat those again). >> >> The solution to the time-based account policies as I see it can be >> divided into two possible directions - having the time of the >> policies stored as a UTC time (which is what both Active Directory >> and 389 Directory Server do), or it can be just a time record that >> would be compared to the local time of each client. >> >> Each of the approaches above has its pros and cons. Basically, local >> time approach is much more flexible when it comes to multiple time >> zones, however it does not allow the absolute control of access as >> the UTC time based approach would (or at least, it does not allow it >> without >> some further additions). I would therefore also be interested to >> hear from you about which of these approaches corresponds more to >> the common use-case of the FreeIPA system. 
> > I would be deeply worried about the unexpected security issues that > could arise if local time was used by default. > > Nathaniel My comments for the options: * Control in UTC time: easy to evaluate on client even when user (or anyone else) misconfigured the time zone. However, implementation is more difficult: - For rules like "person X can interactively log in from 8:00 to 17:00", you would need separate HBAC rule for each time zone as UTC range would differ - On the other hand, one can create simple rule "person X can use company resources from 8:00 EST to 17:00 EST, in whichever time zone they are located) - FreeIPA would need some helper UI (or even zone indication stored with host/hostgroup) that would help set up the access in local time and save in UTC time * Control in local time: difficult to evaluate, potential security issues as Nathaniel mentioned. Implementation and control would be easier though: - One could set up easy rule: "person X can interactivelly log in from 8:00 to 17:00" - Easier UI as one would not need to mess with zones, we would assume time zone is set up correctly on each host. So far, it indeed looks like UTC is the way to go. Martin From pvoborni at redhat.com Mon Mar 9 14:57:37 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 09 Mar 2015 15:57:37 +0100 Subject: [Freeipa-devel] UI plugins In-Reply-To: References: <54F986A7.8000702@redhat.com> <54F9B275.4040400@redhat.com> Message-ID: <54FDB4E1.4080304@redhat.com> On 03/09/2015 02:56 PM, Corey Kovacs wrote: > Petr, > > Using the information you provided I was able to get things working in a > general sense. I suspect the answer to my next question is going to be > "use the source luke..." which is cool but I was hoping documentation might > be available which describes adding sections to facets and controlling the > order of additional fields. I have added about a dozen and they don't sort > the way I'd like them to. > > Thanks again for your help last week. The order on the page matches the order in spec object. You can use standard Array.splice method[1] to put new field between two existing ones. [1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice -- Petr Vobornik From abokovoy at redhat.com Mon Mar 9 14:58:30 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 9 Mar 2015 16:58:30 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FDB2F3.5030005@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> Message-ID: <20150309145830.GA25455@redhat.com> On Mon, 09 Mar 2015, Martin Kosek wrote: >On 03/09/2015 02:02 PM, Nathaniel McCallum wrote: >> On Mon, 2015-03-09 at 08:00 +0100, Stanislav L?zni?ka wrote: >>> Hi! >>> >>> My name is Stanislav Laznicka and I am a student at Brno University >>> of Technology. As a part of my Master's thesis, I am supposed to >>> design and >>> implement time-based account policies extensions for FreeIPA and >>> SSSD. >>> >>> While going through the code, I noticed time-based access control >>> was implemented in the past, but it was pulled. I would very much be >>> interested to know why that was and what were the problems with that >>> implementation (so that I don't repeat those again). 
>>> >>> The solution to the time-based account policies as I see it can be >>> divided into two possible directions - having the time of the >>> policies stored as a UTC time (which is what both Active Directory >>> and 389 Directory Server do), or it can be just a time record that >>> would be compared to the local time of each client. >>> >>> Each of the approaches above has its pros and cons. Basically, local >>> time approach is much more flexible when it comes to multiple time >>> zones, however it does not allow the absolute control of access as >>> the UTC time based approach would (or at least, it does not allow it >>> without >>> some further additions). I would therefore also be interested to >>> hear from you about which of these approaches corresponds more to >>> the common use-case of the FreeIPA system. >> >> I would be deeply worried about the unexpected security issues that >> could arise if local time was used by default. >> >> Nathaniel > >My comments for the options: > >* Control in UTC time: easy to evaluate on client even when user (or anyone >else) misconfigured the time zone. However, implementation is more difficult: > - For rules like "person X can interactively log in from 8:00 to 17:00", you >would need separate HBAC rule for each time zone as UTC range would differ HBAC rules are applied on the client anyway so applying local time correction is fine in both cases. The problem is then with HBAC rule testing ('ipa hbactest') which runs on IPA master and would need to gain ability to handle local time per client too. And this is not an easy task. > - On the other hand, one can create simple rule "person X can use company >resources from 8:00 EST to 17:00 EST, in whichever time zone they are located) > - FreeIPA would need some helper UI (or even zone indication stored with >host/hostgroup) that would help set up the access in local time and save in UTC >time UI/CLI helpers are smaller issue if we can agree on the basic functionality. One of bigger issues we had was lack of versatile ical format parser to handle calendar-like specification of events -- we need to allow importing these ones instead of inventing our own. Another issue is that often rule does depend on a details about specific service -- it is common to have web services to use different timezone than the rest of processes running on the server. You would get an HBAC rule where something like apache service is defined but you'd need to associate timezone with it and have this association to be specific to a server or group of servers rather than just a service itself. -- / Alexander Bokovoy From mkosek at redhat.com Mon Mar 9 15:08:46 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 09 Mar 2015 16:08:46 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309145830.GA25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> Message-ID: <54FDB77E.4030807@redhat.com> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > On Mon, 09 Mar 2015, Martin Kosek wrote: ... > One of bigger issues we had was lack of versatile ical format parser to > handle calendar-like specification of events -- we need to allow > importing these ones instead of inventing our own. Good point. I wonder how rigorous we want to be. iCal is a pretty powerful calendaring format. 
If we want to implement full support for it, it would be lot of code both on server side for setting it and on client side for evaluating it (CCing Jakub for reference). AD itself has much simpler UI for setting the access time, a table like that: http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg IIRC, they only store the bits of "can login/cannot login" for the time slots. That's another alternative. > Another issue is that often rule does depend on a details about specific > service -- it is common to have web services to use different timezone > than the rest of processes running on the server. You would get an HBAC > rule where something like apache service is defined but you'd need to > associate timezone with it and have this association to be specific to a > server or group of servers rather than just a service itself. HBAC service is mostly only PAM service, not IPA service, so I do not think you can easily store this information. But we can certainly store time zone information in a host or a host group and let that help the hbactest-* or UI... From abokovoy at redhat.com Mon Mar 9 15:13:58 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 9 Mar 2015 17:13:58 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FDB77E.4030807@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> Message-ID: <20150309151358.GB25455@redhat.com> On Mon, 09 Mar 2015, Martin Kosek wrote: >On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >> On Mon, 09 Mar 2015, Martin Kosek wrote: >... >> One of bigger issues we had was lack of versatile ical format parser to >> handle calendar-like specification of events -- we need to allow >> importing these ones instead of inventing our own. > >Good point. I wonder how rigorous we want to be. iCal is a pretty powerful >calendaring format. If we want to implement full support for it, it would be >lot of code both on server side for setting it and on client side for >evaluating it (CCing Jakub for reference). > >AD itself has much simpler UI for setting the access time, a table like that: >http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > >IIRC, they only store the bits of "can login/cannot login" for the time slots. >That's another alternative. > >> Another issue is that often rule does depend on a details about specific >> service -- it is common to have web services to use different timezone >> than the rest of processes running on the server. You would get an HBAC >> rule where something like apache service is defined but you'd need to >> associate timezone with it and have this association to be specific to a >> server or group of servers rather than just a service itself. > >HBAC service is mostly only PAM service, not IPA service, so I do not think you >can easily store this information. But we can certainly store time zone >information in a host or a host group and let that help the hbactest-* or UI... I don't understand why are you involving IPA services here. HBAC rules are only about PAM services and PAM (HBAC) services are specific to hosts where they are in use. 
We aren't *that* contextual yet because we didn't need that in past but having a timezone info only per host is wrong precisely because every single process on the host can be run under different timezone as it is just a way of interpreting monotone time source data we get from kernel. -- / Alexander Bokovoy From corey.kovacs at gmail.com Mon Mar 9 15:48:47 2015 From: corey.kovacs at gmail.com (Corey Kovacs) Date: Mon, 9 Mar 2015 09:48:47 -0600 Subject: [Freeipa-devel] UI plugins In-Reply-To: <54FDB4E1.4080304@redhat.com> References: <54F986A7.8000702@redhat.com> <54F9B275.4040400@redhat.com> <54FDB4E1.4080304@redhat.com> Message-ID: Ahh of course that makes sense. I had made a separate plugin for each field. Thanks again On Mar 9, 2015 8:57 AM, "Petr Vobornik" wrote: > On 03/09/2015 02:56 PM, Corey Kovacs wrote: > >> Petr, >> >> Using the information you provided I was able to get things working in a >> general sense. I suspect the answer to my next question is going to be >> "use the source luke..." which is cool but I was hoping documentation >> might >> be available which describes adding sections to facets and controlling the >> order of additional fields. I have added about a dozen and they don't sort >> the way I'd like them to. >> >> Thanks again for your help last week. >> > > The order on the page matches the order in spec object. > > You can use standard Array.splice method[1] to put new field between two > existing ones. > > [1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/ > Reference/Global_Objects/Array/splice > -- > Petr Vobornik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jhrozek at redhat.com Mon Mar 9 17:13:18 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Mon, 9 Mar 2015 18:13:18 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FDB77E.4030807@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> Message-ID: <20150309171318.GV29063@hendrix.arn.redhat.com> On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > > On Mon, 09 Mar 2015, Martin Kosek wrote: > ... > > One of bigger issues we had was lack of versatile ical format parser to > > handle calendar-like specification of events -- we need to allow > > importing these ones instead of inventing our own. > > Good point. I wonder how rigorous we want to be. iCal is a pretty powerful > calendaring format. If we want to implement full support for it, it would be > lot of code both on server side for setting it and on client side for > evaluating it (CCing Jakub for reference). > > AD itself has much simpler UI for setting the access time, a table like that: > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > > IIRC, they only store the bits of "can login/cannot login" for the time slots. > That's another alternative. I don't think that's what Alexander meant, I don't think the client library should come anywhere close to the iCal format. We might want to provide a script to convert an external format, but that's about it. I thought we could simply reuse parts of the previous grammar, maybe simplified. But I agree with Nathaniel (as I stated also in the private thread) that we should use UTC where possible. 
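To make that concrete, a simplified rule along the lines of the AD logon-hours table mentioned earlier -- per-weekday allowed windows, stored and evaluated in UTC -- could be as small as the sketch below. The rule format and names are invented for illustration only and are not part of any proposed schema.

    from datetime import datetime, timezone

    # Hypothetical rule: weekday (0 = Monday) -> half-open (start, end) hours, in UTC.
    ALLOWED_HOURS_UTC = {
        0: [(7, 16)],
        1: [(7, 16)],
        2: [(7, 16)],
        3: [(7, 16)],
        4: [(7, 12)],   # shorter Friday
    }

    def access_allowed(when=None):
        """True if `when` (an aware datetime, default: now) falls in an allowed UTC window."""
        when = (when or datetime.now(timezone.utc)).astimezone(timezone.utc)
        for start, end in ALLOWED_HOURS_UTC.get(when.weekday(), []):
            if start <= when.hour < end:
                return True
        return False

    print(access_allowed(datetime(2015, 3, 9, 8, 30, tzinfo=timezone.utc)))   # True, Monday morning
    print(access_allowed(datetime(2015, 3, 9, 18, 0, tzinfo=timezone.utc)))   # False, after hours

The same function could back an 'ipa hbactest'-style check on the server, but, as noted earlier in the thread, it would need the client's timezone as an extra input the moment rules are allowed to be interpreted in local time.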
> > > Another issue is that often rule does depend on a details about specific > > service -- it is common to have web services to use different timezone > > than the rest of processes running on the server. You would get an HBAC > > rule where something like apache service is defined but you'd need to > > associate timezone with it and have this association to be specific to a > > server or group of servers rather than just a service itself. > > HBAC service is mostly only PAM service, not IPA service, so I do not think you > can easily store this information. But we can certainly store time zone > information in a host or a host group and let that help the hbactest-* or UI... From abokovoy at redhat.com Mon Mar 9 18:22:16 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 9 Mar 2015 20:22:16 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309171318.GV29063@hendrix.arn.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> Message-ID: <20150309182216.GD25455@redhat.com> On Mon, 09 Mar 2015, Jakub Hrozek wrote: >On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >> > On Mon, 09 Mar 2015, Martin Kosek wrote: >> ... >> > One of bigger issues we had was lack of versatile ical format parser to >> > handle calendar-like specification of events -- we need to allow >> > importing these ones instead of inventing our own. >> >> Good point. I wonder how rigorous we want to be. iCal is a pretty powerful >> calendaring format. If we want to implement full support for it, it would be >> lot of code both on server side for setting it and on client side for >> evaluating it (CCing Jakub for reference). >> >> AD itself has much simpler UI for setting the access time, a table like that: >> http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >> >> IIRC, they only store the bits of "can login/cannot login" for the time slots. >> That's another alternative. > >I don't think that's what Alexander meant, I don't think the client >library should come anywhere close to the iCal format. We might want to >provide a script to convert an external format, but that's about it. > >I thought we could simply reuse parts of the previous grammar, maybe >simplified. But I agree with Nathaniel (as I stated also in the private >thread) that we should use UTC where possible. Yes and no. Let me go in details a bit. We need iCal support to allow importing events created by external tools. We don't need to use it as internal format. However, there are quite a lot of details that can be lost if only UTC would be in use for a date as you are ignoring a timezone information completely. Timezone information needs to be kept when importing and not always could be reliably represented in UTC so that conversion from one timezone to another could present quite a problem. See http://www.w3.org/TR/timezone/ for some of recommendations. 
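The loss Alexander describes is easy to demonstrate: a recurring wall-clock rule such as "08:00 every day" in a DST-observing zone does not correspond to a single UTC time of day. A minimal sketch (Python 3.9+ zoneinfo; the zone is only an example):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    prague = ZoneInfo("Europe/Prague")

    # The same wall-clock rule, 08:00 local, on two dates of the year...
    winter = datetime(2015, 1, 15, 8, 0, tzinfo=prague)
    summer = datetime(2015, 7, 15, 8, 0, tzinfo=prague)

    # ...maps to two different UTC instants, so flattening the rule to a
    # single stored UTC hour silently shifts it for part of the year.
    print(winter.astimezone(timezone.utc))   # 2015-01-15 07:00:00+00:00
    print(summer.astimezone(timezone.utc))   # 2015-07-15 06:00:00+00:00

Keeping the pair (UTC timestamp, zone name) rather than UTC alone preserves the original intent while still allowing a canonical comparison.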
-- / Alexander Bokovoy From npmccallum at redhat.com Mon Mar 9 18:40:05 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Mon, 09 Mar 2015 14:40:05 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309182216.GD25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> Message-ID: <1425926405.14916.7.camel@redhat.com> On Mon, 2015-03-09 at 20:22 +0200, Alexander Bokovoy wrote: > On Mon, 09 Mar 2015, Jakub Hrozek wrote: > > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: > > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > > > > On Mon, 09 Mar 2015, Martin Kosek wrote: > > > ... > > > > One of bigger issues we had was lack of versatile ical format > > > > parser to handle calendar-like specification of events -- we > > > > need to allow importing these ones instead of inventing our > > > > own. > > > > > > Good point. I wonder how rigorous we want to be. iCal is a > > > pretty powerful > > > calendaring format. If we want to implement full support for it, > > > it would be > > > lot of code both on server side for setting it and on client > > > side for evaluating it (CCing Jakub for reference). > > > > > > AD itself has much simpler UI for setting the access time, a > > > table like that: > > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > > > > > > IIRC, they only store the bits of "can login/cannot login" for > > > the time slots. > > > That's another alternative. > > > > I don't think that's what Alexander meant, I don't think the > > client library should come anywhere close to the iCal format. We > > might want to provide a script to convert an external format, but > > that's about it. > > > > I thought we could simply reuse parts of the previous grammar, > > maybe simplified. But I agree with Nathaniel (as I stated also in > > the private thread) that we should use UTC where possible. > Yes and no. Let me go in details a bit. > > We need iCal support to allow importing events created by external > tools. We don't need to use it as internal format. > > However, there are quite a lot of details that can be lost if only > UTC would be in use for a date as you are ignoring a timezone > information completely. Timezone information needs to be kept when > importing and not always could be reliably represented in UTC so > that conversion from one timezone to another could present quite a > problem. > > See http://www.w3.org/TR/timezone/for some of recommendations. I'm fine with a plan like this. I just want the admin to opt-in to timezones. Localtime *by default* is fraught with pitfalls. If the admin needs local time policy, he should specify it exactly and should confirm the local time on affected systems. 
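Later in the thread a concrete shape emerges for that opt-in: timezones attached to host objects, optionally overridden per HBAC service, with UTC as the default when nothing is configured. A resolution helper for that precedence might look like the sketch below; every name in it is made up for illustration and none of it reflects an agreed schema.

    from datetime import timezone
    from zoneinfo import ZoneInfo

    def effective_timezone(service_tz=None, host_tz=None):
        # Hypothetical precedence: HBAC service override, then the host's
        # timezone, then UTC when the admin has not opted in at all.
        for name in (service_tz, host_tz):
            if not name:
                continue
            try:
                return ZoneInfo(name)
            except (KeyError, ValueError):
                # Unknown or unparsable zone: fall back to the safe default
                # instead of guessing -- the "unavailable or non-parsable
                # timezone" question raised later in the thread.
                break
        return timezone.utc

    print(effective_timezone("America/New_York"))        # service override wins
    print(effective_timezone(None, "Europe/Prague"))     # host setting applies
    print(effective_timezone(None, None))                # UTC unless opted in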
Nathaniel From abokovoy at redhat.com Mon Mar 9 18:55:20 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 9 Mar 2015 20:55:20 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425926405.14916.7.camel@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> Message-ID: <20150309185520.GG25455@redhat.com> On Mon, 09 Mar 2015, Nathaniel McCallum wrote: >On Mon, 2015-03-09 at 20:22 +0200, Alexander Bokovoy wrote: >> On Mon, 09 Mar 2015, Jakub Hrozek wrote: >> > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >> > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >> > > > On Mon, 09 Mar 2015, Martin Kosek wrote: >> > > ... >> > > > One of bigger issues we had was lack of versatile ical format >> > > > parser to handle calendar-like specification of events -- we >> > > > need to allow importing these ones instead of inventing our >> > > > own. >> > > >> > > Good point. I wonder how rigorous we want to be. iCal is a >> > > pretty powerful >> > > calendaring format. If we want to implement full support for it, >> > > it would be >> > > lot of code both on server side for setting it and on client >> > > side for evaluating it (CCing Jakub for reference). >> > > >> > > AD itself has much simpler UI for setting the access time, a >> > > table like that: >> > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >> > > >> > > IIRC, they only store the bits of "can login/cannot login" for >> > > the time slots. >> > > That's another alternative. >> > >> > I don't think that's what Alexander meant, I don't think the >> > client library should come anywhere close to the iCal format. We >> > might want to provide a script to convert an external format, but >> > that's about it. >> > >> > I thought we could simply reuse parts of the previous grammar, >> > maybe simplified. But I agree with Nathaniel (as I stated also in >> > the private thread) that we should use UTC where possible. >> Yes and no. Let me go in details a bit. >> >> We need iCal support to allow importing events created by external >> tools. We don't need to use it as internal format. >> >> However, there are quite a lot of details that can be lost if only >> UTC would be in use for a date as you are ignoring a timezone >> information completely. Timezone information needs to be kept when >> importing and not always could be reliably represented in UTC so >> that conversion from one timezone to another could present quite a >> problem. >> >> See http://www.w3.org/TR/timezone/for some of recommendations. > >I'm fine with a plan like this. I just want the admin to opt-in to >timezones. Localtime *by default* is fraught with pitfalls. If the >admin needs local time policy, he should specify it exactly and should >confirm the local time on affected systems. Yep. We need to make it easy -- probably by allowing to define timezones as part of host object (like Martin proposed) and overridden in HBAC service/service group. And all this have to be very visual in the UI. Another problem is how to deal with missing timezone information at the place where HBAC rule with time rules would need to be applied. 
We need to define the way we handle all these (unavailable, non-parsable and other kinds of errors) timezone info issues. -- / Alexander Bokovoy From simo at redhat.com Mon Mar 9 19:45:29 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 09 Mar 2015 15:45:29 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309171318.GV29063@hendrix.arn.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> Message-ID: <1425930329.4735.52.camel@willson.usersys.redhat.com> On Mon, 2015-03-09 at 18:13 +0100, Jakub Hrozek wrote: > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > > > On Mon, 09 Mar 2015, Martin Kosek wrote: > > ... > > > One of bigger issues we had was lack of versatile ical format parser to > > > handle calendar-like specification of events -- we need to allow > > > importing these ones instead of inventing our own. > > > > Good point. I wonder how rigorous we want to be. iCal is a pretty powerful > > calendaring format. If we want to implement full support for it, it would be > > lot of code both on server side for setting it and on client side for > > evaluating it (CCing Jakub for reference). > > > > AD itself has much simpler UI for setting the access time, a table like that: > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > > > > IIRC, they only store the bits of "can login/cannot login" for the time slots. > > That's another alternative. > > I don't think that's what Alexander meant, I don't think the client > library should come anywhere close to the iCal format. We might want to > provide a script to convert an external format, but that's about it. > > I thought we could simply reuse parts of the previous grammar, maybe > simplified. But I agree with Nathaniel (as I stated also in the private > thread) that we should use UTC where possible. Simplified == Kinda Broken. We've been through this a few times already, it just doesn't work. At a minimum you need to be able to select between UTC and "Local Time" and it is a rathole down there (What time is it *here* may be a hard question to answer :-/) > > > > > Another issue is that often rule does depend on a details about specific > > > service -- it is common to have web services to use different timezone > > > than the rest of processes running on the server. You would get an HBAC > > > rule where something like apache service is defined but you'd need to > > > associate timezone with it and have this association to be specific to a > > > server or group of servers rather than just a service itself. > > > > HBAC service is mostly only PAM service, not IPA service, so I do not think you > > can easily store this information. But we can certainly store time zone > > information in a host or a host group and let that help the hbactest-* or UI... 
> -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Mon Mar 9 19:48:30 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 09 Mar 2015 15:48:30 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309185520.GG25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> Message-ID: <1425930510.4735.55.camel@willson.usersys.redhat.com> On Mon, 2015-03-09 at 20:55 +0200, Alexander Bokovoy wrote: > On Mon, 09 Mar 2015, Nathaniel McCallum wrote: > >On Mon, 2015-03-09 at 20:22 +0200, Alexander Bokovoy wrote: > >> On Mon, 09 Mar 2015, Jakub Hrozek wrote: > >> > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: > >> > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > >> > > > On Mon, 09 Mar 2015, Martin Kosek wrote: > >> > > ... > >> > > > One of bigger issues we had was lack of versatile ical format > >> > > > parser to handle calendar-like specification of events -- we > >> > > > need to allow importing these ones instead of inventing our > >> > > > own. > >> > > > >> > > Good point. I wonder how rigorous we want to be. iCal is a > >> > > pretty powerful > >> > > calendaring format. If we want to implement full support for it, > >> > > it would be > >> > > lot of code both on server side for setting it and on client > >> > > side for evaluating it (CCing Jakub for reference). > >> > > > >> > > AD itself has much simpler UI for setting the access time, a > >> > > table like that: > >> > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > >> > > > >> > > IIRC, they only store the bits of "can login/cannot login" for > >> > > the time slots. > >> > > That's another alternative. > >> > > >> > I don't think that's what Alexander meant, I don't think the > >> > client library should come anywhere close to the iCal format. We > >> > might want to provide a script to convert an external format, but > >> > that's about it. > >> > > >> > I thought we could simply reuse parts of the previous grammar, > >> > maybe simplified. But I agree with Nathaniel (as I stated also in > >> > the private thread) that we should use UTC where possible. > >> Yes and no. Let me go in details a bit. > >> > >> We need iCal support to allow importing events created by external > >> tools. We don't need to use it as internal format. > >> > >> However, there are quite a lot of details that can be lost if only > >> UTC would be in use for a date as you are ignoring a timezone > >> information completely. Timezone information needs to be kept when > >> importing and not always could be reliably represented in UTC so > >> that conversion from one timezone to another could present quite a > >> problem. > >> > >> See http://www.w3.org/TR/timezone/for some of recommendations. > > > >I'm fine with a plan like this. I just want the admin to opt-in to > >timezones. Localtime *by default* is fraught with pitfalls. If the > >admin needs local time policy, he should specify it exactly and should > >confirm the local time on affected systems. > Yep. We need to make it easy -- probably by allowing to define timezones > as part of host object (like Martin proposed) and overridden in HBAC > service/service group. And all this have to be very visual in the UI. 
> > Another problem is how to deal with missing timezone information at the > place where HBAC rule with time rules would need to be applied. We need > to define the way we handle all these (unavailable, non-parsable and > other kinds of errors) timezone info issues. I would rather punt, than do some crappy thing ala Windows. Local Only or UTC only is always wrong. For some tasks 'local' is the only thing that makes sense (your morning alarm clock), for other things 'UTC' is the only thing that make sense (coordinated changes across multiple distributed data centers). Implementing just one or the other is not useful. -- Simo Sorce * Red Hat, Inc * New York From abokovoy at redhat.com Mon Mar 9 20:02:40 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Mon, 9 Mar 2015 22:02:40 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425930510.4735.55.camel@willson.usersys.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> Message-ID: <20150309200240.GH25455@redhat.com> On Mon, 09 Mar 2015, Simo Sorce wrote: >On Mon, 2015-03-09 at 20:55 +0200, Alexander Bokovoy wrote: >> On Mon, 09 Mar 2015, Nathaniel McCallum wrote: >> >On Mon, 2015-03-09 at 20:22 +0200, Alexander Bokovoy wrote: >> >> On Mon, 09 Mar 2015, Jakub Hrozek wrote: >> >> > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >> >> > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >> >> > > > On Mon, 09 Mar 2015, Martin Kosek wrote: >> >> > > ... >> >> > > > One of bigger issues we had was lack of versatile ical format >> >> > > > parser to handle calendar-like specification of events -- we >> >> > > > need to allow importing these ones instead of inventing our >> >> > > > own. >> >> > > >> >> > > Good point. I wonder how rigorous we want to be. iCal is a >> >> > > pretty powerful >> >> > > calendaring format. If we want to implement full support for it, >> >> > > it would be >> >> > > lot of code both on server side for setting it and on client >> >> > > side for evaluating it (CCing Jakub for reference). >> >> > > >> >> > > AD itself has much simpler UI for setting the access time, a >> >> > > table like that: >> >> > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >> >> > > >> >> > > IIRC, they only store the bits of "can login/cannot login" for >> >> > > the time slots. >> >> > > That's another alternative. >> >> > >> >> > I don't think that's what Alexander meant, I don't think the >> >> > client library should come anywhere close to the iCal format. We >> >> > might want to provide a script to convert an external format, but >> >> > that's about it. >> >> > >> >> > I thought we could simply reuse parts of the previous grammar, >> >> > maybe simplified. But I agree with Nathaniel (as I stated also in >> >> > the private thread) that we should use UTC where possible. >> >> Yes and no. Let me go in details a bit. >> >> >> >> We need iCal support to allow importing events created by external >> >> tools. We don't need to use it as internal format. 
>> >> >> >> However, there are quite a lot of details that can be lost if only >> >> UTC would be in use for a date as you are ignoring a timezone >> >> information completely. Timezone information needs to be kept when >> >> importing and not always could be reliably represented in UTC so >> >> that conversion from one timezone to another could present quite a >> >> problem. >> >> >> >> See http://www.w3.org/TR/timezone/for some of recommendations. >> > >> >I'm fine with a plan like this. I just want the admin to opt-in to >> >timezones. Localtime *by default* is fraught with pitfalls. If the >> >admin needs local time policy, he should specify it exactly and should >> >confirm the local time on affected systems. >> Yep. We need to make it easy -- probably by allowing to define timezones >> as part of host object (like Martin proposed) and overridden in HBAC >> service/service group. And all this have to be very visual in the UI. >> >> Another problem is how to deal with missing timezone information at the >> place where HBAC rule with time rules would need to be applied. We need >> to define the way we handle all these (unavailable, non-parsable and >> other kinds of errors) timezone info issues. > >I would rather punt, than do some crappy thing ala Windows. > >Local Only or UTC only is always wrong. > >For some tasks 'local' is the only thing that makes sense (your morning >alarm clock), for other things 'UTC' is the only thing that make sense >(coordinated changes across multiple distributed data centers). > >Implementing just one or the other is not useful. Correct. At this point I think we are more or less in agreement that we need to store time rules in UTC plus timezone correction information specific to the execution context (HBAC rule, host, etc). Handling 'UTC' rule is equivalent to selecting specific timezone (GMT+0, for example) so this is a case of more general (UTC time, timezone definiton) pairs. This timezone definition may have aliases forcing HBAC intepretation to use local machine defaults if needed but the general scheme stays the same. -- / Alexander Bokovoy From npmccallum at redhat.com Mon Mar 9 20:05:03 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Mon, 09 Mar 2015 16:05:03 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309200240.GH25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> Message-ID: <1425931503.23911.10.camel@redhat.com> On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: > On Mon, 09 Mar 2015, Simo Sorce wrote: > > On Mon, 2015-03-09 at 20:55 +0200, Alexander Bokovoy wrote: > > > On Mon, 09 Mar 2015, Nathaniel McCallum wrote: > > > > On Mon, 2015-03-09 at 20:22 +0200, Alexander Bokovoy wrote: > > > > > On Mon, 09 Mar 2015, Jakub Hrozek wrote: > > > > > > On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek > > > > > > wrote: > > > > > > > On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > > > > > > > > On Mon, 09 Mar 2015, Martin Kosek wrote: > > > > > > > ... 
> > > > > > > > One of bigger issues we had was lack of versatile ical > > > > > > > > format > > > > > > > > parser to handle calendar-like specification of events > > > > > > > > -- we > > > > > > > > need to allow importing these ones instead of > > > > > > > > inventing our > > > > > > > > own. > > > > > > > > > > > > > > Good point. I wonder how rigorous we want to be. iCal is > > > > > > > a > > > > > > > pretty powerful > > > > > > > calendaring format. If we want to implement full support > > > > > > > for it, it would be > > > > > > > lot of code both on server side for setting it and on > > > > > > > client > > > > > > > side for evaluating it (CCing Jakub for reference). > > > > > > > > > > > > > > AD itself has much simpler UI for setting the access > > > > > > > time, a > > > > > > > table like that: > > > > > > > http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > > > > > > > > > > > > > > IIRC, they only store the bits of "can login/cannot > > > > > > > login" for the time slots. > > > > > > > That's another alternative. > > > > > > > > > > > > I don't think that's what Alexander meant, I don't think > > > > > > the > > > > > > client library should come anywhere close to the iCal > > > > > > format. We might want to provide a script to convert an > > > > > > external format, but that's about it. > > > > > > > > > > > > I thought we could simply reuse parts of the previous > > > > > > grammar, maybe simplified. But I agree with Nathaniel (as I > > > > > > stated also in the private thread) that we should use UTC > > > > > > where possible. > > > > > Yes and no. Let me go in details a bit. > > > > > > > > > > We need iCal support to allow importing events created by > > > > > external tools. We don't need to use it as internal format. > > > > > > > > > > However, there are quite a lot of details that can be lost > > > > > if only UTC would be in use for a date as you are ignoring a > > > > > timezone information completely. Timezone information needs > > > > > to be kept when importing and not always could be reliably > > > > > represented in UTC so that conversion from one timezone to > > > > > another could present quite a problem. > > > > > > > > > > See http://www.w3.org/TR/timezone/forsome of > > > > > recommendations. > > > > > > > > I'm fine with a plan like this. I just want the admin to opt- > > > > in to timezones. Localtime *by default* is fraught with > > > > pitfalls. If the admin needs local time policy, he should > > > > specify it exactly and should confirm the local time on > > > > affected systems. > > > Yep. We need to make it easy -- probably by allowing to define > > > timezones as part of host object (like Martin proposed) and > > > overridden in HBAC service/service group. And all this have to > > > be very visual in the UI. > > > > > > Another problem is how to deal with missing timezone information > > > at the place where HBAC rule with time rules would need to be > > > applied. We need to define the way we handle all these > > > (unavailable, non-parsable and other kinds of errors) timezone > > > info issues. > > > > I would rather punt, than do some crappy thing ala Windows. +1 > > Local Only or UTC only is always wrong. +1. It was never my intent to imply this. :) > > For some tasks 'local' is the only thing that makes sense (your > > morning alarm clock), for other things 'UTC' is the only thing > > that make sense (coordinated changes across multiple distributed > > data centers). 
> > > > Implementing just one or the other is not useful. > Correct. At this point I think we are more or less in agreement that > we need to store time rules in UTC plus timezone correction > information specific to the execution context (HBAC rule, host, > etc). Handling 'UTC' rule is equivalent to selecting specific > timezone (GMT+0, for example) so this is a case of more general (UTC > time, timezone definiton) pairs. > > This timezone definition may have aliases forcing HBAC intepretation > to use local machine defaults if needed but the general scheme stays > the same. Agreed. From jdennis at redhat.com Mon Mar 9 20:17:13 2015 From: jdennis at redhat.com (John Dennis) Date: Mon, 09 Mar 2015 16:17:13 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425930329.4735.52.camel@willson.usersys.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <1425930329.4735.52.camel@willson.usersys.redhat.com> Message-ID: <54FDFFC9.1000102@redhat.com> On 03/09/2015 03:45 PM, Simo Sorce wrote: > We've been through this a few times already, it just doesn't work. > At a minimum you need to be able to select between UTC and "Local Time" > and it is a rathole down there (What time is it *here* may be a hard > question to answer :-/) O.K. I'll bite, Converting from UTC to local using static UTC offsets is fraught with perils. But if you do the local conversion utilizing the tz (i.e. Olson) database why would be it hard to determine local time? -- John From simo at redhat.com Mon Mar 9 22:10:45 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 09 Mar 2015 18:10:45 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FDFFC9.1000102@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <1425930329.4735.52.camel@willson.usersys.redhat.com> <54FDFFC9.1000102@redhat.com> Message-ID: <1425939045.4735.57.camel@willson.usersys.redhat.com> On Mon, 2015-03-09 at 16:17 -0400, John Dennis wrote: > On 03/09/2015 03:45 PM, Simo Sorce wrote: > > We've been through this a few times already, it just doesn't work. > > At a minimum you need to be able to select between UTC and "Local Time" > > and it is a rathole down there (What time is it *here* may be a hard > > question to answer :-/) > > O.K. I'll bite, Converting from UTC to local using static UTC offsets is > fraught with perils. But if you do the local conversion utilizing the tz > (i.e. Olson) database why would be it hard to determine local time? In some cases it may be hard due to external reasons (think travelers and their devices). Another problem is to understand who's timezone to match and how do you know, if a user is connecting via the network and a local time rule is in place, whose local time should that be ? The user or the server's ? Simo. -- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Tue Mar 10 07:51:39 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 08:51:39 +0100 Subject: [Freeipa-devel] Rename IPAv3_AD_trust_setup? Message-ID: <54FEA28B.6020804@redhat.com> Hi, I just saw someone refer to [1] with respect to FreeIPA 4.x. 
Would it make sense to just rename the page from [1] to [2] (with keeping redirect of course)? This would move the page from Howto/ name space which we use for community HOWTO articles and move it to standard default name space. We would also not confuse people with the "v3" version, since it applies to all our FreeIPA versions, including v4. [1] http://www.freeipa.org/page/Howto/IPAv3_AD_trust_setup [2] http://www.freeipa.org/page/Active_Directory_Trust_setup -- Martin Kosek Supervisor, Software Engineering - Identity Management Team Red Hat Inc. From abokovoy at redhat.com Tue Mar 10 08:06:11 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 10:06:11 +0200 Subject: [Freeipa-devel] Rename IPAv3_AD_trust_setup? In-Reply-To: <54FEA28B.6020804@redhat.com> References: <54FEA28B.6020804@redhat.com> Message-ID: <20150310080611.GI25455@redhat.com> On Tue, 10 Mar 2015, Martin Kosek wrote: >Hi, > >I just saw someone refer to [1] with respect to FreeIPA 4.x. Would it make >sense to just rename the page from [1] to [2] (with keeping redirect of course)? > >This would move the page from Howto/ name space which we use for community >HOWTO articles and move it to standard default name space. We would also not >confuse people with the "v3" version, since it applies to all our FreeIPA >versions, including v4. > >[1] http://www.freeipa.org/page/Howto/IPAv3_AD_trust_setup >[2] http://www.freeipa.org/page/Active_Directory_Trust_setup I'm fine if there is a redirect. -- / Alexander Bokovoy From pvoborni at redhat.com Tue Mar 10 10:02:37 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Mar 2015 11:02:37 +0100 Subject: [Freeipa-devel] Purpose of default user group Message-ID: <54FEC13D.1000400@redhat.com> Hi, I would like to ask what is a purpose of a default user group - by default ipausers? Default group is also a required field in ipa config. In ipa migrate-ds we also set the group to all users who are not member of anything. Why is it important for a user to be a member of a group? Thank you -- Petr Vobornik From tbabej at redhat.com Tue Mar 10 10:57:06 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 10 Mar 2015 11:57:06 +0100 Subject: [Freeipa-devel] [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client In-Reply-To: <20150305062828.GJ25455@redhat.com> References: <20150304174747.GV3271@p.redhat.com> <20150305062828.GJ25455@redhat.com> Message-ID: <54FECE02.1070407@redhat.com> On 03/05/2015 07:28 AM, Alexander Bokovoy wrote: > On Wed, 04 Mar 2015, Sumit Bose wrote: >> Hi, >> >> with this patch the extdom plugin will properly indicate to a client if >> the search object does not exist instead of returning a generic error. >> This is important for the client to act accordingly and improve >> debugging possibilities. 
>> >> bye, >> Sumit > >> From 3895fa21524efc3a22bfb36b1a9aa34277b8dd46 Mon Sep 17 00:00:00 2001 >> From: Sumit Bose >> Date: Wed, 4 Mar 2015 13:39:04 +0100 >> Subject: [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client >> >> --- >> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 8 >> ++++++-- >> 1 file changed, 6 insertions(+), 2 deletions(-) >> >> diff --git >> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> index >> a70ed20f1816a7e00385edae8a81dd5dad9e9362..a040f2beba073d856053429face2f464347b2524 >> 100644 >> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> @@ -123,8 +123,12 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) >> >> ret = handle_request(ctx, req, &ret_val); >> if (ret != LDAP_SUCCESS) { >> - rc = LDAP_OPERATIONS_ERROR; >> - err_msg = "Failed to handle the request.\n"; >> + if (ret == LDAP_NO_SUCH_OBJECT) { >> + rc = LDAP_NO_SUCH_OBJECT; >> + } else { >> + rc = LDAP_OPERATIONS_ERROR; >> + err_msg = "Failed to handle the request.\n"; >> + } >> goto done; >> } >> >> -- >> 2.1.0 >> > ACK. > Pushed to master: 024463804c0c73e89ed76e709a838762a8302f04 From tbabej at redhat.com Tue Mar 10 10:59:45 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 10 Mar 2015 11:59:45 +0100 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <20150305070012.GL25455@redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> <54F7FC45.5050009@redhat.com> <20150305070012.GL25455@redhat.com> Message-ID: <54FECEA1.9090000@redhat.com> On 03/05/2015 08:00 AM, Alexander Bokovoy wrote: > On Wed, 04 Mar 2015, Nathan Kinder wrote: >> >> >> On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: >>> On Wed, 04 Mar 2015, Sumit Bose wrote: >>>> Hi, >>>> >>>> while running 389ds with valgrind to see if my other patches >>>> introduced >>>> a memory leak I found an older one which is fixed by this patch. >>>> >>>> bye, >>>> Sumit >>> >>>> From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 >>>> From: Sumit Bose >>>> Date: Wed, 4 Mar 2015 17:53:08 +0100 >>>> Subject: [PATCH] extdom: fix memory leak >>>> >>>> --- >>>> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + >>>> 1 file changed, 1 insertion(+) >>>> >>>> diff --git >>>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>> index >>>> a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f >>>> >>>> 100644 >>>> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>> @@ -156,6 +156,7 @@ done: >>>> LOG("%s", err_msg); >>>> } >>>> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >>>> + ber_bvfree(ret_val); >>>> free_req_data(req); >>>> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; >>>> } >>> I can see in 389-ds code that it actually tries to remove the value in >>> the end of extended operation handling: >> >> This below code snippet is freeing the extended operation request value >> (SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). >> >> If you look at check_and_send_extended_result() in the 389-ds code, >> you'll see where the extended operation return value is sent, and it >> doesn't perform a free. It is up to the plug-in to perform the free. 
A >> good example of this in the 389-ds code is in the passwd_modify_extop() >> function. >> >> >> Sumit's code looks good to me. ACK. > Argh. Sorry for confusion of RET vs REQ. Morning, I need coffee! > > ACK. > >> >> -NGK >> >>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); >>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); >>> slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, >>> &pb->pb_op->o_isroot); >>> >>> rc = plugin_call_exop_plugins( pb, extoid ); >>> >>> if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { >>> if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { >>> lderr = LDAP_PROTOCOL_ERROR; /* no plugin >>> handled the op */ >>> errmsg = "unsupported extended operation"; >>> } else { >>> errmsg = NULL; >>> lderr = rc; >>> } >>> send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); >>> } >>> free_and_return: >>> if (extoid) >>> slapi_ch_free((void **)&extoid); >>> if (extval.bv_val) >>> slapi_ch_free((void **)&extval.bv_val); >>> return; >>> >>> > The patch does not apply to the current master branch. Sumit, can you send a updated version? Thanks, Tomas From sbose at redhat.com Tue Mar 10 11:10:20 2015 From: sbose at redhat.com (Sumit Bose) Date: Tue, 10 Mar 2015 12:10:20 +0100 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <54FECEA1.9090000@redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> <54F7FC45.5050009@redhat.com> <20150305070012.GL25455@redhat.com> <54FECEA1.9090000@redhat.com> Message-ID: <20150310111020.GE7307@p.redhat.com> On Tue, Mar 10, 2015 at 11:59:45AM +0100, Tomas Babej wrote: > > On 03/05/2015 08:00 AM, Alexander Bokovoy wrote: > >On Wed, 04 Mar 2015, Nathan Kinder wrote: > >> > >> > >>On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: > >>>On Wed, 04 Mar 2015, Sumit Bose wrote: > >>>>Hi, > >>>> > >>>>while running 389ds with valgrind to see if my other patches > >>>>introduced > >>>>a memory leak I found an older one which is fixed by this patch. > >>>> > >>>>bye, > >>>>Sumit > >>> > >>>>From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 > >>>>From: Sumit Bose > >>>>Date: Wed, 4 Mar 2015 17:53:08 +0100 > >>>>Subject: [PATCH] extdom: fix memory leak > >>>> > >>>>--- > >>>>daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + > >>>>1 file changed, 1 insertion(+) > >>>> > >>>>diff --git > >>>>a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >>>>b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >>>>index > >>>>a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f > >>>> > >>>>100644 > >>>>--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >>>>+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >>>>@@ -156,6 +156,7 @@ done: > >>>> LOG("%s", err_msg); > >>>> } > >>>> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); > >>>>+ ber_bvfree(ret_val); > >>>> free_req_data(req); > >>>> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; > >>>>} > >>>I can see in 389-ds code that it actually tries to remove the value in > >>>the end of extended operation handling: > >> > >>This below code snippet is freeing the extended operation request value > >>(SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). > >> > >>If you look at check_and_send_extended_result() in the 389-ds code, > >>you'll see where the extended operation return value is sent, and it > >>doesn't perform a free. It is up to the plug-in to perform the free. 
A > >>good example of this in the 389-ds code is in the passwd_modify_extop() > >>function. > >> > >> > >>Sumit's code looks good to me. ACK. > >Argh. Sorry for confusion of RET vs REQ. Morning, I need coffee! > > > >ACK. > > > >> > >>-NGK > >> > >>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); > >>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); > >>> slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, > >>>&pb->pb_op->o_isroot); > >>> > >>> rc = plugin_call_exop_plugins( pb, extoid ); > >>> > >>> if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { > >>> if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { > >>> lderr = LDAP_PROTOCOL_ERROR; /* no plugin > >>>handled the op */ > >>> errmsg = "unsupported extended operation"; > >>> } else { > >>> errmsg = NULL; > >>> lderr = rc; > >>> } > >>> send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); > >>> } > >>>free_and_return: > >>> if (extoid) > >>> slapi_ch_free((void **)&extoid); > >>> if (extval.bv_val) > >>> slapi_ch_free((void **)&extval.bv_val); > >>> return; > >>> > >>> > > > > The patch does not apply to the current master branch. > > Sumit, can you send a updated version? sure, new version attached. bye, Sumit > > Thanks, > Tomas -------------- next part -------------- From 2a14bb1a6c44a8c6f5730f97ad3239e9fca2392a Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 17:53:08 +0100 Subject: [PATCH] extdom: fix memory leak --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + 1 file changed, 1 insertion(+) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index 1ea0c1320accc66235d6f1a41055de434aeacab7..dc7785877fc321ddaa5b6967d1c1b06cb454bbbf 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -154,6 +154,7 @@ done: LOG("%s", err_msg); } slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); + ber_bvfree(ret_val); return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; } -- 2.1.0 From tbabej at redhat.com Tue Mar 10 11:14:53 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 10 Mar 2015 12:14:53 +0100 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <20150310111020.GE7307@p.redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> <54F7FC45.5050009@redhat.com> <20150305070012.GL25455@redhat.com> <54FECEA1.9090000@redhat.com> <20150310111020.GE7307@p.redhat.com> Message-ID: <54FED22D.5030704@redhat.com> On 03/10/2015 12:10 PM, Sumit Bose wrote: > On Tue, Mar 10, 2015 at 11:59:45AM +0100, Tomas Babej wrote: >> On 03/05/2015 08:00 AM, Alexander Bokovoy wrote: >>> On Wed, 04 Mar 2015, Nathan Kinder wrote: >>>> >>>> On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: >>>>> On Wed, 04 Mar 2015, Sumit Bose wrote: >>>>>> Hi, >>>>>> >>>>>> while running 389ds with valgrind to see if my other patches >>>>>> introduced >>>>>> a memory leak I found an older one which is fixed by this patch. 
>>>>>> >>>>>> bye, >>>>>> Sumit >>>>> >From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 2001 >>>>>> From: Sumit Bose >>>>>> Date: Wed, 4 Mar 2015 17:53:08 +0100 >>>>>> Subject: [PATCH] extdom: fix memory leak >>>>>> >>>>>> --- >>>>>> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + >>>>>> 1 file changed, 1 insertion(+) >>>>>> >>>>>> diff --git >>>>>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>> index >>>>>> a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f >>>>>> >>>>>> 100644 >>>>>> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>> @@ -156,6 +156,7 @@ done: >>>>>> LOG("%s", err_msg); >>>>>> } >>>>>> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >>>>>> + ber_bvfree(ret_val); >>>>>> free_req_data(req); >>>>>> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; >>>>>> } >>>>> I can see in 389-ds code that it actually tries to remove the value in >>>>> the end of extended operation handling: >>>> This below code snippet is freeing the extended operation request value >>>> (SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). >>>> >>>> If you look at check_and_send_extended_result() in the 389-ds code, >>>> you'll see where the extended operation return value is sent, and it >>>> doesn't perform a free. It is up to the plug-in to perform the free. A >>>> good example of this in the 389-ds code is in the passwd_modify_extop() >>>> function. >>>> >>>> >>>> Sumit's code looks good to me. ACK. >>> Argh. Sorry for confusion of RET vs REQ. Morning, I need coffee! >>> >>> ACK. >>> >>>> -NGK >>>> >>>>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); >>>>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); >>>>> slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, >>>>> &pb->pb_op->o_isroot); >>>>> >>>>> rc = plugin_call_exop_plugins( pb, extoid ); >>>>> >>>>> if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { >>>>> if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { >>>>> lderr = LDAP_PROTOCOL_ERROR; /* no plugin >>>>> handled the op */ >>>>> errmsg = "unsupported extended operation"; >>>>> } else { >>>>> errmsg = NULL; >>>>> lderr = rc; >>>>> } >>>>> send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); >>>>> } >>>>> free_and_return: >>>>> if (extoid) >>>>> slapi_ch_free((void **)&extoid); >>>>> if (extval.bv_val) >>>>> slapi_ch_free((void **)&extval.bv_val); >>>>> return; >>>>> >>>>> >> The patch does not apply to the current master branch. >> >> Sumit, can you send a updated version? > sure, new version attached. 
> > bye, > Sumit > >> Thanks, >> Tomas Thanks, Pushed to master: 8dac096ae3a294dc55b32b69b873013fd687e945 From tbabej at redhat.com Tue Mar 10 11:20:16 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 10 Mar 2015 12:20:16 +0100 Subject: [Freeipa-devel] [PATCH] Use curl instead of wget In-Reply-To: <20150122150156.GY4383@redhat.com> References: <1421877892.4015342.217094565.0A1E790D@webmail.messagingengine.com> <20150122052357.GA12694@redhat.com> <1421933839.48030.217355185.52DD5292@webmail.messagingengine.com> <20150122134539.GX4383@redhat.com> <1421937397.65670.217380117.5B381874@webmail.messagingengine.com> <20150122150156.GY4383@redhat.com> Message-ID: <54FED370.1030001@redhat.com> On 01/22/2015 04:01 PM, Alexander Bokovoy wrote: > On Thu, 22 Jan 2015, Colin Walters wrote: >> >> >> On Thu, Jan 22, 2015, at 08:45 AM, Alexander Bokovoy wrote: >> >>> We have abstraction layer to take care of different platforms on a >>> wider >>> scale than just this particular binary. We are gradually moving all >>> code >>> to use platform abstraction to allow other platforms to be supported >>> (like FreeBSD or Illumos) and we in general cannot guarantee things >>> will >>> be there at the same locations. >> >> That doesn't answer the "why not just use $PATH" part. Regardless, >> here's a new patch which adds a BIN_CURL. > We had cases in past when a non-working from FreeIPA perspective utility > was selected from $PATH due to a local misconfiguration. For something > that is packaged as a complex combination of multiple packages, > packaging requirements at least allow to establish the environment we > expect and which was tested. Relying on $PATH doesn't improve this > assumption. > > And some of cases are pretty subtle, like libxmlrpc-c uses cURL library > and if cURL was built without GSSAPI support it will silently fail, > leaving no traces at why this has happened. curl utility also doesn't > check if SPNEGO support (GSSAPI in our case) was compiled in if you > specify 'curl --negotiate' without any option value. > > >> From 47701a454ba442f08cd05a77ff6a2dbba76b787a Mon Sep 17 00:00:00 2001 >> From: Colin Walters >> Date: Wed, 21 Jan 2015 16:59:52 -0500 >> Subject: [PATCH] Use curl instead of wget >> >> Curl has a shared library, and so ends up being used by more components >> of the OS. It should be preferred over wget. >> >> The motivation for this patch is for Project Atomic hosts; we want to >> include ipa-client, but trim down its dependencies. >> >> If wget isn't installed on the host, it doesn't need to be updated for >> security errata. > Code-wise looks OK. We need to test it, I'll look at it next week. > I see two issues with the patch: 1.) BIN_CURL does not respect the ordering of the paths (they are sorted by the values). 2.) I'm not sure the patch should touch advise/legacy_clients.py at all. That part of the code just generates a bash script, which is meant to be run on legacy clients (which have nothing to do with the Atomic effort). If we want to change this, tests in test_integration/test_advise.py need to be amended too. 
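(A minimal sketch of the ordering concern in point 1, assuming hypothetical candidate locations -- this is not the actual ipaplatform code, only an illustration of picking the first existing binary from an ordered list of paths:)

    import os

    # Assumed candidate paths, ordered by preference; not taken from ipaplatform.
    CURL_CANDIDATES = ["/usr/bin/curl", "/usr/local/bin/curl"]

    def find_curl(candidates=CURL_CANDIDATES):
        # Order is significant: return the first candidate that exists
        # and is executable, instead of whatever a dict happens to yield.
        for path in candidates:
            if os.path.isfile(path) and os.access(path, os.X_OK):
                return path
        raise RuntimeError("no curl binary found in the expected locations")
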
From mkosek at redhat.com Tue Mar 10 13:54:40 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 14:54:40 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425931503.23911.10.camel@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> <1425931503.23911.10.camel@redhat.com> Message-ID: <54FEF7A0.7@redhat.com> On 03/09/2015 09:05 PM, Nathaniel McCallum wrote: > On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: >> On Mon, 09 Mar 2015, Simo Sorce wrote: ... >>> For some tasks 'local' is the only thing that makes sense (your >>> morning alarm clock), for other things 'UTC' is the only thing >>> that make sense (coordinated changes across multiple distributed >>> data centers). >>> >>> Implementing just one or the other is not useful. >> Correct. At this point I think we are more or less in agreement that >> we need to store time rules in UTC plus timezone correction >> information specific to the execution context (HBAC rule, host, >> etc). Handling 'UTC' rule is equivalent to selecting specific >> timezone (GMT+0, for example) so this is a case of more general (UTC >> time, timezone definiton) pairs. >> >> This timezone definition may have aliases forcing HBAC intepretation >> to use local machine defaults if needed but the general scheme stays >> the same. > > Agreed. Alexander, can you please elaborate a bit more on the idea of storing the time rules in UTC + timezone correction? I thought SSSD would take take the time zone information from the local system. The purpose is that admin can create rule like "Joe can interactively log in from 8:00 to 17:00 on all machines across the globe". You cannot store the time zone with such rule as the rule spans across several many different time zones. Right? From mkosek at redhat.com Tue Mar 10 14:00:42 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 15:00:42 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150309182216.GD25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> Message-ID: <54FEF90A.407@redhat.com> On 03/09/2015 07:22 PM, Alexander Bokovoy wrote: > On Mon, 09 Mar 2015, Jakub Hrozek wrote: >> On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >>> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >>> > On Mon, 09 Mar 2015, Martin Kosek wrote: >>> ... >>> > One of bigger issues we had was lack of versatile ical format parser to >>> > handle calendar-like specification of events -- we need to allow >>> > importing these ones instead of inventing our own. >>> >>> Good point. I wonder how rigorous we want to be. iCal is a pretty powerful >>> calendaring format. If we want to implement full support for it, it would be >>> lot of code both on server side for setting it and on client side for >>> evaluating it (CCing Jakub for reference). 
>>> >>> AD itself has much simpler UI for setting the access time, a table like that: >>> http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >>> >>> >>> IIRC, they only store the bits of "can login/cannot login" for the time slots. >>> That's another alternative. >> >> I don't think that's what Alexander meant, I don't think the client >> library should come anywhere close to the iCal format. We might want to >> provide a script to convert an external format, but that's about it. >> >> I thought we could simply reuse parts of the previous grammar, maybe >> simplified. But I agree with Nathaniel (as I stated also in the private >> thread) that we should use UTC where possible. > Yes and no. Let me go in details a bit. > > We need iCal support to allow importing events created by external > tools. We don't need to use it as internal format. Can you please share a bit what events you have in mind? We are talking about HBAC access rules, so I am not sure what you want to import. Is this for use cases like - I have a recurring Linux learning lab, I want to all participants to be able to log in to this system during the lab run? This may results in pretty complicated time related rules in HBAC, where you may need to deal with reocurrence, exceptions, etc. So far more complex than the AD use cases (http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg). Thanks, Martin From simo at redhat.com Tue Mar 10 14:14:37 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 10:14:37 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FEF7A0.7@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> <1425931503.23911.10.camel@redhat.com> <54FEF7A0.7@redhat.com> Message-ID: <1425996877.4735.74.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 14:54 +0100, Martin Kosek wrote: > On 03/09/2015 09:05 PM, Nathaniel McCallum wrote: > > On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: > >> On Mon, 09 Mar 2015, Simo Sorce wrote: > ... > >>> For some tasks 'local' is the only thing that makes sense (your > >>> morning alarm clock), for other things 'UTC' is the only thing > >>> that make sense (coordinated changes across multiple distributed > >>> data centers). > >>> > >>> Implementing just one or the other is not useful. > >> Correct. At this point I think we are more or less in agreement that > >> we need to store time rules in UTC plus timezone correction > >> information specific to the execution context (HBAC rule, host, > >> etc). Handling 'UTC' rule is equivalent to selecting specific > >> timezone (GMT+0, for example) so this is a case of more general (UTC > >> time, timezone definiton) pairs. > >> > >> This timezone definition may have aliases forcing HBAC intepretation > >> to use local machine defaults if needed but the general scheme stays > >> the same. > > > > Agreed. > > Alexander, can you please elaborate a bit more on the idea of storing the time > rules in UTC + timezone correction? I thought SSSD would take take the time > zone information from the local system. 
> > The purpose is that admin can create rule like "Joe can interactively log in > from 8:00 to 17:00 on all machines across the globe". You cannot store the time > zone with such rule as the rule spans across several many different time zones. > Right? Yes this is a rule expressed in "Local Time" which is a time-zone-less rule. Simo. -- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Tue Mar 10 14:17:24 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 10:17:24 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FEF90A.407@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> Message-ID: <1425997044.4735.77.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 15:00 +0100, Martin Kosek wrote: > On 03/09/2015 07:22 PM, Alexander Bokovoy wrote: > > On Mon, 09 Mar 2015, Jakub Hrozek wrote: > >> On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: > >>> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: > >>> > On Mon, 09 Mar 2015, Martin Kosek wrote: > >>> ... > >>> > One of bigger issues we had was lack of versatile ical format parser to > >>> > handle calendar-like specification of events -- we need to allow > >>> > importing these ones instead of inventing our own. > >>> > >>> Good point. I wonder how rigorous we want to be. iCal is a pretty powerful > >>> calendaring format. If we want to implement full support for it, it would be > >>> lot of code both on server side for setting it and on client side for > >>> evaluating it (CCing Jakub for reference). > >>> > >>> AD itself has much simpler UI for setting the access time, a table like that: > >>> http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg > >>> > >>> > >>> IIRC, they only store the bits of "can login/cannot login" for the time slots. > >>> That's another alternative. > >> > >> I don't think that's what Alexander meant, I don't think the client > >> library should come anywhere close to the iCal format. We might want to > >> provide a script to convert an external format, but that's about it. > >> > >> I thought we could simply reuse parts of the previous grammar, maybe > >> simplified. But I agree with Nathaniel (as I stated also in the private > >> thread) that we should use UTC where possible. > > Yes and no. Let me go in details a bit. > > > > We need iCal support to allow importing events created by external > > tools. We don't need to use it as internal format. > > Can you please share a bit what events you have in mind? We are talking about > HBAC access rules, so I am not sure what you want to import. > > Is this for use cases like - I have a recurring Linux learning lab, I want to > all participants to be able to log in to this system during the lab run? You still need this, if you really want time based rules, then pretty soon you'll get requests to add special exceptions, like holidays, and who knows what else. Last time around we came to the conclusion that this was very complicated and dropped it for that reason. A non-complicated tool is too simple to be useful, a complete one was deemed too complicated to implement. Damned if you, damned if you don't. Simo. 
-- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Tue Mar 10 14:27:57 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Mar 2015 10:27:57 -0400 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FEC13D.1000400@redhat.com> References: <54FEC13D.1000400@redhat.com> Message-ID: <54FEFF6D.60601@redhat.com> Petr Vobornik wrote: > Hi, > > I would like to ask what is a purpose of a default user group - by > default ipausers? Default group is also a required field in ipa config. To be able to apply some (undefined) group policy to all users. I'm not aware that it has ever been used for this. > In ipa migrate-ds we also set the group to all users who are not member > of anything. Why is it important for a user to be a member of a group? > > Thank you Every POSIX user needs a default GID. We don't create user-private groups for migrated users. rob From pspacek at redhat.com Tue Mar 10 14:32:31 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 15:32:31 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) Message-ID: <54FF007F.7060701@redhat.com> Hello, I would like to discuss Generic support for unknown DNS RR types (RFC 3597 [0]). Here is the proposal: LDAP schema =========== - 1 new attribute: ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) The attribute should be added to existing idnsRecord object class as MAY. This new attribute should contain data encoded according to ?RFC 3597 section 5 [5]: The RDATA section of an RR of unknown type is represented as a sequence of white space separated words as follows: The special token \# (a backslash immediately followed by a hash sign), which identifies the RDATA as having the generic encoding defined herein rather than a traditional type-specific encoding. An unsigned decimal integer specifying the RDATA length in octets. Zero or more words of hexadecimal data encoding the actual RDATA field, each containing an even number of hexadecimal digits. If the RDATA is of zero length, the text representation contains only the \# token and the single zero representing the length. Examples from RFC: a.example. CLASS32 TYPE731 \# 6 abcd ( ef 01 23 45 ) b.example. HS TYPE62347 \# 0 e.example. IN A \# 4 0A000001 e.example. CLASS1 TYPE1 10.0.0.2 Open questions about LDAP format ================================ Should we include "\#" constant? We know that the attribute contains record in RFC 3597 syntax so it is not strictly necessary. I think it would be better to follow RFC 3597 format. It allows blind copy&pasting from other tools, including direct calls to python-dns. It also eases writing conversion tools between DNS and LDAP format because they do not need to change record values. Another question is if we should explicitly include length of data represented in hexadecimal notation as a decimal number. I'm very strongly inclined to let it there because it is very good sanity check and again, it allows us to re-use existing tools including parsers. I will ask Uninett.no for standardization after we sort this out (they own the OID arc we use for DNS records). Attribute usage =============== Every DNS RR type has assigned a number [1] which is used on wire. RR types which are unknown to the server cannot be named by their mnemonic/type name because server would not be able to do name->number conversion and to generate DNS wire format. 
As a result, we have to encode the RR type number somehow. Let's use attribute sub-types. E.g. a record with type 65280 and hex value 0A000001 will be represented as: GenericRecord;TYPE65280: \# 4 0A000001 CLI === $ ipa dnsrecord-add zone.example owner \ --generic-type=65280 --generic-data='\# 4 0A000001' $ ipa dnsrecord-show zone.example owner Record name: owner TYPE65280 Record: \# 4 0A000001 ACK? :-) Related tickets =============== https://fedorahosted.org/freeipa/ticket/4939 https://fedorahosted.org/bind-dyndb-ldap/ticket/157 [0] http://tools.ietf.org/html/rfc3597 [1] http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4 [5] http://tools.ietf.org/html/rfc3597#section-5 -- Petr^2 Spacek From abokovoy at redhat.com Tue Mar 10 14:34:53 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 16:34:53 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1425996877.4735.74.camel@willson.usersys.redhat.com> References: <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> <1425931503.23911.10.camel@redhat.com> <54FEF7A0.7@redhat.com> <1425996877.4735.74.camel@willson.usersys.redhat.com> Message-ID: <20150310143453.GO25455@redhat.com> On Tue, 10 Mar 2015, Simo Sorce wrote: >On Tue, 2015-03-10 at 14:54 +0100, Martin Kosek wrote: >> On 03/09/2015 09:05 PM, Nathaniel McCallum wrote: >> > On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: >> >> On Mon, 09 Mar 2015, Simo Sorce wrote: >> ... >> >>> For some tasks 'local' is the only thing that makes sense (your >> >>> morning alarm clock), for other things 'UTC' is the only thing >> >>> that make sense (coordinated changes across multiple distributed >> >>> data centers). >> >>> >> >>> Implementing just one or the other is not useful. >> >> Correct. At this point I think we are more or less in agreement that >> >> we need to store time rules in UTC plus timezone correction >> >> information specific to the execution context (HBAC rule, host, >> >> etc). Handling 'UTC' rule is equivalent to selecting specific >> >> timezone (GMT+0, for example) so this is a case of more general (UTC >> >> time, timezone definiton) pairs. >> >> >> >> This timezone definition may have aliases forcing HBAC intepretation >> >> to use local machine defaults if needed but the general scheme stays >> >> the same. >> > >> > Agreed. >> >> Alexander, can you please elaborate a bit more on the idea of storing the time >> rules in UTC + timezone correction? I thought SSSD would take take the time >> zone information from the local system. >> >> The purpose is that admin can create rule like "Joe can interactively log in >> from 8:00 to 17:00 on all machines across the globe". You cannot store the time >> zone with such rule as the rule spans across several many different time zones. >> Right? > >Yes this is a rule expressed in "Local Time" which is a time-zone-less >rule. Yep, and timezone info for this rule is "Local Time" which is a timezone that doesn't exist in Olson database and would be interpreted by SSSD as "default server timezone". 
-- / Alexander Bokovoy From mkosek at redhat.com Tue Mar 10 14:38:41 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 15:38:41 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310143453.GO25455@redhat.com> References: <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> <1425931503.23911.10.camel@redhat.com> <54FEF7A0.7@redhat.com> <1425996877.4735.74.camel@willson.usersys.redhat.com> <20150310143453.GO25455@redhat.com> Message-ID: <54FF01F1.3060701@redhat.com> On 03/10/2015 03:34 PM, Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, Simo Sorce wrote: >> On Tue, 2015-03-10 at 14:54 +0100, Martin Kosek wrote: >>> On 03/09/2015 09:05 PM, Nathaniel McCallum wrote: >>> > On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: >>> >> On Mon, 09 Mar 2015, Simo Sorce wrote: >>> ... >>> >>> For some tasks 'local' is the only thing that makes sense (your >>> >>> morning alarm clock), for other things 'UTC' is the only thing >>> >>> that make sense (coordinated changes across multiple distributed >>> >>> data centers). >>> >>> >>> >>> Implementing just one or the other is not useful. >>> >> Correct. At this point I think we are more or less in agreement that >>> >> we need to store time rules in UTC plus timezone correction >>> >> information specific to the execution context (HBAC rule, host, >>> >> etc). Handling 'UTC' rule is equivalent to selecting specific >>> >> timezone (GMT+0, for example) so this is a case of more general (UTC >>> >> time, timezone definiton) pairs. >>> >> >>> >> This timezone definition may have aliases forcing HBAC intepretation >>> >> to use local machine defaults if needed but the general scheme stays >>> >> the same. >>> > >>> > Agreed. >>> >>> Alexander, can you please elaborate a bit more on the idea of storing the time >>> rules in UTC + timezone correction? I thought SSSD would take take the time >>> zone information from the local system. >>> >>> The purpose is that admin can create rule like "Joe can interactively log in >>> from 8:00 to 17:00 on all machines across the globe". You cannot store the time >>> zone with such rule as the rule spans across several many different time zones. >>> Right? >> >> Yes this is a rule expressed in "Local Time" which is a time-zone-less >> rule. > Yep, and timezone info for this rule is "Local Time" which is a timezone > that doesn't exist in Olson database and would be interpreted by SSSD > as "default server timezone". I still do not understand. With Local Time HBAC rule, I thought that the time zone information/setting would only exist on the local machine. Maybe if you provide example of exact setting that would be stored in LDAP and what should be the implications on the clients in different time zones, I would understand better. 
Thanks, Martin From abokovoy at redhat.com Tue Mar 10 14:40:41 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 16:40:41 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FEF90A.407@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> Message-ID: <20150310144041.GP25455@redhat.com> On Tue, 10 Mar 2015, Martin Kosek wrote: >On 03/09/2015 07:22 PM, Alexander Bokovoy wrote: >> On Mon, 09 Mar 2015, Jakub Hrozek wrote: >>> On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >>>> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >>>> > On Mon, 09 Mar 2015, Martin Kosek wrote: >>>> ... >>>> > One of bigger issues we had was lack of versatile ical format parser to >>>> > handle calendar-like specification of events -- we need to allow >>>> > importing these ones instead of inventing our own. >>>> >>>> Good point. I wonder how rigorous we want to be. iCal is a pretty powerful >>>> calendaring format. If we want to implement full support for it, it would be >>>> lot of code both on server side for setting it and on client side for >>>> evaluating it (CCing Jakub for reference). >>>> >>>> AD itself has much simpler UI for setting the access time, a table like that: >>>> http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >>>> >>>> >>>> IIRC, they only store the bits of "can login/cannot login" for the time slots. >>>> That's another alternative. >>> >>> I don't think that's what Alexander meant, I don't think the client >>> library should come anywhere close to the iCal format. We might want to >>> provide a script to convert an external format, but that's about it. >>> >>> I thought we could simply reuse parts of the previous grammar, maybe >>> simplified. But I agree with Nathaniel (as I stated also in the private >>> thread) that we should use UTC where possible. >> Yes and no. Let me go in details a bit. >> >> We need iCal support to allow importing events created by external >> tools. We don't need to use it as internal format. > >Can you please share a bit what events you have in mind? We are talking about >HBAC access rules, so I am not sure what you want to import. > >Is this for use cases like - I have a recurring Linux learning lab, I want to >all participants to be able to log in to this system during the lab run? > >This may results in pretty complicated time related rules in HBAC, where you >may need to deal with reocurrence, exceptions, etc. So far more complex than >the AD use cases >(http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg). This is where importing iCal is helpful because it allows you to outsource the task of creating such event to something else. Parsing event information would produce a rule definition we would store and SSSD would apply as HBAC rule. However, we don't need ourselves to provide a complex UI to define such rules. Instead, we can do a simple UI to create rules plus a UI to import rules defined in iCal by some other software. The rest is visualizing HBAC time/date rules which is separate from dealing with complexity of creating or importing rules. 
Additionally, for iCal-based imports we can utilize participants information from the iCal to automatically set up members of the rule (based on mail attribute). -- / Alexander Bokovoy From abokovoy at redhat.com Tue Mar 10 14:45:39 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 16:45:39 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF01F1.3060701@redhat.com> References: <20150309182216.GD25455@redhat.com> <1425926405.14916.7.camel@redhat.com> <20150309185520.GG25455@redhat.com> <1425930510.4735.55.camel@willson.usersys.redhat.com> <20150309200240.GH25455@redhat.com> <1425931503.23911.10.camel@redhat.com> <54FEF7A0.7@redhat.com> <1425996877.4735.74.camel@willson.usersys.redhat.com> <20150310143453.GO25455@redhat.com> <54FF01F1.3060701@redhat.com> Message-ID: <20150310144539.GQ25455@redhat.com> On Tue, 10 Mar 2015, Martin Kosek wrote: >On 03/10/2015 03:34 PM, Alexander Bokovoy wrote: >> On Tue, 10 Mar 2015, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 14:54 +0100, Martin Kosek wrote: >>>> On 03/09/2015 09:05 PM, Nathaniel McCallum wrote: >>>> > On Mon, 2015-03-09 at 22:02 +0200, Alexander Bokovoy wrote: >>>> >> On Mon, 09 Mar 2015, Simo Sorce wrote: >>>> ... >>>> >>> For some tasks 'local' is the only thing that makes sense (your >>>> >>> morning alarm clock), for other things 'UTC' is the only thing >>>> >>> that make sense (coordinated changes across multiple distributed >>>> >>> data centers). >>>> >>> >>>> >>> Implementing just one or the other is not useful. >>>> >> Correct. At this point I think we are more or less in agreement that >>>> >> we need to store time rules in UTC plus timezone correction >>>> >> information specific to the execution context (HBAC rule, host, >>>> >> etc). Handling 'UTC' rule is equivalent to selecting specific >>>> >> timezone (GMT+0, for example) so this is a case of more general (UTC >>>> >> time, timezone definiton) pairs. >>>> >> >>>> >> This timezone definition may have aliases forcing HBAC intepretation >>>> >> to use local machine defaults if needed but the general scheme stays >>>> >> the same. >>>> > >>>> > Agreed. >>>> >>>> Alexander, can you please elaborate a bit more on the idea of storing the time >>>> rules in UTC + timezone correction? I thought SSSD would take take the time >>>> zone information from the local system. >>>> >>>> The purpose is that admin can create rule like "Joe can interactively log in >>>> from 8:00 to 17:00 on all machines across the globe". You cannot store the time >>>> zone with such rule as the rule spans across several many different time zones. >>>> Right? >>> >>> Yes this is a rule expressed in "Local Time" which is a time-zone-less >>> rule. >> Yep, and timezone info for this rule is "Local Time" which is a timezone >> that doesn't exist in Olson database and would be interpreted by SSSD >> as "default server timezone". > >I still do not understand. > >With Local Time HBAC rule, I thought that the time zone information/setting >would only exist on the local machine. Maybe if you provide example of exact >setting that would be stored in LDAP and what should be the implications on the >clients in different time zones, I would understand better. Instead of a single time point in UTC we would have a pair (time, tzinfo) where tzinfo is just a timezone info from Olson database (tzdata). 'time' element would be expressed in UTC. We would use Olson database to find out available timezones. 
If tzinfo is 'Local Time', we simply record it this way in LDAP record and then SSSD on the client would parse it in such way that instead of forcing TZ to the tzinfo string it would use default /etc/localtime on the host. -- / Alexander Bokovoy From mkosek at redhat.com Tue Mar 10 14:47:10 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 15:47:10 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310144041.GP25455@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> Message-ID: <54FF03EE.6020005@redhat.com> On 03/10/2015 03:40 PM, Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, Martin Kosek wrote: >> On 03/09/2015 07:22 PM, Alexander Bokovoy wrote: >>> On Mon, 09 Mar 2015, Jakub Hrozek wrote: >>>> On Mon, Mar 09, 2015 at 04:08:46PM +0100, Martin Kosek wrote: >>>>> On 03/09/2015 03:58 PM, Alexander Bokovoy wrote: >>>>> > On Mon, 09 Mar 2015, Martin Kosek wrote: >>>>> ... >>>>> > One of bigger issues we had was lack of versatile ical format parser to >>>>> > handle calendar-like specification of events -- we need to allow >>>>> > importing these ones instead of inventing our own. >>>>> >>>>> Good point. I wonder how rigorous we want to be. iCal is a pretty powerful >>>>> calendaring format. If we want to implement full support for it, it would be >>>>> lot of code both on server side for setting it and on client side for >>>>> evaluating it (CCing Jakub for reference). >>>>> >>>>> AD itself has much simpler UI for setting the access time, a table like that: >>>>> http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg >>>>> >>>>> >>>>> >>>>> IIRC, they only store the bits of "can login/cannot login" for the time >>>>> slots. >>>>> That's another alternative. >>>> >>>> I don't think that's what Alexander meant, I don't think the client >>>> library should come anywhere close to the iCal format. We might want to >>>> provide a script to convert an external format, but that's about it. >>>> >>>> I thought we could simply reuse parts of the previous grammar, maybe >>>> simplified. But I agree with Nathaniel (as I stated also in the private >>>> thread) that we should use UTC where possible. >>> Yes and no. Let me go in details a bit. >>> >>> We need iCal support to allow importing events created by external >>> tools. We don't need to use it as internal format. >> >> Can you please share a bit what events you have in mind? We are talking about >> HBAC access rules, so I am not sure what you want to import. >> >> Is this for use cases like - I have a recurring Linux learning lab, I want to >> all participants to be able to log in to this system during the lab run? >> >> This may results in pretty complicated time related rules in HBAC, where you >> may need to deal with reocurrence, exceptions, etc. So far more complex than >> the AD use cases >> (http://www.intelliadmin.com/images/Logon%20Hours%20Windows%20Active%20Directory.jpg). >> > This is where importing iCal is helpful because it allows you to > outsource the task of creating such event to something else. > > Parsing event information would produce a rule definition we would store > and SSSD would apply as HBAC rule. 
However, we don't need ourselves to > provide a complex UI to define such rules. Instead, we can do a simple > UI to create rules plus a UI to import rules defined in iCal by some > other software. The rest is visualizing HBAC time/date rules which is > separate from dealing with complexity of creating or importing rules. > > Additionally, for iCal-based imports we can utilize participants > information from the iCal to automatically set up members of the rule > (based on mail attribute). > Ah, makes sense to me. With all the possibilities that iCal format offers, we would more or less end up storing iCal in HBAC rules (or our own format of iCal). I am just concerned it would make a bit complex processing on SSSD side, especially in the security sensitive piece for authorization rules. We may need to use libraries for processing iCal rules, like libical (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... From mkosek at redhat.com Tue Mar 10 14:52:44 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 15:52:44 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FEFF6D.60601@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> Message-ID: <54FF053C.30101@redhat.com> On 03/10/2015 03:27 PM, Rob Crittenden wrote: > Petr Vobornik wrote: >> Hi, >> >> I would like to ask what is a purpose of a default user group - by >> default ipausers? Default group is also a required field in ipa config. > > To be able to apply some (undefined) group policy to all users. I'm not > aware that it has ever been used for this. I would also interested in the use cases, especially given all the pain we have with ipausers and large user bases. Especially that for current policies (SUDO, HBAC, SELinux user policy), we always have other means to specify "all users". > >> In ipa migrate-ds we also set the group to all users who are not member >> of anything. Why is it important for a user to be a member of a group? >> >> Thank you > > Every POSIX user needs a default GID. We don't create user-private > groups for migrated users. > > rob > From simo at redhat.com Tue Mar 10 14:53:22 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 10:53:22 -0400 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF007F.7060701@redhat.com> References: <54FF007F.7060701@redhat.com> Message-ID: <1425999202.4735.90.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: > Hello, > > I would like to discuss Generic support for unknown DNS RR types (RFC 3597 > [0]). Here is the proposal: > > LDAP schema > =========== > - 1 new attribute: > ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY > caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) > > The attribute should be added to existing idnsRecord object class as MAY. > > This new attribute should contain data encoded according to ?RFC 3597 section > 5 [5]: > > The RDATA section of an RR of unknown type is represented as a > sequence of white space separated words as follows: > > The special token \# (a backslash immediately followed by a hash > sign), which identifies the RDATA as having the generic encoding > defined herein rather than a traditional type-specific encoding. > > An unsigned decimal integer specifying the RDATA length in octets. > > Zero or more words of hexadecimal data encoding the actual RDATA > field, each containing an even number of hexadecimal digits. 
> > If the RDATA is of zero length, the text representation contains only > the \# token and the single zero representing the length. > > Examples from RFC: > a.example. CLASS32 TYPE731 \# 6 abcd ( > ef 01 23 45 ) > b.example. HS TYPE62347 \# 0 > e.example. IN A \# 4 0A000001 > e.example. CLASS1 TYPE1 10.0.0.2 > > > Open questions about LDAP format > ================================ > Should we include "\#" constant? We know that the attribute contains record in > RFC 3597 syntax so it is not strictly necessary. > > I think it would be better to follow RFC 3597 format. It allows blind > copy&pasting from other tools, including direct calls to python-dns. > > It also eases writing conversion tools between DNS and LDAP format because > they do not need to change record values. > > > Another question is if we should explicitly include length of data represented > in hexadecimal notation as a decimal number. I'm very strongly inclined to let > it there because it is very good sanity check and again, it allows us to > re-use existing tools including parsers. > > I will ask Uninett.no for standardization after we sort this out (they own the > OID arc we use for DNS records). > > > Attribute usage > =============== > Every DNS RR type has assigned a number [1] which is used on wire. RR types > which are unknown to the server cannot be named by their mnemonic/type name > because server would not be able to do name->number conversion and to generate > DNS wire format. > > As a result, we have to encode the RR type number somehow. Let's use attribute > sub-types. > > E.g. a record with type 65280 and hex value 0A000001 will be represented as: > GenericRecord;TYPE65280: \# 4 0A000001 > > > CLI > === > $ ipa dnsrecord-add zone.example owner \ > --generic-type=65280 --generic-data='\# 4 0A000001' > > $ ipa dnsrecord-show zone.example owner > Record name: owner > TYPE65280 Record: \# 4 0A000001 > > > ACK? :-) Almost. We should refrain from using subtypes when not necessary, and in this case it is not necessary. Use: GenericRecord: 65280 \# 4 0A000001 Done! Simo. > > Related tickets > =============== > https://fedorahosted.org/freeipa/ticket/4939 > https://fedorahosted.org/bind-dyndb-ldap/ticket/157 > > [0] http://tools.ietf.org/html/rfc3597 > [1] > http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4 > [5] http://tools.ietf.org/html/rfc3597#section-5 > > -- > Petr^2 Spacek > -- Simo Sorce * Red Hat, Inc * New York From jhrozek at redhat.com Tue Mar 10 15:01:30 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Tue, 10 Mar 2015 16:01:30 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FF053C.30101@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> Message-ID: <20150310150130.GQ22044@hendrix.arn.redhat.com> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: > On 03/10/2015 03:27 PM, Rob Crittenden wrote: > > Petr Vobornik wrote: > >> Hi, > >> > >> I would like to ask what is a purpose of a default user group - by > >> default ipausers? Default group is also a required field in ipa config. > > > > To be able to apply some (undefined) group policy to all users. I'm not > > aware that it has ever been used for this. > > I would also interested in the use cases, especially given all the pain we have > with ipausers and large user bases. Especially that for current policies (SUDO, > HBAC, SELinux user policy), we always have other means to specify "all users". 
yes, but those means usually specify both AD and IPA users, right? I always thought "ipausers" is a handy shortcut for selecting IPA users only and not AD users. From jhrozek at redhat.com Tue Mar 10 15:06:01 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Tue, 10 Mar 2015 16:06:01 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF03EE.6020005@redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> Message-ID: <20150310150601.GR22044@hendrix.arn.redhat.com> On Tue, Mar 10, 2015 at 03:47:10PM +0100, Martin Kosek wrote: > > This is where importing iCal is helpful because it allows you to > > outsource the task of creating such event to something else. > > > > Parsing event information would produce a rule definition we would store > > and SSSD would apply as HBAC rule. However, we don't need ourselves to > > provide a complex UI to define such rules. Instead, we can do a simple > > UI to create rules plus a UI to import rules defined in iCal by some > > other software. The rest is visualizing HBAC time/date rules which is > > separate from dealing with complexity of creating or importing rules. > > > > Additionally, for iCal-based imports we can utilize participants > > information from the iCal to automatically set up members of the rule > > (based on mail attribute). > > > > Ah, makes sense to me. > > With all the possibilities that iCal format offers, we would more or less end > up storing iCal in HBAC rules (or our own format of iCal). I am just concerned > it would make a bit complex processing on SSSD side, especially in the security > sensitive piece for authorization rules. > > We may need to use libraries for processing iCal rules, like libical > (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... Is that what Alexander said, though? In his reply, I see: "Parsing event information would produce a rule definition we would store and SSSD would apply as HBAC rule". I don't think iCal dependency is something we want in SSSD, the rules should be converted from iCal to SSSD format in a layer atop libipa_hbac.. From pspacek at redhat.com Tue Mar 10 15:19:38 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 16:19:38 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <1425999202.4735.90.camel@willson.usersys.redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> Message-ID: <54FF0B8A.1090308@redhat.com> On 10.3.2015 15:53, Simo Sorce wrote: > On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >> Hello, >> >> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >> [0]). Here is the proposal: >> >> LDAP schema >> =========== >> - 1 new attribute: >> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >> >> The attribute should be added to existing idnsRecord object class as MAY. 
>> >> This new attribute should contain data encoded according to ?RFC 3597 section >> 5 [5]: >> >> The RDATA section of an RR of unknown type is represented as a >> sequence of white space separated words as follows: >> >> The special token \# (a backslash immediately followed by a hash >> sign), which identifies the RDATA as having the generic encoding >> defined herein rather than a traditional type-specific encoding. >> >> An unsigned decimal integer specifying the RDATA length in octets. >> >> Zero or more words of hexadecimal data encoding the actual RDATA >> field, each containing an even number of hexadecimal digits. >> >> If the RDATA is of zero length, the text representation contains only >> the \# token and the single zero representing the length. >> >> Examples from RFC: >> a.example. CLASS32 TYPE731 \# 6 abcd ( >> ef 01 23 45 ) >> b.example. HS TYPE62347 \# 0 >> e.example. IN A \# 4 0A000001 >> e.example. CLASS1 TYPE1 10.0.0.2 >> >> >> Open questions about LDAP format >> ================================ >> Should we include "\#" constant? We know that the attribute contains record in >> RFC 3597 syntax so it is not strictly necessary. >> >> I think it would be better to follow RFC 3597 format. It allows blind >> copy&pasting from other tools, including direct calls to python-dns. >> >> It also eases writing conversion tools between DNS and LDAP format because >> they do not need to change record values. >> >> >> Another question is if we should explicitly include length of data represented >> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >> it there because it is very good sanity check and again, it allows us to >> re-use existing tools including parsers. >> >> I will ask Uninett.no for standardization after we sort this out (they own the >> OID arc we use for DNS records). >> >> >> Attribute usage >> =============== >> Every DNS RR type has assigned a number [1] which is used on wire. RR types >> which are unknown to the server cannot be named by their mnemonic/type name >> because server would not be able to do name->number conversion and to generate >> DNS wire format. >> >> As a result, we have to encode the RR type number somehow. Let's use attribute >> sub-types. >> >> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >> GenericRecord;TYPE65280: \# 4 0A000001 >> >> >> CLI >> === >> $ ipa dnsrecord-add zone.example owner \ >> --generic-type=65280 --generic-data='\# 4 0A000001' >> >> $ ipa dnsrecord-show zone.example owner >> Record name: owner >> TYPE65280 Record: \# 4 0A000001 >> >> >> ACK? :-) > > Almost. > We should refrain from using subtypes when not necessary, and in this > case it is not necessary. > > Use: > GenericRecord: 65280 \# 4 0A000001 I was considering that too but I can see two main drawbacks: 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding generic write access to GenericRecord == ability to add TLSA records too, which you may not want. IMHO it is perfectly reasonable to limit write access to certain types (e.g. to one from private range). 2) We would need a separate substring index for emulating filters like (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index which will be handy one day when we decide to handle upgrades like GenericRecord;TYPE256->UriRecord. Another (less important) annoyance is that conversion tools would have to mangle record data instead of just converting attribute name->record type. 
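For illustration only, a minimal sketch (Python, standard library only; the function name is made up) of the parsing and sanity checking such a conversion tool would have to do on the RFC 3597 text form, with the explicit length used as the consistency check mentioned above:

import binascii

def parse_generic_rdata(text):
    # RFC 3597 text form: the literal \# token, a decimal length in octets,
    # then the RDATA itself as whitespace-separated hexadecimal words.
    words = text.split()
    if not words or words[0] != r'\#':
        raise ValueError('generic RDATA must start with the \\# token')
    if len(words) < 2:
        raise ValueError('missing RDATA length')
    rdlength = int(words[1])
    hexdata = ''.join(words[2:])
    if len(hexdata) % 2:
        raise ValueError('odd number of hexadecimal digits')
    rdata = binascii.unhexlify(hexdata)
    if len(rdata) != rdlength:
        raise ValueError('declared length %d does not match %d octets of data'
                         % (rdlength, len(rdata)))
    return rdata

# examples from the RFC text quoted above
assert parse_generic_rdata(r'\# 4 0A000001') == b'\x0a\x00\x00\x01'
assert parse_generic_rdata(r'\# 0') == b''

python-dns understands the same syntax (as noted above), so tools can also delegate to it; the point is only that the declared length makes corrupted values cheap to reject.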
I can be convinced that subtypes are not necessary but I do not see clear advantage of avoiding them. What is the problem with subtypes? Petr^2 Spacek > > Done! > > Simo. > >> >> Related tickets >> =============== >> https://fedorahosted.org/freeipa/ticket/4939 >> https://fedorahosted.org/bind-dyndb-ldap/ticket/157 >> >> [0] http://tools.ietf.org/html/rfc3597 >> [1] >> http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4 >> [5] http://tools.ietf.org/html/rfc3597#section-5 >> >> -- >> Petr^2 Spacek -- Petr^2 Spacek From pvoborni at redhat.com Tue Mar 10 15:22:52 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 10 Mar 2015 16:22:52 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <1425999202.4735.90.camel@willson.usersys.redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> Message-ID: <54FF0C4C.8050907@redhat.com> On 03/10/2015 03:53 PM, Simo Sorce wrote: > On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >> Hello, >> >> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >> [0]). Here is the proposal: >> >> LDAP schema >> =========== >> - 1 new attribute: >> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >> >> The attribute should be added to existing idnsRecord object class as MAY. >> >> This new attribute should contain data encoded according to ?RFC 3597 section >> 5 [5]: >> >> The RDATA section of an RR of unknown type is represented as a >> sequence of white space separated words as follows: >> >> The special token \# (a backslash immediately followed by a hash >> sign), which identifies the RDATA as having the generic encoding >> defined herein rather than a traditional type-specific encoding. >> >> An unsigned decimal integer specifying the RDATA length in octets. >> >> Zero or more words of hexadecimal data encoding the actual RDATA >> field, each containing an even number of hexadecimal digits. >> >> If the RDATA is of zero length, the text representation contains only >> the \# token and the single zero representing the length. >> >> Examples from RFC: >> a.example. CLASS32 TYPE731 \# 6 abcd ( >> ef 01 23 45 ) >> b.example. HS TYPE62347 \# 0 >> e.example. IN A \# 4 0A000001 >> e.example. CLASS1 TYPE1 10.0.0.2 >> >> >> Open questions about LDAP format >> ================================ >> Should we include "\#" constant? We know that the attribute contains record in >> RFC 3597 syntax so it is not strictly necessary. >> >> I think it would be better to follow RFC 3597 format. It allows blind >> copy&pasting from other tools, including direct calls to python-dns. >> >> It also eases writing conversion tools between DNS and LDAP format because >> they do not need to change record values. >> >> >> Another question is if we should explicitly include length of data represented >> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >> it there because it is very good sanity check and again, it allows us to >> re-use existing tools including parsers. >> >> I will ask Uninett.no for standardization after we sort this out (they own the >> OID arc we use for DNS records). >> >> >> Attribute usage >> =============== >> Every DNS RR type has assigned a number [1] which is used on wire. 
RR types >> which are unknown to the server cannot be named by their mnemonic/type name >> because server would not be able to do name->number conversion and to generate >> DNS wire format. >> >> As a result, we have to encode the RR type number somehow. Let's use attribute >> sub-types. >> >> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >> GenericRecord;TYPE65280: \# 4 0A000001 >> >> >> CLI >> === >> $ ipa dnsrecord-add zone.example owner \ >> --generic-type=65280 --generic-data='\# 4 0A000001' >> >> $ ipa dnsrecord-show zone.example owner >> Record name: owner >> TYPE65280 Record: \# 4 0A000001 CLI is inconsistent with current one. We have 2 modes: structured and unstructured. With simo's proposal it should work better when specifying multiple values. we use different option name for structured mod/add and unstructured even if the record has only one part, it could be: $ ipa dnsrecord-add zone.example owner \ --generic-rec={"65280 \# 4 0A000001", "62347 \# 0"} $ ipa dnsrecord-add zone.example owner --structured \ --generic-value={"65280 \# 4 0A000001", "62347 \# 0"} For mod the same. If we stick with this, Web UI should be quite easy and quick to create. >> >> >> ACK? :-) > > Almost. > We should refrain from using subtypes when not necessary, and in this > case it is not necessary. > > Use: > GenericRecord: 65280 \# 4 0A000001 > > Done! > > Simo. +1, it would also simplify code. > >> >> Related tickets >> =============== >> https://fedorahosted.org/freeipa/ticket/4939 >> https://fedorahosted.org/bind-dyndb-ldap/ticket/157 >> >> [0] http://tools.ietf.org/html/rfc3597 >> [1] >> http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4 >> [5] http://tools.ietf.org/html/rfc3597#section-5 >> >> -- >> Petr^2 Spacek >> -- Petr Vobornik From pspacek at redhat.com Tue Mar 10 15:35:17 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 16:35:17 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF0C4C.8050907@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0C4C.8050907@redhat.com> Message-ID: <54FF0F35.4020003@redhat.com> On 10.3.2015 16:22, Petr Vobornik wrote: > On 03/10/2015 03:53 PM, Simo Sorce wrote: >> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>> Hello, >>> >>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>> [0]). Here is the proposal: >>> >>> LDAP schema >>> =========== >>> - 1 new attribute: >>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>> >>> The attribute should be added to existing idnsRecord object class as MAY. >>> >>> This new attribute should contain data encoded according to ?RFC 3597 section >>> 5 [5]: >>> >>> The RDATA section of an RR of unknown type is represented as a >>> sequence of white space separated words as follows: >>> >>> The special token \# (a backslash immediately followed by a hash >>> sign), which identifies the RDATA as having the generic encoding >>> defined herein rather than a traditional type-specific encoding. >>> >>> An unsigned decimal integer specifying the RDATA length in octets. >>> >>> Zero or more words of hexadecimal data encoding the actual RDATA >>> field, each containing an even number of hexadecimal digits. 
>>> >>> If the RDATA is of zero length, the text representation contains only >>> the \# token and the single zero representing the length. >>> >>> Examples from RFC: >>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>> ef 01 23 45 ) >>> b.example. HS TYPE62347 \# 0 >>> e.example. IN A \# 4 0A000001 >>> e.example. CLASS1 TYPE1 10.0.0.2 >>> >>> >>> Open questions about LDAP format >>> ================================ >>> Should we include "\#" constant? We know that the attribute contains record in >>> RFC 3597 syntax so it is not strictly necessary. >>> >>> I think it would be better to follow RFC 3597 format. It allows blind >>> copy&pasting from other tools, including direct calls to python-dns. >>> >>> It also eases writing conversion tools between DNS and LDAP format because >>> they do not need to change record values. >>> >>> >>> Another question is if we should explicitly include length of data represented >>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>> it there because it is very good sanity check and again, it allows us to >>> re-use existing tools including parsers. >>> >>> I will ask Uninett.no for standardization after we sort this out (they own the >>> OID arc we use for DNS records). >>> >>> >>> Attribute usage >>> =============== >>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>> which are unknown to the server cannot be named by their mnemonic/type name >>> because server would not be able to do name->number conversion and to generate >>> DNS wire format. >>> >>> As a result, we have to encode the RR type number somehow. Let's use attribute >>> sub-types. >>> >>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>> GenericRecord;TYPE65280: \# 4 0A000001 >>> >>> >>> CLI >>> === >>> $ ipa dnsrecord-add zone.example owner \ >>> --generic-type=65280 --generic-data='\# 4 0A000001' >>> >>> $ ipa dnsrecord-show zone.example owner >>> Record name: owner >>> TYPE65280 Record: \# 4 0A000001 > > CLI is inconsistent with current one. We have 2 modes: structured and > unstructured. With simo's proposal it should work better when specifying > multiple values. > > we use different option name for structured mod/add and unstructured even if > the record has only one part, it could be: I always thought that is is just an compatibility-thing. Should we really do that even for new records? What is the value? Especially for single-part record types? > $ ipa dnsrecord-add zone.example owner \ > --generic-rec={"65280 \# 4 0A000001", "62347 \# 0"} > > $ ipa dnsrecord-add zone.example owner --structured \ > --generic-value={"65280 \# 4 0A000001", "62347 \# 0"} > > For mod the same. If we stick with this, Web UI should be quite easy and quick > to create. Maybe it will be easy for developers but not for users. I do not see an obvious way to delete all records of given generic type 62347 and not to delete generic records of all other types. IMHO we really should not mix "type" and "value", not at least in user interface (even if it is one string in LDAP). Petr^2 Spacek >>> ACK? :-) >> >> Almost. >> We should refrain from using subtypes when not necessary, and in this >> case it is not necessary. >> >> Use: >> GenericRecord: 65280 \# 4 0A000001 >> >> Done! >> >> Simo. > > +1, it would also simplify code. 
> >> >>> >>> Related tickets >>> =============== >>> https://fedorahosted.org/freeipa/ticket/4939 >>> https://fedorahosted.org/bind-dyndb-ldap/ticket/157 >>> >>> [0] http://tools.ietf.org/html/rfc3597 >>> [1] >>> http://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4 >>> >>> [5] http://tools.ietf.org/html/rfc3597#section-5 >>> >>> -- >>> Petr^2 Spacek From tbabej at redhat.com Tue Mar 10 15:35:34 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 10 Mar 2015 16:35:34 +0100 Subject: [Freeipa-devel] [PATCHES 306-316] Automated migration tool from Winsync In-Reply-To: <54FD8369.10803@redhat.com> References: <54FD8369.10803@redhat.com> Message-ID: <54FF0F46.3010109@redhat.com> On 03/09/2015 12:26 PM, Tomas Babej wrote: > Hi, > > this couple of patches provides a initial implementation of the > winsync migration tool: > > https://fedorahosted.org/freeipa/ticket/4524 > > Some parts could use some polishing, but this is a sound foundation. > > Tomas > > > Attaching one more patch to the bundle. This one should make the winsync tool readily available after install. Tomas -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbabej-0317-winsync-migrate-Include-the-tool-parts-in-Makefile-a.patch Type: text/x-patch Size: 1923 bytes Desc: not available URL: From pspacek at redhat.com Tue Mar 10 15:44:32 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 16:44:32 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <20150310150130.GQ22044@hendrix.arn.redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> Message-ID: <54FF1160.70202@redhat.com> On 10.3.2015 16:01, Jakub Hrozek wrote: > On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >> On 03/10/2015 03:27 PM, Rob Crittenden wrote: >>> Petr Vobornik wrote: >>>> Hi, >>>> >>>> I would like to ask what is a purpose of a default user group - by >>>> default ipausers? Default group is also a required field in ipa config. >>> >>> To be able to apply some (undefined) group policy to all users. I'm not >>> aware that it has ever been used for this. >> >> I would also interested in the use cases, especially given all the pain we have >> with ipausers and large user bases. Especially that for current policies (SUDO, >> HBAC, SELinux user policy), we always have other means to specify "all users". > > yes, but those means usually specify both AD and IPA users, right? > > I always thought "ipausers" is a handy shortcut for selecting IPA users > only and not AD users. I always thought that "ipausers" is an equivalent of "domain users" in AD world (compare with "Trusted domain users"). In my admin life I considered "domain users" to be useful alias for real authenticated user accounts (compare with "Everyone" = even unauthenticated access, "Authenticated users" = includes machine accounts too.) Moreover, getting rid of ipausers does not help with 'big groups problem' in any way. E.g. at university you are almost inevitably going to have groups like 'students' which will contain more than 90 % of users anyway. 
-- Petr^2 Spacek From slaz at seznam.cz Tue Mar 10 15:51:28 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmlzbGF2IEzDoXpuacSNa2E=?=) Date: Tue, 10 Mar 2015 16:51:28 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310150601.GR22044@hendrix.arn.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> Message-ID: <54FF1300.3090104@seznam.cz> On 03/10/2015 04:06 PM, Jakub Hrozek wrote: > On Tue, Mar 10, 2015 at 03:47:10PM +0100, Martin Kosek wrote: >>> This is where importing iCal is helpful because it allows you to >>> outsource the task of creating such event to something else. >>> >>> Parsing event information would produce a rule definition we would store >>> and SSSD would apply as HBAC rule. However, we don't need ourselves to >>> provide a complex UI to define such rules. Instead, we can do a simple >>> UI to create rules plus a UI to import rules defined in iCal by some >>> other software. The rest is visualizing HBAC time/date rules which is >>> separate from dealing with complexity of creating or importing rules. >>> >>> Additionally, for iCal-based imports we can utilize participants >>> information from the iCal to automatically set up members of the rule >>> (based on mail attribute). >>> >> Ah, makes sense to me. >> >> With all the possibilities that iCal format offers, we would more or less end >> up storing iCal in HBAC rules (or our own format of iCal). I am just concerned >> it would make a bit complex processing on SSSD side, especially in the security >> sensitive piece for authorization rules. >> >> We may need to use libraries for processing iCal rules, like libical >> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... > Is that what Alexander said, though? In his reply, I see: > "Parsing event information would produce a rule definition we would > store and SSSD would apply as HBAC rule". This is what kind of worried me, too. If I understand it well, this means you would have iCal events such as holidays (these were mentioned before), and you would like to generate HBAC rules based on these events. Those rules would, however, be different for each country (if this is still about holidays) and might collide among user and host groups. Therefore, you would have lots and lots of rules in the end, wouldn't you? I wonder if anyone does that. From what I've seen in AD and 389 Directory Server, time-based rules are being stored in a rather simple manner. I don't mind a more complex solution but I think such exceptions might be little too much. But I might have not understood the idea very well. > I don't think iCal dependency is something we want in SSSD, the > rules should be converted from iCal to SSSD format in a layer atop > libipa_hbac.. 
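To make the "convert once, evaluate locally" idea concrete, here is a minimal sketch (Python; the rule format, weekdays plus a daily time window, is deliberately simplified and made up for the example, it is not a proposed LDAP or SSSD format):

from datetime import datetime, time

class TimeWindowRule(object):
    # weekdays: set of integers, Monday=0 .. Sunday=6
    # start/end: datetime.time bounds of the daily allow window
    def __init__(self, weekdays, start, end):
        self.weekdays = weekdays
        self.start = start
        self.end = end

    def allows(self, when):
        # plain yes/no answer for one moment in time
        return (when.weekday() in self.weekdays
                and self.start <= when.time() <= self.end)

# "allow on working days between 08:00 and 18:00"
rule = TimeWindowRule({0, 1, 2, 3, 4}, time(8, 0), time(18, 0))
print(rule.allows(datetime(2015, 3, 9, 12, 30)))   # Monday noon: True
print(rule.allows(datetime(2015, 3, 8, 12, 30)))   # Sunday: False

Whatever richer format ends up being chosen, the client-side evaluation can stay this shape: no calendar library, just a bounded check against the current time.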
> From abokovoy at redhat.com Tue Mar 10 15:55:58 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 17:55:58 +0200 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FF1160.70202@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> <54FF1160.70202@redhat.com> Message-ID: <20150310155558.GS25455@redhat.com> On Tue, 10 Mar 2015, Petr Spacek wrote: >On 10.3.2015 16:01, Jakub Hrozek wrote: >> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >>> On 03/10/2015 03:27 PM, Rob Crittenden wrote: >>>> Petr Vobornik wrote: >>>>> Hi, >>>>> >>>>> I would like to ask what is a purpose of a default user group - by >>>>> default ipausers? Default group is also a required field in ipa config. >>>> >>>> To be able to apply some (undefined) group policy to all users. I'm not >>>> aware that it has ever been used for this. >>> >>> I would also interested in the use cases, especially given all the pain we have >>> with ipausers and large user bases. Especially that for current policies (SUDO, >>> HBAC, SELinux user policy), we always have other means to specify "all users". >> >> yes, but those means usually specify both AD and IPA users, right? >> >> I always thought "ipausers" is a handy shortcut for selecting IPA users >> only and not AD users. > >I always thought that "ipausers" is an equivalent of "domain users" in AD >world (compare with "Trusted domain users"). > >In my admin life I considered "domain users" to be useful alias for real >authenticated user accounts (compare with "Everyone" = even unauthenticated >access, "Authenticated users" = includes machine accounts too.) > > >Moreover, getting rid of ipausers does not help with 'big groups problem' in >any way. E.g. at university you are almost inevitably going to have groups >like 'students' which will contain more than 90 % of users anyway. For what use we need this distinction in IPA itself? - ACI (permissions) have separate notion to describe anonymous/any authenticated dichotomy - HBAC has 'all' category for users which in HBAC context means all authenticated users Where else we would need ipausers other than default POSIX group which we are not using it for? -- / Alexander Bokovoy From pspacek at redhat.com Tue Mar 10 15:57:52 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 16:57:52 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <20150310155558.GS25455@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> <54FF1160.70202@redhat.com> <20150310155558.GS25455@redhat.com> Message-ID: <54FF1480.60608@redhat.com> On 10.3.2015 16:55, Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, Petr Spacek wrote: >> On 10.3.2015 16:01, Jakub Hrozek wrote: >>> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >>>> On 03/10/2015 03:27 PM, Rob Crittenden wrote: >>>>> Petr Vobornik wrote: >>>>>> Hi, >>>>>> >>>>>> I would like to ask what is a purpose of a default user group - by >>>>>> default ipausers? Default group is also a required field in ipa config. >>>>> >>>>> To be able to apply some (undefined) group policy to all users. I'm not >>>>> aware that it has ever been used for this. >>>> >>>> I would also interested in the use cases, especially given all the pain we >>>> have >>>> with ipausers and large user bases. 
Especially that for current policies >>>> (SUDO, >>>> HBAC, SELinux user policy), we always have other means to specify "all >>>> users". >>> >>> yes, but those means usually specify both AD and IPA users, right? >>> >>> I always thought "ipausers" is a handy shortcut for selecting IPA users >>> only and not AD users. >> >> I always thought that "ipausers" is an equivalent of "domain users" in AD >> world (compare with "Trusted domain users"). >> >> In my admin life I considered "domain users" to be useful alias for real >> authenticated user accounts (compare with "Everyone" = even unauthenticated >> access, "Authenticated users" = includes machine accounts too.) >> >> >> Moreover, getting rid of ipausers does not help with 'big groups problem' in >> any way. E.g. at university you are almost inevitably going to have groups >> like 'students' which will contain more than 90 % of users anyway. > For what use we need this distinction in IPA itself? > - ACI (permissions) have separate notion to describe > anonymous/any authenticated dichotomy > - HBAC has 'all' category for users which in HBAC context means all > authenticated users > > Where else we would need ipausers other than default POSIX group which > we are not using it for? Ah, it is not a POSIX group? Too bad. I was using AD "domain users" for file permissions so POSIX group equivalent is what I had in mind. -- Petr^2 Spacek From rcritten at redhat.com Tue Mar 10 16:08:21 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Tue, 10 Mar 2015 12:08:21 -0400 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <20150310155558.GS25455@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> <54FF1160.70202@redhat.com> <20150310155558.GS25455@redhat.com> Message-ID: <54FF16F5.8020908@redhat.com> Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, Petr Spacek wrote: >> On 10.3.2015 16:01, Jakub Hrozek wrote: >>> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >>>> On 03/10/2015 03:27 PM, Rob Crittenden wrote: >>>>> Petr Vobornik wrote: >>>>>> Hi, >>>>>> >>>>>> I would like to ask what is a purpose of a default user group - by >>>>>> default ipausers? Default group is also a required field in ipa >>>>>> config. >>>>> >>>>> To be able to apply some (undefined) group policy to all users. I'm >>>>> not >>>>> aware that it has ever been used for this. >>>> >>>> I would also interested in the use cases, especially given all the >>>> pain we have >>>> with ipausers and large user bases. Especially that for current >>>> policies (SUDO, >>>> HBAC, SELinux user policy), we always have other means to specify >>>> "all users". >>> >>> yes, but those means usually specify both AD and IPA users, right? >>> >>> I always thought "ipausers" is a handy shortcut for selecting IPA users >>> only and not AD users. >> >> I always thought that "ipausers" is an equivalent of "domain users" in AD >> world (compare with "Trusted domain users"). >> >> In my admin life I considered "domain users" to be useful alias for real >> authenticated user accounts (compare with "Everyone" = even >> unauthenticated >> access, "Authenticated users" = includes machine accounts too.) >> >> >> Moreover, getting rid of ipausers does not help with 'big groups >> problem' in >> any way. E.g. at university you are almost inevitably going to have >> groups >> like 'students' which will contain more than 90 % of users anyway. 
> For what use we need this distinction in IPA itself? > - ACI (permissions) have separate notion to describe > anonymous/any authenticated dichotomy > - HBAC has 'all' category for users which in HBAC context means all > authenticated users > > Where else we would need ipausers other than default POSIX group which > we are not using it for? Petr's point is that deleting ipausers is a short-term solution that ignores the underlying problem. But yeah, ipausers is a solution looking for a problem AFAIK. It was a future-proofing move because if we ever decided we needed on, slurping in all the users at once and adding to some common group would be time-consuming. rob From redhatrises at gmail.com Tue Mar 10 16:11:57 2015 From: redhatrises at gmail.com (Gabe Alford) Date: Tue, 10 Mar 2015 10:11:57 -0600 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF1300.3090104@seznam.cz> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1300.3090104@seznam.cz> Message-ID: On Tue, Mar 10, 2015 at 9:51 AM, Stanislav L?zni?ka wrote: > On 03/10/2015 04:06 PM, Jakub Hrozek wrote: > >> On Tue, Mar 10, 2015 at 03:47:10PM +0100, Martin Kosek wrote: >> >>> This is where importing iCal is helpful because it allows you to >>>> outsource the task of creating such event to something else. >>>> >>>> Parsing event information would produce a rule definition we would store >>>> and SSSD would apply as HBAC rule. However, we don't need ourselves to >>>> provide a complex UI to define such rules. Instead, we can do a simple >>>> UI to create rules plus a UI to import rules defined in iCal by some >>>> other software. The rest is visualizing HBAC time/date rules which is >>>> separate from dealing with complexity of creating or importing rules. >>>> >>>> Additionally, for iCal-based imports we can utilize participants >>>> information from the iCal to automatically set up members of the rule >>>> (based on mail attribute). >>>> >>>> Ah, makes sense to me. >>> >>> With all the possibilities that iCal format offers, we would more or >>> less end >>> up storing iCal in HBAC rules (or our own format of iCal). I am just >>> concerned >>> it would make a bit complex processing on SSSD side, especially in the >>> security >>> sensitive piece for authorization rules. >>> >>> We may need to use libraries for processing iCal rules, like libical >>> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >>> >> Is that what Alexander said, though? In his reply, I see: >> "Parsing event information would produce a rule definition we would >> store and SSSD would apply as HBAC rule". >> > This is what kind of worried me, too. If I understand it well, this means > you would have iCal events such as holidays (these were mentioned before), > and you would like to generate HBAC rules based on these events. Those > rules would, however, be different for each country (if this is still about > holidays) and might collide among user and host groups. Therefore, you > would have lots and lots of rules in the end, wouldn't you? > > I wonder if anyone does that. 
From what I've seen in AD and 389 Directory > Server, time-based rules are being stored in a rather simple manner. I > don't mind a more complex solution but I think such exceptions might be > little too much. But I might have not understood the idea very well. This is my understanding as well. If using AD as the example, there are two ways that timebased rules are configured: 1. Permit logon hours during specified timeframe on specified day(s) of the week. 2. Deny logon hours during specified timeframe on specified day(s) of the week. There is nothing about holidays. I think that implementing holidays and special exemptions should be avoided. Just my 2 cents. Gabe > I don't think iCal dependency is something we want in SSSD, the >> rules should be converted from iCal to SSSD format in a layer atop >> libipa_hbac.. >> >> > -- > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdennis at redhat.com Tue Mar 10 16:13:53 2015 From: jdennis at redhat.com (John Dennis) Date: Tue, 10 Mar 2015 12:13:53 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310150601.GR22044@hendrix.arn.redhat.com> References: <54FD4525.6000000@seznam.cz> <1425906176.2658.1.camel@redhat.com> <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> Message-ID: <54FF1841.10209@redhat.com> On 03/10/2015 11:06 AM, Jakub Hrozek wrote: >> We may need to use libraries for processing iCal rules, like libical >> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... > > Is that what Alexander said, though? In his reply, I see: > "Parsing event information would produce a rule definition we would > store and SSSD would apply as HBAC rule". > > I don't think iCal dependency is something we want in SSSD, the > rules should be converted from iCal to SSSD format in a layer atop > libipa_hbac.. But doesn't the iCal rule have to be evaluated in SSSD? If so that requires linking against libical, right? -- John From abokovoy at redhat.com Tue Mar 10 16:13:52 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 18:13:52 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF1300.3090104@seznam.cz> References: <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1300.3090104@seznam.cz> Message-ID: <20150310161352.GT25455@redhat.com> On Tue, 10 Mar 2015, Stanislav L?zni?ka wrote: >On 03/10/2015 04:06 PM, Jakub Hrozek wrote: >>On Tue, Mar 10, 2015 at 03:47:10PM +0100, Martin Kosek wrote: >>>>This is where importing iCal is helpful because it allows you to >>>>outsource the task of creating such event to something else. >>>> >>>>Parsing event information would produce a rule definition we would store >>>>and SSSD would apply as HBAC rule. 
However, we don't need ourselves to >>>>provide a complex UI to define such rules. Instead, we can do a simple >>>>UI to create rules plus a UI to import rules defined in iCal by some >>>>other software. The rest is visualizing HBAC time/date rules which is >>>>separate from dealing with complexity of creating or importing rules. >>>> >>>>Additionally, for iCal-based imports we can utilize participants >>>>information from the iCal to automatically set up members of the rule >>>>(based on mail attribute). >>>> >>>Ah, makes sense to me. >>> >>>With all the possibilities that iCal format offers, we would more or less end >>>up storing iCal in HBAC rules (or our own format of iCal). I am just concerned >>>it would make a bit complex processing on SSSD side, especially in the security >>>sensitive piece for authorization rules. >>> >>>We may need to use libraries for processing iCal rules, like libical >>>(http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >>Is that what Alexander said, though? In his reply, I see: >> "Parsing event information would produce a rule definition we would >> store and SSSD would apply as HBAC rule". >This is what kind of worried me, too. If I understand it well, this >means you would have iCal events such as holidays (these were >mentioned before), and you would like to generate HBAC rules based on >these events. Those rules would, however, be different for each >country (if this is still about holidays) and might collide among user >and host groups. Therefore, you would have lots and lots of rules in >the end, wouldn't you? It does not matter how many rules are there. SSSD caches HBAC rules per host and if rule doesn't apply, it is not downloaded and doesn't affect the host. HBAC rule is a tuple (user|group, host|hostgroup, service|servicegroup). This tuple would get extension representing time/date information in a multivalued attribute that would describe all time/date intervals applicable to this rule. HBAC rules represent ALLOW action and default is DENY so you don't need to represent holidays, they are on DENY by default. You only need to represent ALLOW here. -- / Alexander Bokovoy From abokovoy at redhat.com Tue Mar 10 16:18:34 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 18:18:34 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF1841.10209@redhat.com> References: <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1841.10209@redhat.com> Message-ID: <20150310161834.GU25455@redhat.com> On Tue, 10 Mar 2015, John Dennis wrote: >On 03/10/2015 11:06 AM, Jakub Hrozek wrote: >>> We may need to use libraries for processing iCal rules, like libical >>> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >> >> Is that what Alexander said, though? In his reply, I see: >> "Parsing event information would produce a rule definition we would >> store and SSSD would apply as HBAC rule". >> >> I don't think iCal dependency is something we want in SSSD, the >> rules should be converted from iCal to SSSD format in a layer atop >> libipa_hbac.. > >But doesn't the iCal rule have to be evaluated in SSSD? If so that >requires linking against libical, right? 
That's why I'm saying we import iCal in IPA, not that we keep using iCal as internal representation of time/date information for HBAC rules. I don't really want to impose iCal horror on HBAC rule parsing engine. I believe we can do simpler and better, given HBAC is all about ALLOW rules on the base of default DENY action. -- / Alexander Bokovoy From jdennis at redhat.com Tue Mar 10 16:18:59 2015 From: jdennis at redhat.com (John Dennis) Date: Tue, 10 Mar 2015 12:18:59 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310161352.GT25455@redhat.com> References: <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1300.3090104@seznam.cz> <20150310161352.GT25455@redhat.com> Message-ID: <54FF1973.30906@redhat.com> On 03/10/2015 12:13 PM, Alexander Bokovoy wrote: > HBAC rule is a tuple (user|group, host|hostgroup, service|servicegroup). > This tuple would get extension representing time/date information in a > multivalued attribute that would describe all time/date intervals > applicable to this rule. I must be misunderstanding something. Recurrence rules yield an unbounded number of "allow" intervals. Certainly you do not want to enumerate and store all the intervals, instead you want to evaluate the rule locally and obtain a simple yes/no answer, don't you? -- John From mkosek at redhat.com Tue Mar 10 16:22:24 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 17:22:24 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310161834.GU25455@redhat.com> References: <54FDB2F3.5030005@redhat.com> <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1841.10209@redhat.com> <20150310161834.GU25455@redhat.com> Message-ID: <54FF1A40.3090702@redhat.com> On 03/10/2015 05:18 PM, Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, John Dennis wrote: >> On 03/10/2015 11:06 AM, Jakub Hrozek wrote: >>>> We may need to use libraries for processing iCal rules, like libical >>>> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >>> >>> Is that what Alexander said, though? In his reply, I see: >>> "Parsing event information would produce a rule definition we would >>> store and SSSD would apply as HBAC rule". >>> >>> I don't think iCal dependency is something we want in SSSD, the >>> rules should be converted from iCal to SSSD format in a layer atop >>> libipa_hbac.. >> >> But doesn't the iCal rule have to be evaluated in SSSD? If so that >> requires linking against libical, right? > That's why I'm saying we import iCal in IPA, not that we keep using iCal > as internal representation of time/date information for HBAC rules. > > I don't really want to impose iCal horror on HBAC rule parsing engine. > I believe we can do simpler and better, given HBAC is all about ALLOW > rules on the base of default DENY action. Ok, but how do you want to define rule as "Allow Joe to log in every Monday, except holidays (when the office is closed)"? 
We can't just have IPA processed the Ical and generate Allow ranges as there is indefinite number of the allow ranges. So if you want to described more complex rule (reocurring rule with some exceptions maybe), you end up with iCal anyway. Or not? From abokovoy at redhat.com Tue Mar 10 16:24:10 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 18:24:10 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF1973.30906@redhat.com> References: <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1300.3090104@seznam.cz> <20150310161352.GT25455@redhat.com> <54FF1973.30906@redhat.com> Message-ID: <20150310162410.GV25455@redhat.com> On Tue, 10 Mar 2015, John Dennis wrote: >On 03/10/2015 12:13 PM, Alexander Bokovoy wrote: >> HBAC rule is a tuple (user|group, host|hostgroup, service|servicegroup). >> This tuple would get extension representing time/date information in a >> multivalued attribute that would describe all time/date intervals >> applicable to this rule. > >I must be misunderstanding something. Recurrence rules yield an >unbounded number of "allow" intervals. Certainly you do not want to >enumerate and store all the intervals, instead you want to evaluate the >rule locally and obtain a simple yes/no answer, don't you? Yes. We are not contradicting each other as there is nothing in my response quoted above that implies that description of these time/date intervals is explicit rather than functional. We really need to define the format of such description but it doesn't need to be iCal as it is. -- / Alexander Bokovoy From mkosek at redhat.com Tue Mar 10 16:29:59 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 10 Mar 2015 17:29:59 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FF16F5.8020908@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> <54FF1160.70202@redhat.com> <20150310155558.GS25455@redhat.com> <54FF16F5.8020908@redhat.com> Message-ID: <54FF1C07.5050701@redhat.com> On 03/10/2015 05:08 PM, Rob Crittenden wrote: > Alexander Bokovoy wrote: >> On Tue, 10 Mar 2015, Petr Spacek wrote: >>> On 10.3.2015 16:01, Jakub Hrozek wrote: >>>> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >>>>> On 03/10/2015 03:27 PM, Rob Crittenden wrote: >>>>>> Petr Vobornik wrote: >>>>>>> Hi, >>>>>>> >>>>>>> I would like to ask what is a purpose of a default user group - by >>>>>>> default ipausers? Default group is also a required field in ipa >>>>>>> config. >>>>>> >>>>>> To be able to apply some (undefined) group policy to all users. I'm >>>>>> not >>>>>> aware that it has ever been used for this. >>>>> >>>>> I would also interested in the use cases, especially given all the >>>>> pain we have >>>>> with ipausers and large user bases. Especially that for current >>>>> policies (SUDO, >>>>> HBAC, SELinux user policy), we always have other means to specify >>>>> "all users". >>>> >>>> yes, but those means usually specify both AD and IPA users, right? >>>> >>>> I always thought "ipausers" is a handy shortcut for selecting IPA users >>>> only and not AD users. >>> >>> I always thought that "ipausers" is an equivalent of "domain users" in AD >>> world (compare with "Trusted domain users"). 
>>> >>> In my admin life I considered "domain users" to be useful alias for real >>> authenticated user accounts (compare with "Everyone" = even >>> unauthenticated >>> access, "Authenticated users" = includes machine accounts too.) >>> >>> >>> Moreover, getting rid of ipausers does not help with 'big groups >>> problem' in >>> any way. E.g. at university you are almost inevitably going to have >>> groups >>> like 'students' which will contain more than 90 % of users anyway. >> For what use we need this distinction in IPA itself? >> - ACI (permissions) have separate notion to describe >> anonymous/any authenticated dichotomy >> - HBAC has 'all' category for users which in HBAC context means all >> authenticated users >> >> Where else we would need ipausers other than default POSIX group which >> we are not using it for? > > > Petr's point is that deleting ipausers is a short-term solution that > ignores the underlying problem. > > But yeah, ipausers is a solution looking for a problem AFAIK. It was a > future-proofing move because if we ever decided we needed on, slurping > in all the users at once and adding to some common group would be > time-consuming. I wonder if it would help if these special groups do not have explicit members defined, but are more descriptive. Something like DS Dynamic Groups [1]. If we could define - ipausers are all users in this container having this objectclass and DS and SSSD would take care of the rest. I am not sure if it would help with performance, it would be easier at least for managing the membership. I am also not sure how would we create the group for AD users. [1] https://fedorahosted.org/389/ticket/128 From abokovoy at redhat.com Tue Mar 10 16:33:21 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 18:33:21 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: References: <20150309145830.GA25455@redhat.com> <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1300.3090104@seznam.cz> Message-ID: <20150310163321.GW25455@redhat.com> On Tue, 10 Mar 2015, Gabe Alford wrote: >On Tue, Mar 10, 2015 at 9:51 AM, Stanislav L?zni?ka wrote: > >> On 03/10/2015 04:06 PM, Jakub Hrozek wrote: >> >>> On Tue, Mar 10, 2015 at 03:47:10PM +0100, Martin Kosek wrote: >>> >>>> This is where importing iCal is helpful because it allows you to >>>>> outsource the task of creating such event to something else. >>>>> >>>>> Parsing event information would produce a rule definition we would store >>>>> and SSSD would apply as HBAC rule. However, we don't need ourselves to >>>>> provide a complex UI to define such rules. Instead, we can do a simple >>>>> UI to create rules plus a UI to import rules defined in iCal by some >>>>> other software. The rest is visualizing HBAC time/date rules which is >>>>> separate from dealing with complexity of creating or importing rules. >>>>> >>>>> Additionally, for iCal-based imports we can utilize participants >>>>> information from the iCal to automatically set up members of the rule >>>>> (based on mail attribute). >>>>> >>>>> Ah, makes sense to me. >>>> >>>> With all the possibilities that iCal format offers, we would more or >>>> less end >>>> up storing iCal in HBAC rules (or our own format of iCal). 
I am just >>>> concerned >>>> it would make a bit complex processing on SSSD side, especially in the >>>> security >>>> sensitive piece for authorization rules. >>>> >>>> We may need to use libraries for processing iCal rules, like libical >>>> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >>>> >>> Is that what Alexander said, though? In his reply, I see: >>> "Parsing event information would produce a rule definition we would >>> store and SSSD would apply as HBAC rule". >>> >> This is what kind of worried me, too. If I understand it well, this means >> you would have iCal events such as holidays (these were mentioned before), >> and you would like to generate HBAC rules based on these events. Those >> rules would, however, be different for each country (if this is still about >> holidays) and might collide among user and host groups. Therefore, you >> would have lots and lots of rules in the end, wouldn't you? >> >> I wonder if anyone does that. From what I've seen in AD and 389 Directory >> Server, time-based rules are being stored in a rather simple manner. I >> don't mind a more complex solution but I think such exceptions might be >> little too much. But I might have not understood the idea very well. > > >This is my understanding as well. If using AD as the example, there are two >ways that timebased rules are configured: > 1. Permit logon hours during specified timeframe on specified day(s) >of the week. > 2. Deny logon hours during specified timeframe on specified day(s) of >the week. > >There is nothing about holidays. I think that implementing holidays and >special exemptions should be avoided. Yep. Except that we DENY by default in HBAC rules. So we only handle ALLOW case already and there are strong reasons not to structure HBAC rules to provide DENY too. -- / Alexander Bokovoy From simo at redhat.com Tue Mar 10 16:35:32 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 12:35:32 -0400 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF0B8A.1090308@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> Message-ID: <1426005332.4735.92.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: > On 10.3.2015 15:53, Simo Sorce wrote: > > On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: > >> Hello, > >> > >> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 > >> [0]). Here is the proposal: > >> > >> LDAP schema > >> =========== > >> - 1 new attribute: > >> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY > >> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) > >> > >> The attribute should be added to existing idnsRecord object class as MAY. > >> > >> This new attribute should contain data encoded according to ?RFC 3597 section > >> 5 [5]: > >> > >> The RDATA section of an RR of unknown type is represented as a > >> sequence of white space separated words as follows: > >> > >> The special token \# (a backslash immediately followed by a hash > >> sign), which identifies the RDATA as having the generic encoding > >> defined herein rather than a traditional type-specific encoding. > >> > >> An unsigned decimal integer specifying the RDATA length in octets. > >> > >> Zero or more words of hexadecimal data encoding the actual RDATA > >> field, each containing an even number of hexadecimal digits. 
> >> > >> If the RDATA is of zero length, the text representation contains only > >> the \# token and the single zero representing the length. > >> > >> Examples from RFC: > >> a.example. CLASS32 TYPE731 \# 6 abcd ( > >> ef 01 23 45 ) > >> b.example. HS TYPE62347 \# 0 > >> e.example. IN A \# 4 0A000001 > >> e.example. CLASS1 TYPE1 10.0.0.2 > >> > >> > >> Open questions about LDAP format > >> ================================ > >> Should we include "\#" constant? We know that the attribute contains record in > >> RFC 3597 syntax so it is not strictly necessary. > >> > >> I think it would be better to follow RFC 3597 format. It allows blind > >> copy&pasting from other tools, including direct calls to python-dns. > >> > >> It also eases writing conversion tools between DNS and LDAP format because > >> they do not need to change record values. > >> > >> > >> Another question is if we should explicitly include length of data represented > >> in hexadecimal notation as a decimal number. I'm very strongly inclined to let > >> it there because it is very good sanity check and again, it allows us to > >> re-use existing tools including parsers. > >> > >> I will ask Uninett.no for standardization after we sort this out (they own the > >> OID arc we use for DNS records). > >> > >> > >> Attribute usage > >> =============== > >> Every DNS RR type has assigned a number [1] which is used on wire. RR types > >> which are unknown to the server cannot be named by their mnemonic/type name > >> because server would not be able to do name->number conversion and to generate > >> DNS wire format. > >> > >> As a result, we have to encode the RR type number somehow. Let's use attribute > >> sub-types. > >> > >> E.g. a record with type 65280 and hex value 0A000001 will be represented as: > >> GenericRecord;TYPE65280: \# 4 0A000001 > >> > >> > >> CLI > >> === > >> $ ipa dnsrecord-add zone.example owner \ > >> --generic-type=65280 --generic-data='\# 4 0A000001' > >> > >> $ ipa dnsrecord-show zone.example owner > >> Record name: owner > >> TYPE65280 Record: \# 4 0A000001 > >> > >> > >> ACK? :-) > > > > Almost. > > We should refrain from using subtypes when not necessary, and in this > > case it is not necessary. > > > > Use: > > GenericRecord: 65280 \# 4 0A000001 > > I was considering that too but I can see two main drawbacks: > > 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding > generic write access to GenericRecord == ability to add TLSA records too, > which you may not want. IMHO it is perfectly reasonable to limit write access > to certain types (e.g. to one from private range). > > 2) We would need a separate substring index for emulating filters like > (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index > which will be handy one day when we decide to handle upgrades like > GenericRecord;TYPE256->UriRecord. > > Another (less important) annoyance is that conversion tools would have to > mangle record data instead of just converting attribute name->record type. > > > I can be convinced that subtypes are not necessary but I do not see clear > advantage of avoiding them. What is the problem with subtypes? Poor support by most clients, so it is generally discouraged. The problem with subtypes and ACIs though is that I think ACIs do not care about the subtype unless you explicit mention them. So perhaps bind_dyndb_ldap should refuse to use a generic type that shadows DNSSEC relevant records ? Simo. 
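A rough sketch of such a guard (Python; the function name and the exact set of protected types are illustrative only, the numbers are the IANA RR type assignments):

DNSSEC_RELEVANT = {
    43: 'DS', 46: 'RRSIG', 47: 'NSEC', 48: 'DNSKEY',
    50: 'NSEC3', 51: 'NSEC3PARAM', 52: 'TLSA',
}

def check_generic_rrtype(rrtype):
    # refuse type numbers outside the 16-bit range, or ones shadowing
    # security-relevant records that have their own dedicated attributes
    if not 0 <= rrtype <= 65535:
        raise ValueError('RR type %d out of range' % rrtype)
    if rrtype in DNSSEC_RELEVANT:
        raise ValueError('type %d (%s) must not be stored as a generic record'
                         % (rrtype, DNSSEC_RELEVANT[rrtype]))
    return rrtype

check_generic_rrtype(65280)      # private-use range, accepted
try:
    check_generic_rrtype(52)     # TLSA, refused
except ValueError as err:
    print(err)

Whether that check belongs in the framework, in bind-dyndb-ldap, or in both is part of the open question.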
-- Simo Sorce * Red Hat, Inc * New York From simo at redhat.com Tue Mar 10 16:42:48 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 12:42:48 -0400 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <20150310150130.GQ22044@hendrix.arn.redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> Message-ID: <1426005768.4735.94.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 16:01 +0100, Jakub Hrozek wrote: > On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: > > On 03/10/2015 03:27 PM, Rob Crittenden wrote: > > > Petr Vobornik wrote: > > >> Hi, > > >> > > >> I would like to ask what is a purpose of a default user group - by > > >> default ipausers? Default group is also a required field in ipa config. > > > > > > To be able to apply some (undefined) group policy to all users. I'm not > > > aware that it has ever been used for this. > > > > I would also interested in the use cases, especially given all the pain we have > > with ipausers and large user bases. Especially that for current policies (SUDO, > > HBAC, SELinux user policy), we always have other means to specify "all users". > > yes, but those means usually specify both AD and IPA users, right? > > I always thought "ipausers" is a handy shortcut for selecting IPA users > only and not AD users. We should probably turn ipausers into a fully virtual group that is added to the user's Authorization data in the KDC (MS-PAC or in future PAD). This way it will be possible to reference it in sssd but will not create issues with memberships in the server. But we need the PAD first, I guess. (we could do something with authentication indicators too, but that would be a hack). Simo. -- Simo Sorce * Red Hat, Inc * New York From abokovoy at redhat.com Tue Mar 10 16:56:54 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 18:56:54 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF1A40.3090702@redhat.com> References: <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1841.10209@redhat.com> <20150310161834.GU25455@redhat.com> <54FF1A40.3090702@redhat.com> Message-ID: <20150310165654.GX25455@redhat.com> On Tue, 10 Mar 2015, Martin Kosek wrote: >On 03/10/2015 05:18 PM, Alexander Bokovoy wrote: >> On Tue, 10 Mar 2015, John Dennis wrote: >>> On 03/10/2015 11:06 AM, Jakub Hrozek wrote: >>>>> We may need to use libraries for processing iCal rules, like libical >>>>> (http://koji.fedoraproject.org/koji/buildinfo?buildID=606329)... >>>> >>>> Is that what Alexander said, though? In his reply, I see: >>>> "Parsing event information would produce a rule definition we would >>>> store and SSSD would apply as HBAC rule". >>>> >>>> I don't think iCal dependency is something we want in SSSD, the >>>> rules should be converted from iCal to SSSD format in a layer atop >>>> libipa_hbac.. >>> >>> But doesn't the iCal rule have to be evaluated in SSSD? If so that >>> requires linking against libical, right? >> That's why I'm saying we import iCal in IPA, not that we keep using iCal >> as internal representation of time/date information for HBAC rules. >> >> I don't really want to impose iCal horror on HBAC rule parsing engine. 
>> I believe we can do simpler and better, given HBAC is all about ALLOW >> rules on the base of default DENY action. > >Ok, but how do you want to define rule as > >"Allow Joe to log in every Monday, except holidays (when the office is closed)"? > >We can't just have IPA processed the Ical and generate Allow ranges as there is >indefinite number of the allow ranges. So if you want to described more complex >rule (reocurring rule with some exceptions maybe), you end up with iCal anyway. >Or not? See my answer to John. We don't need to end up with iCal at all since iCal doesn't have procedural definitions of holidays. It has EXDATE/RRULE allowing to express exceptions and repeating rules (EXRULE for exception rules was removed in RFC5545 and is not used anymore) but nothing more concrete. RFC5545 does define multiple things which are part of iCalendar format and which we don't really need to deal with in SSSD so we don't need full iCal at all. We need to be able to represent recurring events and some of exceptions to them within the rules but that is a subset of what is needed and can be implemented without involving a fully-compliant iCal library. -- / Alexander Bokovoy From abokovoy at redhat.com Tue Mar 10 17:00:29 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 10 Mar 2015 19:00:29 +0200 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <1426005768.4735.94.camel@willson.usersys.redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <54FF053C.30101@redhat.com> <20150310150130.GQ22044@hendrix.arn.redhat.com> <1426005768.4735.94.camel@willson.usersys.redhat.com> Message-ID: <20150310170029.GY25455@redhat.com> On Tue, 10 Mar 2015, Simo Sorce wrote: >On Tue, 2015-03-10 at 16:01 +0100, Jakub Hrozek wrote: >> On Tue, Mar 10, 2015 at 03:52:44PM +0100, Martin Kosek wrote: >> > On 03/10/2015 03:27 PM, Rob Crittenden wrote: >> > > Petr Vobornik wrote: >> > >> Hi, >> > >> >> > >> I would like to ask what is a purpose of a default user group - by >> > >> default ipausers? Default group is also a required field in ipa config. >> > > >> > > To be able to apply some (undefined) group policy to all users. I'm not >> > > aware that it has ever been used for this. >> > >> > I would also interested in the use cases, especially given all the pain we have >> > with ipausers and large user bases. Especially that for current policies (SUDO, >> > HBAC, SELinux user policy), we always have other means to specify "all users". >> >> yes, but those means usually specify both AD and IPA users, right? >> >> I always thought "ipausers" is a handy shortcut for selecting IPA users >> only and not AD users. > >We should probably turn ipausers into a fully virtual group that is >added to the user's Authorization data in the KDC (MS-PAC or in future >PAD). >This way it will be possible to reference it in sssd but will not create >issues with memberships in the server. > >But we need the PAD first, I guess. >(we could do something with authentication indicators too, but that >would be a hack). Yep. If we need ipausers for POSIX context interpretation on IPA clients, PAD would be our choice as we already do with MS-PAC for AD users. Within LDAP server, if we want to address all IPA users to do some mass operations on them, I think we probably should have some specialized control that would give 389-ds chance to optimize on building this list of users before applying an operation to them. 
This would be something non-standard but more efficient than what we are doing right now. -- / Alexander Bokovoy From pspacek at redhat.com Tue Mar 10 17:26:04 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 18:26:04 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <1426005332.4735.92.camel@willson.usersys.redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> Message-ID: <54FF292C.4020805@redhat.com> On 10.3.2015 17:35, Simo Sorce wrote: > On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >> On 10.3.2015 15:53, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>> Hello, >>>> >>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>> [0]). Here is the proposal: >>>> >>>> LDAP schema >>>> =========== >>>> - 1 new attribute: >>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>> >>>> The attribute should be added to existing idnsRecord object class as MAY. >>>> >>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>> 5 [5]: >>>> >>>> The RDATA section of an RR of unknown type is represented as a >>>> sequence of white space separated words as follows: >>>> >>>> The special token \# (a backslash immediately followed by a hash >>>> sign), which identifies the RDATA as having the generic encoding >>>> defined herein rather than a traditional type-specific encoding. >>>> >>>> An unsigned decimal integer specifying the RDATA length in octets. >>>> >>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>> field, each containing an even number of hexadecimal digits. >>>> >>>> If the RDATA is of zero length, the text representation contains only >>>> the \# token and the single zero representing the length. >>>> >>>> Examples from RFC: >>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>> ef 01 23 45 ) >>>> b.example. HS TYPE62347 \# 0 >>>> e.example. IN A \# 4 0A000001 >>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>> >>>> >>>> Open questions about LDAP format >>>> ================================ >>>> Should we include "\#" constant? We know that the attribute contains record in >>>> RFC 3597 syntax so it is not strictly necessary. >>>> >>>> I think it would be better to follow RFC 3597 format. It allows blind >>>> copy&pasting from other tools, including direct calls to python-dns. >>>> >>>> It also eases writing conversion tools between DNS and LDAP format because >>>> they do not need to change record values. >>>> >>>> >>>> Another question is if we should explicitly include length of data represented >>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>>> it there because it is very good sanity check and again, it allows us to >>>> re-use existing tools including parsers. >>>> >>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>> OID arc we use for DNS records). >>>> >>>> >>>> Attribute usage >>>> =============== >>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>>> which are unknown to the server cannot be named by their mnemonic/type name >>>> because server would not be able to do name->number conversion and to generate >>>> DNS wire format. >>>> >>>> As a result, we have to encode the RR type number somehow. 
Let's use attribute >>>> sub-types. >>>> >>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>> >>>> >>>> CLI >>>> === >>>> $ ipa dnsrecord-add zone.example owner \ >>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>> >>>> $ ipa dnsrecord-show zone.example owner >>>> Record name: owner >>>> TYPE65280 Record: \# 4 0A000001 >>>> >>>> >>>> ACK? :-) >>> >>> Almost. >>> We should refrain from using subtypes when not necessary, and in this >>> case it is not necessary. >>> >>> Use: >>> GenericRecord: 65280 \# 4 0A000001 >> >> I was considering that too but I can see two main drawbacks: >> >> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >> generic write access to GenericRecord == ability to add TLSA records too, >> which you may not want. IMHO it is perfectly reasonable to limit write access >> to certain types (e.g. to one from private range). >> >> 2) We would need a separate substring index for emulating filters like >> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >> which will be handy one day when we decide to handle upgrades like >> GenericRecord;TYPE256->UriRecord. >> >> Another (less important) annoyance is that conversion tools would have to >> mangle record data instead of just converting attribute name->record type. >> >> >> I can be convinced that subtypes are not necessary but I do not see clear >> advantage of avoiding them. What is the problem with subtypes? > > Poor support by most clients, so it is generally discouraged. Hmm, it does not sound like a thing we should care in this case. DNS tree is not meant for direct consumption by LDAP clients (compare with cn=compat). IMHO the only two clients we should care are FreeIPA framework and bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to access DNS tree by hand - sure, use a standard compliant client! Working ACI and LDAP filters sounds like good price for supporting only standards compliant clients. AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because Eclipse has nice support for sub-types built-in. If I can draw some conclusions from that, sub-types are not a thing aliens forgot here when leaving Earth one million years ago :-) > The problem with subtypes and ACIs though is that I think ACIs do not > care about the subtype unless you explicit mention them. IMHO that is exactly what I would like to see for GenericRecord. It allows us to write ACI which allows admins to add any GenericRecord and at the same time allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for specific group/user. > So perhaps bind_dyndb_ldap should refuse to use a generic type that > shadows DNSSEC relevant records ? Sorry, this cannot possibly work because it depends on up-to-date blacklist. How would the plugin released in 2015 know that highly sensitive OPENPGPKEY type will be standardized in 2016 and assigned number XYZ? 
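To make the sub-type variant concrete, this is roughly what the resulting LDAP operation could look like from the framework side. A minimal sketch with python-ldap, assuming the proposed GenericRecord attribute is already in the schema; the server URI, bind credentials and DN layout are placeholders modelled on the examples in this thread, not actual FreeIPA code:

import ldap
import ldap.modlist

conn = ldap.initialize('ldap://ipa.example')                # placeholder URI
conn.simple_bind_s('cn=Directory Manager', 'Secret123')     # placeholder bind

dn = 'idnsname=owner,idnsname=zone.example.,cn=dns,dc=ipa,dc=example'
entry = {
    'objectClass': [b'top', b'idnsRecord'],
    'idnsName': [b'owner'],
    # The attribute sub-type carries the numeric RR type; the value keeps
    # the RFC 3597 presentation format proposed above.
    'GenericRecord;TYPE65280': [br'\# 4 0A000001'],
}
conn.add_s(dn, ldap.modlist.addModlist(entry))

With this layout a per-type ACI can target "GenericRecord;TYPE65280" alone, which is the main point argued for the sub-type encoding.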
-- Petr^2 Spacek From simo at redhat.com Tue Mar 10 17:36:05 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 13:36:05 -0400 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF292C.4020805@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> Message-ID: <1426008965.4735.95.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: > On 10.3.2015 17:35, Simo Sorce wrote: > > On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: > >> On 10.3.2015 15:53, Simo Sorce wrote: > >>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: > >>>> Hello, > >>>> > >>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 > >>>> [0]). Here is the proposal: > >>>> > >>>> LDAP schema > >>>> =========== > >>>> - 1 new attribute: > >>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY > >>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) > >>>> > >>>> The attribute should be added to existing idnsRecord object class as MAY. > >>>> > >>>> This new attribute should contain data encoded according to ?RFC 3597 section > >>>> 5 [5]: > >>>> > >>>> The RDATA section of an RR of unknown type is represented as a > >>>> sequence of white space separated words as follows: > >>>> > >>>> The special token \# (a backslash immediately followed by a hash > >>>> sign), which identifies the RDATA as having the generic encoding > >>>> defined herein rather than a traditional type-specific encoding. > >>>> > >>>> An unsigned decimal integer specifying the RDATA length in octets. > >>>> > >>>> Zero or more words of hexadecimal data encoding the actual RDATA > >>>> field, each containing an even number of hexadecimal digits. > >>>> > >>>> If the RDATA is of zero length, the text representation contains only > >>>> the \# token and the single zero representing the length. > >>>> > >>>> Examples from RFC: > >>>> a.example. CLASS32 TYPE731 \# 6 abcd ( > >>>> ef 01 23 45 ) > >>>> b.example. HS TYPE62347 \# 0 > >>>> e.example. IN A \# 4 0A000001 > >>>> e.example. CLASS1 TYPE1 10.0.0.2 > >>>> > >>>> > >>>> Open questions about LDAP format > >>>> ================================ > >>>> Should we include "\#" constant? We know that the attribute contains record in > >>>> RFC 3597 syntax so it is not strictly necessary. > >>>> > >>>> I think it would be better to follow RFC 3597 format. It allows blind > >>>> copy&pasting from other tools, including direct calls to python-dns. > >>>> > >>>> It also eases writing conversion tools between DNS and LDAP format because > >>>> they do not need to change record values. > >>>> > >>>> > >>>> Another question is if we should explicitly include length of data represented > >>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let > >>>> it there because it is very good sanity check and again, it allows us to > >>>> re-use existing tools including parsers. > >>>> > >>>> I will ask Uninett.no for standardization after we sort this out (they own the > >>>> OID arc we use for DNS records). > >>>> > >>>> > >>>> Attribute usage > >>>> =============== > >>>> Every DNS RR type has assigned a number [1] which is used on wire. 
RR types > >>>> which are unknown to the server cannot be named by their mnemonic/type name > >>>> because server would not be able to do name->number conversion and to generate > >>>> DNS wire format. > >>>> > >>>> As a result, we have to encode the RR type number somehow. Let's use attribute > >>>> sub-types. > >>>> > >>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: > >>>> GenericRecord;TYPE65280: \# 4 0A000001 > >>>> > >>>> > >>>> CLI > >>>> === > >>>> $ ipa dnsrecord-add zone.example owner \ > >>>> --generic-type=65280 --generic-data='\# 4 0A000001' > >>>> > >>>> $ ipa dnsrecord-show zone.example owner > >>>> Record name: owner > >>>> TYPE65280 Record: \# 4 0A000001 > >>>> > >>>> > >>>> ACK? :-) > >>> > >>> Almost. > >>> We should refrain from using subtypes when not necessary, and in this > >>> case it is not necessary. > >>> > >>> Use: > >>> GenericRecord: 65280 \# 4 0A000001 > >> > >> I was considering that too but I can see two main drawbacks: > >> > >> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding > >> generic write access to GenericRecord == ability to add TLSA records too, > >> which you may not want. IMHO it is perfectly reasonable to limit write access > >> to certain types (e.g. to one from private range). > >> > >> 2) We would need a separate substring index for emulating filters like > >> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index > >> which will be handy one day when we decide to handle upgrades like > >> GenericRecord;TYPE256->UriRecord. > >> > >> Another (less important) annoyance is that conversion tools would have to > >> mangle record data instead of just converting attribute name->record type. > >> > >> > >> I can be convinced that subtypes are not necessary but I do not see clear > >> advantage of avoiding them. What is the problem with subtypes? > > > > Poor support by most clients, so it is generally discouraged. > Hmm, it does not sound like a thing we should care in this case. DNS tree is > not meant for direct consumption by LDAP clients (compare with cn=compat). > > IMHO the only two clients we should care are FreeIPA framework and > bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to > access DNS tree by hand - sure, use a standard compliant client! > > Working ACI and LDAP filters sounds like good price for supporting only > standards compliant clients. > > AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because > Eclipse has nice support for sub-types built-in. If I can draw some > conclusions from that, sub-types are not a thing aliens forgot here when > leaving Earth one million years ago :-) > > > The problem with subtypes and ACIs though is that I think ACIs do not > > care about the subtype unless you explicit mention them. > IMHO that is exactly what I would like to see for GenericRecord. It allows us > to write ACI which allows admins to add any GenericRecord and at the same time > allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for > specific group/user. > > > So perhaps bind_dyndb_ldap should refuse to use a generic type that > > shadows DNSSEC relevant records ? > Sorry, this cannot possibly work because it depends on up-to-date blacklist. > > How would the plugin released in 2015 know that highly sensitive OPENPGPKEY > type will be standardized in 2016 and assigned number XYZ? Ok, show me an example ACI that works and you get my ack :) Simo. 
-- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Tue Mar 10 18:24:40 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 10 Mar 2015 19:24:40 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <1426008965.4735.95.camel@willson.usersys.redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> Message-ID: <54FF36E8.1040707@redhat.com> On 10.3.2015 18:36, Simo Sorce wrote: > On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >> On 10.3.2015 17:35, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>> Hello, >>>>>> >>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>>>> [0]). Here is the proposal: >>>>>> >>>>>> LDAP schema >>>>>> =========== >>>>>> - 1 new attribute: >>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>> >>>>>> The attribute should be added to existing idnsRecord object class as MAY. >>>>>> >>>>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>>>> 5 [5]: >>>>>> >>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>> sequence of white space separated words as follows: >>>>>> >>>>>> The special token \# (a backslash immediately followed by a hash >>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>> defined herein rather than a traditional type-specific encoding. >>>>>> >>>>>> An unsigned decimal integer specifying the RDATA length in octets. >>>>>> >>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>> field, each containing an even number of hexadecimal digits. >>>>>> >>>>>> If the RDATA is of zero length, the text representation contains only >>>>>> the \# token and the single zero representing the length. >>>>>> >>>>>> Examples from RFC: >>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>> ef 01 23 45 ) >>>>>> b.example. HS TYPE62347 \# 0 >>>>>> e.example. IN A \# 4 0A000001 >>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>> >>>>>> >>>>>> Open questions about LDAP format >>>>>> ================================ >>>>>> Should we include "\#" constant? We know that the attribute contains record in >>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>> >>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>> >>>>>> It also eases writing conversion tools between DNS and LDAP format because >>>>>> they do not need to change record values. >>>>>> >>>>>> >>>>>> Another question is if we should explicitly include length of data represented >>>>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>>>>> it there because it is very good sanity check and again, it allows us to >>>>>> re-use existing tools including parsers. >>>>>> >>>>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>>>> OID arc we use for DNS records). >>>>>> >>>>>> >>>>>> Attribute usage >>>>>> =============== >>>>>> Every DNS RR type has assigned a number [1] which is used on wire. 
RR types >>>>>> which are unknown to the server cannot be named by their mnemonic/type name >>>>>> because server would not be able to do name->number conversion and to generate >>>>>> DNS wire format. >>>>>> >>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute >>>>>> sub-types. >>>>>> >>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>> >>>>>> >>>>>> CLI >>>>>> === >>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>> >>>>>> $ ipa dnsrecord-show zone.example owner >>>>>> Record name: owner >>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>> >>>>>> >>>>>> ACK? :-) >>>>> >>>>> Almost. >>>>> We should refrain from using subtypes when not necessary, and in this >>>>> case it is not necessary. >>>>> >>>>> Use: >>>>> GenericRecord: 65280 \# 4 0A000001 >>>> >>>> I was considering that too but I can see two main drawbacks: >>>> >>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >>>> generic write access to GenericRecord == ability to add TLSA records too, >>>> which you may not want. IMHO it is perfectly reasonable to limit write access >>>> to certain types (e.g. to one from private range). >>>> >>>> 2) We would need a separate substring index for emulating filters like >>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >>>> which will be handy one day when we decide to handle upgrades like >>>> GenericRecord;TYPE256->UriRecord. >>>> >>>> Another (less important) annoyance is that conversion tools would have to >>>> mangle record data instead of just converting attribute name->record type. >>>> >>>> >>>> I can be convinced that subtypes are not necessary but I do not see clear >>>> advantage of avoiding them. What is the problem with subtypes? >>> >>> Poor support by most clients, so it is generally discouraged. >> Hmm, it does not sound like a thing we should care in this case. DNS tree is >> not meant for direct consumption by LDAP clients (compare with cn=compat). >> >> IMHO the only two clients we should care are FreeIPA framework and >> bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to >> access DNS tree by hand - sure, use a standard compliant client! >> >> Working ACI and LDAP filters sounds like good price for supporting only >> standards compliant clients. >> >> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >> Eclipse has nice support for sub-types built-in. If I can draw some >> conclusions from that, sub-types are not a thing aliens forgot here when >> leaving Earth one million years ago :-) >> >>> The problem with subtypes and ACIs though is that I think ACIs do not >>> care about the subtype unless you explicit mention them. >> IMHO that is exactly what I would like to see for GenericRecord. It allows us >> to write ACI which allows admins to add any GenericRecord and at the same time >> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for >> specific group/user. >> >>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>> shadows DNSSEC relevant records ? >> Sorry, this cannot possibly work because it depends on up-to-date blacklist. >> >> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >> type will be standardized in 2016 and assigned number XYZ? 
> > Ok, show me an example ACI that works and you get my ack :) Am I being punished for something? :-) Anyway, this monstrosity: (targetattr = "objectclass || txtRecord;test")(target = "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) Gives 'luser' read access only to txtRecord;test and *not* to the whole txtRecord in general. $ kinit luser $ ldapsearch -Y GSSAPI -s base -b 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' SASL username: luser at IPA.EXAMPLE # txt, ipa.example., dns, ipa.example dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example objectClass: top objectClass: idnsrecord tXTRecord;test: Guess what is new here! Filter '(tXTRecord;test=*)' works as expected and returns only objects with subtype ;test. The only weird thing I noticed is that search filter '(tXTRecord=*)' does not return the object if you have access only to an subtype with existing value but not to the 'vanilla' attribute. Maybe it is a bug? I will think about it for a while and possibly open a ticket. Anyway, this is not something we need for implementation. For completeness: $ kinit admin $ ldapsearch -Y GSSAPI -s base -b 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' SASL username: admin at IPA.EXAMPLE # txt, ipa.example., dns, ipa.example dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example objectClass: top objectClass: idnsrecord tXTRecord: nothing tXTRecord: something idnsName: txt tXTRecord;test: Guess what is new here! And yes, you assume correctly that (targetattr = "txtRecord") gives access to whole txtRecord including all its subtypes. ACK? :-) -- Petr^2 Spacek From jdennis at redhat.com Tue Mar 10 18:27:36 2015 From: jdennis at redhat.com (John Dennis) Date: Tue, 10 Mar 2015 14:27:36 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150310165654.GX25455@redhat.com> References: <54FDB77E.4030807@redhat.com> <20150309171318.GV29063@hendrix.arn.redhat.com> <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1841.10209@redhat.com> <20150310161834.GU25455@redhat.com> <54FF1A40.3090702@redhat.com> <20150310165654.GX25455@redhat.com> Message-ID: <54FF3798.4050609@redhat.com> On 03/10/2015 12:56 PM, Alexander Bokovoy wrote: > See my answer to John. We don't need to end up with iCal at all since > iCal doesn't have procedural definitions of holidays. It has > EXDATE/RRULE allowing to express exceptions and repeating rules (EXRULE > for exception rules was removed in RFC5545 and is not used anymore) but > nothing more concrete. > > RFC5545 does define multiple things which are part of iCalendar format > and which we don't really need to deal with in SSSD so we don't need > full iCal at all. We need to be able to represent recurring events and > some of exceptions to them within the rules but that is a subset of what > is needed and can be implemented without involving a fully-compliant > iCal library. I always get a bit concerned when I hear we'll factor out or just import only the minimal code we need to support the minimal functionality we need from an otherwise large complex body of code implementing an entire RFC. 
Maybe the code in libical is perfectly suited to extracting the snippets we need (I don't know) but experience tells me complex code has complex inter-dependencies and reducing libical to our minimal requirements might be a significant effort. Is there really a problem with just linking with the entire libical library even if it's more than we need? -- John From simo at redhat.com Tue Mar 10 19:04:31 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 10 Mar 2015 15:04:31 -0400 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF36E8.1040707@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> Message-ID: <1426014271.4735.107.camel@willson.usersys.redhat.com> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: > On 10.3.2015 18:36, Simo Sorce wrote: > > On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: > >> On 10.3.2015 17:35, Simo Sorce wrote: > >>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: > >>>> On 10.3.2015 15:53, Simo Sorce wrote: > >>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: > >>>>>> Hello, > >>>>>> > >>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 > >>>>>> [0]). Here is the proposal: > >>>>>> > >>>>>> LDAP schema > >>>>>> =========== > >>>>>> - 1 new attribute: > >>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY > >>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) > >>>>>> > >>>>>> The attribute should be added to existing idnsRecord object class as MAY. > >>>>>> > >>>>>> This new attribute should contain data encoded according to ?RFC 3597 section > >>>>>> 5 [5]: > >>>>>> > >>>>>> The RDATA section of an RR of unknown type is represented as a > >>>>>> sequence of white space separated words as follows: > >>>>>> > >>>>>> The special token \# (a backslash immediately followed by a hash > >>>>>> sign), which identifies the RDATA as having the generic encoding > >>>>>> defined herein rather than a traditional type-specific encoding. > >>>>>> > >>>>>> An unsigned decimal integer specifying the RDATA length in octets. > >>>>>> > >>>>>> Zero or more words of hexadecimal data encoding the actual RDATA > >>>>>> field, each containing an even number of hexadecimal digits. > >>>>>> > >>>>>> If the RDATA is of zero length, the text representation contains only > >>>>>> the \# token and the single zero representing the length. > >>>>>> > >>>>>> Examples from RFC: > >>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( > >>>>>> ef 01 23 45 ) > >>>>>> b.example. HS TYPE62347 \# 0 > >>>>>> e.example. IN A \# 4 0A000001 > >>>>>> e.example. CLASS1 TYPE1 10.0.0.2 > >>>>>> > >>>>>> > >>>>>> Open questions about LDAP format > >>>>>> ================================ > >>>>>> Should we include "\#" constant? We know that the attribute contains record in > >>>>>> RFC 3597 syntax so it is not strictly necessary. > >>>>>> > >>>>>> I think it would be better to follow RFC 3597 format. It allows blind > >>>>>> copy&pasting from other tools, including direct calls to python-dns. > >>>>>> > >>>>>> It also eases writing conversion tools between DNS and LDAP format because > >>>>>> they do not need to change record values. 
> >>>>>> > >>>>>> > >>>>>> Another question is if we should explicitly include length of data represented > >>>>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let > >>>>>> it there because it is very good sanity check and again, it allows us to > >>>>>> re-use existing tools including parsers. > >>>>>> > >>>>>> I will ask Uninett.no for standardization after we sort this out (they own the > >>>>>> OID arc we use for DNS records). > >>>>>> > >>>>>> > >>>>>> Attribute usage > >>>>>> =============== > >>>>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types > >>>>>> which are unknown to the server cannot be named by their mnemonic/type name > >>>>>> because server would not be able to do name->number conversion and to generate > >>>>>> DNS wire format. > >>>>>> > >>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute > >>>>>> sub-types. > >>>>>> > >>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: > >>>>>> GenericRecord;TYPE65280: \# 4 0A000001 > >>>>>> > >>>>>> > >>>>>> CLI > >>>>>> === > >>>>>> $ ipa dnsrecord-add zone.example owner \ > >>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' > >>>>>> > >>>>>> $ ipa dnsrecord-show zone.example owner > >>>>>> Record name: owner > >>>>>> TYPE65280 Record: \# 4 0A000001 > >>>>>> > >>>>>> > >>>>>> ACK? :-) > >>>>> > >>>>> Almost. > >>>>> We should refrain from using subtypes when not necessary, and in this > >>>>> case it is not necessary. > >>>>> > >>>>> Use: > >>>>> GenericRecord: 65280 \# 4 0A000001 > >>>> > >>>> I was considering that too but I can see two main drawbacks: > >>>> > >>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding > >>>> generic write access to GenericRecord == ability to add TLSA records too, > >>>> which you may not want. IMHO it is perfectly reasonable to limit write access > >>>> to certain types (e.g. to one from private range). > >>>> > >>>> 2) We would need a separate substring index for emulating filters like > >>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index > >>>> which will be handy one day when we decide to handle upgrades like > >>>> GenericRecord;TYPE256->UriRecord. > >>>> > >>>> Another (less important) annoyance is that conversion tools would have to > >>>> mangle record data instead of just converting attribute name->record type. > >>>> > >>>> > >>>> I can be convinced that subtypes are not necessary but I do not see clear > >>>> advantage of avoiding them. What is the problem with subtypes? > >>> > >>> Poor support by most clients, so it is generally discouraged. > >> Hmm, it does not sound like a thing we should care in this case. DNS tree is > >> not meant for direct consumption by LDAP clients (compare with cn=compat). > >> > >> IMHO the only two clients we should care are FreeIPA framework and > >> bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to > >> access DNS tree by hand - sure, use a standard compliant client! > >> > >> Working ACI and LDAP filters sounds like good price for supporting only > >> standards compliant clients. > >> > >> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because > >> Eclipse has nice support for sub-types built-in. 
If I can draw some > >> conclusions from that, sub-types are not a thing aliens forgot here when > >> leaving Earth one million years ago :-) > >> > >>> The problem with subtypes and ACIs though is that I think ACIs do not > >>> care about the subtype unless you explicit mention them. > >> IMHO that is exactly what I would like to see for GenericRecord. It allows us > >> to write ACI which allows admins to add any GenericRecord and at the same time > >> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for > >> specific group/user. > >> > >>> So perhaps bind_dyndb_ldap should refuse to use a generic type that > >>> shadows DNSSEC relevant records ? > >> Sorry, this cannot possibly work because it depends on up-to-date blacklist. > >> > >> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY > >> type will be standardized in 2016 and assigned number XYZ? > > > > Ok, show me an example ACI that works and you get my ack :) > > Am I being punished for something? :-) > > Anyway, this monstrosity: > > (targetattr = "objectclass || txtRecord;test")(target = > "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl > "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = > "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) > > Gives 'luser' read access only to txtRecord;test and *not* to the whole > txtRecord in general. > > $ kinit luser > $ ldapsearch -Y GSSAPI -s base -b > 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' > SASL username: luser at IPA.EXAMPLE > > # txt, ipa.example., dns, ipa.example > dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example > objectClass: top > objectClass: idnsrecord > tXTRecord;test: Guess what is new here! > > Filter '(tXTRecord;test=*)' works as expected and returns only objects with > subtype ;test. > > The only weird thing I noticed is that search filter '(tXTRecord=*)' does not > return the object if you have access only to an subtype with existing value > but not to the 'vanilla' attribute. > > Maybe it is a bug? I will think about it for a while and possibly open a > ticket. Anyway, this is not something we need for implementation. > > > For completeness: > > $ kinit admin > $ ldapsearch -Y GSSAPI -s base -b > 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' > SASL username: admin at IPA.EXAMPLE > > # txt, ipa.example., dns, ipa.example > dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example > objectClass: top > objectClass: idnsrecord > tXTRecord: nothing > tXTRecord: something > idnsName: txt > tXTRecord;test: Guess what is new here! > > > And yes, you assume correctly that (targetattr = "txtRecord") gives access to > whole txtRecord including all its subtypes. > > ACK? :-) > ACK. Make sure it is abundantly clear in the docs what is the implication of giving access to the generic attribute w/o qualifications. Simo. 
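The search behaviour that goes with those ACIs can be exercised the same way from a script. A minimal python-ldap sketch against the test entry shown above; the anonymous bind and base DN are placeholders (the thread used GSSAPI binds), and it only illustrates the two filters discussed, not a recommended access setup:

import ldap

conn = ldap.initialize('ldap://ipa.example')   # placeholder server
conn.simple_bind_s()                           # anonymous bind, illustration only

base = 'idnsname=ipa.example.,cn=dns,dc=ipa,dc=example'

# Matches only entries carrying the ';test' sub-type that the ACI exposes.
subtyped = conn.search_s(base, ldap.SCOPE_SUBTREE, '(tXTRecord;test=*)')

# The unqualified filter additionally depends on read access to the plain
# attribute, which is the surprising interaction described above.
plain = conn.search_s(base, ldap.SCOPE_SUBTREE, '(tXTRecord=*)')

print(len(subtyped), len(plain))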
-- Simo Sorce * Red Hat, Inc * New York From mkosek at redhat.com Wed Mar 11 07:13:56 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Mar 2015 08:13:56 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FF36E8.1040707@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> Message-ID: <54FFEB34.9000407@redhat.com> On 03/10/2015 07:24 PM, Petr Spacek wrote: > On 10.3.2015 18:36, Simo Sorce wrote: >> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>> On 10.3.2015 17:35, Simo Sorce wrote: >>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>> Hello, >>>>>>> >>>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>>>>> [0]). Here is the proposal: >>>>>>> >>>>>>> LDAP schema >>>>>>> =========== >>>>>>> - 1 new attribute: >>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>> >>>>>>> The attribute should be added to existing idnsRecord object class as MAY. >>>>>>> >>>>>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>>>>> 5 [5]: >>>>>>> >>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>> sequence of white space separated words as follows: >>>>>>> >>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>> >>>>>>> An unsigned decimal integer specifying the RDATA length in octets. >>>>>>> >>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>> >>>>>>> If the RDATA is of zero length, the text representation contains only >>>>>>> the \# token and the single zero representing the length. >>>>>>> >>>>>>> Examples from RFC: >>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>> ef 01 23 45 ) >>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>> e.example. IN A \# 4 0A000001 >>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>> >>>>>>> >>>>>>> Open questions about LDAP format >>>>>>> ================================ >>>>>>> Should we include "\#" constant? We know that the attribute contains record in >>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>> >>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>> >>>>>>> It also eases writing conversion tools between DNS and LDAP format because >>>>>>> they do not need to change record values. >>>>>>> >>>>>>> >>>>>>> Another question is if we should explicitly include length of data represented >>>>>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>>>>>> it there because it is very good sanity check and again, it allows us to >>>>>>> re-use existing tools including parsers. >>>>>>> >>>>>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>>>>> OID arc we use for DNS records). 
>>>>>>> >>>>>>> >>>>>>> Attribute usage >>>>>>> =============== >>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>>>>>> which are unknown to the server cannot be named by their mnemonic/type name >>>>>>> because server would not be able to do name->number conversion and to generate >>>>>>> DNS wire format. >>>>>>> >>>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute >>>>>>> sub-types. >>>>>>> >>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>> >>>>>>> >>>>>>> CLI >>>>>>> === >>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>> >>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>> Record name: owner >>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>> >>>>>>> >>>>>>> ACK? :-) >>>>>> >>>>>> Almost. >>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>> case it is not necessary. >>>>>> >>>>>> Use: >>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>> >>>>> I was considering that too but I can see two main drawbacks: >>>>> >>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>> which you may not want. IMHO it is perfectly reasonable to limit write access >>>>> to certain types (e.g. to one from private range). >>>>> >>>>> 2) We would need a separate substring index for emulating filters like >>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >>>>> which will be handy one day when we decide to handle upgrades like >>>>> GenericRecord;TYPE256->UriRecord. >>>>> >>>>> Another (less important) annoyance is that conversion tools would have to >>>>> mangle record data instead of just converting attribute name->record type. >>>>> >>>>> >>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>> advantage of avoiding them. What is the problem with subtypes? >>>> >>>> Poor support by most clients, so it is generally discouraged. >>> Hmm, it does not sound like a thing we should care in this case. DNS tree is >>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>> >>> IMHO the only two clients we should care are FreeIPA framework and >>> bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to >>> access DNS tree by hand - sure, use a standard compliant client! >>> >>> Working ACI and LDAP filters sounds like good price for supporting only >>> standards compliant clients. >>> >>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>> Eclipse has nice support for sub-types built-in. If I can draw some >>> conclusions from that, sub-types are not a thing aliens forgot here when >>> leaving Earth one million years ago :-) >>> >>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>> care about the subtype unless you explicit mention them. >>> IMHO that is exactly what I would like to see for GenericRecord. It allows us >>> to write ACI which allows admins to add any GenericRecord and at the same time >>> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for >>> specific group/user. >>> >>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>> shadows DNSSEC relevant records ? >>> Sorry, this cannot possibly work because it depends on up-to-date blacklist. 
>>> >>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>> type will be standardized in 2016 and assigned number XYZ? >> >> Ok, show me an example ACI that works and you get my ack :) > > Am I being punished for something? :-) > > Anyway, this monstrosity: > > (targetattr = "objectclass || txtRecord;test")(target = > "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl > "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = > "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) > > Gives 'luser' read access only to txtRecord;test and *not* to the whole > txtRecord in general. > > $ kinit luser > $ ldapsearch -Y GSSAPI -s base -b > 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' > SASL username: luser at IPA.EXAMPLE > > # txt, ipa.example., dns, ipa.example > dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example > objectClass: top > objectClass: idnsrecord > tXTRecord;test: Guess what is new here! > > Filter '(tXTRecord;test=*)' works as expected and returns only objects with > subtype ;test. > > The only weird thing I noticed is that search filter '(tXTRecord=*)' does not > return the object if you have access only to an subtype with existing value > but not to the 'vanilla' attribute. > > Maybe it is a bug? I will think about it for a while and possibly open a > ticket. Anyway, this is not something we need for implementation. Ludwig? IIRC, DS does not return object for LDAP search, if the user does not have access to *all* searched attributes. So I am on fence whether this is a bug or not. As user does not really have access to the base attribute... Martin From abokovoy at redhat.com Wed Mar 11 09:28:15 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Wed, 11 Mar 2015 11:28:15 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <54FF3798.4050609@redhat.com> References: <20150309182216.GD25455@redhat.com> <54FEF90A.407@redhat.com> <20150310144041.GP25455@redhat.com> <54FF03EE.6020005@redhat.com> <20150310150601.GR22044@hendrix.arn.redhat.com> <54FF1841.10209@redhat.com> <20150310161834.GU25455@redhat.com> <54FF1A40.3090702@redhat.com> <20150310165654.GX25455@redhat.com> <54FF3798.4050609@redhat.com> Message-ID: <20150311092815.GA3878@redhat.com> On Tue, 10 Mar 2015, John Dennis wrote: >On 03/10/2015 12:56 PM, Alexander Bokovoy wrote: >> See my answer to John. We don't need to end up with iCal at all since >> iCal doesn't have procedural definitions of holidays. It has >> EXDATE/RRULE allowing to express exceptions and repeating rules (EXRULE >> for exception rules was removed in RFC5545 and is not used anymore) but >> nothing more concrete. >> >> RFC5545 does define multiple things which are part of iCalendar format >> and which we don't really need to deal with in SSSD so we don't need >> full iCal at all. We need to be able to represent recurring events and >> some of exceptions to them within the rules but that is a subset of what >> is needed and can be implemented without involving a fully-compliant >> iCal library. > >I always get a bit concerned when I hear we'll factor out or just import >only the minimal code we need to support the minimal functionality we >need from an otherwise large complex body of code implementing an entire >RFC. 
> >Maybe the code in libical is perfectly suited to extracting the snippets >we need (I don't know) but experience tells me complex code has complex >inter-dependencies and reducing libical to our minimal requirements >might be a significant effort. Is there really a problem with just >linking with the entire libical library even if it's more than we need? I'd leave that to Jakub to decide. Here is a typical sample of what we would need from libical: parsing of recurrence rules: https://github.com/libical/libical/blob/master/src/test/icalrecur_test.c -- / Alexander Bokovoy From pspacek at redhat.com Wed Mar 11 10:09:27 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 11:09:27 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <54FFEB34.9000407@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <54FFEB34.9000407@redhat.com> Message-ID: <55001457.1020006@redhat.com> On 11.3.2015 08:13, Martin Kosek wrote: > On 03/10/2015 07:24 PM, Petr Spacek wrote: >> On 10.3.2015 18:36, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>>>>>> [0]). Here is the proposal: >>>>>>>> >>>>>>>> LDAP schema >>>>>>>> =========== >>>>>>>> - 1 new attribute: >>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>> >>>>>>>> The attribute should be added to existing idnsRecord object class as MAY. >>>>>>>> >>>>>>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>>>>>> 5 [5]: >>>>>>>> >>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>> sequence of white space separated words as follows: >>>>>>>> >>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>> >>>>>>>> An unsigned decimal integer specifying the RDATA length in octets. >>>>>>>> >>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>> >>>>>>>> If the RDATA is of zero length, the text representation contains only >>>>>>>> the \# token and the single zero representing the length. >>>>>>>> >>>>>>>> Examples from RFC: >>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>> ef 01 23 45 ) >>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>> >>>>>>>> >>>>>>>> Open questions about LDAP format >>>>>>>> ================================ >>>>>>>> Should we include "\#" constant? We know that the attribute contains record in >>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>> >>>>>>>> I think it would be better to follow RFC 3597 format. 
It allows blind >>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>> >>>>>>>> It also eases writing conversion tools between DNS and LDAP format because >>>>>>>> they do not need to change record values. >>>>>>>> >>>>>>>> >>>>>>>> Another question is if we should explicitly include length of data represented >>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>>>>>>> it there because it is very good sanity check and again, it allows us to >>>>>>>> re-use existing tools including parsers. >>>>>>>> >>>>>>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>>>>>> OID arc we use for DNS records). >>>>>>>> >>>>>>>> >>>>>>>> Attribute usage >>>>>>>> =============== >>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>>>>>>> which are unknown to the server cannot be named by their mnemonic/type name >>>>>>>> because server would not be able to do name->number conversion and to generate >>>>>>>> DNS wire format. >>>>>>>> >>>>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute >>>>>>>> sub-types. >>>>>>>> >>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>> >>>>>>>> >>>>>>>> CLI >>>>>>>> === >>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>> >>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>> Record name: owner >>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>> >>>>>>>> >>>>>>>> ACK? :-) >>>>>>> >>>>>>> Almost. >>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>> case it is not necessary. >>>>>>> >>>>>>> Use: >>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>> >>>>>> I was considering that too but I can see two main drawbacks: >>>>>> >>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>> which you may not want. IMHO it is perfectly reasonable to limit write access >>>>>> to certain types (e.g. to one from private range). >>>>>> >>>>>> 2) We would need a separate substring index for emulating filters like >>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >>>>>> which will be handy one day when we decide to handle upgrades like >>>>>> GenericRecord;TYPE256->UriRecord. >>>>>> >>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>> mangle record data instead of just converting attribute name->record type. >>>>>> >>>>>> >>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>> >>>>> Poor support by most clients, so it is generally discouraged. >>>> Hmm, it does not sound like a thing we should care in this case. DNS tree is >>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>> >>>> IMHO the only two clients we should care are FreeIPA framework and >>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to >>>> access DNS tree by hand - sure, use a standard compliant client! >>>> >>>> Working ACI and LDAP filters sounds like good price for supporting only >>>> standards compliant clients. 
>>>> >>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>> leaving Earth one million years ago :-) >>>> >>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>> care about the subtype unless you explicit mention them. >>>> IMHO that is exactly what I would like to see for GenericRecord. It allows us >>>> to write ACI which allows admins to add any GenericRecord and at the same time >>>> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for >>>> specific group/user. >>>> >>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>> shadows DNSSEC relevant records ? >>>> Sorry, this cannot possibly work because it depends on up-to-date blacklist. >>>> >>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>> type will be standardized in 2016 and assigned number XYZ? >>> >>> Ok, show me an example ACI that works and you get my ack :) >> >> Am I being punished for something? :-) >> >> Anyway, this monstrosity: >> >> (targetattr = "objectclass || txtRecord;test")(target = >> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >> >> Gives 'luser' read access only to txtRecord;test and *not* to the whole >> txtRecord in general. >> >> $ kinit luser >> $ ldapsearch -Y GSSAPI -s base -b >> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >> SASL username: luser at IPA.EXAMPLE >> >> # txt, ipa.example., dns, ipa.example >> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >> objectClass: top >> objectClass: idnsrecord >> tXTRecord;test: Guess what is new here! >> >> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >> subtype ;test. >> >> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >> return the object if you have access only to an subtype with existing value >> but not to the 'vanilla' attribute. >> >> Maybe it is a bug? I will think about it for a while and possibly open a >> ticket. Anyway, this is not something we need for implementation. > > Ludwig? IIRC, DS does not return object for LDAP search, if the user does not > have access to *all* searched attributes. So I am on fence whether this is a > bug or not. As user does not really have access to the base attribute... 
Here is a ticket so we do not forget: https://fedorahosted.org/389/ticket/48125 -- Petr^2 Spacek From pspacek at redhat.com Wed Mar 11 10:12:39 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 11:12:39 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <1426014271.4735.107.camel@willson.usersys.redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> Message-ID: <55001517.1060501@redhat.com> On 10.3.2015 20:04, Simo Sorce wrote: > On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >> On 10.3.2015 18:36, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>> Hello, >>>>>>>> >>>>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>>>>>> [0]). Here is the proposal: >>>>>>>> >>>>>>>> LDAP schema >>>>>>>> =========== >>>>>>>> - 1 new attribute: >>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>> >>>>>>>> The attribute should be added to existing idnsRecord object class as MAY. >>>>>>>> >>>>>>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>>>>>> 5 [5]: >>>>>>>> >>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>> sequence of white space separated words as follows: >>>>>>>> >>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>> >>>>>>>> An unsigned decimal integer specifying the RDATA length in octets. >>>>>>>> >>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>> >>>>>>>> If the RDATA is of zero length, the text representation contains only >>>>>>>> the \# token and the single zero representing the length. >>>>>>>> >>>>>>>> Examples from RFC: >>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>> ef 01 23 45 ) >>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>> >>>>>>>> >>>>>>>> Open questions about LDAP format >>>>>>>> ================================ >>>>>>>> Should we include "\#" constant? We know that the attribute contains record in >>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>> >>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>> >>>>>>>> It also eases writing conversion tools between DNS and LDAP format because >>>>>>>> they do not need to change record values. >>>>>>>> >>>>>>>> >>>>>>>> Another question is if we should explicitly include length of data represented >>>>>>>> in hexadecimal notation as a decimal number. 
I'm very strongly inclined to let >>>>>>>> it there because it is very good sanity check and again, it allows us to >>>>>>>> re-use existing tools including parsers. >>>>>>>> >>>>>>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>>>>>> OID arc we use for DNS records). >>>>>>>> >>>>>>>> >>>>>>>> Attribute usage >>>>>>>> =============== >>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>>>>>>> which are unknown to the server cannot be named by their mnemonic/type name >>>>>>>> because server would not be able to do name->number conversion and to generate >>>>>>>> DNS wire format. >>>>>>>> >>>>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute >>>>>>>> sub-types. >>>>>>>> >>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>> >>>>>>>> >>>>>>>> CLI >>>>>>>> === >>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>> >>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>> Record name: owner >>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>> >>>>>>>> >>>>>>>> ACK? :-) >>>>>>> >>>>>>> Almost. >>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>> case it is not necessary. >>>>>>> >>>>>>> Use: >>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>> >>>>>> I was considering that too but I can see two main drawbacks: >>>>>> >>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>> which you may not want. IMHO it is perfectly reasonable to limit write access >>>>>> to certain types (e.g. to one from private range). >>>>>> >>>>>> 2) We would need a separate substring index for emulating filters like >>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >>>>>> which will be handy one day when we decide to handle upgrades like >>>>>> GenericRecord;TYPE256->UriRecord. >>>>>> >>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>> mangle record data instead of just converting attribute name->record type. >>>>>> >>>>>> >>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>> >>>>> Poor support by most clients, so it is generally discouraged. >>>> Hmm, it does not sound like a thing we should care in this case. DNS tree is >>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>> >>>> IMHO the only two clients we should care are FreeIPA framework and >>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone wants to >>>> access DNS tree by hand - sure, use a standard compliant client! >>>> >>>> Working ACI and LDAP filters sounds like good price for supporting only >>>> standards compliant clients. >>>> >>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>> leaving Earth one million years ago :-) >>>> >>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>> care about the subtype unless you explicit mention them. >>>> IMHO that is exactly what I would like to see for GenericRecord. 
It allows us >>>> to write ACI which allows admins to add any GenericRecord and at the same time >>>> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for >>>> specific group/user. >>>> >>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>> shadows DNSSEC relevant records ? >>>> Sorry, this cannot possibly work because it depends on up-to-date blacklist. >>>> >>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>> type will be standardized in 2016 and assigned number XYZ? >>> >>> Ok, show me an example ACI that works and you get my ack :) >> >> Am I being punished for something? :-) >> >> Anyway, this monstrosity: >> >> (targetattr = "objectclass || txtRecord;test")(target = >> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >> >> Gives 'luser' read access only to txtRecord;test and *not* to the whole >> txtRecord in general. >> >> $ kinit luser >> $ ldapsearch -Y GSSAPI -s base -b >> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >> SASL username: luser at IPA.EXAMPLE >> >> # txt, ipa.example., dns, ipa.example >> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >> objectClass: top >> objectClass: idnsrecord >> tXTRecord;test: Guess what is new here! >> >> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >> subtype ;test. >> >> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >> return the object if you have access only to an subtype with existing value >> but not to the 'vanilla' attribute. >> >> Maybe it is a bug? I will think about it for a while and possibly open a >> ticket. Anyway, this is not something we need for implementation. >> >> >> For completeness: >> >> $ kinit admin >> $ ldapsearch -Y GSSAPI -s base -b >> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >> SASL username: admin at IPA.EXAMPLE >> >> # txt, ipa.example., dns, ipa.example >> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >> objectClass: top >> objectClass: idnsrecord >> tXTRecord: nothing >> tXTRecord: something >> idnsName: txt >> tXTRecord;test: Guess what is new here! >> >> >> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >> whole txtRecord including all its subtypes. >> >> ACK? :-) >> > > ACK. Thank you. Now to the most important and difficult question: Should the attribute name be "GenericRecord" or "UnknownRecord"? I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third opinion :-) > Make sure it is abundantly clear in the docs what is the implication of > giving access to the generic attribute w/o qualifications. Sure. -- Petr^2 Spacek From pspacek at redhat.com Wed Mar 11 10:28:26 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 11:28:26 +0100 Subject: [Freeipa-devel] [PATCH 0209] Fix logically dead code in ipap11helper module In-Reply-To: <54FD9774.9030209@redhat.com> References: <54FD9774.9030209@redhat.com> Message-ID: <550018CA.3030308@redhat.com> On 9.3.2015 13:52, Martin Basti wrote: > Patch attached. ACK for this patch. When you are at it, it would be good to fix other warnings too. 
GCC on Fedora 21 is yelling at me: p11helper.c: In function ?P11_Helper_find_keys?: p11helper.c:1062:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] for (int i = 0; i < objects_len; ++i) { ^ p11helper.c: At top level: p11helper.c:2125:1: warning: missing initializer for field ?tp_free? of ?PyTypeObject? [-Wmissing-field-initializers] }; ^ In file included from /usr/include/python2.7/Python.h:80:0, from p11helper.c:37: /usr/include/python2.7/object.h:391:14: note: ?tp_free? declared here freefunc tp_free; /* Low-level free-memory routine */ Feel free to fix it in a separate patch. -- Petr^2 Spacek From jcholast at redhat.com Wed Mar 11 10:34:49 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 11 Mar 2015 11:34:49 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55001517.1060501@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> Message-ID: <55001A49.9050001@redhat.com> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): > On 10.3.2015 20:04, Simo Sorce wrote: >> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>> On 10.3.2015 18:36, Simo Sorce wrote: >>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> I would like to discuss Generic support for unknown DNS RR types (RFC 3597 >>>>>>>>> [0]). Here is the proposal: >>>>>>>>> >>>>>>>>> LDAP schema >>>>>>>>> =========== >>>>>>>>> - 1 new attribute: >>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' EQUALITY >>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>> >>>>>>>>> The attribute should be added to existing idnsRecord object class as MAY. >>>>>>>>> >>>>>>>>> This new attribute should contain data encoded according to ?RFC 3597 section >>>>>>>>> 5 [5]: >>>>>>>>> >>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>> sequence of white space separated words as follows: >>>>>>>>> >>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>> >>>>>>>>> An unsigned decimal integer specifying the RDATA length in octets. >>>>>>>>> >>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>> >>>>>>>>> If the RDATA is of zero length, the text representation contains only >>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>> >>>>>>>>> Examples from RFC: >>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>> ef 01 23 45 ) >>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>> >>>>>>>>> >>>>>>>>> Open questions about LDAP format >>>>>>>>> ================================ >>>>>>>>> Should we include "\#" constant? 
We know that the attribute contains record in >>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>> >>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>> >>>>>>>>> It also eases writing conversion tools between DNS and LDAP format because >>>>>>>>> they do not need to change record values. >>>>>>>>> >>>>>>>>> >>>>>>>>> Another question is if we should explicitly include length of data represented >>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly inclined to let >>>>>>>>> it there because it is very good sanity check and again, it allows us to >>>>>>>>> re-use existing tools including parsers. >>>>>>>>> >>>>>>>>> I will ask Uninett.no for standardization after we sort this out (they own the >>>>>>>>> OID arc we use for DNS records). >>>>>>>>> >>>>>>>>> >>>>>>>>> Attribute usage >>>>>>>>> =============== >>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. RR types >>>>>>>>> which are unknown to the server cannot be named by their mnemonic/type name >>>>>>>>> because server would not be able to do name->number conversion and to generate >>>>>>>>> DNS wire format. >>>>>>>>> >>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use attribute >>>>>>>>> sub-types. >>>>>>>>> >>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be represented as: >>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>> >>>>>>>>> >>>>>>>>> CLI >>>>>>>>> === >>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>> >>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>> Record name: owner >>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>> >>>>>>>>> >>>>>>>>> ACK? :-) >>>>>>>> >>>>>>>> Almost. >>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>> case it is not necessary. >>>>>>>> >>>>>>>> Use: >>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>> >>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>> >>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). Adding >>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write access >>>>>>> to certain types (e.g. to one from private range). >>>>>>> >>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence index >>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>> >>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>> mangle record data instead of just converting attribute name->record type. >>>>>>> >>>>>>> >>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>> >>>>>> Poor support by most clients, so it is generally discouraged. >>>>> Hmm, it does not sound like a thing we should care in this case. DNS tree is >>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>> >>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>> bind-dyndb-ldap so I do not see this as a problem, really. 
If someone wants to >>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>> >>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>> standards compliant clients. >>>>> >>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>> leaving Earth one million years ago :-) >>>>> >>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>> care about the subtype unless you explicit mention them. >>>>> IMHO that is exactly what I would like to see for GenericRecord. It allows us >>>>> to write ACI which allows admins to add any GenericRecord and at the same time >>>>> allows us to craft ACI which allows access only to GenericRecord;TYPE65280 for >>>>> specific group/user. >>>>> >>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>> shadows DNSSEC relevant records ? >>>>> Sorry, this cannot possibly work because it depends on up-to-date blacklist. >>>>> >>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>> type will be standardized in 2016 and assigned number XYZ? >>>> >>>> Ok, show me an example ACI that works and you get my ack :) >>> >>> Am I being punished for something? :-) >>> >>> Anyway, this monstrosity: >>> >>> (targetattr = "objectclass || txtRecord;test")(target = >>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>> >>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>> txtRecord in general. >>> >>> $ kinit luser >>> $ ldapsearch -Y GSSAPI -s base -b >>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>> SASL username: luser at IPA.EXAMPLE >>> >>> # txt, ipa.example., dns, ipa.example >>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>> objectClass: top >>> objectClass: idnsrecord >>> tXTRecord;test: Guess what is new here! >>> >>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>> subtype ;test. >>> >>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>> return the object if you have access only to an subtype with existing value >>> but not to the 'vanilla' attribute. >>> >>> Maybe it is a bug? I will think about it for a while and possibly open a >>> ticket. Anyway, this is not something we need for implementation. >>> >>> >>> For completeness: >>> >>> $ kinit admin >>> $ ldapsearch -Y GSSAPI -s base -b >>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>> SASL username: admin at IPA.EXAMPLE >>> >>> # txt, ipa.example., dns, ipa.example >>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>> objectClass: top >>> objectClass: idnsrecord >>> tXTRecord: nothing >>> tXTRecord: something >>> idnsName: txt >>> tXTRecord;test: Guess what is new here! >>> >>> >>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>> whole txtRecord including all its subtypes. >>> >>> ACK? :-) >>> >> >> ACK. > > Thank you. Now to the most important and difficult question: > Should the attribute name be "GenericRecord" or "UnknownRecord"? 
> > I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third > opinion :-) GenericRecord sounds like something that may be used for any record type, known or unknown. I don't think that's what we want. We want users to use it only for unknown record types and use appropriate Record attribute for known attributes. The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The word "generic" is used only when referring to encoding of RDATA. You even used "*unknown* DNS record, RFC 3597" as description of the attribute yourself. > >> Make sure it is abundantly clear in the docs what is the implication of >> giving access to the generic attribute w/o qualifications. > > Sure. > -- Jan Cholasta From pspacek at redhat.com Wed Mar 11 11:42:11 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 12:42:11 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <54FD8CAF.7030609@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> Message-ID: <55002A13.8010706@redhat.com> Hello Martin^3, good work, we are almost there! Please see my nitpicks in-line. On 9.3.2015 13:06, Martin Babinsky wrote: > On 03/06/2015 01:05 PM, Martin Babinsky wrote: >> This series of patches for the master/4.1 branch attempts to implement >> some of the Rob's and Petr Vobornik's ideas which originated from a >> discussion on this list regarding my original patch fixing >> https://fedorahosted.org/freeipa/ticket/4808. >> >> I suppose that these patches are just a first iteration, we may further >> discuss if this is the right thing to do. >> >> Below is a quote from the original discussion just to get the context: >> >> >> > > The next iteration of patches is attached below. Thanks to jcholast and > pvoborni for the comments and insights. > > -- > Martin^3 Babinsky > > freeipa-mbabinsk-0015-2-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch > > > From 97e4eed332391bab7a12dc593152e369f347fd3c Mon Sep 17 00:00:00 2001 > From: Martin Babinsky > Date: Mon, 9 Mar 2015 12:53:10 +0100 > Subject: [PATCH 1/3] ipautil: new functions kinit_keytab and kinit_password > > kinit_keytab replaces kinit_hostprincipal and performs Kerberos auth using > keytab file. Function is also able to repeat authentication multiple times > before giving up and raising StandardError. > > kinit_password wraps kinit auth using password and also supports FAST > authentication using httpd armor ccache. > --- > ipapython/ipautil.py | 60 ++++++++++++++++++++++++++++++++++++++++------------ > 1 file changed, 46 insertions(+), 14 deletions(-) > > diff --git a/ipapython/ipautil.py b/ipapython/ipautil.py > index 4116d974e620341119b56fad3cff1bda48af3bab..4547165ccf24ff6edf5c65e756aa321aa34b9e61 100644 > --- a/ipapython/ipautil.py > +++ b/ipapython/ipautil.py > @@ -1175,27 +1175,59 @@ def wait_for_open_socket(socket_name, timeout=0): > else: > raise e > > -def kinit_hostprincipal(keytab, ccachedir, principal): > + > +def kinit_keytab(keytab, ccache_path, principal, attempts=1): > """ > - Given a ccache directory and a principal kinit as that user. > + Given a ccache_path , keytab file and a principal kinit as that user. > + > + The optional parameter 'attempts' specifies how many times the credential > + initialization should be attempted before giving up and raising > + StandardError. > > This blindly overwrites the current CCNAME so if you need to save > it do so before calling this function. 
> > Thus far this is used to kinit as the local host. > """ > - try: > - ccache_file = 'FILE:%s/ccache' % ccachedir > - krbcontext = krbV.default_context() > - ktab = krbV.Keytab(name=keytab, context=krbcontext) > - princ = krbV.Principal(name=principal, context=krbcontext) > - os.environ['KRB5CCNAME'] = ccache_file > - ccache = krbV.CCache(name=ccache_file, context=krbcontext, primary_principal=princ) > - ccache.init(princ) > - ccache.init_creds_keytab(keytab=ktab, principal=princ) > - return ccache_file > - except krbV.Krb5Error, e: > - raise StandardError('Error initializing principal %s in %s: %s' % (principal, keytab, str(e))) > + root_logger.debug("Initializing principal %s using keytab %s" > + % (principal, keytab)) > + for attempt in xrange(attempts): I would like to see new code compatible with Python 3. Here I'm not sure what is the generic solution for xrange but in this particular case I would recommend you to use just range. Attempts variable should have small values so the x/range differences do not matter here. > + try: > + krbcontext = krbV.default_context() > + ktab = krbV.Keytab(name=keytab, context=krbcontext) > + princ = krbV.Principal(name=principal, context=krbcontext) > + os.environ['KRB5CCNAME'] = ccache_path This is a bit scary, especially when it comes to multi-threaded execution. If it is really necessary please add comment that this method is not thread-safe. > + ccache = krbV.CCache(name=ccache_path, context=krbcontext, > + primary_principal=princ) > + ccache.init(princ) > + ccache.init_creds_keytab(keytab=ktab, principal=princ) > + root_logger.debug("Attempt %d/%d: success" > + % (attempt + 1, attempts)) What about adding + 1 to range boundaries instead of + 1 here and there? > + return > + except krbV.Krb5Error, e: except ... , ... syntax is not going to work in Python 3. Maybe 'as' would be better? > + root_logger.debug("Attempt %d/%d: failed" > + % (attempt + 1, attempts)) > + time.sleep(1) > + > + root_logger.debug("Maximum number of attempts (%d) reached" > + % attempts) > + raise StandardError("Error initializing principal %s: %s" > + % (principal, str(e))) > + > + > +def kinit_password(principal, password, env={}, armor_ccache=""): > + """perform interactive kinit as principal using password. Additional > + enviroment variables can be specified using env. If using FAST for > + web-based authentication, use armor_ccache to specify http service ccache. > + """ > + root_logger.debug("Initializing principal %s using password" % principal) > + args = [paths.KINIT, principal] > + if armor_ccache: > + root_logger.debug("Using armor ccache %s for FAST webauth" > + % armor_ccache) > + args.extend(['-T', armor_ccache]) > + run(args, env=env, stdin=password) > + > > def dn_attribute_property(private_name): > ''' > -- 2.1.0 > > > freeipa-mbabinsk-0016-2-ipa-client-install-try-to-get-host-TGT-several-times.patch > > > From e438d8a0711b4271d24d7d24e98395503912a1c4 Mon Sep 17 00:00:00 2001 > From: Martin Babinsky > Date: Mon, 9 Mar 2015 12:53:57 +0100 > Subject: [PATCH 2/3] ipa-client-install: try to get host TGT several times > before giving up > > New option '--kinit-attempts' enables the host to make multiple attempts to > obtain TGT from KDC before giving up and aborting client installation. > > In addition, all kinit attempts were replaced by calls to > 'ipautil.kinit_keytab' and 'ipautil.kinit_password'. 
> > https://fedorahosted.org/freeipa/ticket/4808 > --- > ipa-client/ipa-install/ipa-client-install | 65 +++++++++++++++++-------------- > ipa-client/man/ipa-client-install.1 | 5 +++ > 2 files changed, 41 insertions(+), 29 deletions(-) > > diff --git a/ipa-client/ipa-install/ipa-client-install b/ipa-client/ipa-install/ipa-client-install > index ccaab5536e83b4b6ac60b81132c3455c0af19ae1..c817f9e86dbaa6a2cca7d1a463f53d491fa7badb 100755 > --- a/ipa-client/ipa-install/ipa-client-install > +++ b/ipa-client/ipa-install/ipa-client-install > @@ -91,6 +91,13 @@ def parse_options(): > > parser.values.ca_cert_file = value > > + def validate_kinit_attempts_option(option, opt, value, parser): > + if value < 1 or value > sys.maxint: > + raise OptionValueError( > + "%s option has invalid value %d" % (opt, value)) It would be nice if the error message said what is the expected value. ("Expected integer in range <1,%s>" % sys.maxint) BTW is it possible to do this using existing option parser? I would expect some generic support for type=uint or something similar. > + > + parser.values.kinit_attempts = value > + > parser = IPAOptionParser(version=version.VERSION) > > basic_group = OptionGroup(parser, "basic options") > @@ -144,6 +151,11 @@ def parse_options(): > help="do not modify the nsswitch.conf and PAM configuration") > basic_group.add_option("-f", "--force", dest="force", action="store_true", > default=False, help="force setting of LDAP/Kerberos conf") > + basic_group.add_option('--kinit-attempts', dest='kinit_attempts', > + action='callback', type='int', default=5, It would be good to check lockout numbers in default configuration to make sure that replication delay will not lock the principal. > + callback=validate_kinit_attempts_option, > + help=("number of attempts to obtain host TGT" > + " (defaults to %default).")) > basic_group.add_option("-d", "--debug", dest="debug", action="store_true", > default=False, help="print debugging information") > basic_group.add_option("-U", "--unattended", dest="unattended", > @@ -2351,6 +2363,7 @@ def install(options, env, fstore, statestore): > root_logger.debug( > "will use principal provided as option: %s", options.principal) > > + host_principal = 'host/%s@%s' % (hostname, cli_realm) > if not options.on_master: > nolog = tuple() > # First test out the kerberos configuration > @@ -2371,7 +2384,6 @@ def install(options, env, fstore, statestore): > env['KRB5_CONFIG'] = krb_name > (ccache_fd, ccache_name) = tempfile.mkstemp() > os.close(ccache_fd) > - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = ccache_name > join_args = [paths.SBIN_IPA_JOIN, > "-s", cli_server[0], > "-b", str(realm_to_suffix(cli_realm)), > @@ -2409,29 +2421,24 @@ def install(options, env, fstore, statestore): > else: > stdin = sys.stdin.readline() > > - (stderr, stdout, returncode) = run(["kinit", principal], > - raiseonerr=False, > - stdin=stdin, > - env=env) > - if returncode != 0: > + try: > + ipautil.kinit_password(principal, stdin, env) > + except CalledProcessError, e: > print_port_conf_info() > - root_logger.error("Kerberos authentication failed") > - root_logger.info("%s", stdout) > + root_logger.error("Kerberos authentication failed: %s" > + % str(e)) Isn't str() redundant? IMHO %s calls str() implicitly. 
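(A quick throwaway check of my own in a Python 2 interpreter, not taken from the patch, just to illustrate what I mean:

    >>> e = StandardError("boom")
    >>> "Kerberos authentication failed: %s" % e
    'Kerberos authentication failed: boom'
    >>> "Kerberos authentication failed: %s" % str(e)
    'Kerberos authentication failed: boom'

As far as I can tell the explicit str() would only matter if the format string itself were a unicode object, in which case %s goes through unicode() instead.)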
> return CLIENT_INSTALL_ERROR > elif options.keytab: > join_args.append("-f") > if os.path.exists(options.keytab): > - (stderr, stdout, returncode) = run( > - [paths.KINIT,'-k', '-t', options.keytab, > - 'host/%s@%s' % (hostname, cli_realm)], > - env=env, > - raiseonerr=False) > - > - if returncode != 0: > + try: > + ipautil.kinit_keytab(options.keytab, ccache_name, > + host_principal, > + attempts=options.kinit_attempts) > + except StandardError, e: > print_port_conf_info() > - root_logger.error("Kerberos authentication failed " > - "using keytab: %s", options.keytab) > - root_logger.info("%s", stdout) > + root_logger.error("Kerberos authentication failed: %s" > + % str(e)) Again str(). > return CLIENT_INSTALL_ERROR > else: > root_logger.error("Keytab file could not be found: %s" > @@ -2501,12 +2508,13 @@ def install(options, env, fstore, statestore): > # only the KDC we're installing under is contacted. > # Other KDCs might not have replicated the principal yet. > # Once we have the TGT, it's usable on any server. > - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = CCACHE_FILE > try: > - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, > - 'host/%s@%s' % (hostname, cli_realm)], env=env) > - except CalledProcessError, e: > - root_logger.error("Failed to obtain host TGT.") > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, > + host_principal, > + attempts=options.kinit_attempts) > + except StandardError, e: > + print_port_conf_info() > + root_logger.error("Failed to obtain host TGT: %s" % str(e)) str()? > # failure to get ticket makes it impossible to login and bind > # from sssd to LDAP, abort installation and rollback changes > return CLIENT_INSTALL_ERROR > @@ -2543,16 +2551,15 @@ def install(options, env, fstore, statestore): > return CLIENT_INSTALL_ERROR > root_logger.info("Configured /etc/sssd/sssd.conf") > > - host_principal = 'host/%s@%s' % (hostname, cli_realm) > if options.on_master: > # If on master assume kerberos is already configured properly. > # Get the host TGT. > - os.environ['KRB5CCNAME'] = CCACHE_FILE > try: > - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, > - host_principal]) > - except CalledProcessError, e: > - root_logger.error("Failed to obtain host TGT.") > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, > + host_principal, > + attempts=options.kinit_attempts) > + except StandardError, e: > + root_logger.error("Failed to obtain host TGT: %s" % str(e)) str()? I will not mention it again but please look for it. > return CLIENT_INSTALL_ERROR > else: > # Configure krb5.conf > diff --git a/ipa-client/man/ipa-client-install.1 b/ipa-client/man/ipa-client-install.1 > index 726a6c133132dd2e3ba2fde43d8a2ec0549bfcef..56ed899a25e626b8ae61714f77f3588059fa86f9 100644 > --- a/ipa-client/man/ipa-client-install.1 > +++ b/ipa-client/man/ipa-client-install.1 > @@ -152,6 +152,11 @@ Do not use Authconfig to modify the nsswitch.conf and PAM configuration. > \fB\-f\fR, \fB\-\-force\fR > Force the settings even if errors occur > .TP > +\fB\-\-kinit\-attempts\fR=\fIKINIT_ATTEMPTS\fR > +Number of unsuccessful attempts to obtain host TGT that will be performed > +before aborting client installation. \fIKINIT_ATTEMPTS\fR should be a number > +greater than zero. By default 5 attempts to get TGT are performed. It would be nice to add a rationale to the text. Current text is somehow confusing for users not familiar with replication and related problems. My hope is descriptive manual will prevent users from creating cargo-cults based on copy&pastings texts they do not understand. 
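While we are at the retry topic: to make my earlier range/except-as/+1 comments concrete, this is roughly the loop shape I had in mind for kinit_keytab in patch 1. It is only an untested sketch reusing the names from your patch (ccache, ktab, princ and friends set up as you already do), not a request to copy it verbatim:

    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            ccache.init(princ)
            ccache.init_creds_keytab(keytab=ktab, principal=princ)
            root_logger.debug("Attempt %d/%d: success", attempt, attempts)
            return
        except krbV.Krb5Error as e:
            last_exc = e
            root_logger.debug("Attempt %d/%d: failed", attempt, attempts)
            # still sleeps after the last attempt, same as the original;
            # cheap enough and keeps the loop simple
            time.sleep(1)
    raise StandardError("Error initializing principal %s in %s: %s"
                        % (principal, keytab, last_exc))

The exception is stashed in last_exc because 'except ... as e' unbinds e at the end of the except block on Python 3, so referencing it after the loop would break once we get there.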
> +.TP > \fB\-d\fR, \fB\-\-debug\fR > Print debugging information to stdout > .TP > -- 2.1.0 > > > freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch > > > From 912113529138e5b1bd8357ae6a17376cb5d32759 Mon Sep 17 00:00:00 2001 > From: Martin Babinsky > Date: Mon, 9 Mar 2015 12:54:36 +0100 > Subject: [PATCH 3/3] Adopted kinit_keytab and kinit_password for kerberos auth > > --- > daemons/dnssec/ipa-dnskeysync-replica | 6 ++- > daemons/dnssec/ipa-dnskeysyncd | 2 +- > daemons/dnssec/ipa-ods-exporter | 5 ++- > .../certmonger/dogtag-ipa-ca-renew-agent-submit | 3 +- > install/restart_scripts/renew_ca_cert | 7 ++-- > install/restart_scripts/renew_ra_cert | 4 +- > ipa-client/ipa-install/ipa-client-automount | 9 ++-- > ipa-client/ipaclient/ipa_certupdate.py | 3 +- > ipaserver/rpcserver.py | 49 +++++++++++----------- > 9 files changed, 47 insertions(+), 41 deletions(-) > > diff --git a/daemons/dnssec/ipa-dnskeysync-replica b/daemons/dnssec/ipa-dnskeysync-replica > index d04f360e04ee018dcdd1ba9b2ca42b1844617af9..e9cae519202203a10678b7384e5acf748f256427 100755 > --- a/daemons/dnssec/ipa-dnskeysync-replica > +++ b/daemons/dnssec/ipa-dnskeysync-replica > @@ -139,14 +139,16 @@ log.setLevel(level=logging.DEBUG) > # Kerberos initialization > PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) > log.debug('Kerberos principal: %s', PRINCIPAL) > -ipautil.kinit_hostprincipal(paths.IPA_DNSKEYSYNCD_KEYTAB, WORKDIR, PRINCIPAL) > +ccache_filename = os.path.join(WORKDIR, 'ccache') BTW I really appreciate this patch set! We finally can use more descriptive names like 'ipa-dnskeysync-replica.ccache' which sometimes make debugging easier. > +ipautil.kinit_keytab(paths.IPA_DNSKEYSYNCD_KEYTAB, ccache_filename, > + PRINCIPAL) > log.debug('Got TGT') > > # LDAP initialization > ldap = ipalib.api.Backend[ldap2] > # fixme > log.debug('Connecting to LDAP') > -ldap.connect(ccache="%s/ccache" % WORKDIR) > +ldap.connect(ccache=ccache_filename) > log.debug('Connected') > > > diff --git a/daemons/dnssec/ipa-dnskeysyncd b/daemons/dnssec/ipa-dnskeysyncd > index 54a08a1e6307e89b3f52e78bddbe28cda8ac1345..4e64596c7f8ccd6cff47df4772c875917c71606a 100755 > --- a/daemons/dnssec/ipa-dnskeysyncd > +++ b/daemons/dnssec/ipa-dnskeysyncd > @@ -65,7 +65,7 @@ log = root_logger > # Kerberos initialization > PRINCIPAL = str('%s/%s' % (DAEMONNAME, api.env.host)) > log.debug('Kerberos principal: %s', PRINCIPAL) > -ipautil.kinit_hostprincipal(KEYTAB_FB, WORKDIR, PRINCIPAL) > +ipautil.kinit_keytab(KEYTAB_FB, os.path.join(WORKDIR, 'ccache'), PRINCIPAL) 'ipa-dnskeysyncd.ccache'? > # LDAP initialization > basedn = DN(api.env.container_dns, api.env.basedn) > diff --git a/daemons/dnssec/ipa-ods-exporter b/daemons/dnssec/ipa-ods-exporter > index dc1851d3a34bb09c1a87c86d101b11afe35e49fe..0cf825950338cdce330a15b3ea22150f6e02588a 100755 > --- a/daemons/dnssec/ipa-ods-exporter > +++ b/daemons/dnssec/ipa-ods-exporter > @@ -399,7 +399,8 @@ ipalib.api.finalize() > # Kerberos initialization > PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) > log.debug('Kerberos principal: %s', PRINCIPAL) > -ipautil.kinit_hostprincipal(paths.IPA_ODS_EXPORTER_KEYTAB, WORKDIR, PRINCIPAL) > +ccache_name = os.path.join(WORKDIR, 'ccache') 'ipa-ods-exporter.ccache'? 
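Just to make that suggestion concrete -- the exact name is only an example and of course up to you:

    ccache_filename = os.path.join(WORKDIR, 'ipa-dnskeysync-replica.ccache')
    ipautil.kinit_keytab(paths.IPA_DNSKEYSYNCD_KEYTAB, ccache_filename,
                         PRINCIPAL)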
> +ipautil.kinit_keytab(paths.IPA_ODS_EXPORTER_KEYTAB, ccache_name, PRINCIPAL) > log.debug('Got TGT') > > # LDAP initialization > @@ -407,7 +408,7 @@ dns_dn = DN(ipalib.api.env.container_dns, ipalib.api.env.basedn) > ldap = ipalib.api.Backend[ldap2] > # fixme > log.debug('Connecting to LDAP') > -ldap.connect(ccache="%s/ccache" % WORKDIR) > +ldap.connect(ccache=ccache_name) > log.debug('Connected') > > > diff --git a/install/certmonger/dogtag-ipa-ca-renew-agent-submit b/install/certmonger/dogtag-ipa-ca-renew-agent-submit > index 7b91fc61148912c77d0ae962b3847d73e8d0720e..13d2c2a912d2fcf84053d36da5e07fc834f9cf25 100755 > --- a/install/certmonger/dogtag-ipa-ca-renew-agent-submit > +++ b/install/certmonger/dogtag-ipa-ca-renew-agent-submit > @@ -440,7 +440,8 @@ def main(): > certs.renewal_lock.acquire() > try: > principal = str('host/%s@%s' % (api.env.host, api.env.realm)) > - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, os.path.join(tmpdir, 'ccache'), 'dogtag-ipa-ca-renew-agent-submit.ccache'? > + principal) > > profile = os.environ.get('CERTMONGER_CA_PROFILE') > if profile: > diff --git a/install/restart_scripts/renew_ca_cert b/install/restart_scripts/renew_ca_cert > index c7bd5d74c5b4659b3ad66d630653ff6419868d99..67156122bb75f00a4c3f612697092e5bab3723fb 100644 > --- a/install/restart_scripts/renew_ca_cert > +++ b/install/restart_scripts/renew_ca_cert > @@ -73,8 +73,9 @@ def _main(): > tmpdir = tempfile.mkdtemp(prefix="tmp-") > try: > principal = str('host/%s@%s' % (api.env.host, api.env.realm)) > - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, > - principal) > + ccache_filename = '%s/ccache' % tmpdir 'renew_ca_cert.ccache'? > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_filename, > + principal) > > ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) > ca.update_cert_config(nickname, cert, configured_constants) > @@ -139,7 +140,7 @@ def _main(): > conn = None > try: > conn = ldap2(shared_instance=False, ldap_uri=api.env.ldap_uri) > - conn.connect(ccache=ccache) > + conn.connect(ccache=ccache_filename) > except Exception, e: > syslog.syslog( > syslog.LOG_ERR, "Failed to connect to LDAP: %s" % e) > diff --git a/install/restart_scripts/renew_ra_cert b/install/restart_scripts/renew_ra_cert > index 7dae3562380e919b2cc5f53825820291fc93fdc5..6276de78e4528dc1caa39be6628094a9d00e5988 100644 > --- a/install/restart_scripts/renew_ra_cert > +++ b/install/restart_scripts/renew_ra_cert > @@ -42,8 +42,8 @@ def _main(): > tmpdir = tempfile.mkdtemp(prefix="tmp-") > try: > principal = str('host/%s@%s' % (api.env.host, api.env.realm)) > - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, > - principal) > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, '%s/ccache' % tmpdir, 'renew_ra_cert.ccache'? 
> + principal) > > ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) > if ca.is_renewal_master(): > diff --git a/ipa-client/ipa-install/ipa-client-automount b/ipa-client/ipa-install/ipa-client-automount > index 7b9e701dead5f50a033a455eb62e30df78cc0249..19197d34ca580062742b3d7363e5dfb2dad0e4de 100755 > --- a/ipa-client/ipa-install/ipa-client-automount > +++ b/ipa-client/ipa-install/ipa-client-automount > @@ -425,10 +425,11 @@ def main(): > os.close(ccache_fd) > try: > try: > - os.environ['KRB5CCNAME'] = ccache_name > - ipautil.run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, 'host/%s@%s' % (api.env.host, api.env.realm)]) > - except ipautil.CalledProcessError, e: > - sys.exit("Failed to obtain host TGT.") > + host_princ = str('host/%s@%s' % (api.env.host, api.env.realm)) > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, I'm not sure what is ccache_name here but it should be something descriptive. > + host_princ) > + except StandardError, e: > + sys.exit("Failed to obtain host TGT: %s" % e) > # Now we have a TGT, connect to IPA > try: > api.Backend.rpcclient.connect() > diff --git a/ipa-client/ipaclient/ipa_certupdate.py b/ipa-client/ipaclient/ipa_certupdate.py > index 031a34c3a54a02d43978eedcb794678a1550702b..d6f7bbb3daff3ae4dced5d69f83a0516003235ab 100644 > --- a/ipa-client/ipaclient/ipa_certupdate.py > +++ b/ipa-client/ipaclient/ipa_certupdate.py > @@ -57,7 +57,8 @@ class CertUpdate(admintool.AdminTool): > tmpdir = tempfile.mkdtemp(prefix="tmp-") > try: > principal = str('host/%s@%s' % (api.env.host, api.env.realm)) > - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) > + ipautil.kinit_keytab(paths.KRB5_KEYTAB, > + os.path.join(tmpdir, 'ccache'), principal) 'ipa_certupdate.ccache'? > > api.Backend.rpcclient.connect() > try: > diff --git a/ipaserver/rpcserver.py b/ipaserver/rpcserver.py > index d6bc955b9d9910a24eec5df1def579310eb54786..36f16908ac8477d9982bfee613b77576853054eb 100644 > --- a/ipaserver/rpcserver.py > +++ b/ipaserver/rpcserver.py > @@ -958,8 +958,8 @@ class login_password(Backend, KerberosSession, HTTP_Status): > > def kinit(self, user, realm, password, ccache_name): > # get http service ccache as an armor for FAST to enable OTP authentication > - armor_principal = krb5_format_service_principal_name( > - 'HTTP', self.api.env.host, realm) > + armor_principal = str(krb5_format_service_principal_name( > + 'HTTP', self.api.env.host, realm)) > keytab = paths.IPA_KEYTAB > armor_name = "%sA_%s" % (krbccache_prefix, user) > armor_path = os.path.join(krbccache_dir, armor_name) > @@ -967,34 +967,33 @@ class login_password(Backend, KerberosSession, HTTP_Status): > self.debug('Obtaining armor ccache: principal=%s keytab=%s ccache=%s', > armor_principal, keytab, armor_path) > > - (stdout, stderr, returncode) = ipautil.run( > - [paths.KINIT, '-kt', keytab, armor_principal], > - env={'KRB5CCNAME': armor_path}, raiseonerr=False) > - > - if returncode != 0: > - raise CCacheError() > + try: > + ipautil.kinit_keytab(paths.IPA_KEYTAB, armor_path, > + armor_principal) > + except StandardError, e: > + raise CCacheError(str(e)) > > # Format the user as a kerberos principal > principal = krb5_format_principal_name(user, realm) > > - (stdout, stderr, returncode) = ipautil.run( > - [paths.KINIT, principal, '-T', armor_path], > - env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'}, > - stdin=password, raiseonerr=False) > + try: > + ipautil.kinit_password(principal, password, > + env={'KRB5CCNAME': ccache_name, > + 'LC_ALL': 'C'}, > + armor_ccache=armor_path) > > - 
self.debug('kinit: principal=%s returncode=%s, stderr="%s"', > - principal, returncode, stderr) > - > - self.debug('Cleanup the armor ccache') > - ipautil.run( > - [paths.KDESTROY, '-A', '-c', armor_path], > - env={'KRB5CCNAME': armor_path}, > - raiseonerr=False) > - > - if returncode != 0: > - if stderr.strip() == 'kinit: Cannot read password while getting initial credentials': > - raise PasswordExpired(principal=principal, message=unicode(stderr)) > - raise InvalidSessionPassword(principal=principal, message=unicode(stderr)) > + self.debug('Cleanup the armor ccache') > + ipautil.run( > + [paths.KDESTROY, '-A', '-c', armor_path], > + env={'KRB5CCNAME': armor_path}, > + raiseonerr=False) > + except ipautil.CalledProcessError, e: > + if ('kinit: Cannot read password while ' > + 'getting initial credentials') in e.output: I know it is not your code but please make sure it will work with non-English LANG or LC_MESSAGE. > + raise PasswordExpired(principal=principal, > + message=unicode(e.output)) > + raise InvalidSessionPassword(principal=principal, > + message=unicode(e.output)) > > class change_password(Backend, HTTP_Status): Hopefully this review is not too annoying :-) -- Petr^2 Spacek From pspacek at redhat.com Wed Mar 11 11:43:54 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 12:43:54 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55001A49.9050001@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> Message-ID: <55002A7A.9040409@redhat.com> On 11.3.2015 11:34, Jan Cholasta wrote: > Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >> On 10.3.2015 20:04, Simo Sorce wrote: >>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>> (RFC 3597 >>>>>>>>>> [0]). Here is the proposal: >>>>>>>>>> >>>>>>>>>> LDAP schema >>>>>>>>>> =========== >>>>>>>>>> - 1 new attribute: >>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>> EQUALITY >>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>> >>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>> MAY. >>>>>>>>>> >>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>> 3597 section >>>>>>>>>> 5 [5]: >>>>>>>>>> >>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>> >>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>> >>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>> octets. 
>>>>>>>>>> >>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>> >>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>> only >>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>> >>>>>>>>>> Examples from RFC: >>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>> ef 01 23 45 ) >>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Open questions about LDAP format >>>>>>>>>> ================================ >>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>> record in >>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>> >>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>> >>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>> because >>>>>>>>>> they do not need to change record values. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>> represented >>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>> inclined to let >>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>> us to >>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>> >>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>> (they own the >>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Attribute usage >>>>>>>>>> =============== >>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>> RR types >>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>> mnemonic/type name >>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>> to generate >>>>>>>>>> DNS wire format. >>>>>>>>>> >>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>> attribute >>>>>>>>>> sub-types. >>>>>>>>>> >>>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be >>>>>>>>>> represented as: >>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> CLI >>>>>>>>>> === >>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>> >>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>> Record name: owner >>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> ACK? :-) >>>>>>>>> >>>>>>>>> Almost. >>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>> case it is not necessary. >>>>>>>>> >>>>>>>>> Use: >>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>> >>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>> >>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>> Adding >>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>> access >>>>>>>> to certain types (e.g. to one from private range). >>>>>>>> >>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>> (type==65280). 
AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>> index >>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>> >>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>> type. >>>>>>>> >>>>>>>> >>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>> >>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>> tree is >>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>> >>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>> wants to >>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>> >>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>> standards compliant clients. >>>>>> >>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>> leaving Earth one million years ago :-) >>>>>> >>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>> care about the subtype unless you explicit mention them. >>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>> allows us >>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>> same time >>>>>> allows us to craft ACI which allows access only to >>>>>> GenericRecord;TYPE65280 for >>>>>> specific group/user. >>>>>> >>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>> shadows DNSSEC relevant records ? >>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>> blacklist. >>>>>> >>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>> >>>>> Ok, show me an example ACI that works and you get my ack :) >>>> >>>> Am I being punished for something? :-) >>>> >>>> Anyway, this monstrosity: >>>> >>>> (targetattr = "objectclass || txtRecord;test")(target = >>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>> >>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>> txtRecord in general. >>>> >>>> $ kinit luser >>>> $ ldapsearch -Y GSSAPI -s base -b >>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>> SASL username: luser at IPA.EXAMPLE >>>> >>>> # txt, ipa.example., dns, ipa.example >>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>> objectClass: top >>>> objectClass: idnsrecord >>>> tXTRecord;test: Guess what is new here! >>>> >>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>> subtype ;test. >>>> >>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>> return the object if you have access only to an subtype with existing value >>>> but not to the 'vanilla' attribute. 
>>>> >>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>> ticket. Anyway, this is not something we need for implementation. >>>> >>>> >>>> For completeness: >>>> >>>> $ kinit admin >>>> $ ldapsearch -Y GSSAPI -s base -b >>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>> SASL username: admin at IPA.EXAMPLE >>>> >>>> # txt, ipa.example., dns, ipa.example >>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>> objectClass: top >>>> objectClass: idnsrecord >>>> tXTRecord: nothing >>>> tXTRecord: something >>>> idnsName: txt >>>> tXTRecord;test: Guess what is new here! >>>> >>>> >>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>> whole txtRecord including all its subtypes. >>>> >>>> ACK? :-) >>>> >>> >>> ACK. >> >> Thank you. Now to the most important and difficult question: >> Should the attribute name be "GenericRecord" or "UnknownRecord"? >> >> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >> opinion :-) > > GenericRecord sounds like something that may be used for any record type, > known or unknown. I don't think that's what we want. We want users to use it > only for unknown record types and use appropriate Record attribute for > known attributes. > > The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The > word "generic" is used only when referring to encoding of RDATA. Okay, be it 'UnknownRecord'. Petr^2 Spacek > You even used "*unknown* DNS record, RFC 3597" as description of the attribute > yourself. > >> >>> Make sure it is abundantly clear in the docs what is the implication of >>> giving access to the generic attribute w/o qualifications. >> >> Sure. From mbabinsk at redhat.com Wed Mar 11 12:00:03 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 11 Mar 2015 13:00:03 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections Message-ID: <55002E43.7050601@redhat.com> These patches solve https://fedorahosted.org/freeipa/ticket/4933. They are to be applied to master branch. I will rebase them for ipa-4-1 after the review. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0018-1-make-BindInstance-and-friends-autobind-ready.patch Type: text/x-patch Size: 4018 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0019-1-ipa-dns-install-use-LDAPI-for-all-DS-operations.patch Type: text/x-patch Size: 6279 bytes Desc: not available URL: From mbabinsk at redhat.com Wed Mar 11 12:38:38 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 11 Mar 2015 13:38:38 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55002A13.8010706@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> Message-ID: <5500374E.6050507@redhat.com> On 03/11/2015 12:42 PM, Petr Spacek wrote: > Hello Martin^3, > > good work, we are almost there! Please see my nitpicks in-line. > > On 9.3.2015 13:06, Martin Babinsky wrote: >> On 03/06/2015 01:05 PM, Martin Babinsky wrote: >>> This series of patches for the master/4.1 branch attempts to implement >>> some of the Rob's and Petr Vobornik's ideas which originated from a >>> discussion on this list regarding my original patch fixing >>> https://fedorahosted.org/freeipa/ticket/4808. 
>>> >>> I suppose that these patches are just a first iteration, we may further >>> discuss if this is the right thing to do. >>> >>> Below is a quote from the original discussion just to get the context: >>> >>> >>> >> >> The next iteration of patches is attached below. Thanks to jcholast and >> pvoborni for the comments and insights. >> >> -- >> Martin^3 Babinsky >> >> freeipa-mbabinsk-0015-2-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch >> >> >> From 97e4eed332391bab7a12dc593152e369f347fd3c Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:53:10 +0100 >> Subject: [PATCH 1/3] ipautil: new functions kinit_keytab and kinit_password >> >> kinit_keytab replaces kinit_hostprincipal and performs Kerberos auth using >> keytab file. Function is also able to repeat authentication multiple times >> before giving up and raising StandardError. >> >> kinit_password wraps kinit auth using password and also supports FAST >> authentication using httpd armor ccache. >> --- >> ipapython/ipautil.py | 60 ++++++++++++++++++++++++++++++++++++++++------------ >> 1 file changed, 46 insertions(+), 14 deletions(-) >> >> diff --git a/ipapython/ipautil.py b/ipapython/ipautil.py >> index 4116d974e620341119b56fad3cff1bda48af3bab..4547165ccf24ff6edf5c65e756aa321aa34b9e61 100644 >> --- a/ipapython/ipautil.py >> +++ b/ipapython/ipautil.py >> @@ -1175,27 +1175,59 @@ def wait_for_open_socket(socket_name, timeout=0): >> else: >> raise e >> >> -def kinit_hostprincipal(keytab, ccachedir, principal): >> + >> +def kinit_keytab(keytab, ccache_path, principal, attempts=1): >> """ >> - Given a ccache directory and a principal kinit as that user. >> + Given a ccache_path , keytab file and a principal kinit as that user. >> + >> + The optional parameter 'attempts' specifies how many times the credential >> + initialization should be attempted before giving up and raising >> + StandardError. >> >> This blindly overwrites the current CCNAME so if you need to save >> it do so before calling this function. >> >> Thus far this is used to kinit as the local host. >> """ >> - try: >> - ccache_file = 'FILE:%s/ccache' % ccachedir >> - krbcontext = krbV.default_context() >> - ktab = krbV.Keytab(name=keytab, context=krbcontext) >> - princ = krbV.Principal(name=principal, context=krbcontext) >> - os.environ['KRB5CCNAME'] = ccache_file >> - ccache = krbV.CCache(name=ccache_file, context=krbcontext, primary_principal=princ) >> - ccache.init(princ) >> - ccache.init_creds_keytab(keytab=ktab, principal=princ) >> - return ccache_file >> - except krbV.Krb5Error, e: >> - raise StandardError('Error initializing principal %s in %s: %s' % (principal, keytab, str(e))) >> + root_logger.debug("Initializing principal %s using keytab %s" >> + % (principal, keytab)) >> + for attempt in xrange(attempts): > I would like to see new code compatible with Python 3. Here I'm not sure what > is the generic solution for xrange but in this particular case I would > recommend you to use just range. Attempts variable should have small values so > the x/range differences do not matter here. > >> + try: >> + krbcontext = krbV.default_context() >> + ktab = krbV.Keytab(name=keytab, context=krbcontext) >> + princ = krbV.Principal(name=principal, context=krbcontext) >> + os.environ['KRB5CCNAME'] = ccache_path > This is a bit scary, especially when it comes to multi-threaded execution. If > it is really necessary please add comment that this method is not thread-safe. 
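For illustration, one way to address the KRB5CCNAME concern is to let krbV operate on an explicitly named ccache and leave the process environment alone; only callers that afterwards exec external tools need to pass KRB5CCNAME via the env argument of ipautil.run(). A minimal sketch built from the same krbV calls used in the patch (the function name is made up):

    import krbV

    def kinit_keytab_noenv(keytab, ccache_path, principal):
        # Same calls as above, but without touching os.environ; the caller
        # decides whether and where to export KRB5CCNAME.
        krbcontext = krbV.default_context()
        ktab = krbV.Keytab(name=keytab, context=krbcontext)
        princ = krbV.Principal(name=principal, context=krbcontext)
        ccache = krbV.CCache(name=ccache_path, context=krbcontext,
                             primary_principal=princ)
        ccache.init(princ)
        ccache.init_creds_keytab(keytab=ktab, principal=princ)
        return ccache_path
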
> >> + ccache = krbV.CCache(name=ccache_path, context=krbcontext, >> + primary_principal=princ) >> + ccache.init(princ) >> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >> + root_logger.debug("Attempt %d/%d: success" >> + % (attempt + 1, attempts)) > What about adding + 1 to range boundaries instead of + 1 here and there? > >> + return >> + except krbV.Krb5Error, e: > except ... , ... syntax is not going to work in Python 3. Maybe 'as' would be > better? > >> + root_logger.debug("Attempt %d/%d: failed" >> + % (attempt + 1, attempts)) >> + time.sleep(1) >> + >> + root_logger.debug("Maximum number of attempts (%d) reached" >> + % attempts) >> + raise StandardError("Error initializing principal %s: %s" >> + % (principal, str(e))) >> + >> + >> +def kinit_password(principal, password, env={}, armor_ccache=""): >> + """perform interactive kinit as principal using password. Additional >> + enviroment variables can be specified using env. If using FAST for >> + web-based authentication, use armor_ccache to specify http service ccache. >> + """ >> + root_logger.debug("Initializing principal %s using password" % principal) >> + args = [paths.KINIT, principal] >> + if armor_ccache: >> + root_logger.debug("Using armor ccache %s for FAST webauth" >> + % armor_ccache) >> + args.extend(['-T', armor_ccache]) >> + run(args, env=env, stdin=password) >> + >> >> def dn_attribute_property(private_name): >> ''' >> -- 2.1.0 >> >> >> freeipa-mbabinsk-0016-2-ipa-client-install-try-to-get-host-TGT-several-times.patch >> >> >> From e438d8a0711b4271d24d7d24e98395503912a1c4 Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:53:57 +0100 >> Subject: [PATCH 2/3] ipa-client-install: try to get host TGT several times >> before giving up >> >> New option '--kinit-attempts' enables the host to make multiple attempts to >> obtain TGT from KDC before giving up and aborting client installation. >> >> In addition, all kinit attempts were replaced by calls to >> 'ipautil.kinit_keytab' and 'ipautil.kinit_password'. >> >> https://fedorahosted.org/freeipa/ticket/4808 >> --- >> ipa-client/ipa-install/ipa-client-install | 65 +++++++++++++++++-------------- >> ipa-client/man/ipa-client-install.1 | 5 +++ >> 2 files changed, 41 insertions(+), 29 deletions(-) >> >> diff --git a/ipa-client/ipa-install/ipa-client-install b/ipa-client/ipa-install/ipa-client-install >> index ccaab5536e83b4b6ac60b81132c3455c0af19ae1..c817f9e86dbaa6a2cca7d1a463f53d491fa7badb 100755 >> --- a/ipa-client/ipa-install/ipa-client-install >> +++ b/ipa-client/ipa-install/ipa-client-install >> @@ -91,6 +91,13 @@ def parse_options(): >> >> parser.values.ca_cert_file = value >> >> + def validate_kinit_attempts_option(option, opt, value, parser): >> + if value < 1 or value > sys.maxint: >> + raise OptionValueError( >> + "%s option has invalid value %d" % (opt, value)) > It would be nice if the error message said what is the expected value. > ("Expected integer in range <1,%s>" % sys.maxint) > > BTW is it possible to do this using existing option parser? I would expect > some generic support for type=uint or something similar. 
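As for the error message, the check itself can stay tiny; optparse already rejects non-integers for type='int', so the callback only needs the lower bound plus a message naming the expected range. A sketch along the lines suggested above (not the wording from the patch):

    import sys
    from optparse import OptionValueError

    def validate_kinit_attempts_option(option, opt, value, parser):
        # optparse has already converted 'value' to int for type='int';
        # only the range needs checking here.
        if value < 1:
            raise OptionValueError(
                "%s expects an integer in range <1, %d>, got %d"
                % (opt, sys.maxint, value))
        parser.values.kinit_attempts = value
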
> >> + >> + parser.values.kinit_attempts = value >> + >> parser = IPAOptionParser(version=version.VERSION) >> >> basic_group = OptionGroup(parser, "basic options") >> @@ -144,6 +151,11 @@ def parse_options(): >> help="do not modify the nsswitch.conf and PAM configuration") >> basic_group.add_option("-f", "--force", dest="force", action="store_true", >> default=False, help="force setting of LDAP/Kerberos conf") >> + basic_group.add_option('--kinit-attempts', dest='kinit_attempts', >> + action='callback', type='int', default=5, > > It would be good to check lockout numbers in default configuration to make > sure that replication delay will not lock the principal. > >> + callback=validate_kinit_attempts_option, >> + help=("number of attempts to obtain host TGT" >> + " (defaults to %default).")) >> basic_group.add_option("-d", "--debug", dest="debug", action="store_true", >> default=False, help="print debugging information") >> basic_group.add_option("-U", "--unattended", dest="unattended", >> @@ -2351,6 +2363,7 @@ def install(options, env, fstore, statestore): >> root_logger.debug( >> "will use principal provided as option: %s", options.principal) >> >> + host_principal = 'host/%s@%s' % (hostname, cli_realm) >> if not options.on_master: >> nolog = tuple() >> # First test out the kerberos configuration >> @@ -2371,7 +2384,6 @@ def install(options, env, fstore, statestore): >> env['KRB5_CONFIG'] = krb_name >> (ccache_fd, ccache_name) = tempfile.mkstemp() >> os.close(ccache_fd) >> - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = ccache_name >> join_args = [paths.SBIN_IPA_JOIN, >> "-s", cli_server[0], >> "-b", str(realm_to_suffix(cli_realm)), >> @@ -2409,29 +2421,24 @@ def install(options, env, fstore, statestore): >> else: >> stdin = sys.stdin.readline() >> >> - (stderr, stdout, returncode) = run(["kinit", principal], >> - raiseonerr=False, >> - stdin=stdin, >> - env=env) >> - if returncode != 0: >> + try: >> + ipautil.kinit_password(principal, stdin, env) >> + except CalledProcessError, e: >> print_port_conf_info() >> - root_logger.error("Kerberos authentication failed") >> - root_logger.info("%s", stdout) >> + root_logger.error("Kerberos authentication failed: %s" >> + % str(e)) > Isn't str() redundant? IMHO %s calls str() implicitly. > >> return CLIENT_INSTALL_ERROR >> elif options.keytab: >> join_args.append("-f") >> if os.path.exists(options.keytab): >> - (stderr, stdout, returncode) = run( >> - [paths.KINIT,'-k', '-t', options.keytab, >> - 'host/%s@%s' % (hostname, cli_realm)], >> - env=env, >> - raiseonerr=False) >> - >> - if returncode != 0: >> + try: >> + ipautil.kinit_keytab(options.keytab, ccache_name, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> print_port_conf_info() >> - root_logger.error("Kerberos authentication failed " >> - "using keytab: %s", options.keytab) >> - root_logger.info("%s", stdout) >> + root_logger.error("Kerberos authentication failed: %s" >> + % str(e)) > Again str(). > >> return CLIENT_INSTALL_ERROR >> else: >> root_logger.error("Keytab file could not be found: %s" >> @@ -2501,12 +2508,13 @@ def install(options, env, fstore, statestore): >> # only the KDC we're installing under is contacted. >> # Other KDCs might not have replicated the principal yet. >> # Once we have the TGT, it's usable on any server. 
>> - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = CCACHE_FILE >> try: >> - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >> - 'host/%s@%s' % (hostname, cli_realm)], env=env) >> - except CalledProcessError, e: >> - root_logger.error("Failed to obtain host TGT.") >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> + print_port_conf_info() >> + root_logger.error("Failed to obtain host TGT: %s" % str(e)) > str()? > >> # failure to get ticket makes it impossible to login and bind >> # from sssd to LDAP, abort installation and rollback changes >> return CLIENT_INSTALL_ERROR >> @@ -2543,16 +2551,15 @@ def install(options, env, fstore, statestore): >> return CLIENT_INSTALL_ERROR >> root_logger.info("Configured /etc/sssd/sssd.conf") >> >> - host_principal = 'host/%s@%s' % (hostname, cli_realm) >> if options.on_master: >> # If on master assume kerberos is already configured properly. >> # Get the host TGT. >> - os.environ['KRB5CCNAME'] = CCACHE_FILE >> try: >> - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >> - host_principal]) >> - except CalledProcessError, e: >> - root_logger.error("Failed to obtain host TGT.") >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> + root_logger.error("Failed to obtain host TGT: %s" % str(e)) > str()? I will not mention it again but please look for it. > >> return CLIENT_INSTALL_ERROR >> else: >> # Configure krb5.conf >> diff --git a/ipa-client/man/ipa-client-install.1 b/ipa-client/man/ipa-client-install.1 >> index 726a6c133132dd2e3ba2fde43d8a2ec0549bfcef..56ed899a25e626b8ae61714f77f3588059fa86f9 100644 >> --- a/ipa-client/man/ipa-client-install.1 >> +++ b/ipa-client/man/ipa-client-install.1 >> @@ -152,6 +152,11 @@ Do not use Authconfig to modify the nsswitch.conf and PAM configuration. >> \fB\-f\fR, \fB\-\-force\fR >> Force the settings even if errors occur >> .TP >> +\fB\-\-kinit\-attempts\fR=\fIKINIT_ATTEMPTS\fR >> +Number of unsuccessful attempts to obtain host TGT that will be performed >> +before aborting client installation. \fIKINIT_ATTEMPTS\fR should be a number >> +greater than zero. By default 5 attempts to get TGT are performed. > It would be nice to add a rationale to the text. Current text is somehow > confusing for users not familiar with replication and related problems. > > My hope is descriptive manual will prevent users from creating cargo-cults > based on copy&pastings texts they do not understand. 
> >> +.TP >> \fB\-d\fR, \fB\-\-debug\fR >> Print debugging information to stdout >> .TP >> -- 2.1.0 >> >> >> freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch >> >> >> From 912113529138e5b1bd8357ae6a17376cb5d32759 Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:54:36 +0100 >> Subject: [PATCH 3/3] Adopted kinit_keytab and kinit_password for kerberos auth >> >> --- >> daemons/dnssec/ipa-dnskeysync-replica | 6 ++- >> daemons/dnssec/ipa-dnskeysyncd | 2 +- >> daemons/dnssec/ipa-ods-exporter | 5 ++- >> .../certmonger/dogtag-ipa-ca-renew-agent-submit | 3 +- >> install/restart_scripts/renew_ca_cert | 7 ++-- >> install/restart_scripts/renew_ra_cert | 4 +- >> ipa-client/ipa-install/ipa-client-automount | 9 ++-- >> ipa-client/ipaclient/ipa_certupdate.py | 3 +- >> ipaserver/rpcserver.py | 49 +++++++++++----------- >> 9 files changed, 47 insertions(+), 41 deletions(-) >> >> diff --git a/daemons/dnssec/ipa-dnskeysync-replica b/daemons/dnssec/ipa-dnskeysync-replica >> index d04f360e04ee018dcdd1ba9b2ca42b1844617af9..e9cae519202203a10678b7384e5acf748f256427 100755 >> --- a/daemons/dnssec/ipa-dnskeysync-replica >> +++ b/daemons/dnssec/ipa-dnskeysync-replica >> @@ -139,14 +139,16 @@ log.setLevel(level=logging.DEBUG) >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(paths.IPA_DNSKEYSYNCD_KEYTAB, WORKDIR, PRINCIPAL) >> +ccache_filename = os.path.join(WORKDIR, 'ccache') > BTW I really appreciate this patch set! We finally can use more descriptive > names like 'ipa-dnskeysync-replica.ccache' which sometimes make debugging easier. > >> +ipautil.kinit_keytab(paths.IPA_DNSKEYSYNCD_KEYTAB, ccache_filename, >> + PRINCIPAL) >> log.debug('Got TGT') >> >> # LDAP initialization >> ldap = ipalib.api.Backend[ldap2] >> # fixme >> log.debug('Connecting to LDAP') >> -ldap.connect(ccache="%s/ccache" % WORKDIR) >> +ldap.connect(ccache=ccache_filename) >> log.debug('Connected') >> >> >> diff --git a/daemons/dnssec/ipa-dnskeysyncd b/daemons/dnssec/ipa-dnskeysyncd >> index 54a08a1e6307e89b3f52e78bddbe28cda8ac1345..4e64596c7f8ccd6cff47df4772c875917c71606a 100755 >> --- a/daemons/dnssec/ipa-dnskeysyncd >> +++ b/daemons/dnssec/ipa-dnskeysyncd >> @@ -65,7 +65,7 @@ log = root_logger >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(KEYTAB_FB, WORKDIR, PRINCIPAL) >> +ipautil.kinit_keytab(KEYTAB_FB, os.path.join(WORKDIR, 'ccache'), PRINCIPAL) > 'ipa-dnskeysyncd.ccache'? > >> # LDAP initialization >> basedn = DN(api.env.container_dns, api.env.basedn) >> diff --git a/daemons/dnssec/ipa-ods-exporter b/daemons/dnssec/ipa-ods-exporter >> index dc1851d3a34bb09c1a87c86d101b11afe35e49fe..0cf825950338cdce330a15b3ea22150f6e02588a 100755 >> --- a/daemons/dnssec/ipa-ods-exporter >> +++ b/daemons/dnssec/ipa-ods-exporter >> @@ -399,7 +399,8 @@ ipalib.api.finalize() >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(paths.IPA_ODS_EXPORTER_KEYTAB, WORKDIR, PRINCIPAL) >> +ccache_name = os.path.join(WORKDIR, 'ccache') > 'ipa-ods-exporter.ccache'? 
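For what the naming suggestion would look like in code, a sketch (reusing the kinit_keytab() signature from patch 0015 and the DAEMONNAME constant these daemons already define):

    # Derive the ccache file name from the daemon name so every service
    # ends up with a distinct, recognizable ccache file.
    ccache_name = os.path.join(WORKDIR, '%s.ccache' % DAEMONNAME)
    ipautil.kinit_keytab(paths.IPA_ODS_EXPORTER_KEYTAB, ccache_name, PRINCIPAL)
    ldap.connect(ccache=ccache_name)
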
> >> +ipautil.kinit_keytab(paths.IPA_ODS_EXPORTER_KEYTAB, ccache_name, PRINCIPAL) >> log.debug('Got TGT') >> >> # LDAP initialization >> @@ -407,7 +408,7 @@ dns_dn = DN(ipalib.api.env.container_dns, ipalib.api.env.basedn) >> ldap = ipalib.api.Backend[ldap2] >> # fixme >> log.debug('Connecting to LDAP') >> -ldap.connect(ccache="%s/ccache" % WORKDIR) >> +ldap.connect(ccache=ccache_name) >> log.debug('Connected') >> >> >> diff --git a/install/certmonger/dogtag-ipa-ca-renew-agent-submit b/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> index 7b91fc61148912c77d0ae962b3847d73e8d0720e..13d2c2a912d2fcf84053d36da5e07fc834f9cf25 100755 >> --- a/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> +++ b/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> @@ -440,7 +440,8 @@ def main(): >> certs.renewal_lock.acquire() >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, os.path.join(tmpdir, 'ccache'), > 'dogtag-ipa-ca-renew-agent-submit.ccache'? > >> + principal) >> >> profile = os.environ.get('CERTMONGER_CA_PROFILE') >> if profile: >> diff --git a/install/restart_scripts/renew_ca_cert b/install/restart_scripts/renew_ca_cert >> index c7bd5d74c5b4659b3ad66d630653ff6419868d99..67156122bb75f00a4c3f612697092e5bab3723fb 100644 >> --- a/install/restart_scripts/renew_ca_cert >> +++ b/install/restart_scripts/renew_ca_cert >> @@ -73,8 +73,9 @@ def _main(): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, >> - principal) >> + ccache_filename = '%s/ccache' % tmpdir > 'renew_ca_cert.ccache'? > >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_filename, >> + principal) >> >> ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) >> ca.update_cert_config(nickname, cert, configured_constants) >> @@ -139,7 +140,7 @@ def _main(): >> conn = None >> try: >> conn = ldap2(shared_instance=False, ldap_uri=api.env.ldap_uri) >> - conn.connect(ccache=ccache) >> + conn.connect(ccache=ccache_filename) >> except Exception, e: >> syslog.syslog( >> syslog.LOG_ERR, "Failed to connect to LDAP: %s" % e) >> diff --git a/install/restart_scripts/renew_ra_cert b/install/restart_scripts/renew_ra_cert >> index 7dae3562380e919b2cc5f53825820291fc93fdc5..6276de78e4528dc1caa39be6628094a9d00e5988 100644 >> --- a/install/restart_scripts/renew_ra_cert >> +++ b/install/restart_scripts/renew_ra_cert >> @@ -42,8 +42,8 @@ def _main(): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, >> - principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, '%s/ccache' % tmpdir, > 'renew_ra_cert.ccache'? 
> >> + principal) >> >> ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) >> if ca.is_renewal_master(): >> diff --git a/ipa-client/ipa-install/ipa-client-automount b/ipa-client/ipa-install/ipa-client-automount >> index 7b9e701dead5f50a033a455eb62e30df78cc0249..19197d34ca580062742b3d7363e5dfb2dad0e4de 100755 >> --- a/ipa-client/ipa-install/ipa-client-automount >> +++ b/ipa-client/ipa-install/ipa-client-automount >> @@ -425,10 +425,11 @@ def main(): >> os.close(ccache_fd) >> try: >> try: >> - os.environ['KRB5CCNAME'] = ccache_name >> - ipautil.run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, 'host/%s@%s' % (api.env.host, api.env.realm)]) >> - except ipautil.CalledProcessError, e: >> - sys.exit("Failed to obtain host TGT.") >> + host_princ = str('host/%s@%s' % (api.env.host, api.env.realm)) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, > I'm not sure what is ccache_name here but it should be something descriptive. > >> + host_princ) >> + except StandardError, e: >> + sys.exit("Failed to obtain host TGT: %s" % e) >> # Now we have a TGT, connect to IPA >> try: >> api.Backend.rpcclient.connect() >> diff --git a/ipa-client/ipaclient/ipa_certupdate.py b/ipa-client/ipaclient/ipa_certupdate.py >> index 031a34c3a54a02d43978eedcb794678a1550702b..d6f7bbb3daff3ae4dced5d69f83a0516003235ab 100644 >> --- a/ipa-client/ipaclient/ipa_certupdate.py >> +++ b/ipa-client/ipaclient/ipa_certupdate.py >> @@ -57,7 +57,8 @@ class CertUpdate(admintool.AdminTool): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, >> + os.path.join(tmpdir, 'ccache'), principal) > 'ipa_certupdate.ccache'? 
> >> >> api.Backend.rpcclient.connect() >> try: >> diff --git a/ipaserver/rpcserver.py b/ipaserver/rpcserver.py >> index d6bc955b9d9910a24eec5df1def579310eb54786..36f16908ac8477d9982bfee613b77576853054eb 100644 >> --- a/ipaserver/rpcserver.py >> +++ b/ipaserver/rpcserver.py >> @@ -958,8 +958,8 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> >> def kinit(self, user, realm, password, ccache_name): >> # get http service ccache as an armor for FAST to enable OTP authentication >> - armor_principal = krb5_format_service_principal_name( >> - 'HTTP', self.api.env.host, realm) >> + armor_principal = str(krb5_format_service_principal_name( >> + 'HTTP', self.api.env.host, realm)) >> keytab = paths.IPA_KEYTAB >> armor_name = "%sA_%s" % (krbccache_prefix, user) >> armor_path = os.path.join(krbccache_dir, armor_name) >> @@ -967,34 +967,33 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> self.debug('Obtaining armor ccache: principal=%s keytab=%s ccache=%s', >> armor_principal, keytab, armor_path) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, '-kt', keytab, armor_principal], >> - env={'KRB5CCNAME': armor_path}, raiseonerr=False) >> - >> - if returncode != 0: >> - raise CCacheError() >> + try: >> + ipautil.kinit_keytab(paths.IPA_KEYTAB, armor_path, >> + armor_principal) >> + except StandardError, e: >> + raise CCacheError(str(e)) >> >> # Format the user as a kerberos principal >> principal = krb5_format_principal_name(user, realm) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, principal, '-T', armor_path], >> - env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'}, >> - stdin=password, raiseonerr=False) >> + try: >> + ipautil.kinit_password(principal, password, >> + env={'KRB5CCNAME': ccache_name, >> + 'LC_ALL': 'C'}, >> + armor_ccache=armor_path) >> >> - self.debug('kinit: principal=%s returncode=%s, stderr="%s"', >> - principal, returncode, stderr) >> - >> - self.debug('Cleanup the armor ccache') >> - ipautil.run( >> - [paths.KDESTROY, '-A', '-c', armor_path], >> - env={'KRB5CCNAME': armor_path}, >> - raiseonerr=False) >> - >> - if returncode != 0: >> - if stderr.strip() == 'kinit: Cannot read password while getting initial credentials': >> - raise PasswordExpired(principal=principal, message=unicode(stderr)) >> - raise InvalidSessionPassword(principal=principal, message=unicode(stderr)) >> + self.debug('Cleanup the armor ccache') >> + ipautil.run( >> + [paths.KDESTROY, '-A', '-c', armor_path], >> + env={'KRB5CCNAME': armor_path}, >> + raiseonerr=False) >> + except ipautil.CalledProcessError, e: >> + if ('kinit: Cannot read password while ' >> + 'getting initial credentials') in e.output: > I know it is not your code but please make sure it will work with non-English > LANG or LC_MESSAGE. > >> + raise PasswordExpired(principal=principal, >> + message=unicode(e.output)) >> + raise InvalidSessionPassword(principal=principal, >> + message=unicode(e.output)) >> >> class change_password(Backend, HTTP_Status): > > Hopefully this review is not too annoying :-) > Thank you for plenty of suggestions Petr. I will try to update the patches accordingly. 
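As a side note on the rpcserver.py hunk quoted above, the two-step FAST flow provided by the new helpers boils down to the following (the armor path below is only an example, not a value taken from the patch):

    # 1) obtain an armor ccache from the HTTP service keytab,
    # 2) kinit the user with the password under FAST (kinit -T <armor>).
    armor_path = '/var/run/ipa/ccaches/armor_example'   # example path only
    ipautil.kinit_keytab(paths.IPA_KEYTAB, armor_path, armor_principal)
    ipautil.kinit_password(principal, password,
                           env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'},
                           armor_ccache=armor_path)
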
-- Martin^3 Babinsky From mbabinsk at redhat.com Wed Mar 11 13:27:10 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 11 Mar 2015 14:27:10 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55002A13.8010706@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> Message-ID: <550042AE.1000002@redhat.com> Actually, now that I think about it, I will try to address some of your comments: On 03/11/2015 12:42 PM, Petr Spacek wrote: > Hello Martin^3, > > good work, we are almost there! Please see my nitpicks in-line. > > On 9.3.2015 13:06, Martin Babinsky wrote: >> On 03/06/2015 01:05 PM, Martin Babinsky wrote: >>> This series of patches for the master/4.1 branch attempts to implement >>> some of the Rob's and Petr Vobornik's ideas which originated from a >>> discussion on this list regarding my original patch fixing >>> https://fedorahosted.org/freeipa/ticket/4808. >>> >>> I suppose that these patches are just a first iteration, we may further >>> discuss if this is the right thing to do. >>> >>> Below is a quote from the original discussion just to get the context: >>> >>> >>> >> >> The next iteration of patches is attached below. Thanks to jcholast and >> pvoborni for the comments and insights. >> >> -- >> Martin^3 Babinsky >> >> freeipa-mbabinsk-0015-2-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch >> >> >> From 97e4eed332391bab7a12dc593152e369f347fd3c Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:53:10 +0100 >> Subject: [PATCH 1/3] ipautil: new functions kinit_keytab and kinit_password >> >> kinit_keytab replaces kinit_hostprincipal and performs Kerberos auth using >> keytab file. Function is also able to repeat authentication multiple times >> before giving up and raising StandardError. >> >> kinit_password wraps kinit auth using password and also supports FAST >> authentication using httpd armor ccache. >> --- >> ipapython/ipautil.py | 60 ++++++++++++++++++++++++++++++++++++++++------------ >> 1 file changed, 46 insertions(+), 14 deletions(-) >> >> diff --git a/ipapython/ipautil.py b/ipapython/ipautil.py >> index 4116d974e620341119b56fad3cff1bda48af3bab..4547165ccf24ff6edf5c65e756aa321aa34b9e61 100644 >> --- a/ipapython/ipautil.py >> +++ b/ipapython/ipautil.py >> @@ -1175,27 +1175,59 @@ def wait_for_open_socket(socket_name, timeout=0): >> else: >> raise e >> >> -def kinit_hostprincipal(keytab, ccachedir, principal): >> + >> +def kinit_keytab(keytab, ccache_path, principal, attempts=1): >> """ >> - Given a ccache directory and a principal kinit as that user. >> + Given a ccache_path , keytab file and a principal kinit as that user. >> + >> + The optional parameter 'attempts' specifies how many times the credential >> + initialization should be attempted before giving up and raising >> + StandardError. >> >> This blindly overwrites the current CCNAME so if you need to save >> it do so before calling this function. >> >> Thus far this is used to kinit as the local host. 
>> """ >> - try: >> - ccache_file = 'FILE:%s/ccache' % ccachedir >> - krbcontext = krbV.default_context() >> - ktab = krbV.Keytab(name=keytab, context=krbcontext) >> - princ = krbV.Principal(name=principal, context=krbcontext) >> - os.environ['KRB5CCNAME'] = ccache_file >> - ccache = krbV.CCache(name=ccache_file, context=krbcontext, primary_principal=princ) >> - ccache.init(princ) >> - ccache.init_creds_keytab(keytab=ktab, principal=princ) >> - return ccache_file >> - except krbV.Krb5Error, e: >> - raise StandardError('Error initializing principal %s in %s: %s' % (principal, keytab, str(e))) >> + root_logger.debug("Initializing principal %s using keytab %s" >> + % (principal, keytab)) >> + for attempt in xrange(attempts): > I would like to see new code compatible with Python 3. Here I'm not sure what > is the generic solution for xrange but in this particular case I would > recommend you to use just range. Attempts variable should have small values so > the x/range differences do not matter here. > >> + try: >> + krbcontext = krbV.default_context() >> + ktab = krbV.Keytab(name=keytab, context=krbcontext) >> + princ = krbV.Principal(name=principal, context=krbcontext) >> + os.environ['KRB5CCNAME'] = ccache_path > This is a bit scary, especially when it comes to multi-threaded execution. If > it is really necessary please add comment that this method is not thread-safe. > So far this function is used in various installers/uninstallers/restart scripts, so I'm not quite sure if there is even some multithreaded thingy going on there (I have never done multithreaded programming so I don't know, someone with more experience can comment on this). I will, however, add the warning to the docstring. >> + ccache = krbV.CCache(name=ccache_path, context=krbcontext, >> + primary_principal=princ) >> + ccache.init(princ) >> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >> + root_logger.debug("Attempt %d/%d: success" >> + % (attempt + 1, attempts)) > What about adding + 1 to range boundaries instead of + 1 here and there? > >> + return >> + except krbV.Krb5Error, e: > except ... , ... syntax is not going to work in Python 3. Maybe 'as' would be > better? > AFAIK except ... as ... syntax was added in Python 2.6. Using this syntax can break older versions of Python. If this is not a concern for us, I will fix this and use this syntax also in my later patches. >> + root_logger.debug("Attempt %d/%d: failed" >> + % (attempt + 1, attempts)) >> + time.sleep(1) >> + >> + root_logger.debug("Maximum number of attempts (%d) reached" >> + % attempts) >> + raise StandardError("Error initializing principal %s: %s" >> + % (principal, str(e))) >> + >> + >> +def kinit_password(principal, password, env={}, armor_ccache=""): >> + """perform interactive kinit as principal using password. Additional >> + enviroment variables can be specified using env. If using FAST for >> + web-based authentication, use armor_ccache to specify http service ccache. 
>> + """ >> + root_logger.debug("Initializing principal %s using password" % principal) >> + args = [paths.KINIT, principal] >> + if armor_ccache: >> + root_logger.debug("Using armor ccache %s for FAST webauth" >> + % armor_ccache) >> + args.extend(['-T', armor_ccache]) >> + run(args, env=env, stdin=password) >> + >> >> def dn_attribute_property(private_name): >> ''' >> -- 2.1.0 >> >> >> freeipa-mbabinsk-0016-2-ipa-client-install-try-to-get-host-TGT-several-times.patch >> >> >> From e438d8a0711b4271d24d7d24e98395503912a1c4 Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:53:57 +0100 >> Subject: [PATCH 2/3] ipa-client-install: try to get host TGT several times >> before giving up >> >> New option '--kinit-attempts' enables the host to make multiple attempts to >> obtain TGT from KDC before giving up and aborting client installation. >> >> In addition, all kinit attempts were replaced by calls to >> 'ipautil.kinit_keytab' and 'ipautil.kinit_password'. >> >> https://fedorahosted.org/freeipa/ticket/4808 >> --- >> ipa-client/ipa-install/ipa-client-install | 65 +++++++++++++++++-------------- >> ipa-client/man/ipa-client-install.1 | 5 +++ >> 2 files changed, 41 insertions(+), 29 deletions(-) >> >> diff --git a/ipa-client/ipa-install/ipa-client-install b/ipa-client/ipa-install/ipa-client-install >> index ccaab5536e83b4b6ac60b81132c3455c0af19ae1..c817f9e86dbaa6a2cca7d1a463f53d491fa7badb 100755 >> --- a/ipa-client/ipa-install/ipa-client-install >> +++ b/ipa-client/ipa-install/ipa-client-install >> @@ -91,6 +91,13 @@ def parse_options(): >> >> parser.values.ca_cert_file = value >> >> + def validate_kinit_attempts_option(option, opt, value, parser): >> + if value < 1 or value > sys.maxint: >> + raise OptionValueError( >> + "%s option has invalid value %d" % (opt, value)) > It would be nice if the error message said what is the expected value. > ("Expected integer in range <1,%s>" % sys.maxint) > > BTW is it possible to do this using existing option parser? I would expect > some generic support for type=uint or something similar. > OptionParser supports 'type' keywords when adding options, which perform the neccessary conversions (int(), etc) and validation (see https://docs.python.org/2/library/optparse.html#optparse-standard-option-types). However, in this case you still have to manually check for values less that 1 which do not make sense. AFAIK OptionParser has no built-in way to do this. >> + >> + parser.values.kinit_attempts = value >> + >> parser = IPAOptionParser(version=version.VERSION) >> >> basic_group = OptionGroup(parser, "basic options") >> @@ -144,6 +151,11 @@ def parse_options(): >> help="do not modify the nsswitch.conf and PAM configuration") >> basic_group.add_option("-f", "--force", dest="force", action="store_true", >> default=False, help="force setting of LDAP/Kerberos conf") >> + basic_group.add_option('--kinit-attempts', dest='kinit_attempts', >> + action='callback', type='int', default=5, > > It would be good to check lockout numbers in default configuration to make > sure that replication delay will not lock the principal. > I'm not sure that I follow, could you be more specific what you mean by this? 
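If a reusable type ever becomes worth it, optparse does have a documented extension point: subclass Option, register a new type name together with its checker, and hand the class to OptionParser via option_class. A sketch (all names below are made up; this is not part of the patch):

    from copy import copy
    from optparse import Option, OptionParser, OptionValueError

    def _check_positive_int(option, opt, value):
        try:
            result = int(value)
        except ValueError:
            raise OptionValueError("%s: invalid integer value: %r" % (opt, value))
        if result < 1:
            raise OptionValueError("%s: expected an integer >= 1, got %d"
                                   % (opt, result))
        return result

    class PositiveIntOption(Option):
        TYPES = Option.TYPES + ("positive_int",)
        TYPE_CHECKER = copy(Option.TYPE_CHECKER)
        TYPE_CHECKER["positive_int"] = _check_positive_int

    parser = OptionParser(option_class=PositiveIntOption)
    parser.add_option("--kinit-attempts", dest="kinit_attempts",
                      type="positive_int", default=5)
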
>> + callback=validate_kinit_attempts_option, >> + help=("number of attempts to obtain host TGT" >> + " (defaults to %default).")) >> basic_group.add_option("-d", "--debug", dest="debug", action="store_true", >> default=False, help="print debugging information") >> basic_group.add_option("-U", "--unattended", dest="unattended", >> @@ -2351,6 +2363,7 @@ def install(options, env, fstore, statestore): >> root_logger.debug( >> "will use principal provided as option: %s", options.principal) >> >> + host_principal = 'host/%s@%s' % (hostname, cli_realm) >> if not options.on_master: >> nolog = tuple() >> # First test out the kerberos configuration >> @@ -2371,7 +2384,6 @@ def install(options, env, fstore, statestore): >> env['KRB5_CONFIG'] = krb_name >> (ccache_fd, ccache_name) = tempfile.mkstemp() >> os.close(ccache_fd) >> - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = ccache_name >> join_args = [paths.SBIN_IPA_JOIN, >> "-s", cli_server[0], >> "-b", str(realm_to_suffix(cli_realm)), >> @@ -2409,29 +2421,24 @@ def install(options, env, fstore, statestore): >> else: >> stdin = sys.stdin.readline() >> >> - (stderr, stdout, returncode) = run(["kinit", principal], >> - raiseonerr=False, >> - stdin=stdin, >> - env=env) >> - if returncode != 0: >> + try: >> + ipautil.kinit_password(principal, stdin, env) >> + except CalledProcessError, e: >> print_port_conf_info() >> - root_logger.error("Kerberos authentication failed") >> - root_logger.info("%s", stdout) >> + root_logger.error("Kerberos authentication failed: %s" >> + % str(e)) > Isn't str() redundant? IMHO %s calls str() implicitly. > Indeed it does, I will fix it (also in other places). >> return CLIENT_INSTALL_ERROR >> elif options.keytab: >> join_args.append("-f") >> if os.path.exists(options.keytab): >> - (stderr, stdout, returncode) = run( >> - [paths.KINIT,'-k', '-t', options.keytab, >> - 'host/%s@%s' % (hostname, cli_realm)], >> - env=env, >> - raiseonerr=False) >> - >> - if returncode != 0: >> + try: >> + ipautil.kinit_keytab(options.keytab, ccache_name, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> print_port_conf_info() >> - root_logger.error("Kerberos authentication failed " >> - "using keytab: %s", options.keytab) >> - root_logger.info("%s", stdout) >> + root_logger.error("Kerberos authentication failed: %s" >> + % str(e)) > Again str(). > >> return CLIENT_INSTALL_ERROR >> else: >> root_logger.error("Keytab file could not be found: %s" >> @@ -2501,12 +2508,13 @@ def install(options, env, fstore, statestore): >> # only the KDC we're installing under is contacted. >> # Other KDCs might not have replicated the principal yet. >> # Once we have the TGT, it's usable on any server. >> - env['KRB5CCNAME'] = os.environ['KRB5CCNAME'] = CCACHE_FILE >> try: >> - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >> - 'host/%s@%s' % (hostname, cli_realm)], env=env) >> - except CalledProcessError, e: >> - root_logger.error("Failed to obtain host TGT.") >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> + print_port_conf_info() >> + root_logger.error("Failed to obtain host TGT: %s" % str(e)) > str()? 
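If the concern is the one it appears to be (this is an interpretation, not something stated in the thread beyond the sentence quoted above): a client kinit-ing against a replica that has not yet received the new host keys will fail pre-authentication, and repeated failures may count towards the account lockout policy, so the default number of attempts should stay safely below the configured maximum. The relevant limits can be inspected with, for example:

    $ ipa pwpolicy-show global_policy

and checking "Max failures", "Failure reset interval" and "Lockout duration" in the output.
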
> >> # failure to get ticket makes it impossible to login and bind >> # from sssd to LDAP, abort installation and rollback changes >> return CLIENT_INSTALL_ERROR >> @@ -2543,16 +2551,15 @@ def install(options, env, fstore, statestore): >> return CLIENT_INSTALL_ERROR >> root_logger.info("Configured /etc/sssd/sssd.conf") >> >> - host_principal = 'host/%s@%s' % (hostname, cli_realm) >> if options.on_master: >> # If on master assume kerberos is already configured properly. >> # Get the host TGT. >> - os.environ['KRB5CCNAME'] = CCACHE_FILE >> try: >> - run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >> - host_principal]) >> - except CalledProcessError, e: >> - root_logger.error("Failed to obtain host TGT.") >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, CCACHE_FILE, >> + host_principal, >> + attempts=options.kinit_attempts) >> + except StandardError, e: >> + root_logger.error("Failed to obtain host TGT: %s" % str(e)) > str()? I will not mention it again but please look for it. > >> return CLIENT_INSTALL_ERROR >> else: >> # Configure krb5.conf >> diff --git a/ipa-client/man/ipa-client-install.1 b/ipa-client/man/ipa-client-install.1 >> index 726a6c133132dd2e3ba2fde43d8a2ec0549bfcef..56ed899a25e626b8ae61714f77f3588059fa86f9 100644 >> --- a/ipa-client/man/ipa-client-install.1 >> +++ b/ipa-client/man/ipa-client-install.1 >> @@ -152,6 +152,11 @@ Do not use Authconfig to modify the nsswitch.conf and PAM configuration. >> \fB\-f\fR, \fB\-\-force\fR >> Force the settings even if errors occur >> .TP >> +\fB\-\-kinit\-attempts\fR=\fIKINIT_ATTEMPTS\fR >> +Number of unsuccessful attempts to obtain host TGT that will be performed >> +before aborting client installation. \fIKINIT_ATTEMPTS\fR should be a number >> +greater than zero. By default 5 attempts to get TGT are performed. > It would be nice to add a rationale to the text. Current text is somehow > confusing for users not familiar with replication and related problems. > > My hope is descriptive manual will prevent users from creating cargo-cults > based on copy&pastings texts they do not understand. > I will try to reformulate this so it makes more sense. 
>> +.TP >> \fB\-d\fR, \fB\-\-debug\fR >> Print debugging information to stdout >> .TP >> -- 2.1.0 >> >> >> freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch >> >> >> From 912113529138e5b1bd8357ae6a17376cb5d32759 Mon Sep 17 00:00:00 2001 >> From: Martin Babinsky >> Date: Mon, 9 Mar 2015 12:54:36 +0100 >> Subject: [PATCH 3/3] Adopted kinit_keytab and kinit_password for kerberos auth >> >> --- >> daemons/dnssec/ipa-dnskeysync-replica | 6 ++- >> daemons/dnssec/ipa-dnskeysyncd | 2 +- >> daemons/dnssec/ipa-ods-exporter | 5 ++- >> .../certmonger/dogtag-ipa-ca-renew-agent-submit | 3 +- >> install/restart_scripts/renew_ca_cert | 7 ++-- >> install/restart_scripts/renew_ra_cert | 4 +- >> ipa-client/ipa-install/ipa-client-automount | 9 ++-- >> ipa-client/ipaclient/ipa_certupdate.py | 3 +- >> ipaserver/rpcserver.py | 49 +++++++++++----------- >> 9 files changed, 47 insertions(+), 41 deletions(-) >> >> diff --git a/daemons/dnssec/ipa-dnskeysync-replica b/daemons/dnssec/ipa-dnskeysync-replica >> index d04f360e04ee018dcdd1ba9b2ca42b1844617af9..e9cae519202203a10678b7384e5acf748f256427 100755 >> --- a/daemons/dnssec/ipa-dnskeysync-replica >> +++ b/daemons/dnssec/ipa-dnskeysync-replica >> @@ -139,14 +139,16 @@ log.setLevel(level=logging.DEBUG) >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(paths.IPA_DNSKEYSYNCD_KEYTAB, WORKDIR, PRINCIPAL) >> +ccache_filename = os.path.join(WORKDIR, 'ccache') > BTW I really appreciate this patch set! We finally can use more descriptive > names like 'ipa-dnskeysync-replica.ccache' which sometimes make debugging easier. > Named ccaches seems to be a good idea. I will fix this in all places where the ccache is somehow persistent (and not deleted after installation). >> +ipautil.kinit_keytab(paths.IPA_DNSKEYSYNCD_KEYTAB, ccache_filename, >> + PRINCIPAL) >> log.debug('Got TGT') >> >> # LDAP initialization >> ldap = ipalib.api.Backend[ldap2] >> # fixme >> log.debug('Connecting to LDAP') >> -ldap.connect(ccache="%s/ccache" % WORKDIR) >> +ldap.connect(ccache=ccache_filename) >> log.debug('Connected') >> >> >> diff --git a/daemons/dnssec/ipa-dnskeysyncd b/daemons/dnssec/ipa-dnskeysyncd >> index 54a08a1e6307e89b3f52e78bddbe28cda8ac1345..4e64596c7f8ccd6cff47df4772c875917c71606a 100755 >> --- a/daemons/dnssec/ipa-dnskeysyncd >> +++ b/daemons/dnssec/ipa-dnskeysyncd >> @@ -65,7 +65,7 @@ log = root_logger >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(KEYTAB_FB, WORKDIR, PRINCIPAL) >> +ipautil.kinit_keytab(KEYTAB_FB, os.path.join(WORKDIR, 'ccache'), PRINCIPAL) > 'ipa-dnskeysyncd.ccache'? > >> # LDAP initialization >> basedn = DN(api.env.container_dns, api.env.basedn) >> diff --git a/daemons/dnssec/ipa-ods-exporter b/daemons/dnssec/ipa-ods-exporter >> index dc1851d3a34bb09c1a87c86d101b11afe35e49fe..0cf825950338cdce330a15b3ea22150f6e02588a 100755 >> --- a/daemons/dnssec/ipa-ods-exporter >> +++ b/daemons/dnssec/ipa-ods-exporter >> @@ -399,7 +399,8 @@ ipalib.api.finalize() >> # Kerberos initialization >> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >> log.debug('Kerberos principal: %s', PRINCIPAL) >> -ipautil.kinit_hostprincipal(paths.IPA_ODS_EXPORTER_KEYTAB, WORKDIR, PRINCIPAL) >> +ccache_name = os.path.join(WORKDIR, 'ccache') > 'ipa-ods-exporter.ccache'? 
> >> +ipautil.kinit_keytab(paths.IPA_ODS_EXPORTER_KEYTAB, ccache_name, PRINCIPAL) >> log.debug('Got TGT') >> >> # LDAP initialization >> @@ -407,7 +408,7 @@ dns_dn = DN(ipalib.api.env.container_dns, ipalib.api.env.basedn) >> ldap = ipalib.api.Backend[ldap2] >> # fixme >> log.debug('Connecting to LDAP') >> -ldap.connect(ccache="%s/ccache" % WORKDIR) >> +ldap.connect(ccache=ccache_name) >> log.debug('Connected') >> >> >> diff --git a/install/certmonger/dogtag-ipa-ca-renew-agent-submit b/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> index 7b91fc61148912c77d0ae962b3847d73e8d0720e..13d2c2a912d2fcf84053d36da5e07fc834f9cf25 100755 >> --- a/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> +++ b/install/certmonger/dogtag-ipa-ca-renew-agent-submit >> @@ -440,7 +440,8 @@ def main(): >> certs.renewal_lock.acquire() >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, os.path.join(tmpdir, 'ccache'), > 'dogtag-ipa-ca-renew-agent-submit.ccache'? > >> + principal) >> >> profile = os.environ.get('CERTMONGER_CA_PROFILE') >> if profile: >> diff --git a/install/restart_scripts/renew_ca_cert b/install/restart_scripts/renew_ca_cert >> index c7bd5d74c5b4659b3ad66d630653ff6419868d99..67156122bb75f00a4c3f612697092e5bab3723fb 100644 >> --- a/install/restart_scripts/renew_ca_cert >> +++ b/install/restart_scripts/renew_ca_cert >> @@ -73,8 +73,9 @@ def _main(): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, >> - principal) >> + ccache_filename = '%s/ccache' % tmpdir > 'renew_ca_cert.ccache'? > >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_filename, >> + principal) >> >> ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) >> ca.update_cert_config(nickname, cert, configured_constants) >> @@ -139,7 +140,7 @@ def _main(): >> conn = None >> try: >> conn = ldap2(shared_instance=False, ldap_uri=api.env.ldap_uri) >> - conn.connect(ccache=ccache) >> + conn.connect(ccache=ccache_filename) >> except Exception, e: >> syslog.syslog( >> syslog.LOG_ERR, "Failed to connect to LDAP: %s" % e) >> diff --git a/install/restart_scripts/renew_ra_cert b/install/restart_scripts/renew_ra_cert >> index 7dae3562380e919b2cc5f53825820291fc93fdc5..6276de78e4528dc1caa39be6628094a9d00e5988 100644 >> --- a/install/restart_scripts/renew_ra_cert >> +++ b/install/restart_scripts/renew_ra_cert >> @@ -42,8 +42,8 @@ def _main(): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ccache = ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, >> - principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, '%s/ccache' % tmpdir, > 'renew_ra_cert.ccache'? 
> >> + principal) >> >> ca = cainstance.CAInstance(host_name=api.env.host, ldapi=False) >> if ca.is_renewal_master(): >> diff --git a/ipa-client/ipa-install/ipa-client-automount b/ipa-client/ipa-install/ipa-client-automount >> index 7b9e701dead5f50a033a455eb62e30df78cc0249..19197d34ca580062742b3d7363e5dfb2dad0e4de 100755 >> --- a/ipa-client/ipa-install/ipa-client-automount >> +++ b/ipa-client/ipa-install/ipa-client-automount >> @@ -425,10 +425,11 @@ def main(): >> os.close(ccache_fd) >> try: >> try: >> - os.environ['KRB5CCNAME'] = ccache_name >> - ipautil.run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, 'host/%s@%s' % (api.env.host, api.env.realm)]) >> - except ipautil.CalledProcessError, e: >> - sys.exit("Failed to obtain host TGT.") >> + host_princ = str('host/%s@%s' % (api.env.host, api.env.realm)) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, > I'm not sure what is ccache_name here but it should be something descriptive. > In this case ccache_name points to a temporary file made by tempfile.mkstemp() which is cleaned up in a later finally: block (so you will not get to it even if the whole thing comes crashing). I'm not sure if there's a point in renaming it. >> + host_princ) >> + except StandardError, e: >> + sys.exit("Failed to obtain host TGT: %s" % e) >> # Now we have a TGT, connect to IPA >> try: >> api.Backend.rpcclient.connect() >> diff --git a/ipa-client/ipaclient/ipa_certupdate.py b/ipa-client/ipaclient/ipa_certupdate.py >> index 031a34c3a54a02d43978eedcb794678a1550702b..d6f7bbb3daff3ae4dced5d69f83a0516003235ab 100644 >> --- a/ipa-client/ipaclient/ipa_certupdate.py >> +++ b/ipa-client/ipaclient/ipa_certupdate.py >> @@ -57,7 +57,8 @@ class CertUpdate(admintool.AdminTool): >> tmpdir = tempfile.mkdtemp(prefix="tmp-") >> try: >> principal = str('host/%s@%s' % (api.env.host, api.env.realm)) >> - ipautil.kinit_hostprincipal(paths.KRB5_KEYTAB, tmpdir, principal) >> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, >> + os.path.join(tmpdir, 'ccache'), principal) > 'ipa_certupdate.ccache'? 
> >> >> api.Backend.rpcclient.connect() >> try: >> diff --git a/ipaserver/rpcserver.py b/ipaserver/rpcserver.py >> index d6bc955b9d9910a24eec5df1def579310eb54786..36f16908ac8477d9982bfee613b77576853054eb 100644 >> --- a/ipaserver/rpcserver.py >> +++ b/ipaserver/rpcserver.py >> @@ -958,8 +958,8 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> >> def kinit(self, user, realm, password, ccache_name): >> # get http service ccache as an armor for FAST to enable OTP authentication >> - armor_principal = krb5_format_service_principal_name( >> - 'HTTP', self.api.env.host, realm) >> + armor_principal = str(krb5_format_service_principal_name( >> + 'HTTP', self.api.env.host, realm)) >> keytab = paths.IPA_KEYTAB >> armor_name = "%sA_%s" % (krbccache_prefix, user) >> armor_path = os.path.join(krbccache_dir, armor_name) >> @@ -967,34 +967,33 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> self.debug('Obtaining armor ccache: principal=%s keytab=%s ccache=%s', >> armor_principal, keytab, armor_path) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, '-kt', keytab, armor_principal], >> - env={'KRB5CCNAME': armor_path}, raiseonerr=False) >> - >> - if returncode != 0: >> - raise CCacheError() >> + try: >> + ipautil.kinit_keytab(paths.IPA_KEYTAB, armor_path, >> + armor_principal) >> + except StandardError, e: >> + raise CCacheError(str(e)) >> >> # Format the user as a kerberos principal >> principal = krb5_format_principal_name(user, realm) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, principal, '-T', armor_path], >> - env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'}, >> - stdin=password, raiseonerr=False) >> + try: >> + ipautil.kinit_password(principal, password, >> + env={'KRB5CCNAME': ccache_name, >> + 'LC_ALL': 'C'}, >> + armor_ccache=armor_path) >> >> - self.debug('kinit: principal=%s returncode=%s, stderr="%s"', >> - principal, returncode, stderr) >> - >> - self.debug('Cleanup the armor ccache') >> - ipautil.run( >> - [paths.KDESTROY, '-A', '-c', armor_path], >> - env={'KRB5CCNAME': armor_path}, >> - raiseonerr=False) >> - >> - if returncode != 0: >> - if stderr.strip() == 'kinit: Cannot read password while getting initial credentials': >> - raise PasswordExpired(principal=principal, message=unicode(stderr)) >> - raise InvalidSessionPassword(principal=principal, message=unicode(stderr)) >> + self.debug('Cleanup the armor ccache') >> + ipautil.run( >> + [paths.KDESTROY, '-A', '-c', armor_path], >> + env={'KRB5CCNAME': armor_path}, >> + raiseonerr=False) >> + except ipautil.CalledProcessError, e: >> + if ('kinit: Cannot read password while ' >> + 'getting initial credentials') in e.output: > I know it is not your code but please make sure it will work with non-English > LANG or LC_MESSAGE. > Ah yes I did not realize it, I will try to fix it. >> + raise PasswordExpired(principal=principal, >> + message=unicode(e.output)) >> + raise InvalidSessionPassword(principal=principal, >> + message=unicode(e.output)) >> >> class change_password(Backend, HTTP_Status): > > Hopefully this review is not too annoying :-) > Not at all :). 
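On the locale point just above: the usual trick is to force a known locale for the kinit child process, so that the error string comparison stays stable regardless of the server's LANG or LC_MESSAGES. The quoted code already does this for the web-login path; the same env can be supplied wherever kinit_password() output is parsed, e.g.:

    # Force the C locale for the child process only, so kinit's error
    # messages come out in English and the string match stays reliable.
    ipautil.kinit_password(principal, password,
                           env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'})
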
-- Martin^3 Babinsky From tbabej at redhat.com Wed Mar 11 13:36:20 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 11 Mar 2015 14:36:20 +0100 Subject: [Freeipa-devel] [PATCH 0209] Fix logically dead code in ipap11helper module In-Reply-To: <550018CA.3030308@redhat.com> References: <54FD9774.9030209@redhat.com> <550018CA.3030308@redhat.com> Message-ID: <550044D4.5050903@redhat.com> On 03/11/2015 11:28 AM, Petr Spacek wrote: > On 9.3.2015 13:52, Martin Basti wrote: >> Patch attached. > ACK for this patch. > > When you are at it, it would be good to fix other warnings too. GCC on Fedora > 21 is yelling at me: > > p11helper.c: In function ?P11_Helper_find_keys?: > p11helper.c:1062:23: warning: comparison between signed and unsigned integer > expressions [-Wsign-compare] > for (int i = 0; i < objects_len; ++i) { > ^ > p11helper.c: At top level: > p11helper.c:2125:1: warning: missing initializer for field ?tp_free? of > ?PyTypeObject? [-Wmissing-field-initializers] > }; > ^ > In file included from /usr/include/python2.7/Python.h:80:0, > from p11helper.c:37: > /usr/include/python2.7/object.h:391:14: note: ?tp_free? declared here > freefunc tp_free; /* Low-level free-memory routine */ > > Feel free to fix it in a separate patch. > Pushed to: ipa-4-1: 939fd3dd6ccc0e96b79899069c479dbd8844a4b4 master: 6af49259c2be8da66fe4f46b1eb6c25a69185a6b From edewata at redhat.com Wed Mar 11 14:12:45 2015 From: edewata at redhat.com (Endi Sukma Dewata) Date: Wed, 11 Mar 2015 21:12:45 +0700 Subject: [Freeipa-devel] [PATCH] Password vault In-Reply-To: <54F96B22.9050507@redhat.com> References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com> <54F96B22.9050507@redhat.com> Message-ID: <55004D5D.6060300@redhat.com> Thanks for the review. New patch attached to be applied on top of all previous patches. Please see comments below. On 3/6/2015 3:53 PM, Jan Cholasta wrote: > Patch 353: > > 1) Please follow PEP8 in new code. > > The pep8 tool reports these errors in existing files: > > ./ipalib/constants.py:98:80: E501 line too long (84 > 79 characters) > ./ipalib/plugins/baseldap.py:1527:80: E501 line too long (81 > 79 > characters) > ./ipalib/plugins/user.py:915:80: E501 line too long (80 > 79 characters) > > as well as many errors in the files this patch adds. For some reason pylint keeps crashing during build so I cannot run it for all files. I'm fixing the errors that I can see. If you see other errors please let me know while I'm still trying to figure out the problem. Is there an existing ticket for fixing PEP8 errors? Let's use that for fixing the errors in the existing code. > 2) Pylint reports the following error: > > ipatests/test_xmlrpc/test_vault_plugin.py:153: > [E0102(function-redefined), test_vault] class already defined line 27) Fixed. > 3) The container_vault config option should be renamed to > container_vaultcontainer, as it is used in the vaultcontainer plugin, > not the vault plugin. It was named container_vault because it defines the DN for of the subtree that contains all vault-related entries. I moved the base_dn variable from vaultcontainer object to the vault object for clarity. > 4) The vault object should be child of the vaultcontainer object. > > Not only is this correct from the object model perspective, but it would > also make all the container_id hacks go away. It's a bit difficult because it will affect how the container & vault ID's are represented on the CLI. 
In the design the container ID would be a single value like this: $ ipa vault-add /services/server.example.com/HTTP And if the vault ID is relative (without initial slash), it will be appended to the user's private container (i.e. /users//): $ ipa vault-add PrivateVault The implementation is not complete yet. Currently it accepts this format: $ ipa vault-add [--container ] and I'm still planning to add this: $ ipa vault-add If the vault must be a child of vaultcontainer, and the vaultcontainer must be a child of a vaultcontainer, does it mean the vault ID would have to be split into separate arguments like this? $ ipa vaultcontainer-add services server.example.com HTTP If that's the case we'd lose the ability to specify a relative vault ID. > 5) When specifying param flags, use set literals. > > This is especially wrong, because it's not a tuple, but a string in > parentheses: > > + flags=('virtual_attribute'), Fixed. > 6) The `container` param of vault should actually be an option in > vault_* commands. > > Also it should be renamed to `container_id`, for consistency with > vaultcontainer. Fixed. It was actually made to be consistent with the 'parent' attribute in the vaultcontainer class. Now the 'parent' has been renamed to 'parent_id' as well. > 7) The `vault_id` param of vault should have "no_option" in flags, since > it is output-only. Fixed. > 8) Don't translate docstrings where not necessary: > > + def get_dn(self, *keys, **options): > + __doc__ = _(""" > + Generates vault DN from vault ID. > + """) > > Only plugin modules and classes should have translated docstrings. Fixed. > 9) This looks wrong in vault.get_dn() and vaultcontainer.get_dn(): > > + name = None > + if keys: > + name = keys[0] > > Primary key of the object should always be set, so the if statement > should not be there. Fixed. > Also, primary key of any given object is always last in "keys", so use > keys[-1] instead of keys[0]. Fixed. > 10) Use "self.api" instead of "api" to access the API in plugins. Fixed. > 11) No clever optimizations like this please: > > + # vault DN cannot be the container base DN > + if len(dn) == len(api.Object.vaultcontainer.base_dn): > + raise ValueError('Invalid vault DN: %s' % dn) > > Compare the DNs by value instead. Actually the DN values have already been compared in the code right above it: # make sure the DN is a vault DN if not dn.endswith(self.api.Object.vaultcontainer.base_dn): raise ValueError('Invalid vault DN: %s' % dn) This code confirms that the incoming vault DN is within the vault subtree. After that, the DN length comparison above is just to make sure the incoming vault DN is not the root of the vault subtree itself. It doesn't need to compare the values again. > 12) vault.split_id() is not used anywhere. Removed. > 13) Bytes is not base64-encoded data: > > + Bytes('data?', > + cli_name='data', > + doc=_('Base-64 encoded binary data to archive'), > + ), > > It is base64-encoded in the CLI, but on the API level it is not. The doc > should say just "Binary data to archive". Fixed. > 14) Use File instead of Str for input files: > > + Str('in?', > + cli_name='in', > + doc=_('File containing data to archive'), > + ), The File type doesn't work with binary files because it tries to decode the content. > 15) Use MutuallyExclusiveError instead of ValidationError when there are > mutually exclusive options specified. Fixed. > 16) You do way too much stuff in vault_add.forward(). Only code that > must be done on the client needs to be there, i.e. 
handling of the > "data", "text" and "in" options. > > The vault_archive call must be in vault_add.execute(), otherwise a) we > will be making 2 RPC calls from the client and b) it won't be called at > all when api.env.in_server is True. This is done by design. The vault_add.forward() generates the salt and the keys. The vault_archive.forward() will encrypt the data. These operations have to be done on the client side to secure the transport of the data from the client through the server and finally to KRA. This mechanism prevents the server from looking at the unencrypted data. The add & archive combination was added for convenience, not for optimization. This way you would be able to archive data into a new vault using a single command. Without this, you'd have to execute two separate commands: add & archive, which will result in 2 RPC calls anyway. > 17) Why are vaultcontainer objects automatically created in vault_add? > > If you have to automatically create them, you also have to automatically > delete them when the command fails. But that's a hassle, so I would just > not create them automatically. The vaultcontainer is created automatically to provide a private container (i.e. /users//) for the each user if they need it. Without this, the admin will have to create the container manually first before a user can create a vault, which would be an unreasonable requirement. If the vault_add fails, it's ok to leave the private container intact because it can be used again if the user tries to create a vault again later and it will not affect other users. If the user is deleted, the private container will be deleted too. The code was fixed to create the container only if they are adding a vault/vault container into the user's private container. If they are adding into other container, the container must already exist. > 18) Why are vaultcontainer objects automatically created in vault_find? > > This is just plain wrong and has to be removed, now. The code was supposed to create the user's private container like in #17, but the behavior has been changed. If the container being searched is the user's private container, it will ignore the container not found error and return zero results as if the private container already exists. For other containers the container must already exist. For this to work I had to add a handle_not_found() into LDAPSearch so the plugins can customize the proper search response for the missing private container. > 19) What is the reason behind all the json stuff in vault_transport_cert? > > vault_transport_cert.__json__() is exactly the same as > Command.__json__() and hence redundant. Removed. It should've been cleaned up. > 20) Are vault_transport_cert, vault_archive and vault_retrieve supposed > to be runnable by users? If not, add "NO_CLI = True" to the class > definition. Yes. These commands are available if the users want to retrieve the transport cert for other purposes, or archive/retrieve data to/from the vault as a whole instead of individual secrets. > 21) vault_archive is not a retrieve operation, it should be based on > LDAPUpdate instead of LDAPRetrieve. Or Command actually, since it does > not do anything with LDAP. The same applies to vault_retrieve. The vault_archive does not actually modify the LDAP entry because it stores the data in KRA. It is actually an LDAPRetrieve operation because it needs to get the vault info before it can perform the archival operation. Same thing with vault_retrieve. 
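To make the client-side ordering above concrete, here is a rough sketch of what a symmetric-vault archive has to do before anything reaches the server. The PBKDF2 call and the literal values are illustrative stand-ins for the patch's own generate_symmetric_key() and NSS-based encryption helpers, not the patch itself:

    import os
    import json
    import base64
    import hashlib

    # 1. vault_show is called first (hence the LDAPRetrieve base class):
    #    it returns the vault type and the salt stored with the entry.
    salt = os.urandom(16)            # in reality read from the vault entry

    # 2. derive the encryption key and encrypt on the client, so neither
    #    the server nor the transport ever sees the plaintext
    def derive_vault_key(password, salt, iterations=5000):
        # PBKDF2-HMAC-SHA256, 256-bit key (hashlib.pbkdf2_hmac is stdlib)
        return hashlib.pbkdf2_hmac('sha256', password, salt, iterations,
                                   dklen=32)

    key = derive_vault_key(b'Secret123', salt)
    secret = b'\x00\x01 arbitrary binary secret'
    # ... AES-encrypt `secret` with `key` here (done with NSS in the patch) ...
    ciphertext = secret              # placeholder so the sketch runs

    # 3. json.dumps() cannot carry raw bytes, so the ciphertext is
    #    base64-wrapped before the RPC call that hands it to the server
    #    and on to the KRA
    payload = json.dumps({u'data': base64.b64encode(ciphertext).decode('ascii')})

Because the salt and vault type live on the vault entry, the client really does need a retrieve step before it can encrypt, which is why vault_archive reads the entry even though it never modifies it.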
> 22) vault_archive will break with binary data that is not UTF-8 encoded > text. > > This is where it occurs: > > + vault_data[u'data'] = unicode(data) > > Generally, don't use unicode() on str values and str() on unicode values > directly, always use .decode() and .encode(). It needs to be a Unicode because json.dumps() doesn't work with binary data. Fixed by adding base-64 encoding. > 23) Since vault containers are nested, the vaultcontainer object should > be child of itself. > > There is no support for nested objects in the framework, but it > shouldn't be too hard to do anyway. See #4. > 24) Instead of: > > + while len(dn) > len(self.base_dn): > + > + rdn = dn[0] > ... > + dn = DN(*dn[1:]) > > you can do: > > + for rdn in dn[:-len(self.base_dn)]: > ... Fixed. > 25) Why are parent vaultcontainer objects automatically created in > vaultcontainer_add? See #17. > 26) Instead of the delete_entry refactoring in baseldap and > vaultcontainer_add, you can put this in vaultcontainer_add's pre_callback: > > try: > ldap.get_entries(dn, scope=ldap.SCOPE_ONELEVEL, attrs_list=[]) > except errors.NotFound: > pass > else: > if not options.get('force', False): > raise errors.NotAllowedOnNonLeaf() I suppose you meant vaultcontainer_del. Fixed, but this will generate an additional search for each delete. I'm leaving the changes baseldap because it may be useful later and it doesn't change the behavior of the current code. > 27) Why are parent vaultcontainer objects automatically created in > vaultcontainer_find? See #18. > 28) The vault and vaultcontainer plugins seem to be pretty similar, I > think it would make sense to put common stuff in a base class and > inherit vault and vaultcontainer from that. I plan to refactor the common code later. Right now the focus is to get the functionality working correctly first. > More later. > > Honza > -- Endi S. Dewata -------------- next part -------------- >From 6c3d2ea06089a9e91741d57c412f6efbaee94a83 Mon Sep 17 00:00:00 2001 From: "Endi S. Dewata" Date: Mon, 9 Mar 2015 12:09:20 -0400 Subject: [PATCH] Vault improvements. The vault plugins have been modified to clean up the code, to fix some issues, and to improve error handling. The LDAPCreate and LDAPSearch classes have been refactored to allow subclasses to provide custom error handling. The test scripts have been updated accordingly. 
https://fedorahosted.org/freeipa/ticket/3872 --- API.txt | 50 ++-- ipalib/plugins/baseldap.py | 35 +-- ipalib/plugins/user.py | 6 +- ipalib/plugins/vault.py | 269 ++++++++++----------- ipalib/plugins/vaultcontainer.py | 224 +++++++++-------- ipalib/plugins/vaultsecret.py | 108 ++++----- ipatests/test_xmlrpc/test_vault_plugin.py | 2 +- ipatests/test_xmlrpc/test_vaultcontainer_plugin.py | 24 +- ipatests/test_xmlrpc/test_vaultsecret_plugin.py | 2 +- 9 files changed, 365 insertions(+), 355 deletions(-) diff --git a/API.txt b/API.txt index ffbffa78cde372d5c7027b758be58bf07caebbc6..3a741755ab3e15e0175599a16a090b04d46d6be8 100644 --- a/API.txt +++ b/API.txt @@ -4518,7 +4518,7 @@ args: 1,20,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container', attribute=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Str('description', attribute=True, cli_name='desc', multivalue=False, required=False) option: Str('escrow_public_key_file?', cli_name='escrow_public_key_file') @@ -4543,7 +4543,7 @@ command: vault_add_member args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4556,7 +4556,7 @@ command: vault_add_owner args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4569,7 +4569,7 @@ command: vault_archive args: 1,15,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Bytes('encryption_key?', cli_name='encryption_key') option: Str('in?', cli_name='in') @@ -4589,7 +4589,7 @@ output: PrimaryKey('value', None, None) command: vault_del args: 1,3,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: 
Flag('continue', autofill=True, cli_name='continue', default=False) option: Str('version?', exclude='webui') output: Output('result', , None) @@ -4600,7 +4600,7 @@ args: 1,15,4 arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('cn', attribute=True, autofill=False, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=False) -option: Str('container', attribute=False, autofill=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', query=True, required=False) +option: Str('container_id?', cli_name='container_id') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Bytes('ipaescrowpublickey', attribute=True, autofill=False, cli_name='escrow_public_key', multivalue=False, query=True, required=False) option: Bytes('ipapublickey', attribute=True, autofill=False, cli_name='public_key', multivalue=False, query=True, required=False) @@ -4622,7 +4622,7 @@ args: 1,15,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container', attribute=False, autofill=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('container_id?', cli_name='container_id') option: Str('delattr*', cli_name='delattr', exclude='webui') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, required=False) option: Bytes('ipaescrowpublickey', attribute=True, autofill=False, cli_name='escrow_public_key', multivalue=False, required=False) @@ -4642,7 +4642,7 @@ command: vault_remove_member args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4655,7 +4655,7 @@ command: vault_remove_owner args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4668,7 +4668,7 @@ command: vault_retrieve args: 1,16,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', 
cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('escrow_private_key?', cli_name='escrow_private_key') option: Str('escrow_private_key_file?', cli_name='escrow_private_key_file') option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4690,7 +4690,7 @@ command: vault_show args: 1,6,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) @@ -4711,7 +4711,7 @@ option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui option: Str('container_id', attribute=False, cli_name='container_id', multivalue=False, required=False) option: Str('description', attribute=True, cli_name='desc', multivalue=False, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('setattr*', cli_name='setattr', exclude='webui') option: Str('version?', exclude='webui') @@ -4724,7 +4724,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4737,7 +4737,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4749,7 +4749,7 @@ args: 1,4,3 arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('continue', autofill=True, cli_name='continue', default=False) option: Flag('force?', autofill=True, default=False) -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Str('version?', exclude='webui') output: Output('result', , None) output: Output('summary', (, ), None) @@ -4762,7 +4762,7 @@ option: Str('cn', attribute=True, autofill=False, cli_name='container_name', max option: Str('container_id', attribute=False, autofill=False, 
cli_name='container_id', multivalue=False, query=True, required=False) option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, autofill=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', query=True, required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('pkey_only?', autofill=True, default=False) option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Int('sizelimit?', autofill=False, minvalue=0) @@ -4781,7 +4781,7 @@ option: Str('container_id', attribute=False, autofill=False, cli_name='container option: Str('delattr*', cli_name='delattr', exclude='webui') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, autofill=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) option: Str('setattr*', cli_name='setattr', exclude='webui') @@ -4795,7 +4795,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4808,7 +4808,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4820,7 +4820,7 @@ args: 1,6,3 arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) option: Str('version?', exclude='webui') @@ -4832,7 +4832,7 @@ args: 2,12,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, 
query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Str('description?', cli_name='desc') option: Str('in?', cli_name='in') @@ -4851,7 +4851,7 @@ args: 2,8,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('password?', cli_name='password') option: Str('password_file?', cli_name='password_file') option: Bytes('private_key?', cli_name='private_key') @@ -4866,7 +4866,7 @@ args: 2,12,4 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data', attribute=True, autofill=False, cli_name='data', multivalue=False, query=True, required=False) option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Str('password?', cli_name='password') @@ -4886,7 +4886,7 @@ args: 2,12,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Str('description?', cli_name='desc') option: Str('in?', cli_name='in') @@ -4905,7 +4905,7 @@ args: 2,11,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('out?', cli_name='out') option: Str('password?', cli_name='password') option: Str('password_file?', cli_name='password_file') diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py index d693709ac1ba7ddb3c559199c199039b6f8bd9ac..fceaf95f42bef5fa71cbedeb291bd68d2919bc5a 100644 --- a/ipalib/plugins/baseldap.py +++ b/ipalib/plugins/baseldap.py @@ -1152,19 +1152,7 @@ class LDAPCreate(BaseLDAPCommand, crud.Create): try: self._exc_wrapper(keys, options, ldap.add_entry)(entry_attrs) except errors.NotFound: - parent = self.obj.parent_object - if parent: - raise errors.NotFound( - reason=self.obj.parent_not_found_msg % { - 
'parent': keys[-2], - 'oname': self.api.Object[parent].object_name, - } - ) - raise errors.NotFound( - reason=self.obj.container_not_found_msg % { - 'container': self.obj.container_dn, - } - ) + self.handle_not_found(*keys, **options) except errors.DuplicateEntry: self.obj.handle_duplicate_entry(*keys) @@ -1213,6 +1201,21 @@ class LDAPCreate(BaseLDAPCommand, crud.Create): def exc_callback(self, keys, options, exc, call_func, *call_args, **call_kwargs): raise exc + def handle_not_found(self, *args, **options): + parent = self.obj.parent_object + if parent: + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': args[-2], + 'oname': self.api.Object[parent].object_name, + } + ) + raise errors.NotFound( + reason=self.obj.container_not_found_msg % { + 'container': self.obj.container_dn, + } + ) + def interactive_prompt_callback(self, kw): return @@ -2000,7 +2003,8 @@ class LDAPSearch(BaseLDAPCommand, crud.Search): except errors.EmptyResult: (entries, truncated) = ([], False) except errors.NotFound: - self.api.Object[self.obj.parent_object].handle_not_found(*args[:-1]) + self.handle_not_found(*args, **options) + (entries, truncated) = ([], False) for callback in self.get_callbacks('post'): truncated = callback(self, ldap, entries, truncated, *args, **options) @@ -2026,6 +2030,9 @@ class LDAPSearch(BaseLDAPCommand, crud.Search): truncated=truncated, ) + def handle_not_found(self, *args, **options): + self.api.Object[self.obj.parent_object].handle_not_found(*args[:-1]) + def pre_callback(self, ldap, filters, attrs_list, base_dn, scope, *args, **options): assert isinstance(base_dn, DN) return (filters, base_dn, scope) diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py index 70b237dc102f46ab62e10aab0250aa496dad60c6..f8ee3db05b5a36cf4ebb4261f8535932b834b1bf 100644 --- a/ipalib/plugins/user.py +++ b/ipalib/plugins/user.py @@ -903,10 +903,12 @@ class user_del(LDAPDelete): # Delete user's private vault container. vaultcontainer_id = self.api.Object.vaultcontainer.get_private_id(owner) - (vaultcontainer_name, vaultcontainer_parent_id) = self.api.Object.vaultcontainer.split_id(vaultcontainer_id) + (vaultcontainer_name, vaultcontainer_parent_id) =\ + self.api.Object.vaultcontainer.split_id(vaultcontainer_id) try: - self.api.Command.vaultcontainer_del(vaultcontainer_name, parent=vaultcontainer_parent_id) + self.api.Command.vaultcontainer_del( + vaultcontainer_name, parent_id=vaultcontainer_parent_id) except errors.NotFound: pass diff --git a/ipalib/plugins/vault.py b/ipalib/plugins/vault.py index 69c12aaf1c8503a345e115fb1a660aed8f85fb17..0b12491a8fb140b889b7686486c794238c7ed48e 100644 --- a/ipalib/plugins/vault.py +++ b/ipalib/plugins/vault.py @@ -41,8 +41,8 @@ from ipalib.frontend import Command from ipalib import api, errors from ipalib import Str, Bytes, Flag from ipalib.plugable import Registry -from ipalib.plugins.baseldap import LDAPObject, LDAPCreate, LDAPDelete, LDAPSearch, LDAPUpdate, LDAPRetrieve,\ - LDAPAddMember, LDAPRemoveMember +from ipalib.plugins.baseldap import LDAPObject, LDAPCreate, LDAPDelete, LDAPSearch, LDAPUpdate,\ + LDAPRetrieve, LDAPAddMember, LDAPRemoveMember from ipalib.request import context from ipalib.plugins.user import split_principal from ipalib import _, ngettext @@ -61,7 +61,7 @@ EXAMPLES: ipa vault-find """) + _(""" List shared vaults: - ipa vault-find --container /shared + ipa vault-find --container-id /shared """) + _(""" Add a standard vault: ipa vault-add MyVault @@ -135,6 +135,8 @@ class vault(LDAPObject): Vault object. 
""") + base_dn = DN(api.env.container_vault, api.env.basedn) + object_name = _('vault') object_name_plural = _('vaults') @@ -173,19 +175,11 @@ class vault(LDAPObject): pattern_errmsg='may only include letters, numbers, _, ., and -', maxlength=255, ), - Str('container?', - cli_name='container', - label=_('Container'), - doc=_('Container'), - flags=('virtual_attribute'), - pattern='^[a-zA-Z0-9_.-/]+$', - pattern_errmsg='may only include letters, numbers, _, ., -, and /', - ), Str('vault_id?', cli_name='vault_id', label=_('Vault ID'), doc=_('Vault ID'), - flags=('virtual_attribute'), + flags={'no_option', 'virtual_attribute'}, ), Str('description?', cli_name='desc', @@ -217,22 +211,16 @@ class vault(LDAPObject): ) def get_dn(self, *keys, **options): - __doc__ = _(""" + """ Generates vault DN from vault ID. - """) + """ # get vault ID from parameters - name = None - if keys: - name = keys[0] + name = keys[-1] + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + vault_id = container_id + name - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) - - vault_id = container_id - if name: - vault_id = container_id + name - - dn = api.Object.vaultcontainer.base_dn + dn = self.base_dn # for each name in the ID, prepend the base DN for name in vault_id.split(u'/'): @@ -242,45 +230,30 @@ class vault(LDAPObject): return dn def get_id(self, dn): - __doc__ = _(""" + """ Generates vault ID from vault DN. - """) + """ # make sure the DN is a vault DN - if not dn.endswith(api.Object.vaultcontainer.base_dn): + if not dn.endswith(self.base_dn): raise ValueError('Invalid vault DN: %s' % dn) # vault DN cannot be the container base DN - if len(dn) == len(api.Object.vaultcontainer.base_dn): + if len(dn) == len(self.base_dn): raise ValueError('Invalid vault DN: %s' % dn) # construct the vault ID from the bottom up id = u'' - while len(dn) > len(api.Object.vaultcontainer.base_dn): - - rdn = dn[0] + for rdn in dn[:-len(self.base_dn)]: name = rdn['cn'] id = u'/' + name + id - dn = DN(*dn[1:]) - return id - def split_id(self, id): - __doc__ = _(""" - Splits a vault ID into (vault name, container ID) tuple. - """) - - # split ID into container ID and vault name - parts = id.rsplit(u'/', 1) - - # return vault name and container ID - return (parts[1], parts[0] + u'/') - def get_kra_id(self, id): - __doc__ = _(""" + """ Generates a client key ID to store/retrieve data in KRA. 
- """) + """ return 'ipa:' + id def generate_symmetric_key(self, password, salt): @@ -354,9 +327,13 @@ class vault_add(LDAPCreate): __doc__ = _('Create a new vault.') takes_options = LDAPCreate.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary data to archive'), + doc=_('Binary data to archive'), ), Str('text?', cli_name='text', @@ -389,7 +366,7 @@ class vault_add(LDAPCreate): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = options.get('ipavaulttype') data = options.get('data') @@ -420,18 +397,14 @@ class vault_add(LDAPCreate): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -564,9 +537,9 @@ class vault_add(LDAPCreate): response = super(vault_add, self).forward(*args, **options) # archive initial data - api.Command.vault_archive( + self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=data, password=password, encryption_key=encryption_key) @@ -576,13 +549,21 @@ class vault_add(LDAPCreate): def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + # set owner principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) + owner_dn = self.api.Object.user.get_dn(username) entry_attrs['owner'] = owner_dn - container_dn = DN(*dn[1:]) - api.Object.vaultcontainer.create_entry(container_dn, owner=owner_dn) + # container is user's private container, create container + if container_id == self.api.Object.vaultcontainer.get_private_id(): + try: + self.api.Object.vaultcontainer.create_entry( + DN(*dn[1:]), owner=owner_dn) + except errors.DuplicateEntry: + pass return dn @@ -593,31 +574,39 @@ class vault_add(LDAPCreate): return dn + def handle_not_found(self, *args, **options): + + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + reason=self.obj.parent_not_found_msg % { + 'parent': container_id, + 'oname': self.api.Object.vaultcontainer.object_name, + } @register() class vault_del(LDAPDelete): __doc__ = _('Delete a vault.') - msg_summary = _('Deleted vault "%(value)s"') - takes_options = LDAPDelete.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) + msg_summary = _('Deleted vault "%(value)s"') + def post_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) vault_id = self.obj.get_id(dn) - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) 
kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # deactivate vault record in KRA response = kra_client.keys.list_keys(client_key_id, pki.key.KeyClient.KEY_STATUS_ACTIVE) @@ -636,6 +625,13 @@ class vault_del(LDAPDelete): class vault_find(LDAPSearch): __doc__ = _('Search for vaults.') + takes_options = LDAPSearch.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), + ) + msg_summary = ngettext( '%(count)d vault matched', '%(count)d vaults matched', 0 ) @@ -643,14 +639,9 @@ class vault_find(LDAPSearch): def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - principal = getattr(context, 'principal') - (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) - - container_id = self.Object.vaultcontainer.normalize_id(options.get('container')) - base_dn = self.Object.vaultcontainer.get_dn(parent=container_id) - - api.Object.vaultcontainer.create_entry(base_dn, owner=owner_dn) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + (name, parent_id) = self.api.Object.vaultcontainer.split_id(container_id) + base_dn = self.api.Object.vaultcontainer.get_dn(name, parent_id=parent_id) return (filter, base_dn, scope) @@ -662,11 +653,31 @@ class vault_find(LDAPSearch): return truncated + def handle_not_found(self, *args, **options): + + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + # vault container is user's private container, ignore + if container_id == self.api.Object.vaultcontainer.get_private_id(): + return + + # otherwise, raise an error + reason=self.obj.parent_not_found_msg % { + 'parent': container_id, + 'oname': self.api.Object.vaultcontainer.object_name, + } @register() class vault_mod(LDAPUpdate): __doc__ = _('Modify a vault.') + takes_options = LDAPUpdate.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), + ) + msg_summary = _('Modified vault "%(value)s"') def post_callback(self, ldap, dn, entry_attrs, *keys, **options): @@ -682,9 +693,9 @@ class vault_show(LDAPRetrieve): __doc__ = _('Display information about a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -700,11 +711,6 @@ class vault_show(LDAPRetrieve): class vault_transport_cert(Command): __doc__ = _('Retrieve vault transport certificate.') - # list of attributes we want exported to JSON - json_friendly_attributes = ( - 'takes_args', - ) - takes_options = ( Str('out?', cli_name='out', @@ -718,13 +724,6 @@ class vault_transport_cert(Command): ), ) - def __json__(self): - json_dict = dict( - (a, getattr(self, a)) for a in self.json_friendly_attributes - ) - json_dict['takes_options'] = list(self.get_json_options()) - return json_dict - def forward(self, *args, **options): file = options.get('out') @@ -743,7 +742,7 @@ class vault_transport_cert(Command): def execute(self, *args, **options): - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() transport_cert = kra_client.system_certs.get_transport_cert() return { 'result': { @@ -757,13 +756,13 @@ class vault_archive(LDAPRetrieve): __doc__ = _('Archive data into a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - 
cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary data to archive'), + doc=_('Binary data to archive'), ), Str('text?', cli_name='text', @@ -804,7 +803,7 @@ class vault_archive(LDAPRetrieve): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -812,7 +811,7 @@ class vault_archive(LDAPRetrieve): escrow_public_key = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -850,18 +849,14 @@ class vault_archive(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -902,9 +897,9 @@ class vault_archive(LDAPRetrieve): password = unicode(getpass.getpass('Password: ')) try: - api.Command.vault_retrieve( + self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password) except errors.NotFound: @@ -959,7 +954,7 @@ class vault_archive(LDAPRetrieve): (file, filename) = tempfile.mkstemp() os.close(file) try: - api.Command.vault_transport_cert(out=unicode(filename)) + self.api.Command.vault_transport_cert(out=unicode(filename)) transport_cert_der = nss.read_der_from_file(filename, True) nss_transport_cert = nss.Certificate(transport_cert_der) @@ -981,7 +976,7 @@ class vault_archive(LDAPRetrieve): options['nonce'] = unicode(base64.b64encode(nonce)) vault_data = {} - vault_data[u'data'] = unicode(data) + vault_data[u'data'] = unicode(base64.b64encode(data)) if encrypted_key: vault_data[u'encrypted_key'] = unicode(base64.b64encode(encrypted_key)) @@ -1012,7 +1007,7 @@ class vault_archive(LDAPRetrieve): principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - user_dn = self.api.Object['user'].get_dn(username) + user_dn = self.api.Object.user.get_dn(username) if user_dn not in owners and user_dn not in members: raise errors.ACIError(info=_("Insufficient access to vault '%s'.") % vault_id) @@ -1020,12 +1015,12 @@ class vault_archive(LDAPRetrieve): entry_attrs['vault_id'] = vault_id # connect to KRA - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # deactivate existing vault record in KRA response = kra_client.keys.list_keys( @@ -1062,9 +1057,9 @@ class vault_retrieve(LDAPRetrieve): __doc__ = _('Retrieve a data from a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - cli_name='container', - 
doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), Flag('show_text?', doc=_('Show text data'), @@ -1122,13 +1117,13 @@ class vault_retrieve(LDAPRetrieve): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -1179,7 +1174,7 @@ class vault_retrieve(LDAPRetrieve): (file, filename) = tempfile.mkstemp() os.close(file) try: - api.Command.vault_transport_cert(out=unicode(filename)) + self.api.Command.vault_transport_cert(out=unicode(filename)) transport_cert_der = nss.read_der_from_file(filename, True) nss_transport_cert = nss.Certificate(transport_cert_der) @@ -1209,7 +1204,7 @@ class vault_retrieve(LDAPRetrieve): nonce_iv=nonce) vault_data = json.loads(json_vault_data) - data = str(vault_data[u'data']) + data = base64.b64decode(str(vault_data[u'data'])) encrypted_key = None @@ -1343,7 +1338,7 @@ class vault_retrieve(LDAPRetrieve): principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - user_dn = self.api.Object['user'].get_dn(username) + user_dn = self.api.Object.user.get_dn(username) if user_dn not in owners and user_dn not in members: raise errors.ACIError(info=_("Insufficient access to vault '%s'.") % vault_id) @@ -1353,12 +1348,12 @@ class vault_retrieve(LDAPRetrieve): wrapped_session_key = base64.b64decode(options['session_key']) # connect to KRA - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # find vault record in KRA response = kra_client.keys.list_keys( @@ -1388,9 +1383,9 @@ class vault_add_owner(LDAPAddMember): __doc__ = _('Add owners to a vault.') takes_options = LDAPAddMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1410,9 +1405,9 @@ class vault_remove_owner(LDAPRemoveMember): __doc__ = _('Remove owners from a vault.') takes_options = LDAPRemoveMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1432,9 +1427,9 @@ class vault_add_member(LDAPAddMember): __doc__ = _('Add members to a vault.') takes_options = LDAPAddMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1451,9 +1446,9 @@ class vault_remove_member(LDAPRemoveMember): __doc__ = _('Remove members from a vault.') takes_options = LDAPRemoveMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) diff --git a/ipalib/plugins/vaultcontainer.py b/ipalib/plugins/vaultcontainer.py index 881bbea7886e06356489cd49cc32f2a6a6460a5e..e0dc728362247f7cde252e21a54fa5c23761eb49 100644 --- 
a/ipalib/plugins/vaultcontainer.py +++ b/ipalib/plugins/vaultcontainer.py @@ -40,7 +40,7 @@ EXAMPLES: ipa vaultcontainer-find """) + _(""" List top-level vault containers: - ipa vaultcontainer-find --parent / + ipa vaultcontainer-find --parent-id / """) + _(""" Add a vault container: ipa vaultcontainer-add MyContainer @@ -75,7 +75,6 @@ class vaultcontainer(LDAPObject): Vault container object. """) - base_dn = DN(api.env.container_vault, api.env.basedn) object_name = _('vault container') object_name_plural = _('vault containers') @@ -109,19 +108,11 @@ class vaultcontainer(LDAPObject): pattern_errmsg='may only include letters, numbers, _, ., and -', maxlength=255, ), - Str('parent?', - cli_name='parent', - label=_('Parent'), - doc=_('Parent container'), - flags=('virtual_attribute'), - pattern='^[a-zA-Z0-9_.-/]+$', - pattern_errmsg='may only include letters, numbers, _, ., -, and /', - ), Str('container_id?', cli_name='container_id', label=_('Container ID'), doc=_('Container ID'), - flags=('virtual_attribute'), + flags={'no_option', 'virtual_attribute'}, ), Str('description?', cli_name='desc', @@ -131,22 +122,19 @@ class vaultcontainer(LDAPObject): ) def get_dn(self, *keys, **options): - __doc__ = _(""" + """ Generates vault container DN from container ID. - """) + """ # get container ID from parameters - name = None - if keys: - name = keys[0] - - parent_id = api.Object.vaultcontainer.normalize_id(options.get('parent')) + name = keys[-1] + parent_id = self.normalize_id(options.get('parent_id')) container_id = parent_id if name: container_id = parent_id + name + u'/' - dn = self.base_dn + dn = self.api.Object.vault.base_dn # for each name in the ID, prepend the base DN for name in container_id.split(u'/'): @@ -156,30 +144,26 @@ class vaultcontainer(LDAPObject): return dn def get_id(self, dn): - __doc__ = _(""" + """ Generates container ID from container DN. - """) + """ # make sure the DN is a container DN - if not dn.endswith(self.base_dn): + if not dn.endswith(self.api.Object.vault.base_dn): raise ValueError('Invalid container DN: %s' % dn) # construct container ID from the bottom up id = u'/' - while len(dn) > len(self.base_dn): - - rdn = dn[0] + for rdn in dn[:-len(self.api.Object.vault.base_dn)]: name = rdn['cn'] id = u'/' + name + id - dn = DN(*dn[1:]) - return id def get_private_id(self, username=None): - __doc__ = _(""" + """ Returns user's private container ID (i.e. /users//). - """) + """ if not username: principal = getattr(context, 'principal') @@ -188,9 +172,9 @@ class vaultcontainer(LDAPObject): return u'/users/' + username + u'/' def normalize_id(self, id): - __doc__ = _(""" + """ Normalizes container ID. - """) + """ # if ID is empty, return user's private container ID if not id: @@ -208,13 +192,13 @@ class vaultcontainer(LDAPObject): return self.get_private_id() + id def split_id(self, id): - __doc__ = _(""" + """ Splits a normalized container ID into (container name, parent ID) tuple. - """) + """ # handle root ID if id == u'/': - return (None, None) + return (None, u'/') # split ID into parent ID, container name, and empty string parts = id.rsplit(u'/', 2) @@ -223,23 +207,10 @@ class vaultcontainer(LDAPObject): return (parts[1], parts[0] + u'/') def create_entry(self, dn, owner=None): - __doc__ = _(""" + """ Creates a container entry and its parents. 
- """) + """ - # if entry already exists, return - try: - self.backend.get_entry(dn) - return - - except errors.NotFound: - pass - - # otherwise, create parent entry first - parent_dn = DN(*dn[1:]) - self.create_entry(parent_dn, owner=owner) - - # then create the entry itself rdn = dn[0] entry = self.backend.make_entry( dn, @@ -248,6 +219,20 @@ class vaultcontainer(LDAPObject): 'cn': rdn['cn'], 'owner': owner }) + + # if entry can be added return + try: + self.backend.add_entry(entry) + return + + except errors.NotFound: + pass + + # otherwise, create parent entry first + parent_dn = DN(*dn[1:]) + self.create_entry(parent_dn, owner=owner) + + # then create the entry again self.backend.add_entry(entry) @@ -255,18 +240,33 @@ class vaultcontainer(LDAPObject): class vaultcontainer_add(LDAPCreate): __doc__ = _('Create a new vault container.') + takes_options = LDAPCreate.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = _('Added vault container "%(value)s"') def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) + parent_id = self.obj.normalize_id(options.get('parent_id')) + + # set owner principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) + owner_dn = self.api.Object.user.get_dn(username) entry_attrs['owner'] = owner_dn - parent_dn = DN(*dn[1:]) - self.obj.create_entry(parent_dn, owner=owner_dn) + # parent is user's private container, create parent + if parent_id == self.obj.get_private_id(): + try: + self.obj.create_entry( + DN(*dn[1:]), owner=owner_dn) + except errors.DuplicateEntry: + pass return dn @@ -277,17 +277,26 @@ class vaultcontainer_add(LDAPCreate): return dn + def handle_not_found(self, *args, **options): + + parent_id = self.obj.normalize_id(options.get('parent_id')) + + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': parent_id, + 'oname': self.obj.object_name, + } + ) + @register() class vaultcontainer_del(LDAPDelete): __doc__ = _('Delete a vault container.') - msg_summary = _('Deleted vault container "%(value)s"') - takes_options = LDAPDelete.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), Flag('force?', doc=_('Force deletion'), @@ -295,42 +304,33 @@ class vaultcontainer_del(LDAPDelete): ), ) - def delete_entry(self, pkey, *keys, **options): - __doc__ = _(""" - Overwrites the base method to control deleting subtree with force option. 
- """) + msg_summary = _('Deleted vault container "%(value)s"') - ldap = self.obj.backend - nkeys = keys[:-1] + (pkey, ) - dn = self.obj.get_dn(*nkeys, **options) + def pre_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) - for callback in self.get_callbacks('pre'): - dn = callback(self, ldap, dn, *nkeys, **options) - assert isinstance(dn, DN) - try: - self._exc_wrapper(nkeys, options, ldap.delete_entry)(dn) + ldap.get_entries(dn, scope=ldap.SCOPE_ONELEVEL, attrs_list=[]) except errors.NotFound: - self.obj.handle_not_found(*nkeys) - except errors.NotAllowedOnNonLeaf: - # this entry is not a leaf entry - # if forced, delete all child nodes - if options.get('force'): - self.delete_subtree(dn, *nkeys, **options) - else: - raise + pass + else: + if not options.get('force', False): + raise errors.NotAllowedOnNonLeaf() - for callback in self.get_callbacks('post'): - result = callback(self, ldap, dn, *nkeys, **options) - - return result + return dn @register() class vaultcontainer_find(LDAPSearch): __doc__ = _('Search for vault containers.') + takes_options = LDAPSearch.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = ngettext( '%(count)d vault container matched', '%(count)d vault containers matched', 0 ) @@ -338,14 +338,9 @@ class vaultcontainer_find(LDAPSearch): def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - principal = getattr(context, 'principal') - (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) - - parent_id = self.obj.normalize_id(options.get('parent')) - base_dn = self.obj.get_dn(parent=parent_id) - - self.obj.create_entry(base_dn, owner=owner_dn) + parent_id = self.obj.normalize_id(options.get('parent_id')) + (name, grandparent_id) = self.obj.split_id(parent_id) + base_dn = self.obj.get_dn(name, parent_id=grandparent_id) return (filter, base_dn, scope) @@ -356,11 +351,32 @@ class vaultcontainer_find(LDAPSearch): return truncated + def handle_not_found(self, *args, **options): + + parent_id = self.obj.normalize_id(options.get('parent_id')) + + # parent is user's private container, ignore + if parent_id == self.obj.get_private_id(): + return + + # otherwise, raise an error + reason=self.obj.parent_not_found_msg % { + 'parent': parent_id, + 'oname': self.obj.object_name, + } + @register() class vaultcontainer_mod(LDAPUpdate): __doc__ = _('Modify a vault container.') + takes_options = LDAPUpdate.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = _('Modified vault container "%(value)s"') def post_callback(self, ldap, dn, entry_attrs, *keys, **options): @@ -376,9 +392,9 @@ class vaultcontainer_show(LDAPRetrieve): __doc__ = _('Display information about a vault container.') takes_options = LDAPRetrieve.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -395,9 +411,9 @@ class vaultcontainer_add_owner(LDAPAddMember): __doc__ = _('Add owners to a vault container.') takes_options = LDAPAddMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -417,9 +433,9 @@ class vaultcontainer_remove_owner(LDAPRemoveMember): __doc__ = _('Remove owners from a vault container.') takes_options = LDAPRemoveMember.takes_options + ( - 
Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -439,9 +455,9 @@ class vaultcontainer_add_member(LDAPAddMember): __doc__ = _('Add members to a vault container.') takes_options = LDAPAddMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -459,9 +475,9 @@ class vaultcontainer_remove_member(LDAPRemoveMember): takes_options = LDAPRemoveMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) diff --git a/ipalib/plugins/vaultsecret.py b/ipalib/plugins/vaultsecret.py index b0896155a5af13013f13f2b3ae586da2a8832c30..7f44f0816b571dac718fa02edfd187fe8666565e 100644 --- a/ipalib/plugins/vaultsecret.py +++ b/ipalib/plugins/vaultsecret.py @@ -93,7 +93,7 @@ class vaultsecret(LDAPObject): Bytes('data?', cli_name='data', label=_('Data'), - doc=_('Base-64 encoded binary secret data'), + doc=_('Binary secret data'), ), ) @@ -103,8 +103,8 @@ class vaultsecret_add(LDAPRetrieve): __doc__ = _('Add a new vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('description?', @@ -113,7 +113,7 @@ class vaultsecret_add(LDAPRetrieve): ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary secret data'), + doc=_('Binary secret data'), ), Str('text?', cli_name='text', @@ -147,13 +147,13 @@ class vaultsecret_add(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -256,9 +256,9 @@ class vaultsecret_add(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -276,18 +276,14 @@ class vaultsecret_add(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -315,9 +311,9 @@ class vaultsecret_add(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -338,8 +334,8 @@ class vaultsecret_del(LDAPRetrieve): __doc__ = _('Delete a vault secret.') takes_options = ( - Str('container?', - cli_name='container', + 
Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('password?', @@ -366,13 +362,13 @@ class vaultsecret_del(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -463,9 +459,9 @@ class vaultsecret_del(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -496,9 +492,9 @@ class vaultsecret_del(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -515,13 +511,11 @@ class vaultsecret_del(LDAPRetrieve): @register() class vaultsecret_find(LDAPSearch): - __doc__ = _(""" - Search for vault secrets. - """) + __doc__ = _('Search for vault secrets.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('password?', @@ -545,13 +539,13 @@ class vaultsecret_find(LDAPSearch): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -641,9 +635,9 @@ class vaultsecret_find(LDAPSearch): raise errors.ValidationError(name='vault_type', error=_('Invalid vault type')) - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -678,8 +672,8 @@ class vaultsecret_mod(LDAPRetrieve): __doc__ = _('Modify a vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('description?', @@ -722,13 +716,13 @@ class vaultsecret_mod(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -830,18 +824,14 @@ class vaultsecret_mod(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - 
error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -853,9 +843,9 @@ class vaultsecret_mod(LDAPRetrieve): pass # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -889,9 +879,9 @@ class vaultsecret_mod(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -912,8 +902,8 @@ class vaultsecret_show(LDAPRetrieve): __doc__ = _('Display information about a vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Flag('show_text?', @@ -956,13 +946,13 @@ class vaultsecret_show(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -1062,9 +1052,9 @@ class vaultsecret_show(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) diff --git a/ipatests/test_xmlrpc/test_vault_plugin.py b/ipatests/test_xmlrpc/test_vault_plugin.py index 04f56ecafd3a141b39a0e32f1725258582b6b141..f3a280b40d5b6972e8755f63d46013cadaa68334 100644 --- a/ipatests/test_xmlrpc/test_vault_plugin.py +++ b/ipatests/test_xmlrpc/test_vault_plugin.py @@ -150,7 +150,7 @@ MszdQuc/FTSJ2DYsIwx7qq5c8mtargOjWRgZU22IgY9PKeIcitQjqw== -----END RSA PRIVATE KEY----- """ -class test_vault(Declarative): +class test_vault_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), diff --git a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py index b99f9f68b8a480908602503d726205eeec36c2d2..22e13769df19b40cd39a144df662bae8bbf53d9e 100644 --- a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py @@ -32,12 +32,12 @@ base_container = u'base_container' child_container = u'child_container' grandchild_container = u'grandchild_container' -class test_vault(Declarative): +class test_vaultcontainer_plugin(Declarative): cleanup_commands = [ ('vaultcontainer_del', [private_container], {'continue': True}), - ('vaultcontainer_del', [shared_container], {'parent': u'/shared/', 'continue': True}), - ('vaultcontainer_del', [service_container], {'parent': u'/services/', 'continue': True}), + ('vaultcontainer_del', [shared_container], {'parent_id': u'/shared/', 'continue': True}), + 
('vaultcontainer_del', [service_container], {'parent_id': u'/services/', 'continue': True}), ('vaultcontainer_del', [base_container], {'force': True, 'continue': True}), ] @@ -49,7 +49,7 @@ class test_vault(Declarative): 'vaultcontainer_find', [], { - 'parent': u'/', + 'parent_id': u'/', }, ), 'expected': { @@ -202,7 +202,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -224,7 +224,7 @@ class test_vault(Declarative): 'vaultcontainer_find', [], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -247,7 +247,7 @@ class test_vault(Declarative): 'vaultcontainer_show', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -268,7 +268,7 @@ class test_vault(Declarative): 'vaultcontainer_mod', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', 'description': u'shared container', }, ), @@ -290,7 +290,7 @@ class test_vault(Declarative): 'vaultcontainer_del', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -308,7 +308,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [service_container], { - 'parent': u'/services/', + 'parent_id': u'/services/', }, ), 'expected': { @@ -349,7 +349,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [child_container], { - 'parent': base_container, + 'parent_id': base_container, }, ), 'expected': { @@ -371,7 +371,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [grandchild_container], { - 'parent': base_container + u'/' + child_container, + 'parent_id': base_container + u'/' + child_container, }, ), 'expected': { diff --git a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py index 68f0fb0d7085be512939e397ea49abbcf3ca3c7b..cbfd231633e7c3c000e57d52d85b83f44f71df3c 100644 --- a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py @@ -29,7 +29,7 @@ test_vaultsecret = u'test_vaultsecret' binary_data = '\x01\x02\x03\x04' text_data = u'secret' -class test_vault(Declarative): +class test_vaultsecret_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), -- 1.9.0 From mbasti at redhat.com Wed Mar 11 14:13:38 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 11 Mar 2015 15:13:38 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <55002E43.7050601@redhat.com> References: <55002E43.7050601@redhat.com> Message-ID: <55004D92.4080700@redhat.com> On 11/03/15 13:00, Martin Babinsky wrote: > These patches solve https://fedorahosted.org/freeipa/ticket/4933. > > They are to be applied to master branch. I will rebase them for > ipa-4-1 after the review. > Thank you for the patches. I have a few comments: IPA-4-1 Replace simple bind with LDAPI is too big change for 4-1, we should start TLS if possible to avoid MINSSF>0 error. The LDAPI patches should go only into IPA master branch. 
You can do something like this: --- a/ipaserver/install/service.py +++ b/ipaserver/install/service.py @@ -107,6 +107,10 @@ class Service(object): if not self.realm: raise errors.NotFound(reason="realm is missing for %s" % (self)) conn = ipaldap.IPAdmin(ldapi=self.ldapi, realm=self.realm) + elif self.dm_password is not None: + conn = ipaldap.IPAdmin(self.fqdn, port=389, + cacert=paths.IPA_CA_CRT, + start_tls=True) else: conn = ipaldap.IPAdmin(self.fqdn, port=389) PATCH 0018: 1) please add there more chatty commit message about using LDAPI 2) I do not like much idea of adding 'realm' kwarg into __init__ method of OpenDNSSECInstance IIUC, it is because get_masters() method, which requires realm to use LDAPI. You can just add ods.realm=, before call get_master() in ipa-dns-install if options.dnssec_master: + ods.realm=api.env.realm dnssec_masters = ods.get_masters() (Honza will change it anyway during refactoring) PATCH 0019: 1) commit message deserves to be more chatty, can you explain there why you removed kerberos cache? Martin^2 -- Martin Basti From mkosek at redhat.com Wed Mar 11 14:28:31 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Mar 2015 15:28:31 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55002A7A.9040409@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> Message-ID: <5500510F.5000100@redhat.com> On 03/11/2015 12:43 PM, Petr Spacek wrote: > On 11.3.2015 11:34, Jan Cholasta wrote: >> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>> On 10.3.2015 20:04, Simo Sorce wrote: >>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>> Hello, >>>>>>>>>>> >>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>> (RFC 3597 >>>>>>>>>>> [0]). Here is the proposal: >>>>>>>>>>> >>>>>>>>>>> LDAP schema >>>>>>>>>>> =========== >>>>>>>>>>> - 1 new attribute: >>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>> EQUALITY >>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>> >>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>> MAY. >>>>>>>>>>> >>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>> 3597 section >>>>>>>>>>> 5 [5]: >>>>>>>>>>> >>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>> >>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>>> >>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>> octets. 
>>>>>>>>>>> >>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>> >>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>> only >>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>> >>>>>>>>>>> Examples from RFC: >>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>> ================================ >>>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>>> record in >>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>> >>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>> >>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>> because >>>>>>>>>>> they do not need to change record values. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>> represented >>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>> inclined to let >>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>> us to >>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>> >>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>> (they own the >>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Attribute usage >>>>>>>>>>> =============== >>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>> RR types >>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>> mnemonic/type name >>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>> to generate >>>>>>>>>>> DNS wire format. >>>>>>>>>>> >>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>> attribute >>>>>>>>>>> sub-types. >>>>>>>>>>> >>>>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>> represented as: >>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> CLI >>>>>>>>>>> === >>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>> >>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>> Record name: owner >>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> ACK? :-) >>>>>>>>>> >>>>>>>>>> Almost. >>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>> case it is not necessary. >>>>>>>>>> >>>>>>>>>> Use: >>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>> >>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>> >>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>> Adding >>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>> access >>>>>>>>> to certain types (e.g. to one from private range). 
>>>>>>>>> >>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>> index >>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>> >>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>> type. >>>>>>>>> >>>>>>>>> >>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>>> >>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>> tree is >>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>> >>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>> wants to >>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>> >>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>> standards compliant clients. >>>>>>> >>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>> leaving Earth one million years ago :-) >>>>>>> >>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>> care about the subtype unless you explicit mention them. >>>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>>> allows us >>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>> same time >>>>>>> allows us to craft ACI which allows access only to >>>>>>> GenericRecord;TYPE65280 for >>>>>>> specific group/user. >>>>>>> >>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>> shadows DNSSEC relevant records ? >>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>> blacklist. >>>>>>> >>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>> >>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>> >>>>> Am I being punished for something? :-) >>>>> >>>>> Anyway, this monstrosity: >>>>> >>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>> >>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>> txtRecord in general. >>>>> >>>>> $ kinit luser >>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>> SASL username: luser at IPA.EXAMPLE >>>>> >>>>> # txt, ipa.example., dns, ipa.example >>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>> objectClass: top >>>>> objectClass: idnsrecord >>>>> tXTRecord;test: Guess what is new here! >>>>> >>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>> subtype ;test. 
>>>>> >>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>> return the object if you have access only to an subtype with existing value >>>>> but not to the 'vanilla' attribute. >>>>> >>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>> ticket. Anyway, this is not something we need for implementation. >>>>> >>>>> >>>>> For completeness: >>>>> >>>>> $ kinit admin >>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>> SASL username: admin at IPA.EXAMPLE >>>>> >>>>> # txt, ipa.example., dns, ipa.example >>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>> objectClass: top >>>>> objectClass: idnsrecord >>>>> tXTRecord: nothing >>>>> tXTRecord: something >>>>> idnsName: txt >>>>> tXTRecord;test: Guess what is new here! >>>>> >>>>> >>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>> whole txtRecord including all its subtypes. >>>>> >>>>> ACK? :-) >>>>> >>>> >>>> ACK. >>> >>> Thank you. Now to the most important and difficult question: >>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>> >>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>> opinion :-) >> >> GenericRecord sounds like something that may be used for any record type, >> known or unknown. I don't think that's what we want. We want users to use it >> only for unknown record types and use appropriate Record attribute for >> known attributes. >> >> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >> word "generic" is used only when referring to encoding of RDATA. > > Okay, be it 'UnknownRecord'. > > Petr^2 Spacek I am just afraid it is quite general name, that may collide with other attribute names. If it would be named "idnsUnknownRecord", it would be more unique. But I assume we cannot add idns prefix for records themselves... Martin From pspacek at redhat.com Wed Mar 11 14:38:27 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 15:38:27 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <5500510F.5000100@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> <5500510F.5000100@redhat.com> Message-ID: <55005363.6000302@redhat.com> On 11.3.2015 15:28, Martin Kosek wrote: > On 03/11/2015 12:43 PM, Petr Spacek wrote: >> On 11.3.2015 11:34, Jan Cholasta wrote: >>> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>>> On 10.3.2015 20:04, Simo Sorce wrote: >>>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>>> Hello, >>>>>>>>>>>> >>>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>>> (RFC 3597 >>>>>>>>>>>> [0]). 
Here is the proposal: >>>>>>>>>>>> >>>>>>>>>>>> LDAP schema >>>>>>>>>>>> =========== >>>>>>>>>>>> - 1 new attribute: >>>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>>> EQUALITY >>>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>>> >>>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>>> MAY. >>>>>>>>>>>> >>>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>>> 3597 section >>>>>>>>>>>> 5 [5]: >>>>>>>>>>>> >>>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>>> >>>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>>>> >>>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>>> octets. >>>>>>>>>>>> >>>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>>> >>>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>>> only >>>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>>> >>>>>>>>>>>> Examples from RFC: >>>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>>> ================================ >>>>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>>>> record in >>>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>>> >>>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>>> >>>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>>> because >>>>>>>>>>>> they do not need to change record values. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>>> represented >>>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>>> inclined to let >>>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>>> us to >>>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>>> >>>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>>> (they own the >>>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Attribute usage >>>>>>>>>>>> =============== >>>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>>> RR types >>>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>>> mnemonic/type name >>>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>>> to generate >>>>>>>>>>>> DNS wire format. >>>>>>>>>>>> >>>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>>> attribute >>>>>>>>>>>> sub-types. >>>>>>>>>>>> >>>>>>>>>>>> E.g. 
a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>>> represented as: >>>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> CLI >>>>>>>>>>>> === >>>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>>> >>>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>>> Record name: owner >>>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> ACK? :-) >>>>>>>>>>> >>>>>>>>>>> Almost. >>>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>>> case it is not necessary. >>>>>>>>>>> >>>>>>>>>>> Use: >>>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>>> >>>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>>> >>>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>>> Adding >>>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>>> access >>>>>>>>>> to certain types (e.g. to one from private range). >>>>>>>>>> >>>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>>> index >>>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>>> >>>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>>> type. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>>>> >>>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>>> tree is >>>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>>> >>>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>>> wants to >>>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>>> >>>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>>> standards compliant clients. >>>>>>>> >>>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>>> leaving Earth one million years ago :-) >>>>>>>> >>>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>>> care about the subtype unless you explicit mention them. >>>>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>>>> allows us >>>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>>> same time >>>>>>>> allows us to craft ACI which allows access only to >>>>>>>> GenericRecord;TYPE65280 for >>>>>>>> specific group/user. >>>>>>>> >>>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>>> shadows DNSSEC relevant records ? >>>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>>> blacklist. 
>>>>>>>> >>>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>>> >>>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>>> >>>>>> Am I being punished for something? :-) >>>>>> >>>>>> Anyway, this monstrosity: >>>>>> >>>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>>> >>>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>>> txtRecord in general. >>>>>> >>>>>> $ kinit luser >>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>> SASL username: luser at IPA.EXAMPLE >>>>>> >>>>>> # txt, ipa.example., dns, ipa.example >>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>> objectClass: top >>>>>> objectClass: idnsrecord >>>>>> tXTRecord;test: Guess what is new here! >>>>>> >>>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>>> subtype ;test. >>>>>> >>>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>>> return the object if you have access only to an subtype with existing value >>>>>> but not to the 'vanilla' attribute. >>>>>> >>>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>>> ticket. Anyway, this is not something we need for implementation. >>>>>> >>>>>> >>>>>> For completeness: >>>>>> >>>>>> $ kinit admin >>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>> SASL username: admin at IPA.EXAMPLE >>>>>> >>>>>> # txt, ipa.example., dns, ipa.example >>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>> objectClass: top >>>>>> objectClass: idnsrecord >>>>>> tXTRecord: nothing >>>>>> tXTRecord: something >>>>>> idnsName: txt >>>>>> tXTRecord;test: Guess what is new here! >>>>>> >>>>>> >>>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>>> whole txtRecord including all its subtypes. >>>>>> >>>>>> ACK? :-) >>>>>> >>>>> >>>>> ACK. >>>> >>>> Thank you. Now to the most important and difficult question: >>>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>>> >>>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>>> opinion :-) >>> >>> GenericRecord sounds like something that may be used for any record type, >>> known or unknown. I don't think that's what we want. We want users to use it >>> only for unknown record types and use appropriate Record attribute for >>> known attributes. >>> >>> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >>> word "generic" is used only when referring to encoding of RDATA. >> >> Okay, be it 'UnknownRecord'. >> >> Petr^2 Spacek > > I am just afraid it is quite general name, that may collide with other > attribute names. If it would be named "idnsUnknownRecord", it would be more > unique. But I assume we cannot add idns prefix for records themselves... Good point. What about UnknownDNSRecord? 
-- Petr^2 Spacek From mkosek at redhat.com Wed Mar 11 14:45:42 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Mar 2015 15:45:42 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55005363.6000302@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> <5500510F.5000100@redhat.com> <55005363.6000302@redhat.com> Message-ID: <55005516.8090007@redhat.com> On 03/11/2015 03:38 PM, Petr Spacek wrote: > On 11.3.2015 15:28, Martin Kosek wrote: >> On 03/11/2015 12:43 PM, Petr Spacek wrote: >>> On 11.3.2015 11:34, Jan Cholasta wrote: >>>> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>>>> On 10.3.2015 20:04, Simo Sorce wrote: >>>>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>>>> Hello, >>>>>>>>>>>>> >>>>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>>>> (RFC 3597 >>>>>>>>>>>>> [0]). Here is the proposal: >>>>>>>>>>>>> >>>>>>>>>>>>> LDAP schema >>>>>>>>>>>>> =========== >>>>>>>>>>>>> - 1 new attribute: >>>>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>>>> EQUALITY >>>>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>>>> >>>>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>>>> MAY. >>>>>>>>>>>>> >>>>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>>>> 3597 section >>>>>>>>>>>>> 5 [5]: >>>>>>>>>>>>> >>>>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>>>> >>>>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>>>>> >>>>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>>>> octets. >>>>>>>>>>>>> >>>>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>>>> >>>>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>>>> only >>>>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>>>> >>>>>>>>>>>>> Examples from RFC: >>>>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>>>> ================================ >>>>>>>>>>>>> Should we include "\#" constant? 
We know that the attribute contains >>>>>>>>>>>>> record in >>>>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>>>> >>>>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>>>> >>>>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>>>> because >>>>>>>>>>>>> they do not need to change record values. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>>>> represented >>>>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>>>> inclined to let >>>>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>>>> us to >>>>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>>>> >>>>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>>>> (they own the >>>>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Attribute usage >>>>>>>>>>>>> =============== >>>>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>>>> RR types >>>>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>>>> mnemonic/type name >>>>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>>>> to generate >>>>>>>>>>>>> DNS wire format. >>>>>>>>>>>>> >>>>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>>>> attribute >>>>>>>>>>>>> sub-types. >>>>>>>>>>>>> >>>>>>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>>>> represented as: >>>>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> CLI >>>>>>>>>>>>> === >>>>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>>>> >>>>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>>>> Record name: owner >>>>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> ACK? :-) >>>>>>>>>>>> >>>>>>>>>>>> Almost. >>>>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>>>> case it is not necessary. >>>>>>>>>>>> >>>>>>>>>>>> Use: >>>>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>>>> >>>>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>>>> >>>>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>>>> Adding >>>>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>>>> access >>>>>>>>>>> to certain types (e.g. to one from private range). >>>>>>>>>>> >>>>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>>>> index >>>>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>>>> >>>>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>>>> type. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>>>> advantage of avoiding them. 
What is the problem with subtypes? >>>>>>>>>> >>>>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>>>> tree is >>>>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>>>> >>>>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>>>> wants to >>>>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>>>> >>>>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>>>> standards compliant clients. >>>>>>>>> >>>>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>>>> leaving Earth one million years ago :-) >>>>>>>>> >>>>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>>>> care about the subtype unless you explicit mention them. >>>>>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>>>>> allows us >>>>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>>>> same time >>>>>>>>> allows us to craft ACI which allows access only to >>>>>>>>> GenericRecord;TYPE65280 for >>>>>>>>> specific group/user. >>>>>>>>> >>>>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>>>> shadows DNSSEC relevant records ? >>>>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>>>> blacklist. >>>>>>>>> >>>>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>>>> >>>>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>>>> >>>>>>> Am I being punished for something? :-) >>>>>>> >>>>>>> Anyway, this monstrosity: >>>>>>> >>>>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>>>> >>>>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>>>> txtRecord in general. >>>>>>> >>>>>>> $ kinit luser >>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>> SASL username: luser at IPA.EXAMPLE >>>>>>> >>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>> objectClass: top >>>>>>> objectClass: idnsrecord >>>>>>> tXTRecord;test: Guess what is new here! >>>>>>> >>>>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>>>> subtype ;test. >>>>>>> >>>>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>>>> return the object if you have access only to an subtype with existing value >>>>>>> but not to the 'vanilla' attribute. >>>>>>> >>>>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>>>> ticket. Anyway, this is not something we need for implementation. 
>>>>>>> >>>>>>> >>>>>>> For completeness: >>>>>>> >>>>>>> $ kinit admin >>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>> SASL username: admin at IPA.EXAMPLE >>>>>>> >>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>> objectClass: top >>>>>>> objectClass: idnsrecord >>>>>>> tXTRecord: nothing >>>>>>> tXTRecord: something >>>>>>> idnsName: txt >>>>>>> tXTRecord;test: Guess what is new here! >>>>>>> >>>>>>> >>>>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>>>> whole txtRecord including all its subtypes. >>>>>>> >>>>>>> ACK? :-) >>>>>>> >>>>>> >>>>>> ACK. >>>>> >>>>> Thank you. Now to the most important and difficult question: >>>>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>>>> >>>>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>>>> opinion :-) >>>> >>>> GenericRecord sounds like something that may be used for any record type, >>>> known or unknown. I don't think that's what we want. We want users to use it >>>> only for unknown record types and use appropriate Record attribute for >>>> known attributes. >>>> >>>> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >>>> word "generic" is used only when referring to encoding of RDATA. >>> >>> Okay, be it 'UnknownRecord'. >>> >>> Petr^2 Spacek >> >> I am just afraid it is quite general name, that may collide with other >> attribute names. If it would be named "idnsUnknownRecord", it would be more >> unique. But I assume we cannot add idns prefix for records themselves... > > Good point. What about UnknownDNSRecord? Maybe. Question is how consistent we want to be with other DNS record names (arecord, ptrrecord) and how consistent we want to be with Uninett schema (details in https://fedorahosted.org/bind-dyndb-ldap/wiki/LDAPSchema) and if this new record would be discussed with them and added to their OID space. Martin From pspacek at redhat.com Wed Mar 11 14:53:03 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 15:53:03 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <550042AE.1000002@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <550042AE.1000002@redhat.com> Message-ID: <550056CF.4040809@redhat.com> On 11.3.2015 14:27, Martin Babinsky wrote: > Actually, now that I think about it, I will try to address some of your comments: >>> + except krbV.Krb5Error, e: >> except ... , ... syntax is not going to work in Python 3. Maybe 'as' would be >> better? >> > AFAIK except ... as ... syntax was added in Python 2.6. Using this syntax can > break older versions of Python. If this is not a concern for us, I will fix > this and use this syntax also in my later patches. Please see http://www.freeipa.org/page/Python_Coding_Style :-) Python 2.7 is required anyway. 
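For the record, a trivial sketch of the spelling we want everywhere
(do_kinit() here is just a placeholder, not a real helper):

    import krbV

    def do_kinit():
        # placeholder for the real call that may fail
        raise krbV.Krb5Error('example failure')

    try:
        do_kinit()
    except krbV.Krb5Error as e:   # 'except krbV.Krb5Error, e' is Python-2-only
        print('kinit failed: %s' % e)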
>>> diff --git a/ipa-client/ipa-install/ipa-client-install >>> b/ipa-client/ipa-install/ipa-client-install >>> index >>> ccaab5536e83b4b6ac60b81132c3455c0af19ae1..c817f9e86dbaa6a2cca7d1a463f53d491fa7badb >>> 100755 >>> --- a/ipa-client/ipa-install/ipa-client-install >>> +++ b/ipa-client/ipa-install/ipa-client-install >>> @@ -91,6 +91,13 @@ def parse_options(): >>> >>> parser.values.ca_cert_file = value >>> >>> + def validate_kinit_attempts_option(option, opt, value, parser): >>> + if value < 1 or value > sys.maxint: >>> + raise OptionValueError( >>> + "%s option has invalid value %d" % (opt, value)) >> It would be nice if the error message said what is the expected value. >> ("Expected integer in range <1,%s>" % sys.maxint) >> >> BTW is it possible to do this using existing option parser? I would expect >> some generic support for type=uint or something similar. >> > OptionParser supports 'type' keywords when adding options, which perform the > neccessary conversions (int(), etc) and validation (see > https://docs.python.org/2/library/optparse.html#optparse-standard-option-types). > However, in this case you still have to manually check for values less that 1 > which do not make sense. AFAIK OptionParser has no built-in way to do this. Okay then. >>> + >>> + parser.values.kinit_attempts = value >>> + >>> parser = IPAOptionParser(version=version.VERSION) >>> >>> basic_group = OptionGroup(parser, "basic options") >>> @@ -144,6 +151,11 @@ def parse_options(): >>> help="do not modify the nsswitch.conf and PAM >>> configuration") >>> basic_group.add_option("-f", "--force", dest="force", >>> action="store_true", >>> default=False, help="force setting of LDAP/Kerberos >>> conf") >>> + basic_group.add_option('--kinit-attempts', dest='kinit_attempts', >>> + action='callback', type='int', default=5, >> >> It would be good to check lockout numbers in default configuration to make >> sure that replication delay will not lock the principal. >> > I'm not sure that I follow, could you be more specific what you mean by this? KDC and DS will lock account after n failed attempts. See $ ipa pwpolicy-find to find out the number in your installation (keytab == password). 
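E.g. something like this on a default install (the global policy is shown
when no group is given):

    $ ipa pwpolicy-show | grep -i 'max failures'

Whatever default we pick for --kinit-attempts should stay safely below that
value, otherwise a slow replica can get the host principal locked out.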
>>> freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch >>> >>> >>> >>> From 912113529138e5b1bd8357ae6a17376cb5d32759 Mon Sep 17 00:00:00 2001 >>> From: Martin Babinsky >>> Date: Mon, 9 Mar 2015 12:54:36 +0100 >>> Subject: [PATCH 3/3] Adopted kinit_keytab and kinit_password for kerberos auth >>> >>> --- >>> daemons/dnssec/ipa-dnskeysync-replica | 6 ++- >>> daemons/dnssec/ipa-dnskeysyncd | 2 +- >>> daemons/dnssec/ipa-ods-exporter | 5 ++- >>> .../certmonger/dogtag-ipa-ca-renew-agent-submit | 3 +- >>> install/restart_scripts/renew_ca_cert | 7 ++-- >>> install/restart_scripts/renew_ra_cert | 4 +- >>> ipa-client/ipa-install/ipa-client-automount | 9 ++-- >>> ipa-client/ipaclient/ipa_certupdate.py | 3 +- >>> ipaserver/rpcserver.py | 49 >>> +++++++++++----------- >>> 9 files changed, 47 insertions(+), 41 deletions(-) >>> >>> diff --git a/daemons/dnssec/ipa-dnskeysync-replica >>> b/daemons/dnssec/ipa-dnskeysync-replica >>> index >>> d04f360e04ee018dcdd1ba9b2ca42b1844617af9..e9cae519202203a10678b7384e5acf748f256427 >>> 100755 >>> --- a/daemons/dnssec/ipa-dnskeysync-replica >>> +++ b/daemons/dnssec/ipa-dnskeysync-replica >>> @@ -139,14 +139,16 @@ log.setLevel(level=logging.DEBUG) >>> # Kerberos initialization >>> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >>> log.debug('Kerberos principal: %s', PRINCIPAL) >>> -ipautil.kinit_hostprincipal(paths.IPA_DNSKEYSYNCD_KEYTAB, WORKDIR, PRINCIPAL) >>> +ccache_filename = os.path.join(WORKDIR, 'ccache') >> BTW I really appreciate this patch set! We finally can use more descriptive >> names like 'ipa-dnskeysync-replica.ccache' which sometimes make debugging >> easier. >> > Named ccaches seems to be a good idea. I will fix this in all places where the > ccache is somehow persistent (and not deleted after installation). Thank you! >>> diff --git a/ipa-client/ipa-install/ipa-client-automount >>> b/ipa-client/ipa-install/ipa-client-automount >>> index >>> 7b9e701dead5f50a033a455eb62e30df78cc0249..19197d34ca580062742b3d7363e5dfb2dad0e4de >>> 100755 >>> --- a/ipa-client/ipa-install/ipa-client-automount >>> +++ b/ipa-client/ipa-install/ipa-client-automount >>> @@ -425,10 +425,11 @@ def main(): >>> os.close(ccache_fd) >>> try: >>> try: >>> - os.environ['KRB5CCNAME'] = ccache_name >>> - ipautil.run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >>> 'host/%s@%s' % (api.env.host, api.env.realm)]) >>> - except ipautil.CalledProcessError, e: >>> - sys.exit("Failed to obtain host TGT.") >>> + host_princ = str('host/%s@%s' % (api.env.host, api.env.realm)) >>> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, >> I'm not sure what is ccache_name here but it should be something descriptive. >> > In this case ccache_name points to a temporary file made by tempfile.mkstemp() > which is cleaned up in a later finally: block (so you will not get to it even > if the whole thing comes crashing). I'm not sure if there's a point in > renaming it. 
Okay, that is exactly where I wasn't sure :-) -- Petr^2 Spacek From mbabinsk at redhat.com Wed Mar 11 15:19:10 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 11 Mar 2015 16:19:10 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <550056CF.4040809@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <550042AE.1000002@redhat.com> <550056CF.4040809@redhat.com> Message-ID: <55005CEE.5050303@redhat.com> On 03/11/2015 03:53 PM, Petr Spacek wrote: > On 11.3.2015 14:27, Martin Babinsky wrote: >> Actually, now that I think about it, I will try to address some of your comments: > >>>> + except krbV.Krb5Error, e: >>> except ... , ... syntax is not going to work in Python 3. Maybe 'as' would be >>> better? >>> >> AFAIK except ... as ... syntax was added in Python 2.6. Using this syntax can >> break older versions of Python. If this is not a concern for us, I will fix >> this and use this syntax also in my later patches. > > Please see http://www.freeipa.org/page/Python_Coding_Style :-) Python 2.7 is > required anyway. > Ah I have forgotten about our Coding Style page completely! Ok 'except ... as e' it is then. >>>> diff --git a/ipa-client/ipa-install/ipa-client-install >>>> b/ipa-client/ipa-install/ipa-client-install >>>> index >>>> ccaab5536e83b4b6ac60b81132c3455c0af19ae1..c817f9e86dbaa6a2cca7d1a463f53d491fa7badb >>>> 100755 >>>> --- a/ipa-client/ipa-install/ipa-client-install >>>> +++ b/ipa-client/ipa-install/ipa-client-install >>>> @@ -91,6 +91,13 @@ def parse_options(): >>>> >>>> parser.values.ca_cert_file = value >>>> >>>> + def validate_kinit_attempts_option(option, opt, value, parser): >>>> + if value < 1 or value > sys.maxint: >>>> + raise OptionValueError( >>>> + "%s option has invalid value %d" % (opt, value)) >>> It would be nice if the error message said what is the expected value. >>> ("Expected integer in range <1,%s>" % sys.maxint) >>> >>> BTW is it possible to do this using existing option parser? I would expect >>> some generic support for type=uint or something similar. >>> >> OptionParser supports 'type' keywords when adding options, which perform the >> neccessary conversions (int(), etc) and validation (see >> https://docs.python.org/2/library/optparse.html#optparse-standard-option-types). >> However, in this case you still have to manually check for values less that 1 >> which do not make sense. AFAIK OptionParser has no built-in way to do this. > > Okay then. > >>>> + >>>> + parser.values.kinit_attempts = value >>>> + >>>> parser = IPAOptionParser(version=version.VERSION) >>>> >>>> basic_group = OptionGroup(parser, "basic options") >>>> @@ -144,6 +151,11 @@ def parse_options(): >>>> help="do not modify the nsswitch.conf and PAM >>>> configuration") >>>> basic_group.add_option("-f", "--force", dest="force", >>>> action="store_true", >>>> default=False, help="force setting of LDAP/Kerberos >>>> conf") >>>> + basic_group.add_option('--kinit-attempts', dest='kinit_attempts', >>>> + action='callback', type='int', default=5, >>> >>> It would be good to check lockout numbers in default configuration to make >>> sure that replication delay will not lock the principal. >>> >> I'm not sure that I follow, could you be more specific what you mean by this? > > KDC and DS will lock account after n failed attempts. See $ ipa pwpolicy-find > to find out the number in your installation (keytab == password). > Ok I will check it. 
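For context, the retry roughly amounts to the following (simplified sketch
only, not the actual patch code -- kinit_keytab argument order as in the
patch, and the loop may well end up inside kinit_keytab itself):

    # assumes the usual imports: sys, time, krbV, ipautil, paths
    for attempt in range(options.kinit_attempts):      # default is 5
        try:
            ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, host_principal)
            break
        except krbV.Krb5Error as e:   # or whatever kinit_keytab raises on failure
            if attempt == options.kinit_attempts - 1:
                sys.exit("Failed to obtain host TGT: %s" % e)
            time.sleep(1)

So the default number of attempts indeed has to stay below the configured
maximum of failed authentications.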
>>>> freeipa-mbabinsk-0017-2-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch >>>> >>>> >>>> >>>> From 912113529138e5b1bd8357ae6a17376cb5d32759 Mon Sep 17 00:00:00 2001 >>>> From: Martin Babinsky >>>> Date: Mon, 9 Mar 2015 12:54:36 +0100 >>>> Subject: [PATCH 3/3] Adopted kinit_keytab and kinit_password for kerberos auth >>>> >>>> --- >>>> daemons/dnssec/ipa-dnskeysync-replica | 6 ++- >>>> daemons/dnssec/ipa-dnskeysyncd | 2 +- >>>> daemons/dnssec/ipa-ods-exporter | 5 ++- >>>> .../certmonger/dogtag-ipa-ca-renew-agent-submit | 3 +- >>>> install/restart_scripts/renew_ca_cert | 7 ++-- >>>> install/restart_scripts/renew_ra_cert | 4 +- >>>> ipa-client/ipa-install/ipa-client-automount | 9 ++-- >>>> ipa-client/ipaclient/ipa_certupdate.py | 3 +- >>>> ipaserver/rpcserver.py | 49 >>>> +++++++++++----------- >>>> 9 files changed, 47 insertions(+), 41 deletions(-) >>>> >>>> diff --git a/daemons/dnssec/ipa-dnskeysync-replica >>>> b/daemons/dnssec/ipa-dnskeysync-replica >>>> index >>>> d04f360e04ee018dcdd1ba9b2ca42b1844617af9..e9cae519202203a10678b7384e5acf748f256427 >>>> 100755 >>>> --- a/daemons/dnssec/ipa-dnskeysync-replica >>>> +++ b/daemons/dnssec/ipa-dnskeysync-replica >>>> @@ -139,14 +139,16 @@ log.setLevel(level=logging.DEBUG) >>>> # Kerberos initialization >>>> PRINCIPAL = str('%s/%s' % (DAEMONNAME, ipalib.api.env.host)) >>>> log.debug('Kerberos principal: %s', PRINCIPAL) >>>> -ipautil.kinit_hostprincipal(paths.IPA_DNSKEYSYNCD_KEYTAB, WORKDIR, PRINCIPAL) >>>> +ccache_filename = os.path.join(WORKDIR, 'ccache') >>> BTW I really appreciate this patch set! We finally can use more descriptive >>> names like 'ipa-dnskeysync-replica.ccache' which sometimes make debugging >>> easier. >>> >> Named ccaches seems to be a good idea. I will fix this in all places where the >> ccache is somehow persistent (and not deleted after installation). > > Thank you! > >>>> diff --git a/ipa-client/ipa-install/ipa-client-automount >>>> b/ipa-client/ipa-install/ipa-client-automount >>>> index >>>> 7b9e701dead5f50a033a455eb62e30df78cc0249..19197d34ca580062742b3d7363e5dfb2dad0e4de >>>> 100755 >>>> --- a/ipa-client/ipa-install/ipa-client-automount >>>> +++ b/ipa-client/ipa-install/ipa-client-automount >>>> @@ -425,10 +425,11 @@ def main(): >>>> os.close(ccache_fd) >>>> try: >>>> try: >>>> - os.environ['KRB5CCNAME'] = ccache_name >>>> - ipautil.run([paths.KINIT, '-k', '-t', paths.KRB5_KEYTAB, >>>> 'host/%s@%s' % (api.env.host, api.env.realm)]) >>>> - except ipautil.CalledProcessError, e: >>>> - sys.exit("Failed to obtain host TGT.") >>>> + host_princ = str('host/%s@%s' % (api.env.host, api.env.realm)) >>>> + ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, >>> I'm not sure what is ccache_name here but it should be something descriptive. >>> >> In this case ccache_name points to a temporary file made by tempfile.mkstemp() >> which is cleaned up in a later finally: block (so you will not get to it even >> if the whole thing comes crashing). I'm not sure if there's a point in >> renaming it. > > Okay, that is exactly where I wasn't sure :-) > This situation (making temporary ccache and then cleaning it up) also occurs in ipa-client-install, renew-ca/ra-cert, dogtag-ipa-ca-renew-agent-submit (what a name!), and ipa_certupdate. 
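The pattern in all of them is roughly this (simplified sketch; kinit_keytab
argument order as in the patch):

    # assumes the usual imports: os, tempfile, ipautil, paths, api
    (ccache_fd, ccache_name) = tempfile.mkstemp()
    os.close(ccache_fd)
    try:
        host_princ = str('host/%s@%s' % (api.env.host, api.env.realm))
        ipautil.kinit_keytab(paths.KRB5_KEYTAB, ccache_name, host_princ)
        os.environ['KRB5CCNAME'] = ccache_name
        # ... do the work that needs the host credentials ...
    finally:
        os.remove(ccache_name)

so the ccache never outlives the script and a descriptive name would not buy
us much in these cases.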
-- Martin^3 Babinsky From jcholast at redhat.com Wed Mar 11 15:20:28 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 11 Mar 2015 16:20:28 +0100 Subject: [Freeipa-devel] [PATCHES 306-316] Automated migration tool from Winsync In-Reply-To: <54FF0F46.3010109@redhat.com> References: <54FD8369.10803@redhat.com> <54FF0F46.3010109@redhat.com> Message-ID: <55005D3C.5090304@redhat.com> Hi, Dne 10.3.2015 v 16:35 Tomas Babej napsal(a): > > On 03/09/2015 12:26 PM, Tomas Babej wrote: >> Hi, >> >> this couple of patches provides a initial implementation of the >> winsync migration tool: >> >> https://fedorahosted.org/freeipa/ticket/4524 >> >> Some parts could use some polishing, but this is a sound foundation. >> >> Tomas >> >> >> > > Attaching one more patch to the bundle. This one should make the winsync > tool readily available after install. > > Tomas > > Nitpicks: The winsync_migrate module should be in ipaserver.install. Also I don't see why it has to be a package when there is just one short file in it. By convention, the AdminTool subclass should be named WinsyncMigrate, or the tool should be named ipa-migrate-winsync. Honza -- Jan Cholasta From pspacek at redhat.com Wed Mar 11 15:55:59 2015 From: pspacek at redhat.com (Petr Spacek) Date: Wed, 11 Mar 2015 16:55:59 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55005516.8090007@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> <5500510F.5000100@redhat.com> <55005363.6000302@redhat.com> <55005516.8090007@redhat.com> Message-ID: <5500658F.2090601@redhat.com> On 11.3.2015 15:45, Martin Kosek wrote: > On 03/11/2015 03:38 PM, Petr Spacek wrote: >> On 11.3.2015 15:28, Martin Kosek wrote: >>> On 03/11/2015 12:43 PM, Petr Spacek wrote: >>>> On 11.3.2015 11:34, Jan Cholasta wrote: >>>>> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>>>>> On 10.3.2015 20:04, Simo Sorce wrote: >>>>>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>>>>> Hello, >>>>>>>>>>>>>> >>>>>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>>>>> (RFC 3597 >>>>>>>>>>>>>> [0]). Here is the proposal: >>>>>>>>>>>>>> >>>>>>>>>>>>>> LDAP schema >>>>>>>>>>>>>> =========== >>>>>>>>>>>>>> - 1 new attribute: >>>>>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>>>>> EQUALITY >>>>>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>>>>> >>>>>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>>>>> MAY. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>>>>> 3597 section >>>>>>>>>>>>>> 5 [5]: >>>>>>>>>>>>>> >>>>>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>>>>> >>>>>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>>>>>> >>>>>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>>>>> octets. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>>>>> >>>>>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>>>>> only >>>>>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Examples from RFC: >>>>>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>>>>> ================================ >>>>>>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>>>>>> record in >>>>>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>>>>> >>>>>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>>>>> because >>>>>>>>>>>>>> they do not need to change record values. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>>>>> represented >>>>>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>>>>> inclined to let >>>>>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>>>>> us to >>>>>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>>>>> >>>>>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>>>>> (they own the >>>>>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> Attribute usage >>>>>>>>>>>>>> =============== >>>>>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>>>>> RR types >>>>>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>>>>> mnemonic/type name >>>>>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>>>>> to generate >>>>>>>>>>>>>> DNS wire format. >>>>>>>>>>>>>> >>>>>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>>>>> attribute >>>>>>>>>>>>>> sub-types. >>>>>>>>>>>>>> >>>>>>>>>>>>>> E.g. 
a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>>>>> represented as: >>>>>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> CLI >>>>>>>>>>>>>> === >>>>>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>>>>> >>>>>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>>>>> Record name: owner >>>>>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> ACK? :-) >>>>>>>>>>>>> >>>>>>>>>>>>> Almost. >>>>>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>>>>> case it is not necessary. >>>>>>>>>>>>> >>>>>>>>>>>>> Use: >>>>>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>>>>> >>>>>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>>>>> >>>>>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>>>>> Adding >>>>>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>>>>> access >>>>>>>>>>>> to certain types (e.g. to one from private range). >>>>>>>>>>>> >>>>>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>>>>> index >>>>>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>>>>> >>>>>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>>>>> type. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>>>>>> >>>>>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>>>>> tree is >>>>>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>>>>> >>>>>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>>>>> wants to >>>>>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>>>>> >>>>>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>>>>> standards compliant clients. >>>>>>>>>> >>>>>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>>>>> leaving Earth one million years ago :-) >>>>>>>>>> >>>>>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>>>>> care about the subtype unless you explicit mention them. >>>>>>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>>>>>> allows us >>>>>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>>>>> same time >>>>>>>>>> allows us to craft ACI which allows access only to >>>>>>>>>> GenericRecord;TYPE65280 for >>>>>>>>>> specific group/user. 
>>>>>>>>>> >>>>>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>>>>> shadows DNSSEC relevant records ? >>>>>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>>>>> blacklist. >>>>>>>>>> >>>>>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>>>>> >>>>>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>>>>> >>>>>>>> Am I being punished for something? :-) >>>>>>>> >>>>>>>> Anyway, this monstrosity: >>>>>>>> >>>>>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>>>>> >>>>>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>>>>> txtRecord in general. >>>>>>>> >>>>>>>> $ kinit luser >>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>> SASL username: luser at IPA.EXAMPLE >>>>>>>> >>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>> objectClass: top >>>>>>>> objectClass: idnsrecord >>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>> >>>>>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>>>>> subtype ;test. >>>>>>>> >>>>>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>>>>> return the object if you have access only to an subtype with existing value >>>>>>>> but not to the 'vanilla' attribute. >>>>>>>> >>>>>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>>>>> ticket. Anyway, this is not something we need for implementation. >>>>>>>> >>>>>>>> >>>>>>>> For completeness: >>>>>>>> >>>>>>>> $ kinit admin >>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>> SASL username: admin at IPA.EXAMPLE >>>>>>>> >>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>> objectClass: top >>>>>>>> objectClass: idnsrecord >>>>>>>> tXTRecord: nothing >>>>>>>> tXTRecord: something >>>>>>>> idnsName: txt >>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>> >>>>>>>> >>>>>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>>>>> whole txtRecord including all its subtypes. >>>>>>>> >>>>>>>> ACK? :-) >>>>>>>> >>>>>>> >>>>>>> ACK. >>>>>> >>>>>> Thank you. Now to the most important and difficult question: >>>>>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>>>>> >>>>>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>>>>> opinion :-) >>>>> >>>>> GenericRecord sounds like something that may be used for any record type, >>>>> known or unknown. I don't think that's what we want. We want users to use it >>>>> only for unknown record types and use appropriate Record attribute for >>>>> known attributes. >>>>> >>>>> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >>>>> word "generic" is used only when referring to encoding of RDATA. >>>> >>>> Okay, be it 'UnknownRecord'. 
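On the copy&paste point made in the proposal above: the RFC 3597 text form round-trips through python-dns without any IPA-specific glue, which is much of the appeal of storing it verbatim. A rough sketch (the type number 65280 and the RDATA are arbitrary example values, and the exact python-dns output formatting may differ slightly):

import dns.rdata
import dns.rdataclass
import dns.rdatatype

# parse the generic '\# <length> <hex>' form for an RR type that
# python-dns itself knows nothing about
rdata = dns.rdata.from_text(dns.rdataclass.IN, 65280, r'\# 4 0A000001')

print dns.rdatatype.to_text(65280)  # 'TYPE65280', matching the proposed subtype
print rdata.to_text()               # roughly '\# 4 0a000001'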
>>>> >>>> Petr^2 Spacek >>> >>> I am just afraid it is quite general name, that may collide with other >>> attribute names. If it would be named "idnsUnknownRecord", it would be more >>> unique. But I assume we cannot add idns prefix for records themselves... >> >> Good point. What about UnknownDNSRecord? > > Maybe. Question is how consistent we want to be with other DNS record names > (arecord, ptrrecord) and how consistent we want to be with Uninett schema > (details in https://fedorahosted.org/bind-dyndb-ldap/wiki/LDAPSchema) and if > this new record would be discussed with them and added to their OID space. Currently my intention is to contact Uninett and try to standardize it when we finally agree on something. -- Petr^2 Spacek From mkosek at redhat.com Wed Mar 11 15:59:59 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Mar 2015 16:59:59 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <55002E43.7050601@redhat.com> References: <55002E43.7050601@redhat.com> Message-ID: <5500667F.9060404@redhat.com> On 03/11/2015 01:00 PM, Martin Babinsky wrote: > These patches solve https://fedorahosted.org/freeipa/ticket/4933. > > They are to be applied to master branch. I will rebase them for ipa-4-1 after > the review. > > > It looks like you will also fix ipa-dns-install part of https://fedorahosted.org/freeipa/ticket/2957. So this ticket should be also referred I think (I think we should not close it until other referred tools are also fixed). Martin From mkosek at redhat.com Wed Mar 11 16:02:30 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 11 Mar 2015 17:02:30 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <5500658F.2090601@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> <5500510F.5000100@redhat.com> <55005363.6000302@redhat.com> <55005516.8090007@redhat.com> <5500658F.2090601@redhat.com> Message-ID: <55006716.4020501@redhat.com> On 03/11/2015 04:55 PM, Petr Spacek wrote: > On 11.3.2015 15:45, Martin Kosek wrote: >> On 03/11/2015 03:38 PM, Petr Spacek wrote: >>> On 11.3.2015 15:28, Martin Kosek wrote: >>>> On 03/11/2015 12:43 PM, Petr Spacek wrote: >>>>> On 11.3.2015 11:34, Jan Cholasta wrote: >>>>>> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>>>>>> On 10.3.2015 20:04, Simo Sorce wrote: >>>>>>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>>>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>>>>>> Hello, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>>>>>> (RFC 3597 >>>>>>>>>>>>>>> [0]). 
Here is the proposal: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> LDAP schema >>>>>>>>>>>>>>> =========== >>>>>>>>>>>>>>> - 1 new attribute: >>>>>>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>>>>>> EQUALITY >>>>>>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>>>>>> MAY. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>>>>>> 3597 section >>>>>>>>>>>>>>> 5 [5]: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>>>>>> defined herein rather than a traditional type-specific encoding. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>>>>>> octets. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>>>>>> only >>>>>>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Examples from RFC: >>>>>>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>>>>>> ================================ >>>>>>>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>>>>>>> record in >>>>>>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>>>>>> because >>>>>>>>>>>>>>> they do not need to change record values. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>>>>>> represented >>>>>>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>>>>>> inclined to let >>>>>>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>>>>>> us to >>>>>>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>>>>>> (they own the >>>>>>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Attribute usage >>>>>>>>>>>>>>> =============== >>>>>>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>>>>>> RR types >>>>>>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>>>>>> mnemonic/type name >>>>>>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>>>>>> to generate >>>>>>>>>>>>>>> DNS wire format. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>>>>>> attribute >>>>>>>>>>>>>>> sub-types. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>>>>>> represented as: >>>>>>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> CLI >>>>>>>>>>>>>>> === >>>>>>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>>>>>> Record name: owner >>>>>>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ACK? :-) >>>>>>>>>>>>>> >>>>>>>>>>>>>> Almost. >>>>>>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>>>>>> case it is not necessary. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Use: >>>>>>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>>>>>> >>>>>>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>>>>>> >>>>>>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>>>>>> Adding >>>>>>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>>>>>> access >>>>>>>>>>>>> to certain types (e.g. to one from private range). >>>>>>>>>>>>> >>>>>>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>>>>>> index >>>>>>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>>>>>> >>>>>>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>>>>>> type. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>>>>>>> >>>>>>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>>>>>> tree is >>>>>>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>>>>>> >>>>>>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>>>>>> wants to >>>>>>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>>>>>> >>>>>>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>>>>>> standards compliant clients. >>>>>>>>>>> >>>>>>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>>>>>> leaving Earth one million years ago :-) >>>>>>>>>>> >>>>>>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>>>>>> care about the subtype unless you explicit mention them. >>>>>>>>>>> IMHO that is exactly what I would like to see for GenericRecord. 
It >>>>>>>>>>> allows us >>>>>>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>>>>>> same time >>>>>>>>>>> allows us to craft ACI which allows access only to >>>>>>>>>>> GenericRecord;TYPE65280 for >>>>>>>>>>> specific group/user. >>>>>>>>>>> >>>>>>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>>>>>> shadows DNSSEC relevant records ? >>>>>>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>>>>>> blacklist. >>>>>>>>>>> >>>>>>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>>>>>> >>>>>>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>>>>>> >>>>>>>>> Am I being punished for something? :-) >>>>>>>>> >>>>>>>>> Anyway, this monstrosity: >>>>>>>>> >>>>>>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>>>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>>>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>>>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>>>>>> >>>>>>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>>>>>> txtRecord in general. >>>>>>>>> >>>>>>>>> $ kinit luser >>>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>>> SASL username: luser at IPA.EXAMPLE >>>>>>>>> >>>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>>> objectClass: top >>>>>>>>> objectClass: idnsrecord >>>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>>> >>>>>>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>>>>>> subtype ;test. >>>>>>>>> >>>>>>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>>>>>> return the object if you have access only to an subtype with existing value >>>>>>>>> but not to the 'vanilla' attribute. >>>>>>>>> >>>>>>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>>>>>> ticket. Anyway, this is not something we need for implementation. >>>>>>>>> >>>>>>>>> >>>>>>>>> For completeness: >>>>>>>>> >>>>>>>>> $ kinit admin >>>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>>> SASL username: admin at IPA.EXAMPLE >>>>>>>>> >>>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>>> objectClass: top >>>>>>>>> objectClass: idnsrecord >>>>>>>>> tXTRecord: nothing >>>>>>>>> tXTRecord: something >>>>>>>>> idnsName: txt >>>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>>> >>>>>>>>> >>>>>>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>>>>>> whole txtRecord including all its subtypes. >>>>>>>>> >>>>>>>>> ACK? :-) >>>>>>>>> >>>>>>>> >>>>>>>> ACK. >>>>>>> >>>>>>> Thank you. Now to the most important and difficult question: >>>>>>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>>>>>> >>>>>>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>>>>>> opinion :-) >>>>>> >>>>>> GenericRecord sounds like something that may be used for any record type, >>>>>> known or unknown. I don't think that's what we want. 
We want users to use it >>>>>> only for unknown record types and use appropriate Record attribute for >>>>>> known attributes. >>>>>> >>>>>> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >>>>>> word "generic" is used only when referring to encoding of RDATA. >>>>> >>>>> Okay, be it 'UnknownRecord'. >>>>> >>>>> Petr^2 Spacek >>>> >>>> I am just afraid it is quite general name, that may collide with other >>>> attribute names. If it would be named "idnsUnknownRecord", it would be more >>>> unique. But I assume we cannot add idns prefix for records themselves... >>> >>> Good point. What about UnknownDNSRecord? >> >> Maybe. Question is how consistent we want to be with other DNS record names >> (arecord, ptrrecord) and how consistent we want to be with Uninett schema >> (details in https://fedorahosted.org/bind-dyndb-ldap/wiki/LDAPSchema) and if >> this new record would be discussed with them and added to their OID space. > > Currently my intention is to contact Uninett and try to standardize it when we > finally agree on something. > I think we agreed on UnknownRecord or it's variant, so please feel free to contact and ask them. I think they will be more surprised with the subtype than with the actual name :-) From mbabinsk at redhat.com Wed Mar 11 16:35:09 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 11 Mar 2015 17:35:09 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5500667F.9060404@redhat.com> References: <55002E43.7050601@redhat.com> <5500667F.9060404@redhat.com> Message-ID: <55006EBD.4040205@redhat.com> On 03/11/2015 04:59 PM, Martin Kosek wrote: > On 03/11/2015 01:00 PM, Martin Babinsky wrote: >> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >> >> They are to be applied to master branch. I will rebase them for ipa-4-1 after >> the review. >> >> >> > > It looks like you will also fix ipa-dns-install part of > https://fedorahosted.org/freeipa/ticket/2957. > > So this ticket should be also referred I think (I think we should not close it > until other referred tools are also fixed). > > Martin > Yes that is our evil plan: For 4.1 we will issue only a (relatively) simple fix regarding https://fedorahosted.org/freeipa/ticket/4933 which preserves the -p option. For master we will make ipa-dns-install use LDAPI w/ autobind and make -p option deprecated. The other tools will hopefully meet the same fate once the install/upgrade refactoring is made. -- Martin^3 Babinsky From lslebodn at redhat.com Thu Mar 12 08:59:30 2015 From: lslebodn at redhat.com (Lukas Slebodnik) Date: Thu, 12 Mar 2015 09:59:30 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150306140505.GA2345@mail.corp.redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> Message-ID: <20150312085930.GA18188@mail.corp.redhat.com> On (06/03/15 15:05), Lukas Slebodnik wrote: >On (05/03/15 16:20), Petr Vobornik wrote: >>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>ehlo, >>>>> >>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. 
>>>>> >>>>>I think the most critical is 1st patch >>>>> >>>>>sh$ git grep "SSSDConfig" | grep import >>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>> >>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 (2013-04-02) >>>>>but it was not explicitely required. >>>>> >>>>>The latest python3 changes in sssd (fedora 22) is just a result of negligent >>>>>packaging of freeipa. >>>>> >>>>>LS >>>>> >>>> >>>>Fedora 22 was amended. >>>> >>>>Patch 1: ACK >>>> >>>>Patch 2: ACK >>>> >>>>Patch3: >>>>the package name is libsss_nss_idmap-python not python-libsss_nss_idmap >>>>which already is required in adtrust package >>>In sssd upstream we decided to rename package libsss_nss_idmap-python to >>>python-libsss_nss_idmap according to new rpm python guidelines. >>>The python3 version has alredy correct name. >>> >>>We will rename package in downstream with next major release (1.13). >>>Of course it we will add "Provides: libsss_nss_idmap-python". >>> >>>We can push 3rd patch later or I can update 3rd patch. >>>What do you prefer? >>> >>>Than you very much for review. >>> >>>LS >>> >> >>Patch 3 should be updated to not forget the remaining change in ipa-python >>package. >> >>It then should be updated downstream and master when 1.13 is released in >>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version required >>by master. > >Fixed. > >BTW Why ther is a pylint comment for some sssd modules >I did not kave any pylint problems after removing comment. > >ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 > > >And why are these modules optional (try except) > >ipalib/plugins/trust.py-31-try: >ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >ipalib/plugins/trust.py-33- _murmur_installed = True >ipalib/plugins/trust.py-34-except Exception, e: >ipalib/plugins/trust.py-35- _murmur_installed = False >ipalib/plugins/trust.py-36- >ipalib/plugins/trust.py-37-try: >ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: disable=F0401 >ipalib/plugins/trust.py-39- _nss_idmap_installed = True >ipalib/plugins/trust.py-40-except Exception, e: >ipalib/plugins/trust.py-41- _nss_idmap_installed = False > >LS >>From 41523fd6ab9ea95644cac1a6cd20386a620a1df5 Mon Sep 17 00:00:00 2001 >From: Lukas Slebodnik >Date: Fri, 27 Feb 2015 20:40:06 +0100 >Subject: [PATCH 1/3] SPEC: Explicitly requires python-sssdconfig bump LS From mkosek at redhat.com Thu Mar 12 08:59:37 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 12 Mar 2015 09:59:37 +0100 Subject: [Freeipa-devel] Rename IPAv3_AD_trust_setup? In-Reply-To: <20150310080611.GI25455@redhat.com> References: <54FEA28B.6020804@redhat.com> <20150310080611.GI25455@redhat.com> Message-ID: <55015579.2000204@redhat.com> On 03/10/2015 09:06 AM, Alexander Bokovoy wrote: > On Tue, 10 Mar 2015, Martin Kosek wrote: >> Hi, >> >> I just saw someone refer to [1] with respect to FreeIPA 4.x. Would it make >> sense to just rename the page from [1] to [2] (with keeping redirect of course)? >> >> This would move the page from Howto/ name space which we use for community >> HOWTO articles and move it to standard default name space. We would also not >> confuse people with the "v3" version, since it applies to all our FreeIPA >> versions, including v4. 
>> >> [1] http://www.freeipa.org/page/Howto/IPAv3_AD_trust_setup >> [2] http://www.freeipa.org/page/Active_Directory_Trust_setup > I'm fine if there is a redirect. Ok, good. I renamed the page and fixed the links in our wiki. This is the new link: http://www.freeipa.org/page/Active_Directory_trust_setup I actually changed all the links to point to this page, some were still pointing to the old, obsolete HOWTO. This one was moved to Obsolete name space, so that we do not clutter the main name space: http://www.freeipa.org/page/Obsolete:IPAv3_testing_AD_trust HTH. If you see other issues, please tell me (or fix it right away). Martin From pspacek at redhat.com Thu Mar 12 11:21:08 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 12 Mar 2015 12:21:08 +0100 Subject: [Freeipa-devel] Generic support for unknown DNS RR types (RFC 3597) In-Reply-To: <55006716.4020501@redhat.com> References: <54FF007F.7060701@redhat.com> <1425999202.4735.90.camel@willson.usersys.redhat.com> <54FF0B8A.1090308@redhat.com> <1426005332.4735.92.camel@willson.usersys.redhat.com> <54FF292C.4020805@redhat.com> <1426008965.4735.95.camel@willson.usersys.redhat.com> <54FF36E8.1040707@redhat.com> <1426014271.4735.107.camel@willson.usersys.redhat.com> <55001517.1060501@redhat.com> <55001A49.9050001@redhat.com> <55002A7A.9040409@redhat.com> <5500510F.5000100@redhat.com> <55005363.6000302@redhat.com> <55005516.8090007@redhat.com> <5500658F.2090601@redhat.com> <55006716.4020501@redhat.com> Message-ID: <550176A4.5050807@redhat.com> On 11.3.2015 17:02, Martin Kosek wrote: > On 03/11/2015 04:55 PM, Petr Spacek wrote: >> On 11.3.2015 15:45, Martin Kosek wrote: >>> On 03/11/2015 03:38 PM, Petr Spacek wrote: >>>> On 11.3.2015 15:28, Martin Kosek wrote: >>>>> On 03/11/2015 12:43 PM, Petr Spacek wrote: >>>>>> On 11.3.2015 11:34, Jan Cholasta wrote: >>>>>>> Dne 11.3.2015 v 11:12 Petr Spacek napsal(a): >>>>>>>> On 10.3.2015 20:04, Simo Sorce wrote: >>>>>>>>> On Tue, 2015-03-10 at 19:24 +0100, Petr Spacek wrote: >>>>>>>>>> On 10.3.2015 18:36, Simo Sorce wrote: >>>>>>>>>>> On Tue, 2015-03-10 at 18:26 +0100, Petr Spacek wrote: >>>>>>>>>>>> On 10.3.2015 17:35, Simo Sorce wrote: >>>>>>>>>>>>> On Tue, 2015-03-10 at 16:19 +0100, Petr Spacek wrote: >>>>>>>>>>>>>> On 10.3.2015 15:53, Simo Sorce wrote: >>>>>>>>>>>>>>> On Tue, 2015-03-10 at 15:32 +0100, Petr Spacek wrote: >>>>>>>>>>>>>>>> Hello, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I would like to discuss Generic support for unknown DNS RR types >>>>>>>>>>>>>>>> (RFC 3597 >>>>>>>>>>>>>>>> [0]). Here is the proposal: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> LDAP schema >>>>>>>>>>>>>>>> =========== >>>>>>>>>>>>>>>> - 1 new attribute: >>>>>>>>>>>>>>>> ( NAME 'GenericRecord' DESC 'unknown DNS record, RFC 3597' >>>>>>>>>>>>>>>> EQUALITY >>>>>>>>>>>>>>>> caseIgnoreIA5Match SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 ) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The attribute should be added to existing idnsRecord object class as >>>>>>>>>>>>>>>> MAY. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> This new attribute should contain data encoded according to ?RFC >>>>>>>>>>>>>>>> 3597 section >>>>>>>>>>>>>>>> 5 [5]: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The RDATA section of an RR of unknown type is represented as a >>>>>>>>>>>>>>>> sequence of white space separated words as follows: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The special token \# (a backslash immediately followed by a hash >>>>>>>>>>>>>>>> sign), which identifies the RDATA as having the generic encoding >>>>>>>>>>>>>>>> defined herein rather than a traditional type-specific encoding. 
>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> An unsigned decimal integer specifying the RDATA length in >>>>>>>>>>>>>>>> octets. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Zero or more words of hexadecimal data encoding the actual RDATA >>>>>>>>>>>>>>>> field, each containing an even number of hexadecimal digits. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> If the RDATA is of zero length, the text representation contains >>>>>>>>>>>>>>>> only >>>>>>>>>>>>>>>> the \# token and the single zero representing the length. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Examples from RFC: >>>>>>>>>>>>>>>> a.example. CLASS32 TYPE731 \# 6 abcd ( >>>>>>>>>>>>>>>> ef 01 23 45 ) >>>>>>>>>>>>>>>> b.example. HS TYPE62347 \# 0 >>>>>>>>>>>>>>>> e.example. IN A \# 4 0A000001 >>>>>>>>>>>>>>>> e.example. CLASS1 TYPE1 10.0.0.2 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Open questions about LDAP format >>>>>>>>>>>>>>>> ================================ >>>>>>>>>>>>>>>> Should we include "\#" constant? We know that the attribute contains >>>>>>>>>>>>>>>> record in >>>>>>>>>>>>>>>> RFC 3597 syntax so it is not strictly necessary. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I think it would be better to follow RFC 3597 format. It allows blind >>>>>>>>>>>>>>>> copy&pasting from other tools, including direct calls to python-dns. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> It also eases writing conversion tools between DNS and LDAP format >>>>>>>>>>>>>>>> because >>>>>>>>>>>>>>>> they do not need to change record values. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Another question is if we should explicitly include length of data >>>>>>>>>>>>>>>> represented >>>>>>>>>>>>>>>> in hexadecimal notation as a decimal number. I'm very strongly >>>>>>>>>>>>>>>> inclined to let >>>>>>>>>>>>>>>> it there because it is very good sanity check and again, it allows >>>>>>>>>>>>>>>> us to >>>>>>>>>>>>>>>> re-use existing tools including parsers. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I will ask Uninett.no for standardization after we sort this out >>>>>>>>>>>>>>>> (they own the >>>>>>>>>>>>>>>> OID arc we use for DNS records). >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Attribute usage >>>>>>>>>>>>>>>> =============== >>>>>>>>>>>>>>>> Every DNS RR type has assigned a number [1] which is used on wire. >>>>>>>>>>>>>>>> RR types >>>>>>>>>>>>>>>> which are unknown to the server cannot be named by their >>>>>>>>>>>>>>>> mnemonic/type name >>>>>>>>>>>>>>>> because server would not be able to do name->number conversion and >>>>>>>>>>>>>>>> to generate >>>>>>>>>>>>>>>> DNS wire format. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> As a result, we have to encode the RR type number somehow. Let's use >>>>>>>>>>>>>>>> attribute >>>>>>>>>>>>>>>> sub-types. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> E.g. a record with type 65280 and hex value 0A000001 will be >>>>>>>>>>>>>>>> represented as: >>>>>>>>>>>>>>>> GenericRecord;TYPE65280: \# 4 0A000001 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> CLI >>>>>>>>>>>>>>>> === >>>>>>>>>>>>>>>> $ ipa dnsrecord-add zone.example owner \ >>>>>>>>>>>>>>>> --generic-type=65280 --generic-data='\# 4 0A000001' >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> $ ipa dnsrecord-show zone.example owner >>>>>>>>>>>>>>>> Record name: owner >>>>>>>>>>>>>>>> TYPE65280 Record: \# 4 0A000001 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ACK? :-) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Almost. >>>>>>>>>>>>>>> We should refrain from using subtypes when not necessary, and in this >>>>>>>>>>>>>>> case it is not necessary. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Use: >>>>>>>>>>>>>>> GenericRecord: 65280 \# 4 0A000001 >>>>>>>>>>>>>> >>>>>>>>>>>>>> I was considering that too but I can see two main drawbacks: >>>>>>>>>>>>>> >>>>>>>>>>>>>> 1) It does not work very well with DS ACI (targetattrfilter, anyone?). >>>>>>>>>>>>>> Adding >>>>>>>>>>>>>> generic write access to GenericRecord == ability to add TLSA records too, >>>>>>>>>>>>>> which you may not want. IMHO it is perfectly reasonable to limit write >>>>>>>>>>>>>> access >>>>>>>>>>>>>> to certain types (e.g. to one from private range). >>>>>>>>>>>>>> >>>>>>>>>>>>>> 2) We would need a separate substring index for emulating filters like >>>>>>>>>>>>>> (type==65280). AFAIK GenericRecord;TYPE65280 should work with presence >>>>>>>>>>>>>> index >>>>>>>>>>>>>> which will be handy one day when we decide to handle upgrades like >>>>>>>>>>>>>> GenericRecord;TYPE256->UriRecord. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Another (less important) annoyance is that conversion tools would have to >>>>>>>>>>>>>> mangle record data instead of just converting attribute name->record >>>>>>>>>>>>>> type. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> I can be convinced that subtypes are not necessary but I do not see clear >>>>>>>>>>>>>> advantage of avoiding them. What is the problem with subtypes? >>>>>>>>>>>>> >>>>>>>>>>>>> Poor support by most clients, so it is generally discouraged. >>>>>>>>>>>> Hmm, it does not sound like a thing we should care in this case. DNS >>>>>>>>>>>> tree is >>>>>>>>>>>> not meant for direct consumption by LDAP clients (compare with cn=compat). >>>>>>>>>>>> >>>>>>>>>>>> IMHO the only two clients we should care are FreeIPA framework and >>>>>>>>>>>> bind-dyndb-ldap so I do not see this as a problem, really. If someone >>>>>>>>>>>> wants to >>>>>>>>>>>> access DNS tree by hand - sure, use a standard compliant client! >>>>>>>>>>>> >>>>>>>>>>>> Working ACI and LDAP filters sounds like good price for supporting only >>>>>>>>>>>> standards compliant clients. >>>>>>>>>>>> >>>>>>>>>>>> AFAIK OpenLDAP works well and I suspect that ApacheDS will work too because >>>>>>>>>>>> Eclipse has nice support for sub-types built-in. If I can draw some >>>>>>>>>>>> conclusions from that, sub-types are not a thing aliens forgot here when >>>>>>>>>>>> leaving Earth one million years ago :-) >>>>>>>>>>>> >>>>>>>>>>>>> The problem with subtypes and ACIs though is that I think ACIs do not >>>>>>>>>>>>> care about the subtype unless you explicit mention them. >>>>>>>>>>>> IMHO that is exactly what I would like to see for GenericRecord. It >>>>>>>>>>>> allows us >>>>>>>>>>>> to write ACI which allows admins to add any GenericRecord and at the >>>>>>>>>>>> same time >>>>>>>>>>>> allows us to craft ACI which allows access only to >>>>>>>>>>>> GenericRecord;TYPE65280 for >>>>>>>>>>>> specific group/user. >>>>>>>>>>>> >>>>>>>>>>>>> So perhaps bind_dyndb_ldap should refuse to use a generic type that >>>>>>>>>>>>> shadows DNSSEC relevant records ? >>>>>>>>>>>> Sorry, this cannot possibly work because it depends on up-to-date >>>>>>>>>>>> blacklist. >>>>>>>>>>>> >>>>>>>>>>>> How would the plugin released in 2015 know that highly sensitive OPENPGPKEY >>>>>>>>>>>> type will be standardized in 2016 and assigned number XYZ? >>>>>>>>>>> >>>>>>>>>>> Ok, show me an example ACI that works and you get my ack :) >>>>>>>>>> >>>>>>>>>> Am I being punished for something? 
:-) >>>>>>>>>> >>>>>>>>>> Anyway, this monstrosity: >>>>>>>>>> >>>>>>>>>> (targetattr = "objectclass || txtRecord;test")(target = >>>>>>>>>> "ldap:///idnsname=*,cn=dns,dc=ipa,dc=example")(version 3.0;acl >>>>>>>>>> "permission:luser: Read DNS Entries";allow (compare,read,search) userdn = >>>>>>>>>> "ldap:///uid=luser,cn=users,cn=accounts,dc=ipa,dc=example";) >>>>>>>>>> >>>>>>>>>> Gives 'luser' read access only to txtRecord;test and *not* to the whole >>>>>>>>>> txtRecord in general. >>>>>>>>>> >>>>>>>>>> $ kinit luser >>>>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>>>> SASL username: luser at IPA.EXAMPLE >>>>>>>>>> >>>>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>>>> objectClass: top >>>>>>>>>> objectClass: idnsrecord >>>>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>>>> >>>>>>>>>> Filter '(tXTRecord;test=*)' works as expected and returns only objects with >>>>>>>>>> subtype ;test. >>>>>>>>>> >>>>>>>>>> The only weird thing I noticed is that search filter '(tXTRecord=*)' does not >>>>>>>>>> return the object if you have access only to an subtype with existing value >>>>>>>>>> but not to the 'vanilla' attribute. >>>>>>>>>> >>>>>>>>>> Maybe it is a bug? I will think about it for a while and possibly open a >>>>>>>>>> ticket. Anyway, this is not something we need for implementation. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> For completeness: >>>>>>>>>> >>>>>>>>>> $ kinit admin >>>>>>>>>> $ ldapsearch -Y GSSAPI -s base -b >>>>>>>>>> 'idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example' >>>>>>>>>> SASL username: admin at IPA.EXAMPLE >>>>>>>>>> >>>>>>>>>> # txt, ipa.example., dns, ipa.example >>>>>>>>>> dn: idnsname=txt,idnsname=ipa.example.,cn=dns,dc=ipa,dc=example >>>>>>>>>> objectClass: top >>>>>>>>>> objectClass: idnsrecord >>>>>>>>>> tXTRecord: nothing >>>>>>>>>> tXTRecord: something >>>>>>>>>> idnsName: txt >>>>>>>>>> tXTRecord;test: Guess what is new here! >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> And yes, you assume correctly that (targetattr = "txtRecord") gives access to >>>>>>>>>> whole txtRecord including all its subtypes. >>>>>>>>>> >>>>>>>>>> ACK? :-) >>>>>>>>>> >>>>>>>>> >>>>>>>>> ACK. >>>>>>>> >>>>>>>> Thank you. Now to the most important and difficult question: >>>>>>>> Should the attribute name be "GenericRecord" or "UnknownRecord"? >>>>>>>> >>>>>>>> I like "GenericRecord" but Honza prefers "UnknownRecord" so we need a third >>>>>>>> opinion :-) >>>>>>> >>>>>>> GenericRecord sounds like something that may be used for any record type, >>>>>>> known or unknown. I don't think that's what we want. We want users to use it >>>>>>> only for unknown record types and use appropriate Record attribute for >>>>>>> known attributes. >>>>>>> >>>>>>> The RFC is titled "Handling of *Unknown* DNS Resource Record (RR) Types". The >>>>>>> word "generic" is used only when referring to encoding of RDATA. >>>>>> >>>>>> Okay, be it 'UnknownRecord'. >>>>>> >>>>>> Petr^2 Spacek >>>>> >>>>> I am just afraid it is quite general name, that may collide with other >>>>> attribute names. If it would be named "idnsUnknownRecord", it would be more >>>>> unique. But I assume we cannot add idns prefix for records themselves... >>>> >>>> Good point. What about UnknownDNSRecord? >>> >>> Maybe. 
Question is how consistent we want to be with other DNS record names >>> (arecord, ptrrecord) and how consistent we want to be with Uninett schema >>> (details in https://fedorahosted.org/bind-dyndb-ldap/wiki/LDAPSchema) and if >>> this new record would be discussed with them and added to their OID space. >> >> Currently my intention is to contact Uninett and try to standardize it when we >> finally agree on something. >> > > I think we agreed on UnknownRecord or it's variant, so please feel free to > contact and ask them. I think they will be more surprised with the subtype than > with the actual name :-) Sure. I e-mailed drift at uninett.no with the latest version of proposal and link to this thread. -- Petr^2 Spacek From pvoborni at redhat.com Thu Mar 12 12:43:58 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 12 Mar 2015 13:43:58 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150306141318.GX25455@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> Message-ID: <55018A0E.70708@redhat.com> On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: > On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >> On (05/03/15 16:20), Petr Vobornik wrote: >>> On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>> On (05/03/15 08:54), Petr Vobornik wrote: >>>>> On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>> ehlo, >>>>>> >>>>>> Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>> >>>>>> I think the most critical is 1st patch >>>>>> >>>>>> sh$ git grep "SSSDConfig" | grep import >>>>>> install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>> ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>> ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>> >>>>>> BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 >>>>>> (2013-04-02) >>>>>> but it was not explicitely required. >>>>>> >>>>>> The latest python3 changes in sssd (fedora 22) is just a result of >>>>>> negligent >>>>>> packaging of freeipa. >>>>>> >>>>>> LS >>>>>> >>>>> >>>>> Fedora 22 was amended. >>>>> >>>>> Patch 1: ACK >>>>> >>>>> Patch 2: ACK >>>>> >>>>> Patch3: >>>>> the package name is libsss_nss_idmap-python not >>>>> python-libsss_nss_idmap >>>>> which already is required in adtrust package >>>> In sssd upstream we decided to rename package >>>> libsss_nss_idmap-python to >>>> python-libsss_nss_idmap according to new rpm python guidelines. >>>> The python3 version has alredy correct name. >>>> >>>> We will rename package in downstream with next major release (1.13). >>>> Of course it we will add "Provides: libsss_nss_idmap-python". >>>> >>>> We can push 3rd patch later or I can update 3rd patch. >>>> What do you prefer? >>>> >>>> Than you very much for review. >>>> >>>> LS >>>> >>> >>> Patch 3 should be updated to not forget the remaining change in >>> ipa-python >>> package. >>> >>> It then should be updated downstream and master when 1.13 is released in >>> Fedora, or in master sooner if SSSD 1.13 becomes the minimal version >>> required >>> by master. >> >> Fixed. >> >> BTW Why ther is a pylint comment for some sssd modules >> I did not kave any pylint problems after removing comment. 
>> >> ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >> ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: >> disable=F0401 >> >> >> And why are these modules optional (try except) > Because they are needed to properly load in the case trust subpackages > are not installed, to generate proper messages to users who will try > these commands, like 'ipa trust-add' while the infrastructure is not in > place. > > pylint is dumb for such cases. > > Alexander, the point was not to require python_nss_idmap and python-sss-murmur on ipa clients? If so python-sss-murmur should be required only by trust-ad package and not python package (patch2). And patch 3 (adding libsss_nss_idmap-python to python package) should not be used. -- Petr Vobornik From abokovoy at redhat.com Thu Mar 12 12:53:18 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 12 Mar 2015 14:53:18 +0200 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <55018A0E.70708@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <55018A0E.70708@redhat.com> Message-ID: <20150312125318.GC3878@redhat.com> On Thu, 12 Mar 2015, Petr Vobornik wrote: >On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: >>On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>On (05/03/15 16:20), Petr Vobornik wrote: >>>>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>ehlo, >>>>>>> >>>>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>>> >>>>>>>I think the most critical is 1st patch >>>>>>> >>>>>>>sh$ git grep "SSSDConfig" | grep import >>>>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>> >>>>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 >>>>>>>(2013-04-02) >>>>>>>but it was not explicitely required. >>>>>>> >>>>>>>The latest python3 changes in sssd (fedora 22) is just a result of >>>>>>>negligent >>>>>>>packaging of freeipa. >>>>>>> >>>>>>>LS >>>>>>> >>>>>> >>>>>>Fedora 22 was amended. >>>>>> >>>>>>Patch 1: ACK >>>>>> >>>>>>Patch 2: ACK >>>>>> >>>>>>Patch3: >>>>>>the package name is libsss_nss_idmap-python not >>>>>>python-libsss_nss_idmap >>>>>>which already is required in adtrust package >>>>>In sssd upstream we decided to rename package >>>>>libsss_nss_idmap-python to >>>>>python-libsss_nss_idmap according to new rpm python guidelines. >>>>>The python3 version has alredy correct name. >>>>> >>>>>We will rename package in downstream with next major release (1.13). >>>>>Of course it we will add "Provides: libsss_nss_idmap-python". >>>>> >>>>>We can push 3rd patch later or I can update 3rd patch. >>>>>What do you prefer? >>>>> >>>>>Than you very much for review. >>>>> >>>>>LS >>>>> >>>> >>>>Patch 3 should be updated to not forget the remaining change in >>>>ipa-python >>>>package. >>>> >>>>It then should be updated downstream and master when 1.13 is released in >>>>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version >>>>required >>>>by master. >>> >>>Fixed. 
>>> >>>BTW Why ther is a pylint comment for some sssd modules >>>I did not kave any pylint problems after removing comment. >>> >>>ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401 >>>ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: >>>disable=F0401 >>> >>> >>>And why are these modules optional (try except) >>Because they are needed to properly load in the case trust subpackages >>are not installed, to generate proper messages to users who will try >>these commands, like 'ipa trust-add' while the infrastructure is not in >>place. >> >>pylint is dumb for such cases. >> >> > >Alexander, the point was not to require python_nss_idmap and >python-sss-murmur on ipa clients? Pylint is not used on ipa clients. The import statements do protection against failed import and that's what we use on the client side. >If so python-sss-murmur should be required only by trust-ad package >and not python package (patch2). And patch 3 (adding >libsss_nss_idmap-python to python package) should not be used. We already have dependencies in trust-ad subpackage: %package server-trust-ad Summary: Virtual package to install packages required for Active Directory trusts Group: System Environment/Base Requires: %{name}-server = %version-%release Requires: m2crypto Requires: samba-python Requires: samba >= %{samba_version} Requires: samba-winbind Requires: libsss_idmap Requires: libsss_nss_idmap-python However, we don't ship the original plugins in this package because otherwise you wouldn't be able to use 'ipa trust*' from any machine other than those where trust-ad subpackage is installed. That's why we use import statements and catch the import exceptions. -- / Alexander Bokovoy From abokovoy at redhat.com Thu Mar 12 12:58:17 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 12 Mar 2015 14:58:17 +0200 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150312125318.GC3878@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <55018A0E.70708@redhat.com> <20150312125318.GC3878@redhat.com> Message-ID: <20150312125817.GD3878@redhat.com> On Thu, 12 Mar 2015, Alexander Bokovoy wrote: >On Thu, 12 Mar 2015, Petr Vobornik wrote: >>On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: >>>On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>>On (05/03/15 16:20), Petr Vobornik wrote: >>>>>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>>ehlo, >>>>>>>> >>>>>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>>>> >>>>>>>>I think the most critical is 1st patch >>>>>>>> >>>>>>>>sh$ git grep "SSSDConfig" | grep import >>>>>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>>> >>>>>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 >>>>>>>>(2013-04-02) >>>>>>>>but it was not explicitely required. >>>>>>>> >>>>>>>>The latest python3 changes in sssd (fedora 22) is just a result of >>>>>>>>negligent >>>>>>>>packaging of freeipa. >>>>>>>> >>>>>>>>LS >>>>>>>> >>>>>>> >>>>>>>Fedora 22 was amended. 
>>>>>>>
>>>>>>>Patch 1: ACK
>>>>>>>
>>>>>>>Patch 2: ACK
>>>>>>>
>>>>>>>Patch3:
>>>>>>>the package name is libsss_nss_idmap-python not
>>>>>>>python-libsss_nss_idmap
>>>>>>>which already is required in adtrust package
>>>>>>In sssd upstream we decided to rename package
>>>>>>libsss_nss_idmap-python to
>>>>>>python-libsss_nss_idmap according to new rpm python guidelines.
>>>>>>The python3 version has alredy correct name.
>>>>>>
>>>>>>We will rename package in downstream with next major release (1.13).
>>>>>>Of course it we will add "Provides: libsss_nss_idmap-python".
>>>>>>
>>>>>>We can push 3rd patch later or I can update 3rd patch.
>>>>>>What do you prefer?
>>>>>>
>>>>>>Than you very much for review.
>>>>>>
>>>>>>LS
>>>>>>
>>>>>
>>>>>Patch 3 should be updated to not forget the remaining change in
>>>>>ipa-python
>>>>>package.
>>>>>
>>>>>It then should be updated downstream and master when 1.13 is released in
>>>>>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version
>>>>>required
>>>>>by master.
>>>>
>>>>Fixed.
>>>>
>>>>BTW Why ther is a pylint comment for some sssd modules
>>>>I did not kave any pylint problems after removing comment.
>>>>
>>>>ipalib/plugins/trust.py:32: import pysss_murmur #pylint: disable=F0401
>>>>ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint:
>>>>disable=F0401
>>>>
>>>>
>>>>And why are these modules optional (try except)
>>>Because they are needed to properly load in the case trust subpackages
>>>are not installed, to generate proper messages to users who will try
>>>these commands, like 'ipa trust-add' while the infrastructure is not in
>>>place.
>>>
>>>pylint is dumb for such cases.
>>>
>>>
>>
>>Alexander, the point was not to require python_nss_idmap and
>>python-sss-murmur on ipa clients?
>Pylint is not used on ipa clients. The import statements do protection
>against failed import and that's what we use on the client side.
>
>>If so python-sss-murmur should be required only by trust-ad package
>>and not python package (patch2). And patch 3 (adding
>>libsss_nss_idmap-python to python package) should not be used.
>We already have dependencies in trust-ad subpackage:
>%package server-trust-ad
>Summary: Virtual package to install packages required for Active Directory trusts
>Group: System Environment/Base
>Requires: %{name}-server = %version-%release
>Requires: m2crypto
>Requires: samba-python
>Requires: samba >= %{samba_version}
>Requires: samba-winbind
>Requires: libsss_idmap
>Requires: libsss_nss_idmap-python
>
>However, we don't ship the original plugins in this package because
>otherwise you wouldn't be able to use 'ipa trust*' from any machine
>other than those where trust-ad subpackage is installed. That's why we
>use import statements and catch the import exceptions.
Sent too early.

... and python-sss-murmur should be required by trust-ad subpackage, yes.

--
/ Alexander Bokovoy
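For reference, the guarded-import pattern discussed in this thread looks
roughly like the sketch below. It only illustrates the idea behind the
trust.py lines quoted above; the module-level flags, the helper function and
the error text are made up for the example and are not copied from the actual
plugin. The import is attempted, pylint's import error (F0401) is silenced for
the optional module, and the commands later report a readable error instead of
a traceback when the trust subpackage is not installed:

    from ipalib import errors, _

    try:
        import pysss_murmur  # pylint: disable=F0401
        _murmur_installed = True
    except ImportError:
        # python-sss-murmur is only pulled in by the server-trust-ad
        # subpackage; the plugin must still import cleanly everywhere else.
        _murmur_installed = False

    try:
        import pysss_nss_idmap  # pylint: disable=F0401
        _nss_idmap_installed = True
    except ImportError:
        _nss_idmap_installed = False


    def check_trust_bindings_installed():
        # Illustrative helper: a command like 'ipa trust-add' would call this
        # before doing any work, so the user gets a hint rather than an
        # ImportError traceback.
        if not (_murmur_installed and _nss_idmap_installed):
            raise errors.NotFound(
                reason=_('AD trust support is not available on this server, '
                         'install the freeipa-server-trust-ad subpackage'))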
From redhatrises at gmail.com Thu Mar 12 13:37:54 2015
From: redhatrises at gmail.com (Gabe Alford)
Date: Thu, 12 Mar 2015 07:37:54 -0600
Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can
	only be created on first master
Message-ID: 

Hello,

Fix for https://fedorahosted.org/freeipa/ticket/4944. Since there seems to
be plenty of time, I added it to the freeipa-4-1 branch.

Thanks,

Gabe
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-rga-0044-ipa-replica-prepare-can-only-be-created-on-the-first.patch
Type: text/x-patch
Size: 1519 bytes
Desc: not available
URL: 

From mkosek at redhat.com Thu Mar 12 14:26:17 2015
From: mkosek at redhat.com (Martin Kosek)
Date: Thu, 12 Mar 2015 15:26:17 +0100
Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can
	only be created on first master
In-Reply-To: 
References: 
Message-ID: <5501A209.70504@redhat.com>

On 03/12/2015 02:37 PM, Gabe Alford wrote:
> Hello,
>
> Fix for https://fedorahosted.org/freeipa/ticket/4944. Since there seems to
> be plenty of time, I added it to the freeipa-4-1 branch.

Thanks Gabe! I would still suggest against moving the tickets to milestones
yourself, all new tickets should still undergo the weekly triage so that all
core developers see it and we can decide the target milestone. With this one,
it would likely indeed end in 4.1.x, especially given you contributed a patch,
but still...

For the patch itself, I still think the wording is not as it should be:

- the following line is not entirely true, you can also create a replica on
servers installed with ipa-replica-install :-)

+A replica can be created on any IPA master server installed with
ipa\-server\-install.

- the following line may also use some rewording:

However if you want to create a replica as a redundant CA with an existing
replica or master, ipa\-replica\-prepare should be run on a replica or master
that contains the CA.

Maybe we should add a subsection to the DESCRIPTION section, with the
following lines:

- A replica should only be installed on the same or higher version of IPA on
the remote system.
- A replica with PKI can only be installed from a replica file prepared on a
master with PKI.

Makes sense?

Martin

From mbabinsk at redhat.com Thu Mar 12 14:59:54 2015
From: mbabinsk at redhat.com (Martin Babinsky)
Date: Thu, 12 Mar 2015 15:59:54 +0100
Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for
	all DS connections
In-Reply-To: <55004D92.4080700@redhat.com>
References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com>
Message-ID: <5501A9EA.7040208@redhat.com>

On 03/11/2015 03:13 PM, Martin Basti wrote:
> On 11/03/15 13:00, Martin Babinsky wrote:
>> These patches solve https://fedorahosted.org/freeipa/ticket/4933.
>>
>> They are to be applied to master branch. I will rebase them for
>> ipa-4-1 after the review.
>>
> Thank you for the patches.
>
> I have a few comments:
>
> IPA-4-1
> Replace simple bind with LDAPI is too big change for 4-1, we should
> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches should
> go only into IPA master branch.
> > You can just add ods.realm=, before call get_master() in > ipa-dns-install > if options.dnssec_master: > + ods.realm=api.env.realm > dnssec_masters = ods.get_masters() > (Honza will change it anyway during refactoring) > > PATCH 0019: > 1) > commit message deserves to be more chatty, can you explain there why you > removed kerberos cache? > > Martin^2 > Attaching updated patches. Patch 0018 should go to both 4.1 and master branches. Patch 0019 should go only to master. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0018-2-ipa-dns-install-use-STARTTLS-to.patch Type: text/x-patch Size: 9148 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0019-2-ipa-dns-install-use-LDAPI-to.patch Type: text/x-patch Size: 10905 bytes Desc: not available URL: From dkupka at redhat.com Thu Mar 12 15:10:02 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 12 Mar 2015 16:10:02 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <54F9CD2F.3050900@redhat.com> References: <54F9CD2F.3050900@redhat.com> Message-ID: <5501AC4A.60900@redhat.com> On 03/06/2015 04:52 PM, Martin Basti wrote: > This upgrade step is not used anymore. > > Required by: https://fedorahosted.org/freeipa/ticket/4904 > > Patch attached. > > > Looks and works good to me, ACK. -- David Kupka From dkupka at redhat.com Thu Mar 12 15:10:10 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 12 Mar 2015 16:10:10 +0100 Subject: [Freeipa-devel] [PATCHES 0204-0207] Server upgrade: Make LDAP data upgrade deterministic In-Reply-To: <54F9CCB9.4070600@redhat.com> References: <54F9CCB9.4070600@redhat.com> Message-ID: <5501AC52.9000007@redhat.com> On 03/06/2015 04:50 PM, Martin Basti wrote: > The patchset ensure, the upgrade order will respect ordering of entries > in *.update files. > > Required for: https://fedorahosted.org/freeipa/ticket/4904 > > Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 > > Required patch mbasti-0203 > > Patches attached. > > > Changes in code looks good and the upgrade process still works, ACK. -- David Kupka From dkupka at redhat.com Thu Mar 12 15:10:36 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 12 Mar 2015 16:10:36 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <54F9DD12.2050008@redhat.com> References: <54F9DD12.2050008@redhat.com> Message-ID: <5501AC6C.8000603@redhat.com> On 03/06/2015 06:00 PM, Martin Basti wrote: > Upgrade plugins which modify LDAP data directly should not be executed > in --test mode. > > This patch is a workaround, to ensure update with --test option will not > modify any LDAP data. > > https://fedorahosted.org/freeipa/ticket/3448 > > Patch attached. > > > Ideally we want to fix all plugins to dry-run the upgrade not just skip when there is '--test' option but it is a good first step. Works for me, ACK. -- David Kupka From rcritten at redhat.com Thu Mar 12 15:21:45 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Mar 2015 11:21:45 -0400 Subject: [Freeipa-devel] [PATCHES 0204-0207] Server upgrade: Make LDAP data upgrade deterministic In-Reply-To: <54F9CCB9.4070600@redhat.com> References: <54F9CCB9.4070600@redhat.com> Message-ID: <5501AF09.1090202@redhat.com> Martin Basti wrote: > The patchset ensure, the upgrade order will respect ordering of entries > in *.update files. 
> > Required for: https://fedorahosted.org/freeipa/ticket/4904 > > Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 > > Required patch mbasti-0203 > > Patches attached. > > > Just reading the patches, untested. I think ordered should default to True in the update() method of ldapupdater to keep in spirit with the design. Otherwise LGTM that it implements what was designed. rob From rcritten at redhat.com Thu Mar 12 15:22:26 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Mar 2015 11:22:26 -0400 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <5501AC4A.60900@redhat.com> References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com> Message-ID: <5501AF32.6040503@redhat.com> David Kupka wrote: > On 03/06/2015 04:52 PM, Martin Basti wrote: >> This upgrade step is not used anymore. >> >> Required by: https://fedorahosted.org/freeipa/ticket/4904 >> >> Patch attached. >> >> >> > > Looks and works good to me, ACK. Is this going away because one can simply create an update file that exists alphabetically before the schema update? If so then ACK. rob From rcritten at redhat.com Thu Mar 12 15:23:05 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Thu, 12 Mar 2015 11:23:05 -0400 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5501AC6C.8000603@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> Message-ID: <5501AF59.50204@redhat.com> David Kupka wrote: > On 03/06/2015 06:00 PM, Martin Basti wrote: >> Upgrade plugins which modify LDAP data directly should not be executed >> in --test mode. >> >> This patch is a workaround, to ensure update with --test option will not >> modify any LDAP data. >> >> https://fedorahosted.org/freeipa/ticket/3448 >> >> Patch attached. >> >> >> > > Ideally we want to fix all plugins to dry-run the upgrade not just skip > when there is '--test' option but it is a good first step. > Works for me, ACK. > I agree that this breaks the spirit of --test and think it should be fixed before committing. rob From mbasti at redhat.com Thu Mar 12 16:04:12 2015 From: mbasti at redhat.com (Martin Basti) Date: Thu, 12 Mar 2015 17:04:12 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <5501AF32.6040503@redhat.com> References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com> <5501AF32.6040503@redhat.com> Message-ID: <5501B8FC.4030408@redhat.com> On 12/03/15 16:22, Rob Crittenden wrote: > David Kupka wrote: >> On 03/06/2015 04:52 PM, Martin Basti wrote: >>> This upgrade step is not used anymore. >>> >>> Required by: https://fedorahosted.org/freeipa/ticket/4904 >>> >>> Patch attached. >>> >>> >>> >> Looks and works good to me, ACK. > Is this going away because one can simply create an update file that > exists alphabetically before the schema update? If so then ACK. > > rob No this never works, and will not work without changes in DS, I was discussing this with DS guys. If you add new replica to schema, the schema has to be there before data replication. 
Martin

--
Martin Basti

From pspacek at redhat.com Thu Mar 12 16:05:52 2015
From: pspacek at redhat.com (Petr Spacek)
Date: Thu, 12 Mar 2015 17:05:52 +0100
Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade
	plugins
In-Reply-To: <5501AF59.50204@redhat.com>
References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com>
	<5501AF59.50204@redhat.com>
Message-ID: <5501B960.3060808@redhat.com>

On 12.3.2015 16:23, Rob Crittenden wrote:
> David Kupka wrote:
>> On 03/06/2015 06:00 PM, Martin Basti wrote:
>>> Upgrade plugins which modify LDAP data directly should not be executed
>>> in --test mode.
>>>
>>> This patch is a workaround, to ensure update with --test option will not
>>> modify any LDAP data.
>>>
>>> https://fedorahosted.org/freeipa/ticket/3448
>>>
>>> Patch attached.
>>>
>>>
>>>
>>
>> Ideally we want to fix all plugins to dry-run the upgrade not just skip
>> when there is '--test' option but it is a good first step.
>> Works for me, ACK.
>>
>
> I agree that this breaks the spirit of --test and think it should be
> fixed before committing.

Considering how often the option is used ... I do not think that this
requires a 'proper' fix now. It was broken for *years*, so this patch is a
huge improvement and IMHO should be committed in its current form. We can
re-visit it later on, open a ticket :-)

--
Petr^2 Spacek

From rcritten at redhat.com Thu Mar 12 16:08:53 2015
From: rcritten at redhat.com (Rob Crittenden)
Date: Thu, 12 Mar 2015 12:08:53 -0400
Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade
In-Reply-To: <5501B8FC.4030408@redhat.com>
References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com>
	<5501AF32.6040503@redhat.com> <5501B8FC.4030408@redhat.com>
Message-ID: <5501BA15.2010803@redhat.com>

Martin Basti wrote:
> On 12/03/15 16:22, Rob Crittenden wrote:
>> David Kupka wrote:
>>> On 03/06/2015 04:52 PM, Martin Basti wrote:
>>>> This upgrade step is not used anymore.
>>>>
>>>> Required by: https://fedorahosted.org/freeipa/ticket/4904
>>>>
>>>> Patch attached.
>>>>
>>>>
>>>>
>>> Looks and works good to me, ACK.
>> Is this going away because one can simply create an update file that
>> exists alphabetically before the schema update? If so then ACK.
>>
>> rob
> No this never works, and will not work without changes in DS, I was
> discussing this with DS guys. If you add new replica to schema, the
> schema has to be there before data replication.
>
> Martin
>

That's a rather narrow case though. You could make changes that only
affect existing schema, or something in cn=config.

rob

From rcritten at redhat.com Thu Mar 12 16:10:49 2015
From: rcritten at redhat.com (Rob Crittenden)
Date: Thu, 12 Mar 2015 12:10:49 -0400
Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade
	plugins
In-Reply-To: <5501B960.3060808@redhat.com>
References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com>
	<5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com>
Message-ID: <5501BA89.2070507@redhat.com>

Petr Spacek wrote:
> On 12.3.2015 16:23, Rob Crittenden wrote:
>> David Kupka wrote:
>>> On 03/06/2015 06:00 PM, Martin Basti wrote:
>>>> Upgrade plugins which modify LDAP data directly should not be executed
>>>> in --test mode.
>>>>
>>>> This patch is a workaround, to ensure update with --test option will not
>>>> modify any LDAP data.
>>>>
>>>> https://fedorahosted.org/freeipa/ticket/3448
>>>>
>>>> Patch attached.
>>>> >>>> >>>> >>> >>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>> when there is '--test' option but it is a good first step. >>> Works for me, ACK. >>> >> >> I agree that this breaks the spirit of --test and think it should be >> fixed before committing. > > Considering how often is the option is used ... I do not think that this > requires 'proper' fix now. It was broken for *years* so this patch is a huge > improvement and IMHO should be commited in current form. We can re-visit it > later on, open a ticket :-) > No. There is no rush for this, at least not for the promise of a future fix that will never come. rob From pspacek at redhat.com Thu Mar 12 16:11:18 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 12 Mar 2015 17:11:18 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5501B960.3060808@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> Message-ID: <5501BAA6.3010404@redhat.com> On 12.3.2015 17:05, Petr Spacek wrote: > On 12.3.2015 16:23, Rob Crittenden wrote: >> David Kupka wrote: >>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>> Upgrade plugins which modify LDAP data directly should not be executed >>>> in --test mode. >>>> >>>> This patch is a workaround, to ensure update with --test option will not >>>> modify any LDAP data. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3448 >>>> >>>> Patch attached. >>>> >>>> >>>> >>> >>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>> when there is '--test' option but it is a good first step. >>> Works for me, ACK. >>> >> >> I agree that this breaks the spirit of --test and think it should be >> fixed before committing. > > Considering how often is the option is used ... I do not think that this > requires 'proper' fix now. It was broken for *years* so this patch is a huge > improvement and IMHO should be commited in current form. We can re-visit it > later on, open a ticket :-) BTW 'proper' fix would be to implement transactions in DS, run all updates in one huge transaction and do roll-back at the end. That would allow us to actually test updates which can depend on each other etc. -- Petr^2 Spacek From mbabinsk at redhat.com Thu Mar 12 16:15:51 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Thu, 12 Mar 2015 17:15:51 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5501A9EA.7040208@redhat.com> References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com> <5501A9EA.7040208@redhat.com> Message-ID: <5501BBB7.2040709@redhat.com> On 03/12/2015 03:59 PM, Martin Babinsky wrote: > On 03/11/2015 03:13 PM, Martin Basti wrote: >> On 11/03/15 13:00, Martin Babinsky wrote: >>> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >>> >>> They are to be applied to master branch. I will rebase them for >>> ipa-4-1 after the review. >>> >> Thank you for the patches. >> >> I have a few comments: >> >> IPA-4-1 >> Replace simple bind with LDAPI is too big change for 4-1, we should >> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches should >> go only into IPA master branch. 
>>
>> You can do something like this:
>> --- a/ipaserver/install/service.py
>> +++ b/ipaserver/install/service.py
>> @@ -107,6 +107,10 @@ class Service(object):
>>              if not self.realm:
>>                  raise errors.NotFound(reason="realm is missing for
>> %s" % (self))
>>              conn = ipaldap.IPAdmin(ldapi=self.ldapi,
>> realm=self.realm)
>> +        elif self.dm_password is not None:
>> +            conn = ipaldap.IPAdmin(self.fqdn, port=389,
>> +                                   cacert=paths.IPA_CA_CRT,
>> +                                   start_tls=True)
>>          else:
>>              conn = ipaldap.IPAdmin(self.fqdn, port=389)
>>
>>
>> PATCH 0018:
>> 1)
>> please add there more chatty commit message about using LDAPI
>>
>> 2)
>> I do not like much idea of adding 'realm' kwarg into __init__ method of
>> OpenDNSSECInstance
>> IIUC, it is because get_masters() method, which requires realm to use
>> LDAPI.
>>
>> You can just add ods.realm=, before call get_master() in
>> ipa-dns-install
>>      if options.dnssec_master:
>> +        ods.realm=api.env.realm
>>          dnssec_masters = ods.get_masters()
>> (Honza will change it anyway during refactoring)
>>
>> PATCH 0019:
>> 1)
>> commit message deserves to be more chatty, can you explain there why you
>> removed kerberos cache?
>>
>> Martin^2
>>
>
> Attaching updated patches.
>
> Patch 0018 should go to both 4.1 and master branches.
>
> Patch 0019 should go only to master.
>
>

One more update.

Patch 0018 is for both 4.1 and master.
Patch 0019 is for master only.

--
Martin^3 Babinsky
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0018-3-ipa-dns-install-use-STARTTLS-to.patch
Type: text/x-patch
Size: 8091 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0019-3-ipa-dns-install-use-LDAPI-to.patch
Type: text/x-patch
Size: 10445 bytes
Desc: not available
URL: 
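Written out as a standalone sketch, the connection selection suggested in the
review of these patches is roughly the following. The helper name and its
arguments are only illustrative; the IPAdmin calls themselves follow the
snippet quoted in the review above (LDAPI with autobind on the master itself,
otherwise port 389 with STARTTLS and the IPA CA certificate so a Directory
Manager simple bind is not rejected by minssf > 0):

    from ipalib import errors
    from ipaplatform.paths import paths
    from ipapython import ipaldap


    def connect_directory_server(fqdn, realm=None, ldapi=False,
                                 dm_password=None):
        # Prefer LDAPI with autobind when running on the IPA master itself.
        if ldapi:
            if not realm:
                raise errors.NotFound(reason="realm is missing for %s" % fqdn)
            return ipaldap.IPAdmin(ldapi=True, realm=realm)
        # With a Directory Manager password, use port 389 plus STARTTLS and
        # the IPA CA certificate, so the simple bind also works with a
        # non-zero minssf setting.
        if dm_password is not None:
            return ipaldap.IPAdmin(fqdn, port=389,
                                   cacert=paths.IPA_CA_CRT,
                                   start_tls=True)
        # Otherwise fall back to a plain connection on port 389.
        return ipaldap.IPAdmin(fqdn, port=389)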
From mbasti at redhat.com Thu Mar 12 16:30:03 2015
From: mbasti at redhat.com (Martin Basti)
Date: Thu, 12 Mar 2015 17:30:03 +0100
Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade
In-Reply-To: <5501BA15.2010803@redhat.com>
References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com>
	<5501AF32.6040503@redhat.com> <5501B8FC.4030408@redhat.com>
	<5501BA15.2010803@redhat.com>
Message-ID: <5501BF0B.8080503@redhat.com>

On 12/03/15 17:08, Rob Crittenden wrote:
> Martin Basti wrote:
>> On 12/03/15 16:22, Rob Crittenden wrote:
>>> David Kupka wrote:
>>>> On 03/06/2015 04:52 PM, Martin Basti wrote:
>>>>> This upgrade step is not used anymore.
>>>>>
>>>>> Required by: https://fedorahosted.org/freeipa/ticket/4904
>>>>>
>>>>> Patch attached.
>>>>>
>>>>>
>>>>>
>>>> Looks and works good to me, ACK.
>>> Is this going away because one can simply create an update file that
>>> exists alphabetically before the schema update? If so then ACK.
>>>
>>> rob
>> No this never works, and will not work without changes in DS, I was
>> discussing this with DS guys. If you add new replica to schema, the
>> schema has to be there before data replication.
>>
>> Martin
>>
> That's a rather narrow case though. You could make changes that only
> affect existing schema, or something in cn=config.
>
> rob

Let me summarize this:
* It is unused code
* we have schema update to modify schema (is there any extra requirement to
modify schema before schema update? I thought the schema update replaces the
old schema with the new one)
* it is not usable on new replicas (why modify an up-to-date schema, why
modify a new configuration?)
* we can not use this to update data
* the only way we can use this is to change non-replicating data on the
current server.

However, is there really a need to update cn=config before the schema update?

Martin

--
Martin Basti

From edewata at redhat.com Thu Mar 12 19:27:57 2015
From: edewata at redhat.com (Endi Sukma Dewata)
Date: Fri, 13 Mar 2015 02:27:57 +0700
Subject: [Freeipa-devel] [PATCH] Password vault
In-Reply-To: <55004D5D.6060300@redhat.com>
References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com>
	<54F96B22.9050507@redhat.com> <55004D5D.6060300@redhat.com>
Message-ID: <5501E8BD.7000503@redhat.com>

On 3/11/2015 9:12 PM, Endi Sukma Dewata wrote:
> Thanks for the review. New patch attached to be applied on top of all
> previous patches. Please see comments below.

New patch #362-1 attached replacing #362. It fixed some issues in
handle_not_found().

--
Endi S. Dewata
-------------- next part --------------
>From 6741138b647427cd3448ff72369dfc6fa21354aa Mon Sep 17 00:00:00 2001
From: "Endi S. Dewata" 
Date: Mon, 9 Mar 2015 12:09:20 -0400
Subject: [PATCH] Vault improvements.

The vault plugins have been modified to clean up the code, to fix some
issues, and to improve error handling. The LDAPCreate and LDAPSearch
classes have been refactored to allow subclasses to provide custom error
handling. The test scripts have been updated accordingly.

https://fedorahosted.org/freeipa/ticket/3872
---
 API.txt                                            |  50 ++--
 ipalib/plugins/baseldap.py                         |  35 +--
 ipalib/plugins/user.py                             |   6 +-
 ipalib/plugins/vault.py                            | 273 ++++++++++-----------
 ipalib/plugins/vaultcontainer.py                   | 226 +++++++++--------
 ipalib/plugins/vaultsecret.py                      | 108 ++++----
 ipatests/test_xmlrpc/test_vault_plugin.py          |   2 +-
 ipatests/test_xmlrpc/test_vaultcontainer_plugin.py |  24 +-
 ipatests/test_xmlrpc/test_vaultsecret_plugin.py    |   2 +-
 9 files changed, 371 insertions(+), 355 deletions(-)

diff --git a/API.txt b/API.txt
index ffbffa78cde372d5c7027b758be58bf07caebbc6..3a741755ab3e15e0175599a16a090b04d46d6be8 100644
--- a/API.txt
+++ b/API.txt
@@ -4518,7 +4518,7 @@ args: 1,20,3
 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, required=True)
 option: Str('addattr*', cli_name='addattr', exclude='webui')
 option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui')
-option: Str('container', attribute=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False)
+option: Str('container_id?', cli_name='container_id')
 option: Bytes('data?', cli_name='data')
 option: Str('description', attribute=True, cli_name='desc', multivalue=False, required=False)
 option: Str('escrow_public_key_file?', cli_name='escrow_public_key_file')
@@ -4543,7 +4543,7 @@ command: vault_add_member
 args: 1,7,3
 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True)
 option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui')
-option: Str('container?', cli_name='container')
+option: Str('container_id?', cli_name='container_id')
 option: Str('group*', alwaysask=True, cli_name='groups', csv=True)
 option: Flag('no_members', autofill=True, default=False, exclude='webui')
 option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui')
@@ -4556,7 +4556,7 @@ command: vault_add_owner
 args: 1,7,3
 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False,
pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4569,7 +4569,7 @@ command: vault_archive args: 1,15,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Bytes('encryption_key?', cli_name='encryption_key') option: Str('in?', cli_name='in') @@ -4589,7 +4589,7 @@ output: PrimaryKey('value', None, None) command: vault_del args: 1,3,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Flag('continue', autofill=True, cli_name='continue', default=False) option: Str('version?', exclude='webui') output: Output('result', , None) @@ -4600,7 +4600,7 @@ args: 1,15,4 arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('cn', attribute=True, autofill=False, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=False) -option: Str('container', attribute=False, autofill=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', query=True, required=False) +option: Str('container_id?', cli_name='container_id') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Bytes('ipaescrowpublickey', attribute=True, autofill=False, cli_name='escrow_public_key', multivalue=False, query=True, required=False) option: Bytes('ipapublickey', attribute=True, autofill=False, cli_name='public_key', multivalue=False, query=True, required=False) @@ -4622,7 +4622,7 @@ args: 1,15,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container', attribute=False, autofill=False, cli_name='container', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('container_id?', cli_name='container_id') option: Str('delattr*', cli_name='delattr', exclude='webui') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, required=False) option: Bytes('ipaescrowpublickey', attribute=True, autofill=False, cli_name='escrow_public_key', multivalue=False, required=False) @@ -4642,7 +4642,7 @@ command: vault_remove_member args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) 
option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4655,7 +4655,7 @@ command: vault_remove_owner args: 1,7,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') @@ -4668,7 +4668,7 @@ command: vault_retrieve args: 1,16,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('escrow_private_key?', cli_name='escrow_private_key') option: Str('escrow_private_key_file?', cli_name='escrow_private_key_file') option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4690,7 +4690,7 @@ command: vault_show args: 1,6,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) @@ -4711,7 +4711,7 @@ option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui option: Str('container_id', attribute=False, cli_name='container_id', multivalue=False, required=False) option: Str('description', attribute=True, cli_name='desc', multivalue=False, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('setattr*', cli_name='setattr', exclude='webui') option: Str('version?', exclude='webui') @@ -4724,7 +4724,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, 
cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4737,7 +4737,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4749,7 +4749,7 @@ args: 1,4,3 arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('continue', autofill=True, cli_name='continue', default=False) option: Flag('force?', autofill=True, default=False) -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Str('version?', exclude='webui') output: Output('result', , None) output: Output('summary', (, ), None) @@ -4762,7 +4762,7 @@ option: Str('cn', attribute=True, autofill=False, cli_name='container_name', max option: Str('container_id', attribute=False, autofill=False, cli_name='container_id', multivalue=False, query=True, required=False) option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, autofill=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', query=True, required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('pkey_only?', autofill=True, default=False) option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Int('sizelimit?', autofill=False, minvalue=0) @@ -4781,7 +4781,7 @@ option: Str('container_id', attribute=False, autofill=False, cli_name='container option: Str('delattr*', cli_name='delattr', exclude='webui') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent', attribute=False, autofill=False, cli_name='parent', multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', required=False) +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) option: Str('setattr*', cli_name='setattr', exclude='webui') @@ -4795,7 +4795,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4808,7 +4808,7 @@ arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multiva option: Flag('all', autofill=True, cli_name='all', default=False, 
exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('user*', alwaysask=True, cli_name='users', csv=True) option: Str('version?', exclude='webui') @@ -4820,7 +4820,7 @@ args: 1,6,3 arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Flag('no_members', autofill=True, default=False, exclude='webui') -option: Str('parent?', cli_name='parent') +option: Str('parent_id?', cli_name='parent_id') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) option: Str('version?', exclude='webui') @@ -4832,7 +4832,7 @@ args: 2,12,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Str('description?', cli_name='desc') option: Str('in?', cli_name='in') @@ -4851,7 +4851,7 @@ args: 2,8,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('password?', cli_name='password') option: Str('password_file?', cli_name='password_file') option: Bytes('private_key?', cli_name='private_key') @@ -4866,7 +4866,7 @@ args: 2,12,4 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data', attribute=True, autofill=False, cli_name='data', multivalue=False, query=True, required=False) option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Str('password?', cli_name='password') @@ -4886,7 +4886,7 @@ args: 2,12,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: 
Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') option: Str('description?', cli_name='desc') option: Str('in?', cli_name='in') @@ -4905,7 +4905,7 @@ args: 2,11,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('container?', cli_name='container') +option: Str('container_id?', cli_name='container_id') option: Str('out?', cli_name='out') option: Str('password?', cli_name='password') option: Str('password_file?', cli_name='password_file') diff --git a/ipalib/plugins/baseldap.py b/ipalib/plugins/baseldap.py index d693709ac1ba7ddb3c559199c199039b6f8bd9ac..fceaf95f42bef5fa71cbedeb291bd68d2919bc5a 100644 --- a/ipalib/plugins/baseldap.py +++ b/ipalib/plugins/baseldap.py @@ -1152,19 +1152,7 @@ class LDAPCreate(BaseLDAPCommand, crud.Create): try: self._exc_wrapper(keys, options, ldap.add_entry)(entry_attrs) except errors.NotFound: - parent = self.obj.parent_object - if parent: - raise errors.NotFound( - reason=self.obj.parent_not_found_msg % { - 'parent': keys[-2], - 'oname': self.api.Object[parent].object_name, - } - ) - raise errors.NotFound( - reason=self.obj.container_not_found_msg % { - 'container': self.obj.container_dn, - } - ) + self.handle_not_found(*keys, **options) except errors.DuplicateEntry: self.obj.handle_duplicate_entry(*keys) @@ -1213,6 +1201,21 @@ class LDAPCreate(BaseLDAPCommand, crud.Create): def exc_callback(self, keys, options, exc, call_func, *call_args, **call_kwargs): raise exc + def handle_not_found(self, *args, **options): + parent = self.obj.parent_object + if parent: + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': args[-2], + 'oname': self.api.Object[parent].object_name, + } + ) + raise errors.NotFound( + reason=self.obj.container_not_found_msg % { + 'container': self.obj.container_dn, + } + ) + def interactive_prompt_callback(self, kw): return @@ -2000,7 +2003,8 @@ class LDAPSearch(BaseLDAPCommand, crud.Search): except errors.EmptyResult: (entries, truncated) = ([], False) except errors.NotFound: - self.api.Object[self.obj.parent_object].handle_not_found(*args[:-1]) + self.handle_not_found(*args, **options) + (entries, truncated) = ([], False) for callback in self.get_callbacks('post'): truncated = callback(self, ldap, entries, truncated, *args, **options) @@ -2026,6 +2030,9 @@ class LDAPSearch(BaseLDAPCommand, crud.Search): truncated=truncated, ) + def handle_not_found(self, *args, **options): + self.api.Object[self.obj.parent_object].handle_not_found(*args[:-1]) + def pre_callback(self, ldap, filters, attrs_list, base_dn, scope, *args, **options): assert isinstance(base_dn, DN) return (filters, base_dn, scope) diff --git a/ipalib/plugins/user.py b/ipalib/plugins/user.py index 70b237dc102f46ab62e10aab0250aa496dad60c6..f8ee3db05b5a36cf4ebb4261f8535932b834b1bf 100644 --- a/ipalib/plugins/user.py +++ b/ipalib/plugins/user.py @@ -903,10 +903,12 @@ class user_del(LDAPDelete): # Delete user's private vault container. 
vaultcontainer_id = self.api.Object.vaultcontainer.get_private_id(owner) - (vaultcontainer_name, vaultcontainer_parent_id) = self.api.Object.vaultcontainer.split_id(vaultcontainer_id) + (vaultcontainer_name, vaultcontainer_parent_id) =\ + self.api.Object.vaultcontainer.split_id(vaultcontainer_id) try: - self.api.Command.vaultcontainer_del(vaultcontainer_name, parent=vaultcontainer_parent_id) + self.api.Command.vaultcontainer_del( + vaultcontainer_name, parent_id=vaultcontainer_parent_id) except errors.NotFound: pass diff --git a/ipalib/plugins/vault.py b/ipalib/plugins/vault.py index 69c12aaf1c8503a345e115fb1a660aed8f85fb17..d47067758186601365e5924f5d13c7ab51ba66e5 100644 --- a/ipalib/plugins/vault.py +++ b/ipalib/plugins/vault.py @@ -41,8 +41,8 @@ from ipalib.frontend import Command from ipalib import api, errors from ipalib import Str, Bytes, Flag from ipalib.plugable import Registry -from ipalib.plugins.baseldap import LDAPObject, LDAPCreate, LDAPDelete, LDAPSearch, LDAPUpdate, LDAPRetrieve,\ - LDAPAddMember, LDAPRemoveMember +from ipalib.plugins.baseldap import LDAPObject, LDAPCreate, LDAPDelete, LDAPSearch, LDAPUpdate,\ + LDAPRetrieve, LDAPAddMember, LDAPRemoveMember from ipalib.request import context from ipalib.plugins.user import split_principal from ipalib import _, ngettext @@ -61,7 +61,7 @@ EXAMPLES: ipa vault-find """) + _(""" List shared vaults: - ipa vault-find --container /shared + ipa vault-find --container-id /shared """) + _(""" Add a standard vault: ipa vault-add MyVault @@ -135,6 +135,8 @@ class vault(LDAPObject): Vault object. """) + base_dn = DN(api.env.container_vault, api.env.basedn) + object_name = _('vault') object_name_plural = _('vaults') @@ -173,19 +175,11 @@ class vault(LDAPObject): pattern_errmsg='may only include letters, numbers, _, ., and -', maxlength=255, ), - Str('container?', - cli_name='container', - label=_('Container'), - doc=_('Container'), - flags=('virtual_attribute'), - pattern='^[a-zA-Z0-9_.-/]+$', - pattern_errmsg='may only include letters, numbers, _, ., -, and /', - ), Str('vault_id?', cli_name='vault_id', label=_('Vault ID'), doc=_('Vault ID'), - flags=('virtual_attribute'), + flags={'no_option', 'virtual_attribute'}, ), Str('description?', cli_name='desc', @@ -217,22 +211,16 @@ class vault(LDAPObject): ) def get_dn(self, *keys, **options): - __doc__ = _(""" + """ Generates vault DN from vault ID. - """) + """ # get vault ID from parameters - name = None - if keys: - name = keys[0] + name = keys[-1] + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + vault_id = container_id + name - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) - - vault_id = container_id - if name: - vault_id = container_id + name - - dn = api.Object.vaultcontainer.base_dn + dn = self.base_dn # for each name in the ID, prepend the base DN for name in vault_id.split(u'/'): @@ -242,45 +230,30 @@ class vault(LDAPObject): return dn def get_id(self, dn): - __doc__ = _(""" + """ Generates vault ID from vault DN. 
- """) + """ # make sure the DN is a vault DN - if not dn.endswith(api.Object.vaultcontainer.base_dn): + if not dn.endswith(self.base_dn): raise ValueError('Invalid vault DN: %s' % dn) # vault DN cannot be the container base DN - if len(dn) == len(api.Object.vaultcontainer.base_dn): + if len(dn) == len(self.base_dn): raise ValueError('Invalid vault DN: %s' % dn) # construct the vault ID from the bottom up id = u'' - while len(dn) > len(api.Object.vaultcontainer.base_dn): - - rdn = dn[0] + for rdn in dn[:-len(self.base_dn)]: name = rdn['cn'] id = u'/' + name + id - dn = DN(*dn[1:]) - return id - def split_id(self, id): - __doc__ = _(""" - Splits a vault ID into (vault name, container ID) tuple. - """) - - # split ID into container ID and vault name - parts = id.rsplit(u'/', 1) - - # return vault name and container ID - return (parts[1], parts[0] + u'/') - def get_kra_id(self, id): - __doc__ = _(""" + """ Generates a client key ID to store/retrieve data in KRA. - """) + """ return 'ipa:' + id def generate_symmetric_key(self, password, salt): @@ -354,9 +327,13 @@ class vault_add(LDAPCreate): __doc__ = _('Create a new vault.') takes_options = LDAPCreate.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary data to archive'), + doc=_('Binary data to archive'), ), Str('text?', cli_name='text', @@ -389,7 +366,7 @@ class vault_add(LDAPCreate): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = options.get('ipavaulttype') data = options.get('data') @@ -420,18 +397,14 @@ class vault_add(LDAPCreate): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -564,9 +537,9 @@ class vault_add(LDAPCreate): response = super(vault_add, self).forward(*args, **options) # archive initial data - api.Command.vault_archive( + self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=data, password=password, encryption_key=encryption_key) @@ -576,13 +549,21 @@ class vault_add(LDAPCreate): def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + # set owner principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) + owner_dn = self.api.Object.user.get_dn(username) entry_attrs['owner'] = owner_dn - container_dn = DN(*dn[1:]) - api.Object.vaultcontainer.create_entry(container_dn, owner=owner_dn) + # container is user's private container, create container + if container_id == self.api.Object.vaultcontainer.get_private_id(): + try: + self.api.Object.vaultcontainer.create_entry( + DN(*dn[1:]), owner=owner_dn) + except errors.DuplicateEntry: + 
pass return dn @@ -593,31 +574,41 @@ class vault_add(LDAPCreate): return dn + def handle_not_found(self, *args, **options): + + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': container_id, + 'oname': self.api.Object.vaultcontainer.object_name, + } + ) @register() class vault_del(LDAPDelete): __doc__ = _('Delete a vault.') - msg_summary = _('Deleted vault "%(value)s"') - takes_options = LDAPDelete.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) + msg_summary = _('Deleted vault "%(value)s"') + def post_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) vault_id = self.obj.get_id(dn) - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # deactivate vault record in KRA response = kra_client.keys.list_keys(client_key_id, pki.key.KeyClient.KEY_STATUS_ACTIVE) @@ -636,6 +627,13 @@ class vault_del(LDAPDelete): class vault_find(LDAPSearch): __doc__ = _('Search for vaults.') + takes_options = LDAPSearch.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), + ) + msg_summary = ngettext( '%(count)d vault matched', '%(count)d vaults matched', 0 ) @@ -643,14 +641,9 @@ class vault_find(LDAPSearch): def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - principal = getattr(context, 'principal') - (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) - - container_id = self.Object.vaultcontainer.normalize_id(options.get('container')) - base_dn = self.Object.vaultcontainer.get_dn(parent=container_id) - - api.Object.vaultcontainer.create_entry(base_dn, owner=owner_dn) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + (name, parent_id) = self.api.Object.vaultcontainer.split_id(container_id) + base_dn = self.api.Object.vaultcontainer.get_dn(name, parent_id=parent_id) return (filter, base_dn, scope) @@ -662,11 +655,33 @@ class vault_find(LDAPSearch): return truncated + def handle_not_found(self, *args, **options): + + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + # vault container is user's private container, ignore + if container_id == self.api.Object.vaultcontainer.get_private_id(): + return + + # otherwise, raise an error + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': container_id, + 'oname': self.api.Object.vaultcontainer.object_name, + } + ) @register() class vault_mod(LDAPUpdate): __doc__ = _('Modify a vault.') + takes_options = LDAPUpdate.takes_options + ( + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), + ), + ) + msg_summary = _('Modified vault "%(value)s"') def post_callback(self, ldap, dn, entry_attrs, *keys, **options): @@ -682,9 +697,9 @@ class vault_show(LDAPRetrieve): __doc__ = _('Display information about a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -700,11 
+715,6 @@ class vault_show(LDAPRetrieve): class vault_transport_cert(Command): __doc__ = _('Retrieve vault transport certificate.') - # list of attributes we want exported to JSON - json_friendly_attributes = ( - 'takes_args', - ) - takes_options = ( Str('out?', cli_name='out', @@ -718,13 +728,6 @@ class vault_transport_cert(Command): ), ) - def __json__(self): - json_dict = dict( - (a, getattr(self, a)) for a in self.json_friendly_attributes - ) - json_dict['takes_options'] = list(self.get_json_options()) - return json_dict - def forward(self, *args, **options): file = options.get('out') @@ -743,7 +746,7 @@ class vault_transport_cert(Command): def execute(self, *args, **options): - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() transport_cert = kra_client.system_certs.get_transport_cert() return { 'result': { @@ -757,13 +760,13 @@ class vault_archive(LDAPRetrieve): __doc__ = _('Archive data into a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary data to archive'), + doc=_('Binary data to archive'), ), Str('text?', cli_name='text', @@ -804,7 +807,7 @@ class vault_archive(LDAPRetrieve): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -812,7 +815,7 @@ class vault_archive(LDAPRetrieve): escrow_public_key = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -850,18 +853,14 @@ class vault_archive(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -902,9 +901,9 @@ class vault_archive(LDAPRetrieve): password = unicode(getpass.getpass('Password: ')) try: - api.Command.vault_retrieve( + self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password) except errors.NotFound: @@ -959,7 +958,7 @@ class vault_archive(LDAPRetrieve): (file, filename) = tempfile.mkstemp() os.close(file) try: - api.Command.vault_transport_cert(out=unicode(filename)) + self.api.Command.vault_transport_cert(out=unicode(filename)) transport_cert_der = nss.read_der_from_file(filename, True) nss_transport_cert = nss.Certificate(transport_cert_der) @@ -981,7 +980,7 @@ class vault_archive(LDAPRetrieve): options['nonce'] = unicode(base64.b64encode(nonce)) vault_data = {} - vault_data[u'data'] = unicode(data) + vault_data[u'data'] = unicode(base64.b64encode(data)) if encrypted_key: vault_data[u'encrypted_key'] = unicode(base64.b64encode(encrypted_key)) @@ -1012,7 +1011,7 @@ class 
vault_archive(LDAPRetrieve): principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - user_dn = self.api.Object['user'].get_dn(username) + user_dn = self.api.Object.user.get_dn(username) if user_dn not in owners and user_dn not in members: raise errors.ACIError(info=_("Insufficient access to vault '%s'.") % vault_id) @@ -1020,12 +1019,12 @@ class vault_archive(LDAPRetrieve): entry_attrs['vault_id'] = vault_id # connect to KRA - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # deactivate existing vault record in KRA response = kra_client.keys.list_keys( @@ -1062,9 +1061,9 @@ class vault_retrieve(LDAPRetrieve): __doc__ = _('Retrieve a data from a vault.') takes_options = LDAPRetrieve.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), Flag('show_text?', doc=_('Show text data'), @@ -1122,13 +1121,13 @@ class vault_retrieve(LDAPRetrieve): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -1179,7 +1178,7 @@ class vault_retrieve(LDAPRetrieve): (file, filename) = tempfile.mkstemp() os.close(file) try: - api.Command.vault_transport_cert(out=unicode(filename)) + self.api.Command.vault_transport_cert(out=unicode(filename)) transport_cert_der = nss.read_der_from_file(filename, True) nss_transport_cert = nss.Certificate(transport_cert_der) @@ -1209,7 +1208,7 @@ class vault_retrieve(LDAPRetrieve): nonce_iv=nonce) vault_data = json.loads(json_vault_data) - data = str(vault_data[u'data']) + data = base64.b64decode(str(vault_data[u'data'])) encrypted_key = None @@ -1343,7 +1342,7 @@ class vault_retrieve(LDAPRetrieve): principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - user_dn = self.api.Object['user'].get_dn(username) + user_dn = self.api.Object.user.get_dn(username) if user_dn not in owners and user_dn not in members: raise errors.ACIError(info=_("Insufficient access to vault '%s'.") % vault_id) @@ -1353,12 +1352,12 @@ class vault_retrieve(LDAPRetrieve): wrapped_session_key = base64.b64decode(options['session_key']) # connect to KRA - kra_client = api.Backend.kra.get_client() + kra_client = self.api.Backend.kra.get_client() kra_account = pki.account.AccountClient(kra_client.connection) kra_account.login() - client_key_id = self.api.Object.vault.get_kra_id(vault_id) + client_key_id = self.obj.get_kra_id(vault_id) # find vault record in KRA response = kra_client.keys.list_keys( @@ -1388,9 +1387,9 @@ class vault_add_owner(LDAPAddMember): __doc__ = _('Add owners to a vault.') takes_options = LDAPAddMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1410,9 +1409,9 @@ class vault_remove_owner(LDAPRemoveMember): __doc__ = 
_('Remove owners from a vault.') takes_options = LDAPRemoveMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1432,9 +1431,9 @@ class vault_add_member(LDAPAddMember): __doc__ = _('Add members to a vault.') takes_options = LDAPAddMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) @@ -1451,9 +1450,9 @@ class vault_remove_member(LDAPRemoveMember): __doc__ = _('Remove members from a vault.') takes_options = LDAPRemoveMember.takes_options + ( - Str('container?', - cli_name='container', - doc=_('Container'), + Str('container_id?', + cli_name='container_id', + doc=_('Container ID'), ), ) diff --git a/ipalib/plugins/vaultcontainer.py b/ipalib/plugins/vaultcontainer.py index 881bbea7886e06356489cd49cc32f2a6a6460a5e..27cb6fff3479335943bae59340c9afc773dfc004 100644 --- a/ipalib/plugins/vaultcontainer.py +++ b/ipalib/plugins/vaultcontainer.py @@ -40,7 +40,7 @@ EXAMPLES: ipa vaultcontainer-find """) + _(""" List top-level vault containers: - ipa vaultcontainer-find --parent / + ipa vaultcontainer-find --parent-id / """) + _(""" Add a vault container: ipa vaultcontainer-add MyContainer @@ -75,7 +75,6 @@ class vaultcontainer(LDAPObject): Vault container object. """) - base_dn = DN(api.env.container_vault, api.env.basedn) object_name = _('vault container') object_name_plural = _('vault containers') @@ -109,19 +108,11 @@ class vaultcontainer(LDAPObject): pattern_errmsg='may only include letters, numbers, _, ., and -', maxlength=255, ), - Str('parent?', - cli_name='parent', - label=_('Parent'), - doc=_('Parent container'), - flags=('virtual_attribute'), - pattern='^[a-zA-Z0-9_.-/]+$', - pattern_errmsg='may only include letters, numbers, _, ., -, and /', - ), Str('container_id?', cli_name='container_id', label=_('Container ID'), doc=_('Container ID'), - flags=('virtual_attribute'), + flags={'no_option', 'virtual_attribute'}, ), Str('description?', cli_name='desc', @@ -131,22 +122,19 @@ class vaultcontainer(LDAPObject): ) def get_dn(self, *keys, **options): - __doc__ = _(""" + """ Generates vault container DN from container ID. - """) + """ # get container ID from parameters - name = None - if keys: - name = keys[0] - - parent_id = api.Object.vaultcontainer.normalize_id(options.get('parent')) + name = keys[-1] + parent_id = self.normalize_id(options.get('parent_id')) container_id = parent_id if name: container_id = parent_id + name + u'/' - dn = self.base_dn + dn = self.api.Object.vault.base_dn # for each name in the ID, prepend the base DN for name in container_id.split(u'/'): @@ -156,30 +144,26 @@ class vaultcontainer(LDAPObject): return dn def get_id(self, dn): - __doc__ = _(""" + """ Generates container ID from container DN. - """) + """ # make sure the DN is a container DN - if not dn.endswith(self.base_dn): + if not dn.endswith(self.api.Object.vault.base_dn): raise ValueError('Invalid container DN: %s' % dn) # construct container ID from the bottom up id = u'/' - while len(dn) > len(self.base_dn): - - rdn = dn[0] + for rdn in dn[:-len(self.api.Object.vault.base_dn)]: name = rdn['cn'] id = u'/' + name + id - dn = DN(*dn[1:]) - return id def get_private_id(self, username=None): - __doc__ = _(""" + """ Returns user's private container ID (i.e. /users//). 
- """) + """ if not username: principal = getattr(context, 'principal') @@ -188,9 +172,9 @@ class vaultcontainer(LDAPObject): return u'/users/' + username + u'/' def normalize_id(self, id): - __doc__ = _(""" + """ Normalizes container ID. - """) + """ # if ID is empty, return user's private container ID if not id: @@ -208,13 +192,13 @@ class vaultcontainer(LDAPObject): return self.get_private_id() + id def split_id(self, id): - __doc__ = _(""" + """ Splits a normalized container ID into (container name, parent ID) tuple. - """) + """ # handle root ID if id == u'/': - return (None, None) + return (None, u'/') # split ID into parent ID, container name, and empty string parts = id.rsplit(u'/', 2) @@ -223,23 +207,10 @@ class vaultcontainer(LDAPObject): return (parts[1], parts[0] + u'/') def create_entry(self, dn, owner=None): - __doc__ = _(""" + """ Creates a container entry and its parents. - """) + """ - # if entry already exists, return - try: - self.backend.get_entry(dn) - return - - except errors.NotFound: - pass - - # otherwise, create parent entry first - parent_dn = DN(*dn[1:]) - self.create_entry(parent_dn, owner=owner) - - # then create the entry itself rdn = dn[0] entry = self.backend.make_entry( dn, @@ -248,6 +219,20 @@ class vaultcontainer(LDAPObject): 'cn': rdn['cn'], 'owner': owner }) + + # if entry can be added return + try: + self.backend.add_entry(entry) + return + + except errors.NotFound: + pass + + # otherwise, create parent entry first + parent_dn = DN(*dn[1:]) + self.create_entry(parent_dn, owner=owner) + + # then create the entry again self.backend.add_entry(entry) @@ -255,18 +240,33 @@ class vaultcontainer(LDAPObject): class vaultcontainer_add(LDAPCreate): __doc__ = _('Create a new vault container.') + takes_options = LDAPCreate.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = _('Added vault container "%(value)s"') def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) + parent_id = self.obj.normalize_id(options.get('parent_id')) + + # set owner principal = getattr(context, 'principal') (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) + owner_dn = self.api.Object.user.get_dn(username) entry_attrs['owner'] = owner_dn - parent_dn = DN(*dn[1:]) - self.obj.create_entry(parent_dn, owner=owner_dn) + # parent is user's private container, create parent + if parent_id == self.obj.get_private_id(): + try: + self.obj.create_entry( + DN(*dn[1:]), owner=owner_dn) + except errors.DuplicateEntry: + pass return dn @@ -277,17 +277,26 @@ class vaultcontainer_add(LDAPCreate): return dn + def handle_not_found(self, *args, **options): + + parent_id = self.obj.normalize_id(options.get('parent_id')) + + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': parent_id, + 'oname': self.obj.object_name, + } + ) + @register() class vaultcontainer_del(LDAPDelete): __doc__ = _('Delete a vault container.') - msg_summary = _('Deleted vault container "%(value)s"') - takes_options = LDAPDelete.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), Flag('force?', doc=_('Force deletion'), @@ -295,42 +304,33 @@ class vaultcontainer_del(LDAPDelete): ), ) - def delete_entry(self, pkey, *keys, **options): - __doc__ = _(""" - Overwrites the base method to control deleting subtree with force option. 
- """) + msg_summary = _('Deleted vault container "%(value)s"') - ldap = self.obj.backend - nkeys = keys[:-1] + (pkey, ) - dn = self.obj.get_dn(*nkeys, **options) + def pre_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) - for callback in self.get_callbacks('pre'): - dn = callback(self, ldap, dn, *nkeys, **options) - assert isinstance(dn, DN) - try: - self._exc_wrapper(nkeys, options, ldap.delete_entry)(dn) + ldap.get_entries(dn, scope=ldap.SCOPE_ONELEVEL, attrs_list=[]) except errors.NotFound: - self.obj.handle_not_found(*nkeys) - except errors.NotAllowedOnNonLeaf: - # this entry is not a leaf entry - # if forced, delete all child nodes - if options.get('force'): - self.delete_subtree(dn, *nkeys, **options) - else: - raise + pass + else: + if not options.get('force', False): + raise errors.NotAllowedOnNonLeaf() - for callback in self.get_callbacks('post'): - result = callback(self, ldap, dn, *nkeys, **options) - - return result + return dn @register() class vaultcontainer_find(LDAPSearch): __doc__ = _('Search for vault containers.') + takes_options = LDAPSearch.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = ngettext( '%(count)d vault container matched', '%(count)d vault containers matched', 0 ) @@ -338,14 +338,9 @@ class vaultcontainer_find(LDAPSearch): def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - principal = getattr(context, 'principal') - (username, realm) = split_principal(principal) - owner_dn = self.api.Object['user'].get_dn(username) - - parent_id = self.obj.normalize_id(options.get('parent')) - base_dn = self.obj.get_dn(parent=parent_id) - - self.obj.create_entry(base_dn, owner=owner_dn) + parent_id = self.obj.normalize_id(options.get('parent_id')) + (name, grandparent_id) = self.obj.split_id(parent_id) + base_dn = self.obj.get_dn(name, parent_id=grandparent_id) return (filter, base_dn, scope) @@ -356,11 +351,34 @@ class vaultcontainer_find(LDAPSearch): return truncated + def handle_not_found(self, *args, **options): + + parent_id = self.obj.normalize_id(options.get('parent_id')) + + # parent is user's private container, ignore + if parent_id == self.obj.get_private_id(): + return + + # otherwise, raise an error + raise errors.NotFound( + reason=self.obj.parent_not_found_msg % { + 'parent': parent_id, + 'oname': self.obj.object_name, + } + ) + @register() class vaultcontainer_mod(LDAPUpdate): __doc__ = _('Modify a vault container.') + takes_options = LDAPUpdate.takes_options + ( + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), + ), + ) + msg_summary = _('Modified vault container "%(value)s"') def post_callback(self, ldap, dn, entry_attrs, *keys, **options): @@ -376,9 +394,9 @@ class vaultcontainer_show(LDAPRetrieve): __doc__ = _('Display information about a vault container.') takes_options = LDAPRetrieve.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -395,9 +413,9 @@ class vaultcontainer_add_owner(LDAPAddMember): __doc__ = _('Add owners to a vault container.') takes_options = LDAPAddMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -417,9 +435,9 @@ class vaultcontainer_remove_owner(LDAPRemoveMember): __doc__ = _('Remove owners from a vault container.') takes_options = 
LDAPRemoveMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -439,9 +457,9 @@ class vaultcontainer_add_member(LDAPAddMember): __doc__ = _('Add members to a vault container.') takes_options = LDAPAddMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) @@ -459,9 +477,9 @@ class vaultcontainer_remove_member(LDAPRemoveMember): takes_options = LDAPRemoveMember.takes_options + ( - Str('parent?', - cli_name='parent', - doc=_('Parent container'), + Str('parent_id?', + cli_name='parent_id', + doc=_('Parent ID'), ), ) diff --git a/ipalib/plugins/vaultsecret.py b/ipalib/plugins/vaultsecret.py index b0896155a5af13013f13f2b3ae586da2a8832c30..7f44f0816b571dac718fa02edfd187fe8666565e 100644 --- a/ipalib/plugins/vaultsecret.py +++ b/ipalib/plugins/vaultsecret.py @@ -93,7 +93,7 @@ class vaultsecret(LDAPObject): Bytes('data?', cli_name='data', label=_('Data'), - doc=_('Base-64 encoded binary secret data'), + doc=_('Binary secret data'), ), ) @@ -103,8 +103,8 @@ class vaultsecret_add(LDAPRetrieve): __doc__ = _('Add a new vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('description?', @@ -113,7 +113,7 @@ class vaultsecret_add(LDAPRetrieve): ), Bytes('data?', cli_name='data', - doc=_('Base-64 encoded binary secret data'), + doc=_('Binary secret data'), ), Str('text?', cli_name='text', @@ -147,13 +147,13 @@ class vaultsecret_add(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -256,9 +256,9 @@ class vaultsecret_add(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -276,18 +276,14 @@ class vaultsecret_add(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -315,9 +311,9 @@ class vaultsecret_add(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -338,8 +334,8 @@ class vaultsecret_del(LDAPRetrieve): __doc__ = _('Delete a vault secret.') takes_options = ( - Str('container?', - 
cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('password?', @@ -366,13 +362,13 @@ class vaultsecret_del(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -463,9 +459,9 @@ class vaultsecret_del(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -496,9 +492,9 @@ class vaultsecret_del(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -515,13 +511,11 @@ class vaultsecret_del(LDAPRetrieve): @register() class vaultsecret_find(LDAPSearch): - __doc__ = _(""" - Search for vault secrets. - """) + __doc__ = _('Search for vault secrets.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('password?', @@ -545,13 +539,13 @@ class vaultsecret_find(LDAPSearch): def forward(self, *args, **options): vault_name = args[0] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -641,9 +635,9 @@ class vaultsecret_find(LDAPSearch): raise errors.ValidationError(name='vault_type', error=_('Invalid vault type')) - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -678,8 +672,8 @@ class vaultsecret_mod(LDAPRetrieve): __doc__ = _('Modify a vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Str('description?', @@ -722,13 +716,13 @@ class vaultsecret_mod(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -830,18 +824,14 @@ class vaultsecret_mod(LDAPRetrieve): # get data if data: - if text: - raise errors.ValidationError(name='text', - error=_('Input data already specified')) - - if input_file: - raise 
errors.ValidationError(name='input_file', - error=_('Input data already specified')) + if text or input_file: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) elif text: if input_file: - raise errors.ValidationError(name='input_file', - error=_('Input data already specified')) + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) data = text.encode('utf-8') @@ -853,9 +843,9 @@ class vaultsecret_mod(LDAPRetrieve): pass # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) @@ -889,9 +879,9 @@ class vaultsecret_mod(LDAPRetrieve): # rearchive secrets vault_data = json.dumps(json_data) - response = api.Command.vault_archive( + response = self.api.Command.vault_archive( vault_name, - container=container_id, + container_id=container_id, data=vault_data, password=password) @@ -912,8 +902,8 @@ class vaultsecret_show(LDAPRetrieve): __doc__ = _('Display information about a vault secret.') takes_options = ( - Str('container?', - cli_name='container', + Str('container_id?', + cli_name='container_id', doc=_('Container ID'), ), Flag('show_text?', @@ -956,13 +946,13 @@ class vaultsecret_show(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = api.Object.vaultcontainer.normalize_id(options.get('container')) + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) vault_type = 'standard' salt = None # retrieve vault info - response = api.Command.vault_show(vault_name, container=container_id) + response = self.api.Command.vault_show(vault_name, container_id=container_id) result = response['result'] if result.has_key('ipavaulttype'): @@ -1062,9 +1052,9 @@ class vaultsecret_show(LDAPRetrieve): error=_('Invalid vault type')) # retrieve secrets - response = api.Command.vault_retrieve( + response = self.api.Command.vault_retrieve( vault_name, - container=container_id, + container_id=container_id, password=password, private_key=private_key) diff --git a/ipatests/test_xmlrpc/test_vault_plugin.py b/ipatests/test_xmlrpc/test_vault_plugin.py index 04f56ecafd3a141b39a0e32f1725258582b6b141..f3a280b40d5b6972e8755f63d46013cadaa68334 100644 --- a/ipatests/test_xmlrpc/test_vault_plugin.py +++ b/ipatests/test_xmlrpc/test_vault_plugin.py @@ -150,7 +150,7 @@ MszdQuc/FTSJ2DYsIwx7qq5c8mtargOjWRgZU22IgY9PKeIcitQjqw== -----END RSA PRIVATE KEY----- """ -class test_vault(Declarative): +class test_vault_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), diff --git a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py index b99f9f68b8a480908602503d726205eeec36c2d2..22e13769df19b40cd39a144df662bae8bbf53d9e 100644 --- a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py @@ -32,12 +32,12 @@ base_container = u'base_container' child_container = u'child_container' grandchild_container = u'grandchild_container' -class test_vault(Declarative): +class test_vaultcontainer_plugin(Declarative): cleanup_commands = [ ('vaultcontainer_del', [private_container], {'continue': True}), - ('vaultcontainer_del', [shared_container], {'parent': u'/shared/', 'continue': True}), - ('vaultcontainer_del', [service_container], {'parent': u'/services/', 'continue': True}), + ('vaultcontainer_del', [shared_container], 
{'parent_id': u'/shared/', 'continue': True}), + ('vaultcontainer_del', [service_container], {'parent_id': u'/services/', 'continue': True}), ('vaultcontainer_del', [base_container], {'force': True, 'continue': True}), ] @@ -49,7 +49,7 @@ class test_vault(Declarative): 'vaultcontainer_find', [], { - 'parent': u'/', + 'parent_id': u'/', }, ), 'expected': { @@ -202,7 +202,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -224,7 +224,7 @@ class test_vault(Declarative): 'vaultcontainer_find', [], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -247,7 +247,7 @@ class test_vault(Declarative): 'vaultcontainer_show', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -268,7 +268,7 @@ class test_vault(Declarative): 'vaultcontainer_mod', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', 'description': u'shared container', }, ), @@ -290,7 +290,7 @@ class test_vault(Declarative): 'vaultcontainer_del', [shared_container], { - 'parent': u'/shared/', + 'parent_id': u'/shared/', }, ), 'expected': { @@ -308,7 +308,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [service_container], { - 'parent': u'/services/', + 'parent_id': u'/services/', }, ), 'expected': { @@ -349,7 +349,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [child_container], { - 'parent': base_container, + 'parent_id': base_container, }, ), 'expected': { @@ -371,7 +371,7 @@ class test_vault(Declarative): 'vaultcontainer_add', [grandchild_container], { - 'parent': base_container + u'/' + child_container, + 'parent_id': base_container + u'/' + child_container, }, ), 'expected': { diff --git a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py index 68f0fb0d7085be512939e397ea49abbcf3ca3c7b..cbfd231633e7c3c000e57d52d85b83f44f71df3c 100644 --- a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py @@ -29,7 +29,7 @@ test_vaultsecret = u'test_vaultsecret' binary_data = '\x01\x02\x03\x04' text_data = u'secret' -class test_vault(Declarative): +class test_vaultsecret_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), -- 1.9.0 From nkinder at redhat.com Thu Mar 12 20:43:19 2015 From: nkinder at redhat.com (Nathan Kinder) Date: Thu, 12 Mar 2015 13:43:19 -0700 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <54F75C29.4010105@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> Message-ID: <5501FA67.3020504@redhat.com> On 03/04/2015 11:25 AM, Nathan Kinder wrote: > > > On 03/04/2015 10:58 AM, Martin Basti wrote: >> On 04/03/15 19:56, Nathan Kinder wrote: >>> >>> On 03/04/2015 10:41 AM, Rob Crittenden wrote: >>>> Nathan Kinder wrote: >>>>> >>>>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>>>> >>>>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>>>> Nathan Kinder wrote: >>>>>>>> >>>>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>>>> 
>>>>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>> >>>>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. >>>>>>>>>>>>>>> Now that we >>>>>>>>>>>>>>> use ntpd to perform the time sync, the client install can >>>>>>>>>>>>>>> end up hanging >>>>>>>>>>>>>>> forever when the server is not reachable (firewall issues, >>>>>>>>>>>>>>> etc.). These >>>>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we >>>>>>>>>>>>>>> run ntpd to >>>>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The one potentially contentious issue is that this >>>>>>>>>>>>>>> introduces a new >>>>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have >>>>>>>>>>>>>>> timeout support >>>>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I >>>>>>>>>>>>>>> don't see it >>>>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged >>>>>>>>>>>>>>> there. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>> -NGK >>>>>>>>>>>>>> Thanks for Patches. For the second patch, I would really >>>>>>>>>>>>>> prefer to avoid new >>>>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. >>>>>>>>>>>>>> Maybe we could use >>>>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>>>> >>>>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>>>>> >>>>>>>>>>>>> I don't like having to add an additional dependency either, >>>>>>>>>>>>> but the >>>>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 >>>>>>>>>>>>> module (which >>>>>>>>>>>>> is really just a backport of the normal subprocess module >>>>>>>>>>>>> from Python >>>>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding >>>>>>>>>>>>> some sort of >>>>>>>>>>>>> a thread that has to kill the spawned subprocess seems more >>>>>>>>>>>>> risky (see >>>>>>>>>>>>> the discussion about a race condition in the stackoverflow >>>>>>>>>>>>> thread >>>>>>>>>>>>> above). That said, I'm sure the thread/poll method can be >>>>>>>>>>>>> made to work >>>>>>>>>>>>> if you and others feel strongly that this is a better >>>>>>>>>>>>> approach than >>>>>>>>>>>>> adding a new dependency. >>>>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>>>>>> existence (or it's possible that I forgot about it). Thanks >>>>>>>>>>> for the >>>>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>>>> >>>>>>>>>>> Do you think that there is value in leaving the timeout >>>>>>>>>>> capability >>>>>>>>>>> centrally in ipautil.run()? We only need it for the call to >>>>>>>>>>> 'ntpd' >>>>>>>>>>> right now, but there might be a need for using a timeout for >>>>>>>>>>> other >>>>>>>>>>> commands in the future. 
The alternative is to just modify >>>>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() >>>>>>>>>>> alone. >>>>>>>>>> I think it would require a lot of research. One of the programs >>>>>>>>>> spawned >>>>>>>>>> by this is pkicreate which could take quite some time, and >>>>>>>>>> spawning a >>>>>>>>>> clone in particular. >>>>>>>>>> >>>>>>>>>> It is definitely an interesting idea but I think it is safest >>>>>>>>>> for now to >>>>>>>>>> limit it to just NTP for now. >>>>>>>>> What I meant was that we would have an optional keyword "timeout" >>>>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>>>>>> subprocess.Popen() to run the specified command just like we do >>>>>>>>> today. >>>>>>>>> We would only actually pass the timeout parameter to >>>>>>>>> ipautil.run() in >>>>>>>>> synconce_ntp() for now, so no other commands would have a >>>>>>>>> timeout in >>>>>>>>> effect. The capability would be available for other commands >>>>>>>>> this way >>>>>>>>> though. >>>>>>>>> >>>>>>>>> Let me propose a patch with this implementation, and if you >>>>>>>>> don't like >>>>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>>>> synconce_ntp(). >>>>>>>> An updated patch 0002 is attached that uses the approach >>>>>>>> mentioned above. >>>>>>> Looks good. Not to nitpick to death but... >>>>>>> >>>>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>>>> "/usr/bin/timeout" and reference that instead? It's for portability. >>>>>> Sure. I was wondering if we should do something around a full path. >>>>>> >>>>>>> And a question. I'm impatient. Should there be a notice that it will >>>>>>> timeout after n seconds somewhere so people like me don't ^C after 2 >>>>>>> seconds? Or is that just overkill and I need to learn patience? >>>>>> Probably both. :) There's always going to be someone out there who >>>>>> will >>>>>> do ctrl-C, so I think printing out a notice is a good idea. I'll >>>>>> add this. >>>>>> >>>>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode >>>>>>> == 15? >>>>>> After some reading, it seems that '==' should be used. Small integers >>>>>> work with 'is', but '==' is the consistent way that equality of >>>>>> integers >>>>>> should be checked. I'll modify this. >>>>> Another updated patch 0002 is attached that addresses Rob's review >>>>> comments. >>>>> >>>>> Thanks, >>>>> -NGK >>>>> >>>> LGTM. Does someone else have time to test this? >>>> >>>> I also don't know if there is a policy on placement of new items in >>>> paths.py. Things are all over the place and some have BIN_ prefix and >>>> some don't. >>> Yeah, I noticed this too. It didn't look like there was any >>> organization. >>> >>> -NGK >>>> rob >>>> >>> _______________________________________________ >>> Freeipa-devel mailing list >>> Freeipa-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/freeipa-devel >> paths are (should be) ordered alphabetically by value, not by variable >> name. >> I see there are last 2 lines ordered incorrectly, but please try to keep >> order as I wrote above. > > OK. A new patch is attached that puts the path to 'timeout' in the > proper location. Fixing up the order of other paths is unrelated, and > should be handled in a separate patch. Bump. Does anyone else have any review feedback on this? 
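For readers skimming the thread, the approach under review boils down to wrapping the one-shot ntpd run in coreutils /usr/bin/timeout so that ipa-client-install can never block forever on an unreachable NTP server. A minimal standalone sketch of that wrapping idea follows; it is only an illustration, not the actual patch, whose option names, timeout value and exit-code handling may differ:

    import subprocess

    def run_with_timeout(args, timeout_seconds, env=None):
        # Prefix the command with coreutils 'timeout' so it cannot hang
        # forever.  GNU timeout exits with status 124 when the limit is hit.
        wrapped = ['/usr/bin/timeout', str(timeout_seconds)] + list(args)
        proc = subprocess.Popen(wrapped, stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE, env=env)
        stdout, stderr = proc.communicate()
        if proc.returncode == 124:
            raise RuntimeError('%s did not finish within %s seconds'
                               % (args[0], timeout_seconds))
        return stdout, stderr, proc.returncode

    # Example: a command sleeping past the limit raises RuntimeError.
    # run_with_timeout(['/bin/sleep', '30'], 5)

As described above, the patch itself exposes this as an optional 'timeout' keyword of ipautil.run() and only passes it from synconce_ntp(), so no other callers change behaviour.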
It would be nice to get this in soon since we currently have the potential of just hanging when installing clients on F21+. Thanks, -NGK > > -NGK > >> >> Martin^2 >> >> >> >> _______________________________________________ >> Freeipa-devel mailing list >> Freeipa-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/freeipa-devel From mkosek at redhat.com Fri Mar 13 09:13:36 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Mar 2015 10:13:36 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <5501FA67.3020504@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> <5501FA67.3020504@redhat.com> Message-ID: <5502AA40.2090108@redhat.com> On 03/12/2015 09:43 PM, Nathan Kinder wrote: > > > On 03/04/2015 11:25 AM, Nathan Kinder wrote: >> >> >> On 03/04/2015 10:58 AM, Martin Basti wrote: >>> On 04/03/15 19:56, Nathan Kinder wrote: >>>> >>>> On 03/04/2015 10:41 AM, Rob Crittenden wrote: >>>>> Nathan Kinder wrote: >>>>>> >>>>>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>>>>> >>>>>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>>>>> Nathan Kinder wrote: >>>>>>>>> >>>>>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>>>>> >>>>>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>> >>>>>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. >>>>>>>>>>>>>>>> Now that we >>>>>>>>>>>>>>>> use ntpd to perform the time sync, the client install can >>>>>>>>>>>>>>>> end up hanging >>>>>>>>>>>>>>>> forever when the server is not reachable (firewall issues, >>>>>>>>>>>>>>>> etc.). These >>>>>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we >>>>>>>>>>>>>>>> run ntpd to >>>>>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The one potentially contentious issue is that this >>>>>>>>>>>>>>>> introduces a new >>>>>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have >>>>>>>>>>>>>>>> timeout support >>>>>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I >>>>>>>>>>>>>>>> don't see it >>>>>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged >>>>>>>>>>>>>>>> there. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>> -NGK >>>>>>>>>>>>>>> Thanks for Patches. For the second patch, I would really >>>>>>>>>>>>>>> prefer to avoid new >>>>>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. 
>>>>>>>>>>>>>>> Maybe we could use >>>>>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>>>>>> >>>>>>>>>>>>>> I don't like having to add an additional dependency either, >>>>>>>>>>>>>> but the >>>>>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 >>>>>>>>>>>>>> module (which >>>>>>>>>>>>>> is really just a backport of the normal subprocess module >>>>>>>>>>>>>> from Python >>>>>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding >>>>>>>>>>>>>> some sort of >>>>>>>>>>>>>> a thread that has to kill the spawned subprocess seems more >>>>>>>>>>>>>> risky (see >>>>>>>>>>>>>> the discussion about a race condition in the stackoverflow >>>>>>>>>>>>>> thread >>>>>>>>>>>>>> above). That said, I'm sure the thread/poll method can be >>>>>>>>>>>>>> made to work >>>>>>>>>>>>>> if you and others feel strongly that this is a better >>>>>>>>>>>>>> approach than >>>>>>>>>>>>>> adding a new dependency. >>>>>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of it's >>>>>>>>>>>> existence (or it's possible that I forgot about it). Thanks >>>>>>>>>>>> for the >>>>>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>>>>> >>>>>>>>>>>> Do you think that there is value in leaving the timeout >>>>>>>>>>>> capability >>>>>>>>>>>> centrally in ipautil.run()? We only need it for the call to >>>>>>>>>>>> 'ntpd' >>>>>>>>>>>> right now, but there might be a need for using a timeout for >>>>>>>>>>>> other >>>>>>>>>>>> commands in the future. The alternative is to just modify >>>>>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() >>>>>>>>>>>> alone. >>>>>>>>>>> I think it would require a lot of research. One of the programs >>>>>>>>>>> spawned >>>>>>>>>>> by this is pkicreate which could take quite some time, and >>>>>>>>>>> spawning a >>>>>>>>>>> clone in particular. >>>>>>>>>>> >>>>>>>>>>> It is definitely an interesting idea but I think it is safest >>>>>>>>>>> for now to >>>>>>>>>>> limit it to just NTP for now. >>>>>>>>>> What I meant was that we would have an optional keyword "timeout" >>>>>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>>>>> subprocess32 approach. If a timeout is not passed in, we would use >>>>>>>>>> subprocess.Popen() to run the specified command just like we do >>>>>>>>>> today. >>>>>>>>>> We would only actually pass the timeout parameter to >>>>>>>>>> ipautil.run() in >>>>>>>>>> synconce_ntp() for now, so no other commands would have a >>>>>>>>>> timeout in >>>>>>>>>> effect. The capability would be available for other commands >>>>>>>>>> this way >>>>>>>>>> though. >>>>>>>>>> >>>>>>>>>> Let me propose a patch with this implementation, and if you >>>>>>>>>> don't like >>>>>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>>>>> synconce_ntp(). >>>>>>>>> An updated patch 0002 is attached that uses the approach >>>>>>>>> mentioned above. >>>>>>>> Looks good. Not to nitpick to death but... >>>>>>>> >>>>>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>>>>> "/usr/bin/timeout" and reference that instead? It's for portability. >>>>>>> Sure. I was wondering if we should do something around a full path. >>>>>>> >>>>>>>> And a question. I'm impatient. 
Should there be a notice that it will >>>>>>>> timeout after n seconds somewhere so people like me don't ^C after 2 >>>>>>>> seconds? Or is that just overkill and I need to learn patience? >>>>>>> Probably both. :) There's always going to be someone out there who >>>>>>> will >>>>>>> do ctrl-C, so I think printing out a notice is a good idea. I'll >>>>>>> add this. >>>>>>> >>>>>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode >>>>>>>> == 15? >>>>>>> After some reading, it seems that '==' should be used. Small integers >>>>>>> work with 'is', but '==' is the consistent way that equality of >>>>>>> integers >>>>>>> should be checked. I'll modify this. >>>>>> Another updated patch 0002 is attached that addresses Rob's review >>>>>> comments. >>>>>> >>>>>> Thanks, >>>>>> -NGK >>>>>> >>>>> LGTM. Does someone else have time to test this? >>>>> >>>>> I also don't know if there is a policy on placement of new items in >>>>> paths.py. Things are all over the place and some have BIN_ prefix and >>>>> some don't. >>>> Yeah, I noticed this too. It didn't look like there was any >>>> organization. >>>> >>>> -NGK >>>>> rob >>>>> >>>> _______________________________________________ >>>> Freeipa-devel mailing list >>>> Freeipa-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>> paths are (should be) ordered alphabetically by value, not by variable >>> name. >>> I see there are last 2 lines ordered incorrectly, but please try to keep >>> order as I wrote above. >> >> OK. A new patch is attached that puts the path to 'timeout' in the >> proper location. Fixing up the order of other paths is unrelated, and >> should be handled in a separate patch. > > Bump. Does anyone else have any review feedback on this? It would be > nice to get this in soon since we currently have the potential of just > hanging when installing clients on F21+. I am ok with the approach, if the patches work. I agree it would be nice to have this fixed in F21 and F22 soon. Martin, could you please take a look? This one should be easy to test. Thanks, Martin From mkosek at redhat.com Fri Mar 13 09:18:03 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Mar 2015 10:18:03 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5501BA89.2070507@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> Message-ID: <5502AB4B.7040701@redhat.com> On 03/12/2015 05:10 PM, Rob Crittenden wrote: > Petr Spacek wrote: >> On 12.3.2015 16:23, Rob Crittenden wrote: >>> David Kupka wrote: >>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>> Upgrade plugins which modify LDAP data directly should not be executed >>>>> in --test mode. >>>>> >>>>> This patch is a workaround, to ensure update with --test option will not >>>>> modify any LDAP data. >>>>> >>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>> >>>>> Patch attached. >>>>> >>>>> >>>>> >>>> >>>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>>> when there is '--test' option but it is a good first step. >>>> Works for me, ACK. >>>> >>> >>> I agree that this breaks the spirit of --test and think it should be >>> fixed before committing. >> >> Considering how often is the option is used ... I do not think that this >> requires 'proper' fix now. It was broken for *years* so this patch is a huge >> improvement and IMHO should be commited in current form. 
We can re-visit it >> later on, open a ticket :-) >> > > No. There is no rush for this, at least not for the promise of a future > fix that will never come. I checked the code and to me, the proper fix looks like instrumenting ldap.update_entry calls in upgrade plugins with if options.test: log message else do the update right? I see just couple places that would need to be updated: $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) ipaserver/install/plugins/fix_replica_agreements.py: repl.conn.update_entry(replica) ipaserver/install/plugins/fix_replica_agreements.py: repl.conn.update_entry(replica) ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) ipaserver/install/plugins/update_managed_permissions.py: ldap.update_entry(base_entry) ipaserver/install/plugins/update_managed_permissions.py: ldap.update_entry(entry) ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) ipaserver/install/plugins/update_referint.py: ldap.update_entry(entry) ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) ipaserver/install/plugins/update_uniqueness.py: ldap.update_entry(uid_uniqueness_plugin) So from my POV, very quick fix. In that case, I would also prefer a fix now than a ticket that would never be done. Martin From pspacek at redhat.com Fri Mar 13 09:29:06 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 13 Mar 2015 10:29:06 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502AB4B.7040701@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> Message-ID: <5502ADE2.3020600@redhat.com> On 13.3.2015 10:18, Martin Kosek wrote: > On 03/12/2015 05:10 PM, Rob Crittenden wrote: >> Petr Spacek wrote: >>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>> David Kupka wrote: >>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>> Upgrade plugins which modify LDAP data directly should not be executed >>>>>> in --test mode. >>>>>> >>>>>> This patch is a workaround, to ensure update with --test option will not >>>>>> modify any LDAP data. >>>>>> >>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>> >>>>>> Patch attached. >>>>>> >>>>>> >>>>>> >>>>> >>>>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>>>> when there is '--test' option but it is a good first step. >>>>> Works for me, ACK. >>>>> >>>> >>>> I agree that this breaks the spirit of --test and think it should be >>>> fixed before committing. >>> >>> Considering how often is the option is used ... I do not think that this >>> requires 'proper' fix now. It was broken for *years* so this patch is a huge >>> improvement and IMHO should be commited in current form. We can re-visit it >>> later on, open a ticket :-) >>> >> >> No. There is no rush for this, at least not for the promise of a future >> fix that will never come. > > I checked the code and to me, the proper fix looks like instrumenting > ldap.update_entry calls in upgrade plugins with > > if options.test: > log message > else > do the update > > right? 
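Spelled out, that guard might look roughly like this inside one of the affected plugins; note that 'self.log' and the exact way the --test flag reaches the plugin's options are assumptions here, not details taken from the patch:

    # hypothetical snippet for an update plugin; 'ldap', 'entry' and
    # 'options' are whatever the plugin already has in scope
    if options.get('test', False):
        self.log.info('--test: would have updated %s', entry.dn)
    else:
        ldap.update_entry(entry)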
I see just couple places that would need to be updated: > > $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins > ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) > ipaserver/install/plugins/fix_replica_agreements.py: > repl.conn.update_entry(replica) > ipaserver/install/plugins/fix_replica_agreements.py: > repl.conn.update_entry(replica) > ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_managed_permissions.py: > ldap.update_entry(base_entry) > ipaserver/install/plugins/update_managed_permissions.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_referint.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_uniqueness.py: > ldap.update_entry(uid_uniqueness_plugin) > > > So from my POV, very quick fix. In that case, I would also prefer a fix now > than a ticket that would never be done. I really dislike this approach because I consider it flawed by design. Plugin author has to think about it all the time and do not forget to add if otherwise ... too bad. I can see two 'safer' ways to do that: - LDAP transactions :-) - 'mock_writes=True' option in LDAP backend which would print modlists instead of applying them (and return success to the caller). Both cases eliminate the need to scatter 'ifs' all over update plugins and do not add risk of forgetting about one of them when adding/changing plugin code. -- Petr^2 Spacek From abokovoy at redhat.com Fri Mar 13 09:42:00 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 13 Mar 2015 11:42:00 +0200 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502ADE2.3020600@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> Message-ID: <20150313094200.GH3878@redhat.com> On Fri, 13 Mar 2015, Petr Spacek wrote: >On 13.3.2015 10:18, Martin Kosek wrote: >> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>> Petr Spacek wrote: >>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>> David Kupka wrote: >>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>> Upgrade plugins which modify LDAP data directly should not be executed >>>>>>> in --test mode. >>>>>>> >>>>>>> This patch is a workaround, to ensure update with --test option will not >>>>>>> modify any LDAP data. >>>>>>> >>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>> >>>>>>> Patch attached. >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>>>>> when there is '--test' option but it is a good first step. >>>>>> Works for me, ACK. >>>>>> >>>>> >>>>> I agree that this breaks the spirit of --test and think it should be >>>>> fixed before committing. >>>> >>>> Considering how often is the option is used ... I do not think that this >>>> requires 'proper' fix now. It was broken for *years* so this patch is a huge >>>> improvement and IMHO should be commited in current form. We can re-visit it >>>> later on, open a ticket :-) >>>> >>> >>> No. There is no rush for this, at least not for the promise of a future >>> fix that will never come. 
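To make the mock-writes idea above concrete: a dry-run wrapper around the LDAP connection could log would-be changes and delegate everything else to the real backend. This is only a sketch; the method names merely mirror the ipaldap-style calls quoted in this thread and are not the real backend interface:

    class DryRunConnection(object):
        # Logs modifications instead of applying them, so update plugins
        # need no per-call '--test' checks of their own.
        def __init__(self, real_conn, log):
            self._conn = real_conn
            self._log = log

        def add_entry(self, entry):
            self._log.info('--test: would add %s', entry.dn)

        def update_entry(self, entry):
            self._log.info('--test: would modify %s', entry.dn)

        def delete_entry(self, entry_or_dn):
            self._log.info('--test: would delete %s', entry_or_dn)

        def __getattr__(self, name):
            # reads (get_entry, get_entries, ...) go to the real connection
            return getattr(self._conn, name)

As the rest of the thread points out, neither this nor per-plugin checks can satisfy a plugin that needs to read back its own writes, which is where the remark about LDAP transactions comes in.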
>> >> I checked the code and to me, the proper fix looks like instrumenting >> ldap.update_entry calls in upgrade plugins with >> >> if options.test: >> log message >> else >> do the update >> >> right? I see just couple places that would need to be updated: >> >> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >> ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) >> ipaserver/install/plugins/fix_replica_agreements.py: >> repl.conn.update_entry(replica) >> ipaserver/install/plugins/fix_replica_agreements.py: >> repl.conn.update_entry(replica) >> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_managed_permissions.py: >> ldap.update_entry(base_entry) >> ipaserver/install/plugins/update_managed_permissions.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_referint.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >> ipaserver/install/plugins/update_uniqueness.py: >> ldap.update_entry(uid_uniqueness_plugin) >> >> >> So from my POV, very quick fix. In that case, I would also prefer a fix now >> than a ticket that would never be done. > >I really dislike this approach because I consider it flawed by design. Plugin >author has to think about it all the time and do not forget to add if >otherwise ... too bad. > >I can see two 'safer' ways to do that: >- LDAP transactions :-) >- 'mock_writes=True' option in LDAP backend which would print modlists instead >of applying them (and return success to the caller). > >Both cases eliminate the need to scatter 'ifs' all over update plugins and do >not add risk of forgetting about one of them when adding/changing plugin code. I like idea about mock_writes=True. However, I think we still need to make sure plugin writers rely on options.test value to see that we aren't going to write the data. The reason for it is that we might get into configurations where plugins would be doing updates based on earlier performed tasks. If task configuration is not going to be written, its status will never be correct and plugin would get an error. 
-- / Alexander Bokovoy From mbabinsk at redhat.com Fri Mar 13 09:58:30 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Fri, 13 Mar 2015 10:58:30 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55002A13.8010706@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> Message-ID: <5502B4C6.8090803@redhat.com> On 03/11/2015 12:42 PM, Petr Spacek wrote: >> diff --git a/ipaserver/rpcserver.py b/ipaserver/rpcserver.py >> index d6bc955b9d9910a24eec5df1def579310eb54786..36f16908ac8477d9982bfee613b77576853054eb 100644 >> --- a/ipaserver/rpcserver.py >> +++ b/ipaserver/rpcserver.py >> @@ -958,8 +958,8 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> >> def kinit(self, user, realm, password, ccache_name): >> # get http service ccache as an armor for FAST to enable OTP authentication >> - armor_principal = krb5_format_service_principal_name( >> - 'HTTP', self.api.env.host, realm) >> + armor_principal = str(krb5_format_service_principal_name( >> + 'HTTP', self.api.env.host, realm)) >> keytab = paths.IPA_KEYTAB >> armor_name = "%sA_%s" % (krbccache_prefix, user) >> armor_path = os.path.join(krbccache_dir, armor_name) >> @@ -967,34 +967,33 @@ class login_password(Backend, KerberosSession, HTTP_Status): >> self.debug('Obtaining armor ccache: principal=%s keytab=%s ccache=%s', >> armor_principal, keytab, armor_path) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, '-kt', keytab, armor_principal], >> - env={'KRB5CCNAME': armor_path}, raiseonerr=False) >> - >> - if returncode != 0: >> - raise CCacheError() >> + try: >> + ipautil.kinit_keytab(paths.IPA_KEYTAB, armor_path, >> + armor_principal) >> + except StandardError, e: >> + raise CCacheError(str(e)) >> >> # Format the user as a kerberos principal >> principal = krb5_format_principal_name(user, realm) >> >> - (stdout, stderr, returncode) = ipautil.run( >> - [paths.KINIT, principal, '-T', armor_path], >> - env={'KRB5CCNAME': ccache_name, 'LC_ALL': 'C'}, >> - stdin=password, raiseonerr=False) >> + try: >> + ipautil.kinit_password(principal, password, >> + env={'KRB5CCNAME': ccache_name, >> + 'LC_ALL': 'C'}, >> + armor_ccache=armor_path) >> >> - self.debug('kinit: principal=%s returncode=%s, stderr="%s"', >> - principal, returncode, stderr) >> - >> - self.debug('Cleanup the armor ccache') >> - ipautil.run( >> - [paths.KDESTROY, '-A', '-c', armor_path], >> - env={'KRB5CCNAME': armor_path}, >> - raiseonerr=False) >> - >> - if returncode != 0: >> - if stderr.strip() == 'kinit: Cannot read password while getting initial credentials': >> - raise PasswordExpired(principal=principal, message=unicode(stderr)) >> - raise InvalidSessionPassword(principal=principal, message=unicode(stderr)) >> + self.debug('Cleanup the armor ccache') >> + ipautil.run( >> + [paths.KDESTROY, '-A', '-c', armor_path], >> + env={'KRB5CCNAME': armor_path}, >> + raiseonerr=False) >> + except ipautil.CalledProcessError, e: >> + if ('kinit: Cannot read password while ' >> + 'getting initial credentials') in e.output: > I know it is not your code but please make sure it will work with non-English > LANG or LC_MESSAGE. I have done some research about the way the environmental variables line LC_MESSAGE, LC_ALL, etc. work, (https://www.gnu.org/software/gettext/manual/gettext.html#Locale-Names and the following section). 
It turns out that the CalledProcessError handling code will work also in non-english environment, because in the following code snippet, kinit is actually run using LC_ALL=C environment variable effectively disabling any localization and forcing the program to print out default (i.e. english) error messages. >> + try: >> + ipautil.kinit_password(principal, password, >> + env={'KRB5CCNAME': ccache_name, >> + 'LC_ALL': 'C'}, >> + armor_ccache=armor_path) Thus when handling the exception, we may be sure that any error message returned by above is in default locale. This greatly simplifies the logic deciding what exception to raise further based on the error message. >> + except ipautil.CalledProcessError, e: >> + if ('kinit: Cannot read password while ' >> + 'getting initial credentials') in e.output: A very clever move by Nathaniel (according to git blame) to circumvent the locale-specific behavior when it's actualy not needed. -- Martin^3 Babinsky From pspacek at redhat.com Fri Mar 13 10:00:48 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 13 Mar 2015 11:00:48 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <20150313094200.GH3878@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> Message-ID: <5502B550.9040105@redhat.com> On 13.3.2015 10:42, Alexander Bokovoy wrote: > On Fri, 13 Mar 2015, Petr Spacek wrote: >> On 13.3.2015 10:18, Martin Kosek wrote: >>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>> Petr Spacek wrote: >>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>> David Kupka wrote: >>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>> Upgrade plugins which modify LDAP data directly should not be executed >>>>>>>> in --test mode. >>>>>>>> >>>>>>>> This patch is a workaround, to ensure update with --test option will not >>>>>>>> modify any LDAP data. >>>>>>>> >>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>> >>>>>>>> Patch attached. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>>>>>> when there is '--test' option but it is a good first step. >>>>>>> Works for me, ACK. >>>>>>> >>>>>> >>>>>> I agree that this breaks the spirit of --test and think it should be >>>>>> fixed before committing. >>>>> >>>>> Considering how often is the option is used ... I do not think that this >>>>> requires 'proper' fix now. It was broken for *years* so this patch is a huge >>>>> improvement and IMHO should be commited in current form. We can re-visit it >>>>> later on, open a ticket :-) >>>>> >>>> >>>> No. There is no rush for this, at least not for the promise of a future >>>> fix that will never come. >>> >>> I checked the code and to me, the proper fix looks like instrumenting >>> ldap.update_entry calls in upgrade plugins with >>> >>> if options.test: >>> log message >>> else >>> do the update >>> >>> right? 
I see just couple places that would need to be updated: >>> >>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>> ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) >>> ipaserver/install/plugins/fix_replica_agreements.py: >>> repl.conn.update_entry(replica) >>> ipaserver/install/plugins/fix_replica_agreements.py: >>> repl.conn.update_entry(replica) >>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>> ipaserver/install/plugins/update_managed_permissions.py: >>> ldap.update_entry(base_entry) >>> ipaserver/install/plugins/update_managed_permissions.py: >>> ldap.update_entry(entry) >>> ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) >>> ipaserver/install/plugins/update_referint.py: >>> ldap.update_entry(entry) >>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>> ipaserver/install/plugins/update_uniqueness.py: >>> ldap.update_entry(uid_uniqueness_plugin) >>> >>> >>> So from my POV, very quick fix. In that case, I would also prefer a fix now >>> than a ticket that would never be done. >> >> I really dislike this approach because I consider it flawed by design. Plugin >> author has to think about it all the time and do not forget to add if >> otherwise ... too bad. >> >> I can see two 'safer' ways to do that: >> - LDAP transactions :-) >> - 'mock_writes=True' option in LDAP backend which would print modlists instead >> of applying them (and return success to the caller). >> >> Both cases eliminate the need to scatter 'ifs' all over update plugins and do >> not add risk of forgetting about one of them when adding/changing plugin code. > I like idea about mock_writes=True. However, I think we still need to make > sure plugin writers rely on options.test value to see that we aren't > going to write the data. The reason for it is that we might get into > configurations where plugins would be doing updates based on earlier > performed tasks. If task configuration is not going to be written, its > status will never be correct and plugin would get an error. That is exactly why I mentioned LDAP transactions. There is no other way how to test complex plugins which actually read own writes (except mocking the whole LDAP interface somewhere). -- Petr^2 Spacek From mkosek at redhat.com Fri Mar 13 10:17:21 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Mar 2015 11:17:21 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502B550.9040105@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> Message-ID: <5502B931.7090708@redhat.com> On 03/13/2015 11:00 AM, Petr Spacek wrote: > On 13.3.2015 10:42, Alexander Bokovoy wrote: >> On Fri, 13 Mar 2015, Petr Spacek wrote: >>> On 13.3.2015 10:18, Martin Kosek wrote: >>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>> Petr Spacek wrote: >>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>> David Kupka wrote: >>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>> Upgrade plugins which modify LDAP data directly should not be executed >>>>>>>>> in --test mode. 
>>>>>>>>> >>>>>>>>> This patch is a workaround, to ensure update with --test option will not >>>>>>>>> modify any LDAP data. >>>>>>>>> >>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>> >>>>>>>>> Patch attached. >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not just skip >>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>> Works for me, ACK. >>>>>>>> >>>>>>> >>>>>>> I agree that this breaks the spirit of --test and think it should be >>>>>>> fixed before committing. >>>>>> >>>>>> Considering how often is the option is used ... I do not think that this >>>>>> requires 'proper' fix now. It was broken for *years* so this patch is a huge >>>>>> improvement and IMHO should be commited in current form. We can re-visit it >>>>>> later on, open a ticket :-) >>>>>> >>>>> >>>>> No. There is no rush for this, at least not for the promise of a future >>>>> fix that will never come. >>>> >>>> I checked the code and to me, the proper fix looks like instrumenting >>>> ldap.update_entry calls in upgrade plugins with >>>> >>>> if options.test: >>>> log message >>>> else >>>> do the update >>>> >>>> right? I see just couple places that would need to be updated: >>>> >>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>> ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) >>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>> repl.conn.update_entry(replica) >>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>> repl.conn.update_entry(replica) >>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_managed_permissions.py: >>>> ldap.update_entry(base_entry) >>>> ipaserver/install/plugins/update_managed_permissions.py: >>>> ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_referint.py: >>>> ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>> ipaserver/install/plugins/update_uniqueness.py: >>>> ldap.update_entry(uid_uniqueness_plugin) >>>> >>>> >>>> So from my POV, very quick fix. In that case, I would also prefer a fix now >>>> than a ticket that would never be done. >>> >>> I really dislike this approach because I consider it flawed by design. Plugin >>> author has to think about it all the time and do not forget to add if >>> otherwise ... too bad. >>> >>> I can see two 'safer' ways to do that: >>> - LDAP transactions :-) >>> - 'mock_writes=True' option in LDAP backend which would print modlists instead >>> of applying them (and return success to the caller). >>> >>> Both cases eliminate the need to scatter 'ifs' all over update plugins and do >>> not add risk of forgetting about one of them when adding/changing plugin code. >> I like idea about mock_writes=True. However, I think we still need to make >> sure plugin writers rely on options.test value to see that we aren't >> going to write the data. The reason for it is that we might get into >> configurations where plugins would be doing updates based on earlier >> performed tasks. If task configuration is not going to be written, its >> status will never be correct and plugin would get an error. > > That is exactly why I mentioned LDAP transactions. 
There is no other way how > to test complex plugins which actually read own writes (except mocking the > whole LDAP interface somewhere). While this may be a good idea long term, I do not think any of us is considering implementing the LDAP transaction support within work on this refactoring. So in this thread, let us focus on how to fix options.test mid-term. I currently see 2 proposed ways: - Making the plugins aware of options.test - Make ldap2 write operations only print the update and not do it. Although thinking of this approach, I think it may make some plugins like DNS loop forever. IIRC, at least DNS upgrade plugin have loop - search for all unfixed DNS zones - fix them with ldap update - was the search truncated, i.e. are there more zones to update? if yes, go back to start From jcholast at redhat.com Fri Mar 13 10:34:33 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 13 Mar 2015 11:34:33 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502B931.7090708@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> Message-ID: <5502BD39.5020607@redhat.com> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): > On 03/13/2015 11:00 AM, Petr Spacek wrote: >> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>> Petr Spacek wrote: >>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>> David Kupka wrote: >>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>>>>>> executed >>>>>>>>>> in --test mode. >>>>>>>>>> >>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>> option will not >>>>>>>>>> modify any LDAP data. >>>>>>>>>> >>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>> >>>>>>>>>> Patch attached. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>> just skip >>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>> Works for me, ACK. >>>>>>>>> >>>>>>>> >>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>> should be >>>>>>>> fixed before committing. >>>>>>> >>>>>>> Considering how often is the option is used ... I do not think >>>>>>> that this >>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>> patch is a huge >>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>> re-visit it >>>>>>> later on, open a ticket :-) >>>>>>> >>>>>> >>>>>> No. There is no rush for this, at least not for the promise of a >>>>>> future >>>>>> fix that will never come. >>>>> >>>>> I checked the code and to me, the proper fix looks like instrumenting >>>>> ldap.update_entry calls in upgrade plugins with >>>>> >>>>> if options.test: >>>>> log message >>>>> else >>>>> do the update >>>>> >>>>> right? 
I see just couple places that would need to be updated: >>>>> >>>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>>> ipaserver/install/plugins/dns.py: >>>>> ldap.update_entry(container_entry) >>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>> repl.conn.update_entry(replica) >>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>> repl.conn.update_entry(replica) >>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>> ldap.update_entry(base_entry) >>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>> ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_pacs.py: >>>>> ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_referint.py: >>>>> ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>> >>>>> >>>>> So from my POV, very quick fix. In that case, I would also prefer a >>>>> fix now >>>>> than a ticket that would never be done. >>>> >>>> I really dislike this approach because I consider it flawed by >>>> design. Plugin >>>> author has to think about it all the time and do not forget to add if >>>> otherwise ... too bad. >>>> >>>> I can see two 'safer' ways to do that: >>>> - LDAP transactions :-) >>>> - 'mock_writes=True' option in LDAP backend which would print >>>> modlists instead >>>> of applying them (and return success to the caller). >>>> >>>> Both cases eliminate the need to scatter 'ifs' all over update >>>> plugins and do >>>> not add risk of forgetting about one of them when adding/changing >>>> plugin code. >>> I like idea about mock_writes=True. However, I think we still need to >>> make >>> sure plugin writers rely on options.test value to see that we aren't >>> going to write the data. The reason for it is that we might get into >>> configurations where plugins would be doing updates based on earlier >>> performed tasks. If task configuration is not going to be written, its >>> status will never be correct and plugin would get an error. >> >> That is exactly why I mentioned LDAP transactions. There is no other >> way how >> to test complex plugins which actually read own writes (except mocking >> the >> whole LDAP interface somewhere). > > While this may be a good idea long term, I do not think any of us is > considering implementing the LDAP transaction support within work on > this refactoring. > > So in this thread, let us focus on how to fix options.test mid-term. I > currently see 2 proposed ways: > - Making the plugins aware of options.test > - Make ldap2 write operations only print the update and not do it. > Although thinking of this approach, I think it may make some plugins > like DNS loop forever. IIRC, at least DNS upgrade plugin have loop > - search for all unfixed DNS zones > - fix them with ldap update > - was the search truncated, i.e. are there more zones to update? > if yes, go back to start - Make the plugins not call {add,update,delete}_entry themselves but rather return the updates like they should. This is what the ticket () requests and what should be done to make --test work for them. 
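For illustration, a plugin written in the "return the updates" style could look roughly like the sketch below; the execute() signature, the (restart, updates) return value and the DN are assumptions made for this example, not the actual Updater interface:

REFERINT_DN = 'cn=referential integrity postoperation,cn=plugins,cn=config'

class update_referint_sketch(object):
    def execute(self, ldap, **options):
        # compute the changes, but do not write them here
        updates = []
        entry = ldap.get_entry(REFERINT_DN, ['nsslapd-pluginenabled'])
        if entry.get('nsslapd-pluginenabled') != ['on']:
            entry['nsslapd-pluginenabled'] = ['on']
            updates.append(entry)
        # caller decides: apply the entries, or with --test just print them
        return False, updates

The same plugin code then serves both modes: without --test the framework applies the returned entries, with --test it only reports them.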
-- Jan Cholasta From sbose at redhat.com Fri Mar 13 10:55:09 2015 From: sbose at redhat.com (Sumit Bose) Date: Fri, 13 Mar 2015 11:55:09 +0100 Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context In-Reply-To: <20150304173522.GT3271@p.redhat.com> References: <20150304173522.GT3271@p.redhat.com> Message-ID: <20150313105509.GJ13715@p.redhat.com> On Wed, Mar 04, 2015 at 06:35:22PM +0100, Sumit Bose wrote: > Hi, > > this patch series improves error reporting of the extdom plugin > especially on the client side. Currently there is only SSSD ticket > https://fedorahosted.org/sssd/ticket/2463 . Shall I create a > corresponding FreeIPA ticket as well? > > In the third patch I already added a handful of new error messages. > Suggestions for more messages are welcome. > > bye, > Sumit Rebased versions attached. bye, Sumit -------------- next part -------------- From e402a733322c68db0f808d3386a27e5fd9bc177b Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:52:10 +0100 Subject: [PATCH 137/139] extdom: add err_msg member to request context --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 5 ++++- 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index d4c851169ddadc869a59c53075f9fc7f33321085..421f6c6ea625aba2db7e9ffc84115b3647673699 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -116,6 +116,7 @@ struct extdom_req { gid_t gid; } posix_gid; } data; + char *err_msg; }; struct extdom_res { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 47bcb179f04e08c64d92f55809b84f2d59622344..c2fd42f13fca97587ddc4c12b560e590462f121b 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -356,6 +356,7 @@ void free_req_data(struct extdom_req *req) break; } + free(req->err_msg); free(req); } diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index dc7785877fc321ddaa5b6967d1c1b06cb454bbbf..708d0e4a2fc9da4f87a24a49c945587049f7280f 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -149,12 +149,15 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) rc = LDAP_SUCCESS; done: - free_req_data(req); + if (req->err_msg != NULL) { + err_msg = req->err_msg; + } if (err_msg != NULL) { LOG("%s", err_msg); } slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); ber_bvfree(ret_val); + free_req_data(req); return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; } -- 2.1.0 -------------- next part -------------- From 6a6a18313745e5e50b629009a74f7b0ad1975fe2 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:53:06 +0100 Subject: [PATCH 138/139] extdom: add add_err_msg() with test --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 43 ++++++++++++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 23 ++++++++++++ 3 files changed, 67 insertions(+) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h 
b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 421f6c6ea625aba2db7e9ffc84115b3647673699..0d5d55d2fb0ece95466b0225b145a4edeef18efa 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,4 +185,5 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index d5bacd7e8c9dc0a71eea70162406c7e5b67384ad..586b58b0fd4c7610e9cb4643b6dae04f9d22b8ab 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,6 +213,47 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +void extdom_req_setup(void **state) +{ + struct extdom_req *req; + + req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(req); + + *state = req; +} + +void extdom_req_teardown(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + + free_req_data(req); +} + +void test_set_err_msg(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + assert_null(req->err_msg); + + set_err_msg(NULL, NULL); + assert_null(req->err_msg); + + set_err_msg(req, NULL); + assert_null(req->err_msg); + + set_err_msg(req, "Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); + + set_err_msg(req, "2nd Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -220,6 +261,8 @@ int main(int argc, const char *argv[]) unit_test(test_getpwuid_r_wrapper), unit_test(test_getgrnam_r_wrapper), unit_test(test_getgrgid_r_wrapper), + unit_test_setup_teardown(test_set_err_msg, + extdom_req_setup, extdom_req_teardown), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index c2fd42f13fca97587ddc4c12b560e590462f121b..e05c005da4efcdbbee386f9e73ef3ef889e1a3c2 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -229,6 +229,29 @@ done: return ret; } +void set_err_msg(struct extdom_req *req, const char *format, ...) +{ + int ret; + va_list ap; + + if (req == NULL) { + return; + } + + if (format == NULL || req->err_msg != NULL) { + /* Do not override an existing error message. 
*/ + return; + } + va_start(ap, format); + + ret = vasprintf(&req->err_msg, format, ap); + if (ret == -1) { + req->err_msg = strdup("vasprintf failed.\n"); + } + + va_end(ap); +} + int parse_request_data(struct berval *req_val, struct extdom_req **_req) { BerElement *ber = NULL; -- 2.1.0 -------------- next part -------------- From 4cb2b8a3c5be5cdc1c2b057870e7c66e6f05076a Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 13:37:50 +0100 Subject: [PATCH 139/139] extdom: add selected error messages --- .../ipa-extdom-extop/ipa_extdom_common.c | 51 ++++++++++++++++------ 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index e05c005da4efcdbbee386f9e73ef3ef889e1a3c2..5e3c8b79f3e37cc7761101f71d14d0226d371ce0 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -300,26 +300,34 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) * } */ + req = calloc(sizeof(struct extdom_req), 1); + if (req == NULL) { + /* Since we return req even in the case of an error we make sure is is + * always safe to call free_req_data() on the returned data. */ + *_req = NULL; + return LDAP_OPERATIONS_ERROR; + } + + *_req = req; + if (req_val == NULL || req_val->bv_val == NULL || req_val->bv_len == 0) { + set_err_msg(req, "Missing request data"); return LDAP_PROTOCOL_ERROR; } ber = ber_init(req_val); if (ber == NULL) { + set_err_msg(req, "Cannot initialize BER struct"); return LDAP_PROTOCOL_ERROR; } tag = ber_scanf(ber, "{ee", &input_type, &request_type); if (tag == LBER_ERROR) { ber_free(ber, 1); + set_err_msg(req, "Cannot read input and request type"); return LDAP_PROTOCOL_ERROR; } - req = calloc(sizeof(struct extdom_req), 1); - if (req == NULL) { - return LDAP_OPERATIONS_ERROR; - } - req->input_type = input_type; req->request_type = request_type; @@ -343,17 +351,15 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) break; default: ber_free(ber, 1); - free(req); + set_err_msg(req, "Unknown input type"); return LDAP_PROTOCOL_ERROR; } ber_free(ber, 1); if (tag == LBER_ERROR) { - free(req); + set_err_msg(req, "Failed to decode BER data"); return LDAP_PROTOCOL_ERROR; } - *_req = req; - return LDAP_SUCCESS; } @@ -715,6 +721,7 @@ static int pack_ber_name(const char *domain_name, const char *name, } static int handle_uid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { @@ -738,6 +745,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by UID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -760,6 +768,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -785,6 +794,7 @@ done: } static int handle_gid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { @@ -807,6 +817,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = 
LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by GID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -829,6 +840,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -852,6 +864,7 @@ done: } static int handle_sid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *sid, struct berval **berval) { @@ -872,6 +885,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup name by SID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -879,6 +893,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, sep = strchr(fq_name, SSSD_DOMAIN_SEPARATOR); if (sep == NULL) { + set_err_msg(req, "Failed to split fully qualified name"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -886,6 +901,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, object_name = strndup(fq_name, (sep - fq_name)); domain_name = strdup(sep + 1); if (object_name == NULL || domain_name == NULL) { + set_err_msg(req, "Missing name or domain"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -918,6 +934,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed ot read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -950,6 +967,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -980,6 +998,7 @@ done: } static int handle_name_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) @@ -998,6 +1017,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, domain_name); if (ret == -1) { ret = LDAP_OPERATIONS_ERROR; + set_err_msg(req, "Failed to create fully qualified name"); fq_name = NULL; /* content is undefined according to asprintf(3) */ goto done; @@ -1009,6 +1029,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by name"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1028,6 +1049,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -1068,6 +1090,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to read original data"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1097,27 +1120,29 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(ctx, req->request_type, + 
ret = handle_uid_request(ctx, req, req->request_type, req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(ctx, req->request_type, + ret = handle_gid_request(ctx, req, req->request_type, req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req, req->request_type, req->data.sid, + berval); break; case INP_NAME: - ret = handle_name_request(ctx, req->request_type, + ret = handle_name_request(ctx, req, req->request_type, req->data.name.object_name, req->data.name.domain_name, berval); break; default: + set_err_msg(req, "Unknown input type"); ret = LDAP_PROTOCOL_ERROR; goto done; } -- 2.1.0 From pspacek at redhat.com Fri Mar 13 10:55:09 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 13 Mar 2015 11:55:09 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502BD39.5020607@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> Message-ID: <5502C20D.20501@redhat.com> On 13.3.2015 11:34, Jan Cholasta wrote: > Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>> Petr Spacek wrote: >>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>> David Kupka wrote: >>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>>>>>>> executed >>>>>>>>>>> in --test mode. >>>>>>>>>>> >>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>> option will not >>>>>>>>>>> modify any LDAP data. >>>>>>>>>>> >>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>> >>>>>>>>>>> Patch attached. >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>>> just skip >>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>> Works for me, ACK. >>>>>>>>>> >>>>>>>>> >>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>> should be >>>>>>>>> fixed before committing. >>>>>>>> >>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>> that this >>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>> patch is a huge >>>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>>> re-visit it >>>>>>>> later on, open a ticket :-) >>>>>>>> >>>>>>> >>>>>>> No. There is no rush for this, at least not for the promise of a >>>>>>> future >>>>>>> fix that will never come. >>>>>> >>>>>> I checked the code and to me, the proper fix looks like instrumenting >>>>>> ldap.update_entry calls in upgrade plugins with >>>>>> >>>>>> if options.test: >>>>>> log message >>>>>> else >>>>>> do the update >>>>>> >>>>>> right? 
I see just couple places that would need to be updated: >>>>>> >>>>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>>>> ipaserver/install/plugins/dns.py: >>>>>> ldap.update_entry(container_entry) >>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>> repl.conn.update_entry(replica) >>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>> repl.conn.update_entry(replica) >>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>> ldap.update_entry(base_entry) >>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>> ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>> ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_referint.py: >>>>>> ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>> >>>>>> >>>>>> So from my POV, very quick fix. In that case, I would also prefer a >>>>>> fix now >>>>>> than a ticket that would never be done. >>>>> >>>>> I really dislike this approach because I consider it flawed by >>>>> design. Plugin >>>>> author has to think about it all the time and do not forget to add if >>>>> otherwise ... too bad. >>>>> >>>>> I can see two 'safer' ways to do that: >>>>> - LDAP transactions :-) >>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>> modlists instead >>>>> of applying them (and return success to the caller). >>>>> >>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>> plugins and do >>>>> not add risk of forgetting about one of them when adding/changing >>>>> plugin code. >>>> I like idea about mock_writes=True. However, I think we still need to >>>> make >>>> sure plugin writers rely on options.test value to see that we aren't >>>> going to write the data. The reason for it is that we might get into >>>> configurations where plugins would be doing updates based on earlier >>>> performed tasks. If task configuration is not going to be written, its >>>> status will never be correct and plugin would get an error. >>> >>> That is exactly why I mentioned LDAP transactions. There is no other >>> way how >>> to test complex plugins which actually read own writes (except mocking >>> the >>> whole LDAP interface somewhere). >> >> While this may be a good idea long term, I do not think any of us is >> considering implementing the LDAP transaction support within work on >> this refactoring. >> >> So in this thread, let us focus on how to fix options.test mid-term. I >> currently see 2 proposed ways: >> - Making the plugins aware of options.test >> - Make ldap2 write operations only print the update and not do it. >> Although thinking of this approach, I think it may make some plugins >> like DNS loop forever. IIRC, at least DNS upgrade plugin have loop >> - search for all unfixed DNS zones >> - fix them with ldap update >> - was the search truncated, i.e. are there more zones to update? >> if yes, go back to start > > - Make the plugins not call {add,update,delete}_entry themselves but rather > return the updates like they should. This is what the ticket > () requests and what should be > done to make --test work for them. How do you propose to handle iterative updates like the DNS upgrade mentioned by Martin^1? 
Return set of updates along with boolean 'call me again'? Something else? -- Petr^2 Spacek From sbose at redhat.com Fri Mar 13 10:56:46 2015 From: sbose at redhat.com (Sumit Bose) Date: Fri, 13 Mar 2015 11:56:46 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150304174205.GU3271@p.redhat.com> References: <20150304174205.GU3271@p.redhat.com> Message-ID: <20150313105646.GK13715@p.redhat.com> On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > Hi, > > this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > which converts the check-based tests of the extdom plugin to cmocka. > > bye, > Sumit Rebased version attached. bye, Sumit -------------- next part -------------- From c801df101baa41146a06a70ff4075e308905577b Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 9 Feb 2015 18:12:01 +0100 Subject: [PATCH] extdom: migrate check-based test to cmocka Besides moving the existing tests to cmocka two new tests are added which were missing from the old tests. Related to https://fedorahosted.org/freeipa/ticket/4922 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 20 -- .../ipa-extdom-extop/ipa_extdom.h | 14 ++ .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 156 +++++++++++++++- .../ipa-extdom-extop/ipa_extdom_common.c | 28 +-- .../ipa-extdom-extop/ipa_extdom_tests.c | 203 --------------------- 5 files changed, 176 insertions(+), 245 deletions(-) delete mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index a1679812ef3c5de8c6e18433cbb991a99ad0b6c8..9c2fa1c6a5f95ba06b33c0a5b560939863a88f0e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -38,11 +38,6 @@ libipa_extdom_extop_la_LIBADD = \ TESTS = check_PROGRAMS = -if HAVE_CHECK -TESTS += extdom_tests -check_PROGRAMS += extdom_tests -endif - if HAVE_CMOCKA if HAVE_NSS_WRAPPER TESTS_ENVIRONMENT = . 
./test_data/test_setup.sh; @@ -51,21 +46,6 @@ check_PROGRAMS += extdom_cmocka_tests endif endif -extdom_tests_SOURCES = \ - ipa_extdom_tests.c \ - ipa_extdom_common.c \ - $(NULL) -extdom_tests_CFLAGS = $(CHECK_CFLAGS) -extdom_tests_LDFLAGS = \ - -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ - $(NULL) -extdom_tests_LDADD = \ - $(CHECK_LIBS) \ - $(LDAP_LIBS) \ - $(DIRSRV_LIBS) \ - $(SSSNSSIDMAP_LIBS) \ - $(NULL) - extdom_cmocka_tests_SOURCES = \ ipa_extdom_cmocka_tests.c \ ipa_extdom_common.c \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 0d5d55d2fb0ece95466b0225b145a4edeef18efa..65dd43ea35726db6231386a0fcbba9be1bd71412 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,5 +185,19 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +int pack_ber_sid(const char *sid, struct berval **berval); +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval); +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval); +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval); void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index 586b58b0fd4c7610e9cb4643b6dae04f9d22b8ab..42d588d08a96f8a26345f85aade9523e05f6f56e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,30 +213,46 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +struct test_data { + struct extdom_req *req; + struct ipa_extdom_ctx *ctx; +}; + void extdom_req_setup(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = calloc(sizeof(struct extdom_req), 1); - assert_non_null(req); + test_data = calloc(sizeof(struct test_data), 1); + assert_non_null(test_data); - *state = req; + test_data->req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(test_data->req); + + test_data->ctx = calloc(sizeof(struct ipa_extdom_ctx), 1); + assert_non_null(test_data->req); + + *state = test_data; } void extdom_req_teardown(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = (struct extdom_req *) *state; + test_data = (struct test_data *) *state; - free_req_data(req); + free_req_data(test_data->req); + free(test_data->ctx); + free(test_data); } void test_set_err_msg(void **state) { struct extdom_req *req; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + req = test_data->req; - req = (struct extdom_req *) *state; assert_null(req->err_msg); set_err_msg(NULL, NULL); @@ -254,6 +270,127 @@ void test_set_err_msg(void **state) assert_string_equal(req->err_msg, "Test [ABCD][1234]."); } +#define TEST_SID "S-1-2-3-4" +#define TEST_DOMAIN_NAME 
"DOMAIN" + +char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ + 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74}; +char res_uid[] = {0x30, 0x1c, 0x0a, 0x01, 0x03, 0x30, 0x17, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74, 0x02, 0x02, 0x30, 0x39, 0x02, 0x03, 0x00, 0xd4, 0x31}; +char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ + 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ + 0xd4, 0x31}; + +void test_encode(void **state) +{ + int ret; + struct berval *resp_val; + struct ipa_extdom_ctx *ctx; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + ctx = test_data->ctx; + + ctx->max_nss_buf_size = (128*1024*1024); + + ret = pack_ber_sid(TEST_SID, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_sid), resp_val->bv_len); + assert_memory_equal(res_sid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_name(TEST_DOMAIN_NAME, "test", &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_nam), resp_val->bv_len); + assert_memory_equal(res_nam, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_user(ctx, RESP_USER, TEST_DOMAIN_NAME, "test", 12345, 54321, + NULL, NULL, NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_uid), resp_val->bv_len); + assert_memory_equal(res_uid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_group(RESP_GROUP, TEST_DOMAIN_NAME, "test_group", 54321, + NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_gid), resp_val->bv_len); + assert_memory_equal(res_gid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); +} + +char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ + 0x53, 0x2d, 0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ + 0x74, 0x65, 0x73, 0x74}; +char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ + 0x30, 0x39}; +char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ + 0x00, 0xd4, 0x31}; + +void test_decode(void **state) +{ + struct berval req_val; + struct extdom_req *req; + int ret; + + req_val.bv_val = req_sid; + req_val.bv_len = sizeof(req_sid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_SID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.sid, "S-1-2-3-4"); + free_req_data(req); + + req_val.bv_val = req_nam; + req_val.bv_len = sizeof(req_nam); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_NAME); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.name.domain_name, "DOMAIN"); + assert_string_equal(req->data.name.object_name, "test"); + free_req_data(req); + + req_val.bv_val = req_uid; + req_val.bv_len = 
sizeof(req_uid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_UID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_uid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_uid.uid, 12345); + free_req_data(req); + + req_val.bv_val = req_gid; + req_val.bv_len = sizeof(req_gid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_GID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_gid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_gid.gid, 54321); + free_req_data(req); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -263,6 +400,9 @@ int main(int argc, const char *argv[]) unit_test(test_getgrgid_r_wrapper), unit_test_setup_teardown(test_set_err_msg, extdom_req_setup, extdom_req_teardown), + unit_test_setup_teardown(test_encode, + extdom_req_setup, extdom_req_teardown), + unit_test(test_decode), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 5e3c8b79f3e37cc7761101f71d14d0226d371ce0..fb6263187adbd3fbc0d44bc31fc9b985cb2f1d84 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -464,7 +464,7 @@ static int add_kv_list(BerElement *ber, struct sss_nss_kv *kv_list) return LDAP_SUCCESS; } -static int pack_ber_sid(const char *sid, struct berval **berval) +int pack_ber_sid(const char *sid, struct berval **berval) { BerElement *ber = NULL; int ret; @@ -491,13 +491,13 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(struct ipa_extdom_ctx *ctx, - enum response_types response_type, - const char *domain_name, const char *user_name, - uid_t uid, gid_t gid, - const char *gecos, const char *homedir, - const char *shell, struct sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -610,10 +610,10 @@ done: return ret; } -static int pack_ber_group(enum response_types response_type, - const char *domain_name, const char *group_name, - gid_t gid, char **members, struct sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -694,8 +694,8 @@ done: return ret; } -static int pack_ber_name(const char *domain_name, const char *name, - struct berval **berval) +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval) { BerElement *ber = NULL; int ret; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c deleted file mode 100644 index 1467e256619f827310408d558d48c580118d9a32..0000000000000000000000000000000000000000 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c +++ /dev/null 
@@ -1,203 +0,0 @@ -/** BEGIN COPYRIGHT BLOCK - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation, either version 3 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see . - * - * Additional permission under GPLv3 section 7: - * - * In the following paragraph, "GPL" means the GNU General Public - * License, version 3 or any later version, and "Non-GPL Code" means - * code that is governed neither by the GPL nor a license - * compatible with the GPL. - * - * You may link the code of this Program with Non-GPL Code and convey - * linked combinations including the two, provided that such Non-GPL - * Code only links to the code of this Program through those well - * defined interfaces identified in the file named EXCEPTION found in - * the source code files (the "Approved Interfaces"). The files of - * Non-GPL Code may instantiate templates or use macros or inline - * functions from the Approved Interfaces without causing the resulting - * work to be covered by the GPL. Only the copyright holders of this - * Program may make changes or additions to the list of Approved - * Interfaces. - * - * Authors: - * Sumit Bose - * - * Copyright (C) 2011 Red Hat, Inc. - * All rights reserved. - * END COPYRIGHT BLOCK **/ - -#include - -#include "ipa_extdom.h" -#include "util.h" - -char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ - 0x53, 0x2d, 0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ - 0x74, 0x65, 0x73, 0x74}; -char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ - 0x30, 0x39}; -char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ - 0x00, 0xd4, 0x31}; - -char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ - 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74}; -char res_uid[] = {0x30, 0x17, 0x0a, 0x01, 0x03, 0x30, 0x12, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74, 0x02, 0x02, 0x30, 0x39}; -char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ - 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ - 0xd4, 0x31}; - -#define TEST_SID "S-1-2-3-4" -#define TEST_DOMAIN_NAME "DOMAIN" - -START_TEST(test_encode) -{ - int ret; - struct extdom_res res; - struct berval *resp_val; - - res.response_type = RESP_SID; - res.data.sid = TEST_SID; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_sid) == resp_val->bv_len && - memcmp(res_sid, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER 
blob."); - ber_bvfree(resp_val); - - res.response_type = RESP_NAME; - res.data.name.domain_name = TEST_DOMAIN_NAME; - res.data.name.object_name = "test"; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_nam) == resp_val->bv_len && - memcmp(res_nam, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER blob."); - ber_bvfree(resp_val); -} -END_TEST - -START_TEST(test_decode) -{ - struct berval req_val; - struct extdom_req *req; - int ret; - - req_val.bv_val = req_sid; - req_val.bv_len = sizeof(req_sid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, "parse_request_data() failed."); - fail_unless(req->input_type == INP_SID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.sid, "S-1-2-3-4") == 0, - "parse_request_data() returned unexpected sid"); - free_req_data(req); - - req_val.bv_val = req_nam; - req_val.bv_len = sizeof(req_nam); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_NAME, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.name.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(strcmp(req->data.name.object_name, "test") == 0, - "parse_request_data() returned unexpected object name"); - free_req_data(req); - - req_val.bv_val = req_uid; - req_val.bv_len = sizeof(req_uid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_UID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_uid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_uid.uid == 12345, - "parse_request_data() returned unexpected uid [%d]", - req->data.posix_uid.uid); - free_req_data(req); - - req_val.bv_val = req_gid; - req_val.bv_len = sizeof(req_gid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_GID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_gid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_gid.gid == 54321, - "parse_request_data() returned unexpected gid [%d]", - req->data.posix_gid.gid); - free_req_data(req); -} -END_TEST - -Suite * ipa_extdom_suite(void) -{ - Suite *s = suite_create("IPA extdom"); - - TCase *tc_core = tcase_create("Core"); - tcase_add_test(tc_core, test_decode); - tcase_add_test(tc_core, test_encode); - /* TODO: add test for create_response() */ - suite_add_tcase(s, tc_core); - - return s; -} - -int main(void) -{ - int number_failed; - - Suite *s = ipa_extdom_suite (); - SRunner *sr = srunner_create (s); - srunner_run_all (sr, CK_VERBOSE); - number_failed = 
srunner_ntests_failed (sr); - srunner_free (sr); - - return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE; -} -- 2.1.0 From mbasti at redhat.com Fri Mar 13 11:01:31 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 13 Mar 2015 12:01:31 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502C20D.20501@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> <5502C20D.20501@redhat.com> Message-ID: <5502C38B.2000406@redhat.com> On 13/03/15 11:55, Petr Spacek wrote: > On 13.3.2015 11:34, Jan Cholasta wrote: >> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >>> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>>> Petr Spacek wrote: >>>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>>> David Kupka wrote: >>>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>>>>>>>> executed >>>>>>>>>>>> in --test mode. >>>>>>>>>>>> >>>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>>> option will not >>>>>>>>>>>> modify any LDAP data. >>>>>>>>>>>> >>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>>> >>>>>>>>>>>> Patch attached. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>>>> just skip >>>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>>> Works for me, ACK. >>>>>>>>>>> >>>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>>> should be >>>>>>>>>> fixed before committing. >>>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>>> that this >>>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>>> patch is a huge >>>>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>>>> re-visit it >>>>>>>>> later on, open a ticket :-) >>>>>>>>> >>>>>>>> No. There is no rush for this, at least not for the promise of a >>>>>>>> future >>>>>>>> fix that will never come. >>>>>>> I checked the code and to me, the proper fix looks like instrumenting >>>>>>> ldap.update_entry calls in upgrade plugins with >>>>>>> >>>>>>> if options.test: >>>>>>> log message >>>>>>> else >>>>>>> do the update >>>>>>> >>>>>>> right? 
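For illustration, the guard pattern sketched just above could look roughly
like this inside an update plugin. A minimal sketch only: 'options', 'ldap',
'entry' and the logger are stand-ins for whatever the real plugin already
has in scope, not the actual plugin code.

    def update_entry_checked(ldap, entry, options, log):
        # Apply an entry update unless --test (dry run) was requested.
        if options.test:
            # dry run: report what would change, do not touch LDAP
            log.info("Test mode: would update entry %s", entry.dn)
            return False
        ldap.update_entry(entry)
        return True

The trade-off discussed in the replies is that every plugin author has to
remember to go through such a guard instead of writing to LDAP directly.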
I see just couple places that would need to be updated: >>>>>>> >>>>>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>>>>> ipaserver/install/plugins/dns.py: >>>>>>> ldap.update_entry(container_entry) >>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>> repl.conn.update_entry(replica) >>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>> repl.conn.update_entry(replica) >>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>> ldap.update_entry(base_entry) >>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>> ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>>> ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_referint.py: >>>>>>> ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>>> >>>>>>> >>>>>>> So from my POV, very quick fix. In that case, I would also prefer a >>>>>>> fix now >>>>>>> than a ticket that would never be done. >>>>>> I really dislike this approach because I consider it flawed by >>>>>> design. Plugin >>>>>> author has to think about it all the time and do not forget to add if >>>>>> otherwise ... too bad. >>>>>> >>>>>> I can see two 'safer' ways to do that: >>>>>> - LDAP transactions :-) >>>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>>> modlists instead >>>>>> of applying them (and return success to the caller). >>>>>> >>>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>>> plugins and do >>>>>> not add risk of forgetting about one of them when adding/changing >>>>>> plugin code. >>>>> I like idea about mock_writes=True. However, I think we still need to >>>>> make >>>>> sure plugin writers rely on options.test value to see that we aren't >>>>> going to write the data. The reason for it is that we might get into >>>>> configurations where plugins would be doing updates based on earlier >>>>> performed tasks. If task configuration is not going to be written, its >>>>> status will never be correct and plugin would get an error. >>>> That is exactly why I mentioned LDAP transactions. There is no other >>>> way how >>>> to test complex plugins which actually read own writes (except mocking >>>> the >>>> whole LDAP interface somewhere). >>> While this may be a good idea long term, I do not think any of us is >>> considering implementing the LDAP transaction support within work on >>> this refactoring. >>> >>> So in this thread, let us focus on how to fix options.test mid-term. I >>> currently see 2 proposed ways: >>> - Making the plugins aware of options.test >>> - Make ldap2 write operations only print the update and not do it. >>> Although thinking of this approach, I think it may make some plugins >>> like DNS loop forever. IIRC, at least DNS upgrade plugin have loop >>> - search for all unfixed DNS zones >>> - fix them with ldap update >>> - was the search truncated, i.e. are there more zones to update? >>> if yes, go back to start >> - Make the plugins not call {add,update,delete}_entry themselves but rather >> return the updates like they should. This is what the ticket >> () requests and what should be >> done to make --test work for them. 
> How do you propose to handle iterative updates like the DNS upgrade mentioned > by Martin^1? Return set of updates along with boolean 'call me again'? > Something else? > So instead of DNS commands logic, which can be used in plugin, we should reimplement the dnzone commands in upgrade plugin, to get modlist? And keep watching it and do same modifications for upgrade plugin as are done in DNS plugin. Martin^2 -- Martin Basti From pspacek at redhat.com Fri Mar 13 11:08:23 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 13 Mar 2015 12:08:23 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502C38B.2000406@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> <5502C20D.20501@redhat.com> <5502C38B.2000406@redhat.com> Message-ID: <5502C527.9060506@redhat.com> On 13.3.2015 12:01, Martin Basti wrote: > On 13/03/15 11:55, Petr Spacek wrote: >> On 13.3.2015 11:34, Jan Cholasta wrote: >>> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >>>> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>>>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>>>> Petr Spacek wrote: >>>>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>>>> David Kupka wrote: >>>>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>>>>>>>>> executed >>>>>>>>>>>>> in --test mode. >>>>>>>>>>>>> >>>>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>>>> option will not >>>>>>>>>>>>> modify any LDAP data. >>>>>>>>>>>>> >>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>>>> >>>>>>>>>>>>> Patch attached. >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>>>>> just skip >>>>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>>>> Works for me, ACK. >>>>>>>>>>>> >>>>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>>>> should be >>>>>>>>>>> fixed before committing. >>>>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>>>> that this >>>>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>>>> patch is a huge >>>>>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>>>>> re-visit it >>>>>>>>>> later on, open a ticket :-) >>>>>>>>>> >>>>>>>>> No. There is no rush for this, at least not for the promise of a >>>>>>>>> future >>>>>>>>> fix that will never come. >>>>>>>> I checked the code and to me, the proper fix looks like instrumenting >>>>>>>> ldap.update_entry calls in upgrade plugins with >>>>>>>> >>>>>>>> if options.test: >>>>>>>> log message >>>>>>>> else >>>>>>>> do the update >>>>>>>> >>>>>>>> right? 
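The 'mock_writes' idea raised earlier in the thread -- an LDAP backend
switch that logs modlists and pretends success -- could be approximated by a
thin wrapper around the connection. Everything here is a hypothetical
sketch; the real ldap2 backend has no such wrapper and the method names are
only the common entry points mentioned in this thread:

    class DryRunConnection(object):
        """Forward reads to the real connection, swallow and log writes."""

        def __init__(self, conn, log):
            self._conn = conn
            self._log = log

        def __getattr__(self, name):
            # reads (get_entry, find_entries, ...) pass straight through
            return getattr(self._conn, name)

        def add_entry(self, entry):
            self._log.info("dry run: would add %s", entry.dn)

        def update_entry(self, entry):
            self._log.info("dry run: would update %s", entry.dn)

        def delete_entry(self, entry_or_dn):
            self._log.info("dry run: would delete %s", entry_or_dn)

This is exactly where the iterative plugins bite: a loop that searches for
unfixed entries, "fixes" them through a dry-run connection and searches
again would never terminate, because the writes never change what the next
search returns.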
I see just couple places that would need to be updated: >>>>>>>> >>>>>>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>>>>>> ipaserver/install/plugins/dns.py: >>>>>>>> ldap.update_entry(container_entry) >>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>> repl.conn.update_entry(replica) >>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>> repl.conn.update_entry(replica) >>>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>> ldap.update_entry(base_entry) >>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>> ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>>>> ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_referint.py: >>>>>>>> ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>>>> >>>>>>>> >>>>>>>> So from my POV, very quick fix. In that case, I would also prefer a >>>>>>>> fix now >>>>>>>> than a ticket that would never be done. >>>>>>> I really dislike this approach because I consider it flawed by >>>>>>> design. Plugin >>>>>>> author has to think about it all the time and do not forget to add if >>>>>>> otherwise ... too bad. >>>>>>> >>>>>>> I can see two 'safer' ways to do that: >>>>>>> - LDAP transactions :-) >>>>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>>>> modlists instead >>>>>>> of applying them (and return success to the caller). >>>>>>> >>>>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>>>> plugins and do >>>>>>> not add risk of forgetting about one of them when adding/changing >>>>>>> plugin code. >>>>>> I like idea about mock_writes=True. However, I think we still need to >>>>>> make >>>>>> sure plugin writers rely on options.test value to see that we aren't >>>>>> going to write the data. The reason for it is that we might get into >>>>>> configurations where plugins would be doing updates based on earlier >>>>>> performed tasks. If task configuration is not going to be written, its >>>>>> status will never be correct and plugin would get an error. >>>>> That is exactly why I mentioned LDAP transactions. There is no other >>>>> way how >>>>> to test complex plugins which actually read own writes (except mocking >>>>> the >>>>> whole LDAP interface somewhere). >>>> While this may be a good idea long term, I do not think any of us is >>>> considering implementing the LDAP transaction support within work on >>>> this refactoring. >>>> >>>> So in this thread, let us focus on how to fix options.test mid-term. I >>>> currently see 2 proposed ways: >>>> - Making the plugins aware of options.test >>>> - Make ldap2 write operations only print the update and not do it. >>>> Although thinking of this approach, I think it may make some plugins >>>> like DNS loop forever. IIRC, at least DNS upgrade plugin have loop >>>> - search for all unfixed DNS zones >>>> - fix them with ldap update >>>> - was the search truncated, i.e. are there more zones to update? >>>> if yes, go back to start >>> - Make the plugins not call {add,update,delete}_entry themselves but rather >>> return the updates like they should. 
This is what the ticket >>> () requests and what should be >>> done to make --test work for them. >> How do you propose to handle iterative updates like the DNS upgrade mentioned >> by Martin^1? Return set of updates along with boolean 'call me again'? >> Something else? >> > So instead of DNS commands logic, which can be used in plugin, we should > reimplement the dnzone commands in upgrade plugin, to get modlist? And keep > watching it and do same modifications for upgrade plugin as are done in DNS > plugin. Again, I think we are investing to much effort into an option which is almost never used. Let it die in Git history. After all, it can be revived at any time when needed or when we have proper support in DS. -- Petr^2 Spacek From jcholast at redhat.com Fri Mar 13 11:50:20 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Fri, 13 Mar 2015 12:50:20 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502C527.9060506@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> <5502C20D.20501@redhat.com> <5502C38B.2000406@redhat.com> <5502C527.9060506@redhat.com> Message-ID: <5502CEFC.5080408@redhat.com> Dne 13.3.2015 v 12:08 Petr Spacek napsal(a): > On 13.3.2015 12:01, Martin Basti wrote: >> On 13/03/15 11:55, Petr Spacek wrote: >>> On 13.3.2015 11:34, Jan Cholasta wrote: >>>> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >>>>> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>>>>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>>>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>>>>> Petr Spacek wrote: >>>>>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>>>>> David Kupka wrote: >>>>>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>>>>>>>>>> executed >>>>>>>>>>>>>> in --test mode. >>>>>>>>>>>>>> >>>>>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>>>>> option will not >>>>>>>>>>>>>> modify any LDAP data. >>>>>>>>>>>>>> >>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>>>>> >>>>>>>>>>>>>> Patch attached. >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>>>>>> just skip >>>>>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>>>>> Works for me, ACK. >>>>>>>>>>>>> >>>>>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>>>>> should be >>>>>>>>>>>> fixed before committing. >>>>>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>>>>> that this >>>>>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>>>>> patch is a huge >>>>>>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>>>>>> re-visit it >>>>>>>>>>> later on, open a ticket :-) >>>>>>>>>>> >>>>>>>>>> No. There is no rush for this, at least not for the promise of a >>>>>>>>>> future >>>>>>>>>> fix that will never come. 
>>>>>>>>> I checked the code and to me, the proper fix looks like instrumenting >>>>>>>>> ldap.update_entry calls in upgrade plugins with >>>>>>>>> >>>>>>>>> if options.test: >>>>>>>>> log message >>>>>>>>> else >>>>>>>>> do the update >>>>>>>>> >>>>>>>>> right? I see just couple places that would need to be updated: >>>>>>>>> >>>>>>>>> $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins >>>>>>>>> ipaserver/install/plugins/dns.py: >>>>>>>>> ldap.update_entry(container_entry) >>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>> ldap.update_entry(base_entry) >>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>> ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>>>>> ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_referint.py: >>>>>>>>> ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) >>>>>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>>>>> >>>>>>>>> >>>>>>>>> So from my POV, very quick fix. In that case, I would also prefer a >>>>>>>>> fix now >>>>>>>>> than a ticket that would never be done. >>>>>>>> I really dislike this approach because I consider it flawed by >>>>>>>> design. Plugin >>>>>>>> author has to think about it all the time and do not forget to add if >>>>>>>> otherwise ... too bad. >>>>>>>> >>>>>>>> I can see two 'safer' ways to do that: >>>>>>>> - LDAP transactions :-) >>>>>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>>>>> modlists instead >>>>>>>> of applying them (and return success to the caller). >>>>>>>> >>>>>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>>>>> plugins and do >>>>>>>> not add risk of forgetting about one of them when adding/changing >>>>>>>> plugin code. >>>>>>> I like idea about mock_writes=True. However, I think we still need to >>>>>>> make >>>>>>> sure plugin writers rely on options.test value to see that we aren't >>>>>>> going to write the data. The reason for it is that we might get into >>>>>>> configurations where plugins would be doing updates based on earlier >>>>>>> performed tasks. If task configuration is not going to be written, its >>>>>>> status will never be correct and plugin would get an error. >>>>>> That is exactly why I mentioned LDAP transactions. There is no other >>>>>> way how >>>>>> to test complex plugins which actually read own writes (except mocking >>>>>> the >>>>>> whole LDAP interface somewhere). >>>>> While this may be a good idea long term, I do not think any of us is >>>>> considering implementing the LDAP transaction support within work on >>>>> this refactoring. >>>>> >>>>> So in this thread, let us focus on how to fix options.test mid-term. I >>>>> currently see 2 proposed ways: >>>>> - Making the plugins aware of options.test >>>>> - Make ldap2 write operations only print the update and not do it. >>>>> Although thinking of this approach, I think it may make some plugins >>>>> like DNS loop forever. 
IIRC, at least DNS upgrade plugin have loop >>>>> - search for all unfixed DNS zones >>>>> - fix them with ldap update >>>>> - was the search truncated, i.e. are there more zones to update? >>>>> if yes, go back to start >>>> - Make the plugins not call {add,update,delete}_entry themselves but rather >>>> return the updates like they should. This is what the ticket >>>> () requests and what should be >>>> done to make --test work for them. >>> How do you propose to handle iterative updates like the DNS upgrade mentioned >>> by Martin^1? Return set of updates along with boolean 'call me again'? >>> Something else? >>> >> So instead of DNS commands logic, which can be used in plugin, we should >> reimplement the dnzone commands in upgrade plugin, to get modlist? And keep >> watching it and do same modifications for upgrade plugin as are done in DNS >> plugin. Yes, this is how it currently works and how it is done in other update plugins. For proper command support some serious changes will have to be done in the framework. > > Again, I think we are investing to much effort into an option which is almost > never used. Let it die in Git history. After all, it can be revived at any > time when needed or when we have proper support in DS. > +1 -- Jan Cholasta From mbasti at redhat.com Fri Mar 13 12:28:36 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 13 Mar 2015 13:28:36 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502AB4B.7040701@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> Message-ID: <5502D7F4.1050708@redhat.com> On 13/03/15 10:18, Martin Kosek wrote: > On 03/12/2015 05:10 PM, Rob Crittenden wrote: >> Petr Spacek wrote: >>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>> David Kupka wrote: >>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>> Upgrade plugins which modify LDAP data directly should not be >>>>>> executed >>>>>> in --test mode. >>>>>> >>>>>> This patch is a workaround, to ensure update with --test option >>>>>> will not >>>>>> modify any LDAP data. >>>>>> >>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>> >>>>>> Patch attached. >>>>>> >>>>>> >>>>>> >>>>> >>>>> Ideally we want to fix all plugins to dry-run the upgrade not just >>>>> skip >>>>> when there is '--test' option but it is a good first step. >>>>> Works for me, ACK. >>>>> >>>> >>>> I agree that this breaks the spirit of --test and think it should be >>>> fixed before committing. >>> >>> Considering how often is the option is used ... I do not think that >>> this >>> requires 'proper' fix now. It was broken for *years* so this patch >>> is a huge >>> improvement and IMHO should be commited in current form. We can >>> re-visit it >>> later on, open a ticket :-) >>> >> >> No. There is no rush for this, at least not for the promise of a future >> fix that will never come. > > I checked the code and to me, the proper fix looks like instrumenting > ldap.update_entry calls in upgrade plugins with > > if options.test: > log message > else > do the update > > right? 
I see just couple places that would need to be updated: > > $ git grep -E "(ldap|conn).update_entry" ipaserver/install/plugins > ipaserver/install/plugins/dns.py: ldap.update_entry(container_entry) > ipaserver/install/plugins/fix_replica_agreements.py: > repl.conn.update_entry(replica) > ipaserver/install/plugins/fix_replica_agreements.py: > repl.conn.update_entry(replica) > ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_idranges.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_managed_permissions.py: > ldap.update_entry(base_entry) > ipaserver/install/plugins/update_managed_permissions.py: > ldap.update_entry(entry) > ipaserver/install/plugins/update_pacs.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_referint.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_services.py: ldap.update_entry(entry) > ipaserver/install/plugins/update_uniqueness.py: > ldap.update_entry(uid_uniqueness_plugin) > > > So from my POV, very quick fix. In that case, I would also prefer a > fix now than a ticket that would never be done. > > Martin And ldap.add_entry, del_entry, ipa add/mod/del commands -- Martin Basti From mbasti at redhat.com Fri Mar 13 12:46:05 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 13 Mar 2015 13:46:05 +0100 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502CEFC.5080408@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> <5502C20D.20501@redhat.com> <5502C38B.2000406@redhat.com> <5502C527.9060506@redhat.com> <5502CEFC.5080408@redhat.com> Message-ID: <5502DC0D.8080809@redhat.com> On 13/03/15 12:50, Jan Cholasta wrote: > Dne 13.3.2015 v 12:08 Petr Spacek napsal(a): >> On 13.3.2015 12:01, Martin Basti wrote: >>> On 13/03/15 11:55, Petr Spacek wrote: >>>> On 13.3.2015 11:34, Jan Cholasta wrote: >>>>> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >>>>>> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>>>>>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>>>>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>>>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>>>>>> Petr Spacek wrote: >>>>>>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>>>>>> David Kupka wrote: >>>>>>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>>>>>> Upgrade plugins which modify LDAP data directly should >>>>>>>>>>>>>>> not be >>>>>>>>>>>>>>> executed >>>>>>>>>>>>>>> in --test mode. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>>>>>> option will not >>>>>>>>>>>>>>> modify any LDAP data. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Patch attached. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade >>>>>>>>>>>>>> not >>>>>>>>>>>>>> just skip >>>>>>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>>>>>> Works for me, ACK. >>>>>>>>>>>>>> >>>>>>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>>>>>> should be >>>>>>>>>>>>> fixed before committing. 
>>>>>>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>>>>>> that this >>>>>>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>>>>>> patch is a huge >>>>>>>>>>>> improvement and IMHO should be commited in current form. We >>>>>>>>>>>> can >>>>>>>>>>>> re-visit it >>>>>>>>>>>> later on, open a ticket :-) >>>>>>>>>>>> >>>>>>>>>>> No. There is no rush for this, at least not for the promise >>>>>>>>>>> of a >>>>>>>>>>> future >>>>>>>>>>> fix that will never come. >>>>>>>>>> I checked the code and to me, the proper fix looks like >>>>>>>>>> instrumenting >>>>>>>>>> ldap.update_entry calls in upgrade plugins with >>>>>>>>>> >>>>>>>>>> if options.test: >>>>>>>>>> log message >>>>>>>>>> else >>>>>>>>>> do the update >>>>>>>>>> >>>>>>>>>> right? I see just couple places that would need to be updated: >>>>>>>>>> >>>>>>>>>> $ git grep -E "(ldap|conn).update_entry" >>>>>>>>>> ipaserver/install/plugins >>>>>>>>>> ipaserver/install/plugins/dns.py: >>>>>>>>>> ldap.update_entry(container_entry) >>>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>>> ipaserver/install/plugins/update_idranges.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_idranges.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>>> ldap.update_entry(base_entry) >>>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_referint.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_services.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> So from my POV, very quick fix. In that case, I would also >>>>>>>>>> prefer a >>>>>>>>>> fix now >>>>>>>>>> than a ticket that would never be done. >>>>>>>>> I really dislike this approach because I consider it flawed by >>>>>>>>> design. Plugin >>>>>>>>> author has to think about it all the time and do not forget to >>>>>>>>> add if >>>>>>>>> otherwise ... too bad. >>>>>>>>> >>>>>>>>> I can see two 'safer' ways to do that: >>>>>>>>> - LDAP transactions :-) >>>>>>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>>>>>> modlists instead >>>>>>>>> of applying them (and return success to the caller). >>>>>>>>> >>>>>>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>>>>>> plugins and do >>>>>>>>> not add risk of forgetting about one of them when adding/changing >>>>>>>>> plugin code. >>>>>>>> I like idea about mock_writes=True. However, I think we still >>>>>>>> need to >>>>>>>> make >>>>>>>> sure plugin writers rely on options.test value to see that we >>>>>>>> aren't >>>>>>>> going to write the data. The reason for it is that we might get >>>>>>>> into >>>>>>>> configurations where plugins would be doing updates based on >>>>>>>> earlier >>>>>>>> performed tasks. If task configuration is not going to be >>>>>>>> written, its >>>>>>>> status will never be correct and plugin would get an error. >>>>>>> That is exactly why I mentioned LDAP transactions. 
There is no >>>>>>> other >>>>>>> way how >>>>>>> to test complex plugins which actually read own writes (except >>>>>>> mocking >>>>>>> the >>>>>>> whole LDAP interface somewhere). >>>>>> While this may be a good idea long term, I do not think any of us is >>>>>> considering implementing the LDAP transaction support within work on >>>>>> this refactoring. >>>>>> >>>>>> So in this thread, let us focus on how to fix options.test >>>>>> mid-term. I >>>>>> currently see 2 proposed ways: >>>>>> - Making the plugins aware of options.test >>>>>> - Make ldap2 write operations only print the update and not do it. >>>>>> Although thinking of this approach, I think it may make some plugins >>>>>> like DNS loop forever. IIRC, at least DNS upgrade plugin have loop >>>>>> - search for all unfixed DNS zones >>>>>> - fix them with ldap update >>>>>> - was the search truncated, i.e. are there more zones to >>>>>> update? >>>>>> if yes, go back to start >>>>> - Make the plugins not call {add,update,delete}_entry >>>>> themselves but rather >>>>> return the updates like they should. This is what the ticket >>>>> () requests and what >>>>> should be >>>>> done to make --test work for them. >>>> How do you propose to handle iterative updates like the DNS upgrade >>>> mentioned >>>> by Martin^1? Return set of updates along with boolean 'call me again'? >>>> Something else? >>>> >>> So instead of DNS commands logic, which can be used in plugin, we >>> should >>> reimplement the dnzone commands in upgrade plugin, to get modlist? >>> And keep >>> watching it and do same modifications for upgrade plugin as are done >>> in DNS >>> plugin. > > Yes, this is how it currently works and how it is done in other update > plugins. For proper command support some serious changes will have to > be done in the framework. > >> >> Again, I think we are investing to much effort into an option which >> is almost >> never used. Let it die in Git history. After all, it can be revived >> at any >> time when needed or when we have proper support in DS. >> > > +1 > Updated patch attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0208.2-Server-Upgrade-respect-test-option-in-plugins.patch Type: text/x-patch Size: 15371 bytes Desc: not available URL: From redhatrises at gmail.com Fri Mar 13 13:13:10 2015 From: redhatrises at gmail.com (Gabe Alford) Date: Fri, 13 Mar 2015 07:13:10 -0600 Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can only be created on first master In-Reply-To: <5501A209.70504@redhat.com> References: <5501A209.70504@redhat.com> Message-ID: On Thu, Mar 12, 2015 at 8:26 AM, Martin Kosek wrote: > On 03/12/2015 02:37 PM, Gabe Alford wrote: > > Hello, > > > > Fix for https://fedorahosted.org/freeipa/ticket/4944. Since there seems > to > > be plenty of time, I added it to the freeipa-4-1 branch. > > Thanks Gabe! I would still suggest against moving the tickets to milestones > yourself, all new tickets should still undergo the weekly triage so that > all > core developers see it and we can decide the target milestone. > Sorry about that. > With this one, it would likely indeed end in 4.1.x, especially given you > contributed a patch, but still... 
> > For the patch itself, I still think the wording is not as should be: > > - following line is not entirely trie, you can install can create replica > also > on servers installed with ipa-replica-install :-) > +A replica can be created on any IPA master server installed with > ipa\-server\-install. > > - Following line may also use some rewording: > However if you want to create a replica as a redundant CA with an existing > replica or master, ipa\-replica\-prepare should be run on a replica or > master > that contains the CA. > > Maybe we should add subsection to DESCRIPTION section, with following > lines: > What should the .SS be called? Replica Info? PKI INFO? Preparation Requirements? > - A replica should only be installed on the same or higher version of IPA > on > the remote system. > - A replica with PKI can only be installed from replica file prepared on a > master with PKI Makes sense? > We will see if the coffee is working today. :) > Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pvoborni at redhat.com Fri Mar 13 13:15:21 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 13 Mar 2015 14:15:21 +0100 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <54FEFF6D.60601@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> Message-ID: <5502E2E9.2070100@redhat.com> Thanks all for the answers. On 03/10/2015 03:27 PM, Rob Crittenden wrote: > Petr Vobornik wrote: >> In ipa migrate-ds we also set the group to all users who are not member >> of anything. Why is it important for a user to be a member of a group? > > Every POSIX user needs a default GID. We don't create user-private > groups for migrated users. > How should default GID be set during migration? IMHO there are two issues: 1. ipausers group is not a POSIX group. Which, btw, also creates this nice issue: $ ipa user-add fbar --noprivate First name: Foo Last name: Bar ipa: ERROR: Default group for new users is not POSIX 2. migrated users have to be POSIX therefore they have gidnumber and migrate-ds checks for its presence. But the command doesn't do anything with the GID number later even if the group doesn't exist nor in a step where default group is set. Therefore, default group, even if POSIX, would not work for this use case(set default GID number). Q: Is it expected that user private groups will be migrated? (e.g. for migration from other FreeIPA instance). If not, then there would be a lot of users without a private group with the same GID number as UID number. Q: Why don't we allow to create user private group? What would be better if migrating from FreeIPA instance: migrate private groups or create new private groups using Managed Entries plugin? -- Petr Vobornik From mkosek at redhat.com Fri Mar 13 13:17:34 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Mar 2015 14:17:34 +0100 Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can only be created on first master In-Reply-To: References: <5501A209.70504@redhat.com> Message-ID: <5502E36E.9010807@redhat.com> On 03/13/2015 02:13 PM, Gabe Alford wrote: > On Thu, Mar 12, 2015 at 8:26 AM, Martin Kosek > wrote: > > On 03/12/2015 02:37 PM, Gabe Alford wrote: > > Hello, > > > > Fix for https://fedorahosted.org/freeipa/ticket/4944. Since there seems to > > be plenty of time, I added it to the freeipa-4-1 branch. > > Thanks Gabe! 
I would still suggest against moving the tickets to milestones > yourself, all new tickets should still undergo the weekly triage so that all > core developers see it and we can decide the target milestone. > > > Sorry about that. > > With this one, it would likely indeed end in 4.1.x, especially given you > contributed a patch, but still... > > For the patch itself, I still think the wording is not as should be: > > - following line is not entirely trie, you can install can create replica also > on servers installed with ipa-replica-install :-) > +A replica can be created on any IPA master server installed with > ipa\-server\-install. > > - Following line may also use some rewording: > However if you want to create a replica as a redundant CA with an existing > replica or master, ipa\-replica\-prepare should be run on a replica or master > that contains the CA. > > Maybe we should add subsection to DESCRIPTION section, with following lines: > > > What should the .SS be called? Replica Info? PKI INFO? Preparation Requirements? "Limitations"? > > > - A replica should only be installed on the same or higher version of IPA on > the remote system. > > - A replica with PKI can only be installed from replica file prepared on a > master with PKI > > Makes sense? > > > We will see if the coffee is working today. :) > > Martin > > From simo at redhat.com Fri Mar 13 13:34:56 2015 From: simo at redhat.com (Simo Sorce) Date: Fri, 13 Mar 2015 09:34:56 -0400 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55002A13.8010706@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> Message-ID: <1426253696.2981.31.camel@willson.usersys.redhat.com> On Wed, 2015-03-11 at 12:42 +0100, Petr Spacek wrote: > I would like to see new code compatible with Python 3. Here I'm not > sure what is the generic solution for xrange but in this particular > case I would recommend you to use just range. Attempts variable should > have small values so the x/range differences do not matter here. To be honest, I do not think we should care for this code specifically. The move to Python3 will require python-krbV code to be completely removed, and we should use python-gssapi instead. I will help with the "how do I kinit" part once we can do that. Now, I do not know if it is worth NACKing this patch in that sense, or letting it through, it depends on whether we want to be able to backport this specific fix or whether it is ok to have it only available on python-gssapi capable machines. Simo. 
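To give an idea of what the "how do I kinit" part could look like once
python-gssapi is available: acquiring a TGT from a keytab through the
credential-store extension might look roughly like this. A sketch only --
the principal, keytab and ccache values are made up, and it assumes a
python-gssapi/libkrb5 build that supports the 'store' argument:

    import gssapi

    name = gssapi.Name('host/client.example.com@EXAMPLE.COM',
                       gssapi.NameType.kerberos_principal)
    store = {'client_keytab': '/etc/krb5.keytab',
             'ccache': 'FILE:/tmp/krb5cc_host'}

    # Acquiring initiator credentials from the store is the moral
    # equivalent of "kinit -k -t /etc/krb5.keytab" into the given ccache.
    creds = gssapi.Credentials(name=name, store=store, usage='initiate')
    print(creds.lifetime)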
-- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Fri Mar 13 13:34:58 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 13 Mar 2015 09:34:58 -0400 Subject: [Freeipa-devel] [PATCH 0208] Respect --test option in upgrade plugins In-Reply-To: <5502CEFC.5080408@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <5501AF59.50204@redhat.com> <5501B960.3060808@redhat.com> <5501BA89.2070507@redhat.com> <5502AB4B.7040701@redhat.com> <5502ADE2.3020600@redhat.com> <20150313094200.GH3878@redhat.com> <5502B550.9040105@redhat.com> <5502B931.7090708@redhat.com> <5502BD39.5020607@redhat.com> <5502C20D.20501@redhat.com> <5502C38B.2000406@redhat.com> <5502C527.9060506@redhat.com> <5502CEFC.5080408@redhat.com> Message-ID: <5502E782.90206@redhat.com> Jan Cholasta wrote: > Dne 13.3.2015 v 12:08 Petr Spacek napsal(a): >> On 13.3.2015 12:01, Martin Basti wrote: >>> On 13/03/15 11:55, Petr Spacek wrote: >>>> On 13.3.2015 11:34, Jan Cholasta wrote: >>>>> Dne 13.3.2015 v 11:17 Martin Kosek napsal(a): >>>>>> On 03/13/2015 11:00 AM, Petr Spacek wrote: >>>>>>> On 13.3.2015 10:42, Alexander Bokovoy wrote: >>>>>>>> On Fri, 13 Mar 2015, Petr Spacek wrote: >>>>>>>>> On 13.3.2015 10:18, Martin Kosek wrote: >>>>>>>>>> On 03/12/2015 05:10 PM, Rob Crittenden wrote: >>>>>>>>>>> Petr Spacek wrote: >>>>>>>>>>>> On 12.3.2015 16:23, Rob Crittenden wrote: >>>>>>>>>>>>> David Kupka wrote: >>>>>>>>>>>>>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>>>>>>>>>>>>> Upgrade plugins which modify LDAP data directly should >>>>>>>>>>>>>>> not be >>>>>>>>>>>>>>> executed >>>>>>>>>>>>>>> in --test mode. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> This patch is a workaround, to ensure update with --test >>>>>>>>>>>>>>> option will not >>>>>>>>>>>>>>> modify any LDAP data. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/3448 >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Patch attached. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>> Ideally we want to fix all plugins to dry-run the upgrade not >>>>>>>>>>>>>> just skip >>>>>>>>>>>>>> when there is '--test' option but it is a good first step. >>>>>>>>>>>>>> Works for me, ACK. >>>>>>>>>>>>>> >>>>>>>>>>>>> I agree that this breaks the spirit of --test and think it >>>>>>>>>>>>> should be >>>>>>>>>>>>> fixed before committing. >>>>>>>>>>>> Considering how often is the option is used ... I do not think >>>>>>>>>>>> that this >>>>>>>>>>>> requires 'proper' fix now. It was broken for *years* so this >>>>>>>>>>>> patch is a huge >>>>>>>>>>>> improvement and IMHO should be commited in current form. We can >>>>>>>>>>>> re-visit it >>>>>>>>>>>> later on, open a ticket :-) >>>>>>>>>>>> >>>>>>>>>>> No. There is no rush for this, at least not for the promise of a >>>>>>>>>>> future >>>>>>>>>>> fix that will never come. >>>>>>>>>> I checked the code and to me, the proper fix looks like >>>>>>>>>> instrumenting >>>>>>>>>> ldap.update_entry calls in upgrade plugins with >>>>>>>>>> >>>>>>>>>> if options.test: >>>>>>>>>> log message >>>>>>>>>> else >>>>>>>>>> do the update >>>>>>>>>> >>>>>>>>>> right? 
I see just couple places that would need to be updated: >>>>>>>>>> >>>>>>>>>> $ git grep -E "(ldap|conn).update_entry" >>>>>>>>>> ipaserver/install/plugins >>>>>>>>>> ipaserver/install/plugins/dns.py: >>>>>>>>>> ldap.update_entry(container_entry) >>>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>>> ipaserver/install/plugins/fix_replica_agreements.py: >>>>>>>>>> repl.conn.update_entry(replica) >>>>>>>>>> ipaserver/install/plugins/update_idranges.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_idranges.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>>> ldap.update_entry(base_entry) >>>>>>>>>> ipaserver/install/plugins/update_managed_permissions.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_pacs.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_referint.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_services.py: >>>>>>>>>> ldap.update_entry(entry) >>>>>>>>>> ipaserver/install/plugins/update_uniqueness.py: >>>>>>>>>> ldap.update_entry(uid_uniqueness_plugin) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> So from my POV, very quick fix. In that case, I would also >>>>>>>>>> prefer a >>>>>>>>>> fix now >>>>>>>>>> than a ticket that would never be done. >>>>>>>>> I really dislike this approach because I consider it flawed by >>>>>>>>> design. Plugin >>>>>>>>> author has to think about it all the time and do not forget to >>>>>>>>> add if >>>>>>>>> otherwise ... too bad. >>>>>>>>> >>>>>>>>> I can see two 'safer' ways to do that: >>>>>>>>> - LDAP transactions :-) >>>>>>>>> - 'mock_writes=True' option in LDAP backend which would print >>>>>>>>> modlists instead >>>>>>>>> of applying them (and return success to the caller). >>>>>>>>> >>>>>>>>> Both cases eliminate the need to scatter 'ifs' all over update >>>>>>>>> plugins and do >>>>>>>>> not add risk of forgetting about one of them when adding/changing >>>>>>>>> plugin code. >>>>>>>> I like idea about mock_writes=True. However, I think we still >>>>>>>> need to >>>>>>>> make >>>>>>>> sure plugin writers rely on options.test value to see that we >>>>>>>> aren't >>>>>>>> going to write the data. The reason for it is that we might get >>>>>>>> into >>>>>>>> configurations where plugins would be doing updates based on >>>>>>>> earlier >>>>>>>> performed tasks. If task configuration is not going to be >>>>>>>> written, its >>>>>>>> status will never be correct and plugin would get an error. >>>>>>> That is exactly why I mentioned LDAP transactions. There is no other >>>>>>> way how >>>>>>> to test complex plugins which actually read own writes (except >>>>>>> mocking >>>>>>> the >>>>>>> whole LDAP interface somewhere). >>>>>> While this may be a good idea long term, I do not think any of us is >>>>>> considering implementing the LDAP transaction support within work on >>>>>> this refactoring. >>>>>> >>>>>> So in this thread, let us focus on how to fix options.test >>>>>> mid-term. I >>>>>> currently see 2 proposed ways: >>>>>> - Making the plugins aware of options.test >>>>>> - Make ldap2 write operations only print the update and not do it. >>>>>> Although thinking of this approach, I think it may make some plugins >>>>>> like DNS loop forever. 
IIRC, at least DNS upgrade plugin have loop >>>>>> - search for all unfixed DNS zones >>>>>> - fix them with ldap update >>>>>> - was the search truncated, i.e. are there more zones to >>>>>> update? >>>>>> if yes, go back to start >>>>> - Make the plugins not call {add,update,delete}_entry themselves >>>>> but rather >>>>> return the updates like they should. This is what the ticket >>>>> () requests and what >>>>> should be >>>>> done to make --test work for them. >>>> How do you propose to handle iterative updates like the DNS upgrade >>>> mentioned >>>> by Martin^1? Return set of updates along with boolean 'call me again'? >>>> Something else? >>>> >>> So instead of DNS commands logic, which can be used in plugin, we should >>> reimplement the dnzone commands in upgrade plugin, to get modlist? >>> And keep >>> watching it and do same modifications for upgrade plugin as are done >>> in DNS >>> plugin. > > Yes, this is how it currently works and how it is done in other update > plugins. For proper command support some serious changes will have to be > done in the framework. I always intended that some plugins would need to directly write to LDAP (or via internal commands). This is in the spirit of "you can't think of everything" and providing enough future flexibility that we can get out of a sticky situation. >> >> Again, I think we are investing to much effort into an option which is >> almost >> never used. Let it die in Git history. After all, it can be revived at >> any >> time when needed or when we have proper support in DS. >> > > +1 More effort is being spent trying to kill the option than it would take to fix the usage. rob From redhatrises at gmail.com Fri Mar 13 13:38:57 2015 From: redhatrises at gmail.com (Gabe Alford) Date: Fri, 13 Mar 2015 07:38:57 -0600 Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can only be created on first master In-Reply-To: <5502E36E.9010807@redhat.com> References: <5501A209.70504@redhat.com> <5502E36E.9010807@redhat.com> Message-ID: "Limitations" is fine with me. Updated patch attached. On Fri, Mar 13, 2015 at 7:17 AM, Martin Kosek wrote: > On 03/13/2015 02:13 PM, Gabe Alford wrote: > >> On Thu, Mar 12, 2015 at 8:26 AM, Martin Kosek > > wrote: >> >> On 03/12/2015 02:37 PM, Gabe Alford wrote: >> > Hello, >> > >> > Fix for https://fedorahosted.org/freeipa/ticket/4944. Since there >> seems to >> > be plenty of time, I added it to the freeipa-4-1 branch. >> >> Thanks Gabe! I would still suggest against moving the tickets to >> milestones >> yourself, all new tickets should still undergo the weekly triage so >> that all >> core developers see it and we can decide the target milestone. >> >> >> Sorry about that. >> >> With this one, it would likely indeed end in 4.1.x, especially given >> you >> contributed a patch, but still... >> >> For the patch itself, I still think the wording is not as should be: >> >> - following line is not entirely trie, you can install can create >> replica also >> on servers installed with ipa-replica-install :-) >> +A replica can be created on any IPA master server installed with >> ipa\-server\-install. >> >> - Following line may also use some rewording: >> However if you want to create a replica as a redundant CA with an >> existing >> replica or master, ipa\-replica\-prepare should be run on a replica >> or master >> that contains the CA. >> >> Maybe we should add subsection to DESCRIPTION section, with following >> lines: >> >> >> What should the .SS be called? Replica Info? PKI INFO? 
Preparation >> Requirements? >> > > "Limitations"? > > > >> >> - A replica should only be installed on the same or higher version of >> IPA on >> the remote system. >> >> - A replica with PKI can only be installed from replica file prepared >> on a >> master with PKI >> >> Makes sense? >> >> >> We will see if the coffee is working today. :) >> >> Martin >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rga-0044-2-ipa-replica-prepare-can-only-be-created-on-the-first.patch Type: text/x-patch Size: 1981 bytes Desc: not available URL: From mkosek at redhat.com Fri Mar 13 13:49:05 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 13 Mar 2015 14:49:05 +0100 Subject: [Freeipa-devel] [PATCH 0044] Man pages: ipa-replica-prepare can only be created on first master In-Reply-To: References: <5501A209.70504@redhat.com> <5502E36E.9010807@redhat.com> Message-ID: <5502EAD1.4070802@redhat.com> On 03/13/2015 02:38 PM, Gabe Alford wrote: > "Limitations" is fine with me. Updated patch attached. > Works for me. I just changed the case of the subsection name to be consistent with man pages like ipa-client-install. Thanks Gabe! ACK. Pushed to: master: fbf192f0e255c5f48e93f8838fc530b26f357deb ipa-4-1: 169a37d1a8585528c88985e19255c40f63bc831f Martin From mkubik at redhat.com Fri Mar 13 13:59:20 2015 From: mkubik at redhat.com (Milan Kubik) Date: Fri, 13 Mar 2015 14:59:20 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github Message-ID: <5502ED38.9020302@redhat.com> Hi, this is a patch with port of [1] to pytest. [1]: https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py Cheers, Milan -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkubik-0001-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 11762 bytes Desc: not available URL: From rcritten at redhat.com Fri Mar 13 14:02:32 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 13 Mar 2015 10:02:32 -0400 Subject: [Freeipa-devel] Purpose of default user group In-Reply-To: <5502E2E9.2070100@redhat.com> References: <54FEC13D.1000400@redhat.com> <54FEFF6D.60601@redhat.com> <5502E2E9.2070100@redhat.com> Message-ID: <5502EDF8.3040901@redhat.com> Petr Vobornik wrote: > Thanks all for the answers. > > On 03/10/2015 03:27 PM, Rob Crittenden wrote: >> Petr Vobornik wrote: >>> In ipa migrate-ds we also set the group to all users who are not member >>> of anything. Why is it important for a user to be a member of a group? >> >> Every POSIX user needs a default GID. We don't create user-private >> groups for migrated users. >> > IPA to IPA migration is a bit of a special case, and not something we really planned on (though we've tended to keep it basically working). Migration was expected to be from an existing LDAP server providing POSIX users and groups. > How should default GID be set during migration? IMHO there are two issues: > > 1. ipausers group is not a POSIX group. Which, btw, also creates this > nice issue: > $ ipa user-add fbar --noprivate > First name: Foo > Last name: Bar > ipa: ERROR: Default group for new users is not POSIX Right, we assumed that incoming user would already have valid groups. > 2. migrated users have to be POSIX therefore they have gidnumber and > migrate-ds checks for its presence. 
But the command doesn't do anything > with the GID number later even if the group doesn't exist nor in a step > where default group is set. Therefore, default group, even if POSIX, > would not work for this use case(set default GID number). It does verify that the GID points to an existing group. If not you'll get a warning like: GID number %s of migrated user %s does not point to a known group. > Q: Is it expected that user private groups will be migrated? (e.g. for > migration from other FreeIPA instance). If not, then there would be a > lot of users without a private group with the same GID number as UID > number. IPA to IPA migration wasn't really planned out, so no. It is slightly complex because it will add another remote LDAP call for each user to see if they have an existing group in their name and ensure that the group contains no members (or only this user). And then later when groups are migrated skip over the existing private group silently. > Q: Why don't we allow to create user private group? What would be better > if migrating from FreeIPA instance: migrate private groups or create new > private groups using Managed Entries plugin? Because of the additional logic in evaluating what the current state of groups is on the remote server. It's doable but it would be slower. Worth an RFE I think. rob From mbasti at redhat.com Fri Mar 13 14:08:28 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 13 Mar 2015 15:08:28 +0100 Subject: [Freeipa-devel] [PATCHES 0204-0207, 0211] Server upgrade: Make LDAP data upgrade deterministic In-Reply-To: <5501AF09.1090202@redhat.com> References: <54F9CCB9.4070600@redhat.com> <5501AF09.1090202@redhat.com> Message-ID: <5502EF5C.4070203@redhat.com> On 12/03/15 16:21, Rob Crittenden wrote: > Martin Basti wrote: >> The patchset ensure, the upgrade order will respect ordering of entries >> in *.update files. >> >> Required for: https://fedorahosted.org/freeipa/ticket/4904 >> >> Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 >> >> Required patch mbasti-0203 >> >> Patches attached. >> >> >> > Just reading the patches, untested. > > I think ordered should default to True in the update() method of > ldapupdater to keep in spirit with the design. > > Otherwise LGTM that it implements what was designed. > > rob > > New patch that switch default value for ordered to True attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0211-Server-Upgrade-order-update-files-by-default.patch Type: text/x-patch Size: 3299 bytes Desc: not available URL: From jhrozek at redhat.com Fri Mar 13 14:14:55 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Fri, 13 Mar 2015 15:14:55 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150313105646.GK13715@p.redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> Message-ID: <20150313141455.GE19530@hendrix.arn.redhat.com> On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: > On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > > Hi, > > > > this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > > which converts the check-based tests of the extdom plugin to cmocka. > > > > bye, > > Sumit > > Rebased version attached. 
> > bye, > Sumit The test itself is fine, but did freeipa consider moving to cmocka-1.0+ to avoid warnings like: ipa_extdom_cmocka_tests.c: In function ?main?: ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] return run_tests(tests); But I'm fine with ACKing this patch, the conversion should be done separately. From jhrozek at redhat.com Fri Mar 13 14:17:10 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Fri, 13 Mar 2015 15:17:10 +0100 Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context In-Reply-To: <20150313105509.GJ13715@p.redhat.com> References: <20150304173522.GT3271@p.redhat.com> <20150313105509.GJ13715@p.redhat.com> Message-ID: <20150313141710.GF19530@hendrix.arn.redhat.com> On Fri, Mar 13, 2015 at 11:55:09AM +0100, Sumit Bose wrote: > On Wed, Mar 04, 2015 at 06:35:22PM +0100, Sumit Bose wrote: > > Hi, > > > > this patch series improves error reporting of the extdom plugin > > especially on the client side. Currently there is only SSSD ticket > > https://fedorahosted.org/sssd/ticket/2463 . Shall I create a > > corresponding FreeIPA ticket as well? > > > > In the third patch I already added a handful of new error messages. > > Suggestions for more messages are welcome. > > > > bye, > > Sumit > > Rebased versions attached. > > bye, > Sumit The patches look good and work fine. I admit I cheated a bit and modified the code to return a failure. Then I saw on the client: [sssd[be[ipa.example.com]]] [ipa_s2n_exop_send] (0x0400): Executing extended operation [sssd[be[ipa.example.com]]] [ipa_s2n_exop_done] (0x0040): ldap_extended_operation result: Operations error(1), Failed to create fully qualified name. [sssd[be[ipa.example.com]]] [ipa_s2n_exop_done] (0x0040): ldap_extended_operation failed, server logs might contain more details. [sssd[be[ipa.example.com]]] [ipa_s2n_get_user_done] (0x0040): s2n exop request failed. I just saw one typo: > @@ -918,6 +934,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, > ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); > if (ret != 0 || !(id_type == SSS_ID_TYPE_UID > || id_type == SSS_ID_TYPE_BOTH)) { > + set_err_msg(req, "Failed ot read original data"); ~~~~~~~~~ > if (ret == ENOENT) { > ret = LDAP_NO_SUCH_OBJECT; > } else { And a compilation warning caused by previous patches. So ACK provided the typo is fixed prior to pushing the patch. From mbabinsk at redhat.com Fri Mar 13 16:37:04 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Fri, 13 Mar 2015 17:37:04 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55002A13.8010706@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> Message-ID: <55031230.70604@redhat.com> Attaching the next iteration of patches. I have tried my best to reword the ipa-client-install man page bit about the new option. Any suggestions to further improve it are welcome. I have also slightly modified the 'kinit_keytab' function so that in Kerberos errors are reported for each attempt and the text of the last error is retained when finally raising exception. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbabinsk-0015-3-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch Type: text/x-patch Size: 4272 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0016-3-ipa-client-install-try-to-get-host-TGT-several-times.patch Type: text/x-patch Size: 8580 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0017-3-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch Type: text/x-patch Size: 11442 bytes Desc: not available URL: From mkubik at redhat.com Mon Mar 16 11:03:23 2015 From: mkubik at redhat.com (Milan Kubik) Date: Mon, 16 Mar 2015 12:03:23 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <5502ED38.9020302@redhat.com> References: <5502ED38.9020302@redhat.com> Message-ID: <5506B87B.6050600@redhat.com> On 03/13/2015 02:59 PM, Milan Kubik wrote: > Hi, > > this is a patch with port of [1] to pytest. > > [1]: https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py > > Cheers, > Milan > > > Added few more asserts in methods where the test could fail and cause other errors. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkubik-0001-2-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 11841 bytes Desc: not available URL: From dkupka at redhat.com Mon Mar 16 11:06:00 2015 From: dkupka at redhat.com (David Kupka) Date: Mon, 16 Mar 2015 12:06:00 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <54F9F243.5090003@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> Message-ID: <5506B918.6000708@redhat.com> On 03/06/2015 07:30 PM, thierry bordaz wrote: > On 02/19/2015 04:19 PM, Martin Basti wrote: >> On 19/02/15 13:01, thierry bordaz wrote: >>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>> Hi, >>>> >>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>> >>>>>>>>> It creates a stageuser plugin with a first function stageuser-add. >>>>>>>>> Stage >>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> thierry >>>>>>>> >>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; instead >>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>>> >>>>>>>> The stageuser help (docstring) is copied from the user plugin, and >>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>> should >>>>>>>> rather explain what stageuser itself does. (And I don't very much >>>>>>>> like the Note about the interface being badly designed...) >>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>> user" >>>>>>>> or "stageuser". 
>>>>>>>> >>>>>>>> A lot of the code is copied and pasted over from the users plugin. >>>>>>>> Don't do that. Either import things (e.g. validate_nsaccountlock) >>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>> module. >>>>>>>> >>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>> best to >>>>>>>> create a common base class for user and stageuser; and similarly >>>>>>>> for >>>>>>>> the Command plugins. >>>>>>>> >>>>>>>> The default permissions need different names, and you don't need >>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>> script. >>>>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> This modified patch is mainly moving common base class into a >>>>>>> new >>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>> accounts. >>>>>>> It also creates a better description of what are stage user, how >>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>> active/stage >>>>>>> user managed permissions. >>>>>>> >>>>>>> thanks >>>>>>> thierry >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Freeipa-devel mailing list >>>>>>> Freeipa-devel at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>> >>>>>> >>>>>> Thanks David for the reviews. Here the last patches >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Freeipa-devel mailing list >>>>>> Freeipa-devel at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>> >>>>> >>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>> lines so >>>>> I'm attaching fixed version (and unchanged patch >>>>> freeipa-tbordaz-0003-3 >>>>> to keep them together). >>>>> >>>>> The ULC feature is still WIP but these patches look good to me and >>>>> don't >>>>> break anything as far as I tested. >>>>> We should push them now to avoid further rebases. Thierry can then >>>>> prepare other patches delivering the rest of ULC functionality. >>>> >>>> Few comments from just reading the patches: >>>> >>>> 1) I would name the base class "baseuser", "account" does not >>>> necessarily mean user account. >>>> >>>> 2) This is very wrong: >>>> >>>> -class user_add(LDAPCreate): >>>> +class user_add(user, LDAPCreate): >>>> >>>> You are creating a plugin which is both an object and an command. >>>> >>>> 3) This is purely subjective, but I don't like the name >>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>> IMHO we shouldn't do that. >>>> >>>> Honza >>>> >>> >>> Thank you for the review. 
I am attaching the updates patches >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Freeipa-devel mailing list >>> Freeipa-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/freeipa-devel >> Hello, >> I'm getting errors during make rpms: >> >> if [ "" != "yes" ]; then \ >> ./makeapi --validate; \ >> ./makeaci --validate; \ >> fi >> >> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >> doc is not internationalized >> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >> doc is not internationalized >> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >> doc is not internationalized >> 0 commands without doc, 3 commands whose doc is not i18n >> Command baseuser_add in ipalib, not in API >> Command baseuser_find in ipalib, not in API >> Command baseuser_mod in ipalib, not in API >> >> There are one or more new commands defined. >> Update API.txt and increment the minor version in VERSION. >> >> There are one or more documentation problems. >> You must fix these before preceeding >> >> Issues probably caused by this: >> 1) >> You should not use the register decorator, if this class is just for >> inheritance >> @register() >> class baseuser_add(LDAPCreate): >> >> @register() >> class baseuser_mod(LDAPUpdate): >> >> @register() >> class baseuser_find(LDAPSearch): >> >> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >> >> 2) >> there might be an issue with >> @register() >> class baseuser(LDAPObject): >> >> the register decorator should not be there, I was warned by Petr^3 to >> not use permission in parent class. The same permission should be >> specified only in one place (for example user class), (otherwise they >> will be generated twice??) I don't know more details about it. >> >> -- >> Martin Basti > > Hello Martin, Jan, > > Thanks for your review. > I changed the patch so that it does not register baseuser_*. Also > increase the minor version because of new command. > Finally I moved the managed_permission definition out of the parent > baseuser class. > > > > > Martin, could you please verify that the issues you encountered are fixed? Thanks! -- David Kupka From mkosek at redhat.com Mon Mar 16 11:15:59 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 16 Mar 2015 12:15:59 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <55031230.70604@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> Message-ID: <5506BB6F.70406@redhat.com> On 03/13/2015 05:37 PM, Martin Babinsky wrote: > Attaching the next iteration of patches. > > I have tried my best to reword the ipa-client-install man page bit about the > new option. Any suggestions to further improve it are welcome. > > I have also slightly modified the 'kinit_keytab' function so that in Kerberos > errors are reported for each attempt and the text of the last error is retained > when finally raising exception. The approach looks very good. I think that my only concern with this patch is this part: + ccache.init_creds_keytab(keytab=ktab, principal=princ) ... 
+ except krbV.Krb5Error as e: + last_exc = str(e) + root_logger.debug("Attempt %d/%d: failed: %s" + % (attempt, attempts, last_exc)) + time.sleep(1) + + root_logger.debug("Maximum number of attempts (%d) reached" + % attempts) + raise StandardError("Error initializing principal %s: %s" + % (principal, last_exc)) The problem here is that this function will raise the super-generic StandardError instead of the proper with all the context and information about the error that the caller can then process. I think that except krbV.Krb5Error as e: if attempt == max_attempts: log something raise would be better. From mbabinsk at redhat.com Mon Mar 16 12:30:03 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 13:30:03 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <5506BB6F.70406@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> Message-ID: <5506CCCB.3020003@redhat.com> On 03/16/2015 12:15 PM, Martin Kosek wrote: > On 03/13/2015 05:37 PM, Martin Babinsky wrote: >> Attaching the next iteration of patches. >> >> I have tried my best to reword the ipa-client-install man page bit about the >> new option. Any suggestions to further improve it are welcome. >> >> I have also slightly modified the 'kinit_keytab' function so that in Kerberos >> errors are reported for each attempt and the text of the last error is retained >> when finally raising exception. > > The approach looks very good. I think that my only concern with this patch is > this part: > > + ccache.init_creds_keytab(keytab=ktab, principal=princ) > ... > + except krbV.Krb5Error as e: > + last_exc = str(e) > + root_logger.debug("Attempt %d/%d: failed: %s" > + % (attempt, attempts, last_exc)) > + time.sleep(1) > + > + root_logger.debug("Maximum number of attempts (%d) reached" > + % attempts) > + raise StandardError("Error initializing principal %s: %s" > + % (principal, last_exc)) > > The problem here is that this function will raise the super-generic > StandardError instead of the proper with all the context and information about > the error that the caller can then process. > > I think that > > except krbV.Krb5Error as e: > if attempt == max_attempts: > log something > raise > > would be better. > Yes that seems reasonable. I'm just thinking whether we should re-raise Krb5Error or raise ipalib.errors.KerberosError? the latter options makes more sense to me as we would not have to additionally import Krb5Error everywhere and it would also make the resulting errors more consistent. I am thinking about someting like this: except krbV.Krb5Error as e: if attempt == attempts: # log that we have reaches maximum number of attempts raise KerberosError(minor=str(e)) What do you think? 
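For illustration, the variant being discussed could end up looking roughly like the sketch below. This is only a sketch of the idea from this thread, not the actual patch: the helper name kinit_keytab_retry is made up here, the ccache/ktab/princ arguments are assumed to be the krbV.CCache, krbV.Keytab and krbV.Principal objects that kinit_keytab already builds, and the last attempt simply re-raises the underlying Krb5Error as suggested above:

    import time

    import krbV

    from ipapython.ipa_log_manager import root_logger

    def kinit_keytab_retry(ccache, ktab, princ, attempts=5):
        # try to get a TGT from the keytab, retrying on transient Kerberos
        # errors (e.g. the KDC not knowing the new host principal yet)
        for attempt in range(1, attempts + 1):
            try:
                ccache.init_creds_keytab(keytab=ktab, principal=princ)
                return
            except krbV.Krb5Error as e:
                if attempt == attempts:
                    # give up and let the caller see the original error
                    root_logger.debug("Maximum number of attempts (%d) reached", attempts)
                    raise
                root_logger.debug("Attempt %d/%d failed: %s", attempt, attempts, e)
                time.sleep(1)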
-- Martin^3 Babinsky From jcholast at redhat.com Mon Mar 16 12:35:22 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 16 Mar 2015 13:35:22 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <5506CCCB.3020003@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> Message-ID: <5506CE0A.20806@redhat.com> Dne 16.3.2015 v 13:30 Martin Babinsky napsal(a): > On 03/16/2015 12:15 PM, Martin Kosek wrote: >> On 03/13/2015 05:37 PM, Martin Babinsky wrote: >>> Attaching the next iteration of patches. >>> >>> I have tried my best to reword the ipa-client-install man page bit >>> about the >>> new option. Any suggestions to further improve it are welcome. >>> >>> I have also slightly modified the 'kinit_keytab' function so that in >>> Kerberos >>> errors are reported for each attempt and the text of the last error >>> is retained >>> when finally raising exception. >> >> The approach looks very good. I think that my only concern with this >> patch is >> this part: >> >> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >> ... >> + except krbV.Krb5Error as e: >> + last_exc = str(e) >> + root_logger.debug("Attempt %d/%d: failed: %s" >> + % (attempt, attempts, last_exc)) >> + time.sleep(1) >> + >> + root_logger.debug("Maximum number of attempts (%d) reached" >> + % attempts) >> + raise StandardError("Error initializing principal %s: %s" >> + % (principal, last_exc)) >> >> The problem here is that this function will raise the super-generic >> StandardError instead of the proper with all the context and >> information about >> the error that the caller can then process. >> >> I think that >> >> except krbV.Krb5Error as e: >> if attempt == max_attempts: >> log something >> raise >> >> would be better. >> > > Yes that seems reasonable. I'm just thinking whether we should re-raise > Krb5Error or raise ipalib.errors.KerberosError? the latter options makes > more sense to me as we would not have to additionally import Krb5Error > everywhere and it would also make the resulting errors more consistent. > > I am thinking about someting like this: > > except krbV.Krb5Error as e: > if attempt == attempts: > # log that we have reaches maximum number of attempts > raise KerberosError(minor=str(e)) > > What do you think? > NACK, don't use ipalib from ipapython in new code, we are trying to get rid of this circular dependency. Krb5Error is OK in this case. -- Jan Cholasta From mbasti at redhat.com Mon Mar 16 12:44:12 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 16 Mar 2015 13:44:12 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0019] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5501BBB7.2040709@redhat.com> References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com> <5501A9EA.7040208@redhat.com> <5501BBB7.2040709@redhat.com> Message-ID: <5506D01C.9060606@redhat.com> On 12/03/15 17:15, Martin Babinsky wrote: > On 03/12/2015 03:59 PM, Martin Babinsky wrote: >> On 03/11/2015 03:13 PM, Martin Basti wrote: >>> On 11/03/15 13:00, Martin Babinsky wrote: >>>> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >>>> >>>> They are to be applied to master branch. I will rebase them for >>>> ipa-4-1 after the review. >>>> >>> Thank you for the patches. 
>>> >>> I have a few comments: >>> >>> IPA-4-1 >>> Replace simple bind with LDAPI is too big change for 4-1, we should >>> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches should >>> go only into IPA master branch. >>> >>> You can do something like this: >>> --- a/ipaserver/install/service.py >>> +++ b/ipaserver/install/service.py >>> @@ -107,6 +107,10 @@ class Service(object): >>> if not self.realm: >>> raise errors.NotFound(reason="realm is missing >>> for >>> %s" % (self)) >>> conn = ipaldap.IPAdmin(ldapi=self.ldapi, >>> realm=self.realm) >>> + elif self.dm_password is not None: >>> + conn = ipaldap.IPAdmin(self.fqdn, port=389, >>> + cacert=paths.IPA_CA_CRT, >>> + start_tls=True) >>> else: >>> conn = ipaldap.IPAdmin(self.fqdn, port=389) >>> >>> >>> PATCH 0018: >>> 1) >>> please add there more chatty commit message about using LDAPI >>> >>> 2) >>> I do not like much idea of adding 'realm' kwarg into __init__ method of >>> OpenDNSSECInstance >>> IIUC, it is because get_masters() method, which requires realm to use >>> LDAPI. >>> >>> You can just add ods.realm=, before call get_master() in >>> ipa-dns-install >>> if options.dnssec_master: >>> + ods.realm=api.env.realm >>> dnssec_masters = ods.get_masters() >>> (Honza will change it anyway during refactoring) >>> >>> PATCH 0019: >>> 1) >>> commit message deserves to be more chatty, can you explain there why >>> you >>> removed kerberos cache? >>> >>> Martin^2 >>> >> >> Attaching updated patches. >> >> Patch 0018 should go to both 4.1 and master branches. >> >> Patch 0019 should go only to master. >> >> >> > > One more update. > > Patch 0018 is for both 4.1 and master. > Patch 0019 is for master only. > > > Thank for patches Patch 0018: 1) Works for me but needs rebase on master Patch 0019: 1) Please rename the patch/commit message, the patch changes only ipa-dns-install connections not all DS operations 2) I have some troubles with applying patch, it needs rebase due 0018 -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkupka at redhat.com Mon Mar 16 12:54:37 2015 From: dkupka at redhat.com (David Kupka) Date: Mon, 16 Mar 2015 13:54:37 +0100 Subject: [Freeipa-devel] [PATCH] 0041 Always reload StateFile before getting or modifying the, stored values. Message-ID: <5506D28D.9020305@redhat.com> https://fedorahosted.org/freeipa/ticket/4901 -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-dkupka-0041-Always-reload-StateFile-before-getting-or-modifying-.patch Type: text/x-patch Size: 2122 bytes Desc: not available URL: From mbabinsk at redhat.com Mon Mar 16 12:56:02 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 13:56:02 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <5502AA40.2090108@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> <5501FA67.3020504@redhat.com> <5502AA40.2090108@redhat.com> Message-ID: <5506D2E2.8020109@redhat.com> On 03/13/2015 10:13 AM, Martin Kosek wrote: > On 03/12/2015 09:43 PM, Nathan Kinder wrote: >> >> >> On 03/04/2015 11:25 AM, Nathan Kinder wrote: >>> >>> >>> On 03/04/2015 10:58 AM, Martin Basti wrote: >>>> On 04/03/15 19:56, Nathan Kinder wrote: >>>>> >>>>> On 03/04/2015 10:41 AM, Rob Crittenden wrote: >>>>>> Nathan Kinder wrote: >>>>>>> >>>>>>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>>>>>> >>>>>>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>>>>>> Nathan Kinder wrote: >>>>>>>>>> >>>>>>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>>>>>> >>>>>>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>> >>>>>>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. >>>>>>>>>>>>>>>>> Now that we >>>>>>>>>>>>>>>>> use ntpd to perform the time sync, the client install can >>>>>>>>>>>>>>>>> end up hanging >>>>>>>>>>>>>>>>> forever when the server is not reachable (firewall issues, >>>>>>>>>>>>>>>>> etc.). These >>>>>>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we >>>>>>>>>>>>>>>>> run ntpd to >>>>>>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> The one potentially contentious issue is that this >>>>>>>>>>>>>>>>> introduces a new >>>>>>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have >>>>>>>>>>>>>>>>> timeout support >>>>>>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I >>>>>>>>>>>>>>>>> don't see it >>>>>>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged >>>>>>>>>>>>>>>>> there. >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>> -NGK >>>>>>>>>>>>>>>> Thanks for Patches. For the second patch, I would really >>>>>>>>>>>>>>>> prefer to avoid new >>>>>>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. 
>>>>>>>>>>>>>>>> Maybe we could use >>>>>>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> I don't like having to add an additional dependency either, >>>>>>>>>>>>>>> but the >>>>>>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 >>>>>>>>>>>>>>> module (which >>>>>>>>>>>>>>> is really just a backport of the normal subprocess module >>>>>>>>>>>>>>> from Python >>>>>>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding >>>>>>>>>>>>>>> some sort of >>>>>>>>>>>>>>> a thread that has to kill the spawned subprocess seems more >>>>>>>>>>>>>>> risky (see >>>>>>>>>>>>>>> the discussion about a race condition in the stackoverflow >>>>>>>>>>>>>>> thread >>>>>>>>>>>>>>> above). That said, I'm sure the thread/poll method can be >>>>>>>>>>>>>>> made to work >>>>>>>>>>>>>>> if you and others feel strongly that this is a better >>>>>>>>>>>>>>> approach than >>>>>>>>>>>>>>> adding a new dependency. >>>>>>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of >>>>>>>>>>>>> it's >>>>>>>>>>>>> existence (or it's possible that I forgot about it). Thanks >>>>>>>>>>>>> for the >>>>>>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>>>>>> >>>>>>>>>>>>> Do you think that there is value in leaving the timeout >>>>>>>>>>>>> capability >>>>>>>>>>>>> centrally in ipautil.run()? We only need it for the call to >>>>>>>>>>>>> 'ntpd' >>>>>>>>>>>>> right now, but there might be a need for using a timeout for >>>>>>>>>>>>> other >>>>>>>>>>>>> commands in the future. The alternative is to just modify >>>>>>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() >>>>>>>>>>>>> alone. >>>>>>>>>>>> I think it would require a lot of research. One of the programs >>>>>>>>>>>> spawned >>>>>>>>>>>> by this is pkicreate which could take quite some time, and >>>>>>>>>>>> spawning a >>>>>>>>>>>> clone in particular. >>>>>>>>>>>> >>>>>>>>>>>> It is definitely an interesting idea but I think it is safest >>>>>>>>>>>> for now to >>>>>>>>>>>> limit it to just NTP for now. >>>>>>>>>>> What I meant was that we would have an optional keyword >>>>>>>>>>> "timeout" >>>>>>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>>>>>> subprocess32 approach. If a timeout is not passed in, we >>>>>>>>>>> would use >>>>>>>>>>> subprocess.Popen() to run the specified command just like we do >>>>>>>>>>> today. >>>>>>>>>>> We would only actually pass the timeout parameter to >>>>>>>>>>> ipautil.run() in >>>>>>>>>>> synconce_ntp() for now, so no other commands would have a >>>>>>>>>>> timeout in >>>>>>>>>>> effect. The capability would be available for other commands >>>>>>>>>>> this way >>>>>>>>>>> though. >>>>>>>>>>> >>>>>>>>>>> Let me propose a patch with this implementation, and if you >>>>>>>>>>> don't like >>>>>>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>>>>>> synconce_ntp(). >>>>>>>>>> An updated patch 0002 is attached that uses the approach >>>>>>>>>> mentioned above. >>>>>>>>> Looks good. Not to nitpick to death but... >>>>>>>>> >>>>>>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>>>>>> "/usr/bin/timeout" and reference that instead? It's for >>>>>>>>> portability. >>>>>>>> Sure. I was wondering if we should do something around a full >>>>>>>> path. 
>>>>>>>> >>>>>>>>> And a question. I'm impatient. Should there be a notice that it >>>>>>>>> will >>>>>>>>> timeout after n seconds somewhere so people like me don't ^C >>>>>>>>> after 2 >>>>>>>>> seconds? Or is that just overkill and I need to learn patience? >>>>>>>> Probably both. :) There's always going to be someone out there who >>>>>>>> will >>>>>>>> do ctrl-C, so I think printing out a notice is a good idea. I'll >>>>>>>> add this. >>>>>>>> >>>>>>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode >>>>>>>>> == 15? >>>>>>>> After some reading, it seems that '==' should be used. Small >>>>>>>> integers >>>>>>>> work with 'is', but '==' is the consistent way that equality of >>>>>>>> integers >>>>>>>> should be checked. I'll modify this. >>>>>>> Another updated patch 0002 is attached that addresses Rob's review >>>>>>> comments. >>>>>>> >>>>>>> Thanks, >>>>>>> -NGK >>>>>>> >>>>>> LGTM. Does someone else have time to test this? >>>>>> >>>>>> I also don't know if there is a policy on placement of new items in >>>>>> paths.py. Things are all over the place and some have BIN_ prefix and >>>>>> some don't. >>>>> Yeah, I noticed this too. It didn't look like there was any >>>>> organization. >>>>> >>>>> -NGK >>>>>> rob >>>>>> >>>>> _______________________________________________ >>>>> Freeipa-devel mailing list >>>>> Freeipa-devel at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>> paths are (should be) ordered alphabetically by value, not by variable >>>> name. >>>> I see there are last 2 lines ordered incorrectly, but please try to >>>> keep >>>> order as I wrote above. >>> >>> OK. A new patch is attached that puts the path to 'timeout' in the >>> proper location. Fixing up the order of other paths is unrelated, and >>> should be handled in a separate patch. >> >> Bump. Does anyone else have any review feedback on this? It would be >> nice to get this in soon since we currently have the potential of just >> hanging when installing clients on F21+. > > I am ok with the approach, if the patches work. I agree it would be nice > to have this fixed in F21 and F22 soon. > > Martin, could you please take a look? This one should be easy to test. > > Thanks, > Martin I have tested the patches on F21 client and they work as expected. -- Martin^3 Babinsky From mbabinsk at redhat.com Mon Mar 16 13:26:09 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 14:26:09 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0020] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5506D01C.9060606@redhat.com> References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com> <5501A9EA.7040208@redhat.com> <5501BBB7.2040709@redhat.com> <5506D01C.9060606@redhat.com> Message-ID: <5506D9F1.6050809@redhat.com> On 03/16/2015 01:44 PM, Martin Basti wrote: > On 12/03/15 17:15, Martin Babinsky wrote: >> On 03/12/2015 03:59 PM, Martin Babinsky wrote: >>> On 03/11/2015 03:13 PM, Martin Basti wrote: >>>> On 11/03/15 13:00, Martin Babinsky wrote: >>>>> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >>>>> >>>>> They are to be applied to master branch. I will rebase them for >>>>> ipa-4-1 after the review. >>>>> >>>> Thank you for the patches. >>>> >>>> I have a few comments: >>>> >>>> IPA-4-1 >>>> Replace simple bind with LDAPI is too big change for 4-1, we should >>>> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches should >>>> go only into IPA master branch. 
>>>> >>>> You can do something like this: >>>> --- a/ipaserver/install/service.py >>>> +++ b/ipaserver/install/service.py >>>> @@ -107,6 +107,10 @@ class Service(object): >>>> if not self.realm: >>>> raise errors.NotFound(reason="realm is missing >>>> for >>>> %s" % (self)) >>>> conn = ipaldap.IPAdmin(ldapi=self.ldapi, >>>> realm=self.realm) >>>> + elif self.dm_password is not None: >>>> + conn = ipaldap.IPAdmin(self.fqdn, port=389, >>>> + cacert=paths.IPA_CA_CRT, >>>> + start_tls=True) >>>> else: >>>> conn = ipaldap.IPAdmin(self.fqdn, port=389) >>>> >>>> >>>> PATCH 0018: >>>> 1) >>>> please add there more chatty commit message about using LDAPI >>>> >>>> 2) >>>> I do not like much idea of adding 'realm' kwarg into __init__ method of >>>> OpenDNSSECInstance >>>> IIUC, it is because get_masters() method, which requires realm to use >>>> LDAPI. >>>> >>>> You can just add ods.realm=, before call get_master() in >>>> ipa-dns-install >>>> if options.dnssec_master: >>>> + ods.realm=api.env.realm >>>> dnssec_masters = ods.get_masters() >>>> (Honza will change it anyway during refactoring) >>>> >>>> PATCH 0019: >>>> 1) >>>> commit message deserves to be more chatty, can you explain there why >>>> you >>>> removed kerberos cache? >>>> >>>> Martin^2 >>>> >>> >>> Attaching updated patches. >>> >>> Patch 0018 should go to both 4.1 and master branches. >>> >>> Patch 0019 should go only to master. >>> >>> >>> >> >> One more update. >> >> Patch 0018 is for both 4.1 and master. >> Patch 0019 is for master only. >> >> >> > Thank for patches > Patch 0018: > 1) > Works for me but needs rebase on master > > Patch 0019: > 1) > Please rename the patch/commit message, the patch changes only > ipa-dns-install connections not all DS operations > > 2) > I have some troubles with applying patch, it needs rebase due 0018 > > > -- > Martin Basti > Attaching updated patches: Patch 0018 is for ipa-4-1 branch. Patches 0019 and 0020 are for master branch. I hope they will apply cleanly this time (they did for me). -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0018-3-ipa-dns-install-use-STARTTLS-to.patch Type: text/x-patch Size: 8091 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0019-4-ipa-dns-install-use-STARTTLS-to.patch Type: text/x-patch Size: 8133 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0020-1-ipa-dns-install-use-LDAPI-to.patch Type: text/x-patch Size: 10441 bytes Desc: not available URL: From mkubik at redhat.com Mon Mar 16 14:32:35 2015 From: mkubik at redhat.com (Milan Kubik) Date: Mon, 16 Mar 2015 15:32:35 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <5506B87B.6050600@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> Message-ID: <5506E983.4020807@redhat.com> On 03/16/2015 12:03 PM, Milan Kubik wrote: > On 03/13/2015 02:59 PM, Milan Kubik wrote: >> Hi, >> >> this is a patch with port of [1] to pytest. >> >> [1]: >> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >> >> Cheers, >> Milan >> >> >> > Added few more asserts in methods where the test could fail and cause > other errors. > > New version of the patch after brief discussion with Martin Basti. Removed unnecessary variable assignments and separated a new test case. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkubik-0001-3-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 11650 bytes Desc: not available URL: From mkosek at redhat.com Mon Mar 16 14:58:34 2015 From: mkosek at redhat.com (Martin Kosek) Date: Mon, 16 Mar 2015 15:58:34 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <5506D2E2.8020109@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> <5501FA67.3020504@redhat.com> <5502AA40.2090108@redhat.com> <5506D2E2.8020109@redhat.com> Message-ID: <5506EF9A.3050806@redhat.com> On 03/16/2015 01:56 PM, Martin Babinsky wrote: > On 03/13/2015 10:13 AM, Martin Kosek wrote: >> On 03/12/2015 09:43 PM, Nathan Kinder wrote: >>> >>> >>> On 03/04/2015 11:25 AM, Nathan Kinder wrote: >>>> >>>> >>>> On 03/04/2015 10:58 AM, Martin Basti wrote: >>>>> On 04/03/15 19:56, Nathan Kinder wrote: >>>>>> >>>>>> On 03/04/2015 10:41 AM, Rob Crittenden wrote: >>>>>>> Nathan Kinder wrote: >>>>>>>> >>>>>>>> On 02/28/2015 01:13 PM, Nathan Kinder wrote: >>>>>>>>> >>>>>>>>> On 02/28/2015 01:07 PM, Rob Crittenden wrote: >>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>> >>>>>>>>>>> On 02/27/2015 01:18 PM, Nathan Kinder wrote: >>>>>>>>>>>> >>>>>>>>>>>> On 02/27/2015 01:08 PM, Rob Crittenden wrote: >>>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>> On 02/27/2015 12:20 PM, Rob Crittenden wrote: >>>>>>>>>>>>>>> Nathan Kinder wrote: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> On 02/26/2015 12:55 AM, Martin Kosek wrote: >>>>>>>>>>>>>>>>> On 02/26/2015 03:28 AM, Nathan Kinder wrote: >>>>>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> The two attached patches address some issues that affect >>>>>>>>>>>>>>>>>> ipa-client-install when syncing time from the NTP server. >>>>>>>>>>>>>>>>>> Now that we >>>>>>>>>>>>>>>>>> use ntpd to perform the time sync, the client install can >>>>>>>>>>>>>>>>>> end up hanging >>>>>>>>>>>>>>>>>> forever when the server is not reachable (firewall issues, >>>>>>>>>>>>>>>>>> etc.). These >>>>>>>>>>>>>>>>>> patches address the issues in two different ways: >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 1 - Don't attempt to sync time when --no-ntp is specified. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> 2 - Implement a timeout capability that is used when we >>>>>>>>>>>>>>>>>> run ntpd to >>>>>>>>>>>>>>>>>> perform the time sync to prevent indefinite hanging. >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> The one potentially contentious issue is that this >>>>>>>>>>>>>>>>>> introduces a new >>>>>>>>>>>>>>>>>> dependency on python-subprocess32 to allow us to have >>>>>>>>>>>>>>>>>> timeout support >>>>>>>>>>>>>>>>>> when using Python 2.x. This is packaged for Fedora, but I >>>>>>>>>>>>>>>>>> don't see it >>>>>>>>>>>>>>>>>> on RHEL or CentOS currently. It would need to be packaged >>>>>>>>>>>>>>>>>> there. 
>>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> https://fedorahosted.org/freeipa/ticket/4842 >>>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>>> Thanks, >>>>>>>>>>>>>>>>>> -NGK >>>>>>>>>>>>>>>>> Thanks for Patches. For the second patch, I would really >>>>>>>>>>>>>>>>> prefer to avoid new >>>>>>>>>>>>>>>>> dependency, especially if it's not packaged in RHEL/CentOS. >>>>>>>>>>>>>>>>> Maybe we could use >>>>>>>>>>>>>>>>> some workaround instead, as in: >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> http://stackoverflow.com/questions/3733270/python-subprocess-timeout >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> I don't like having to add an additional dependency either, >>>>>>>>>>>>>>>> but the >>>>>>>>>>>>>>>> alternative seems more risky. Utilizing the subprocess32 >>>>>>>>>>>>>>>> module (which >>>>>>>>>>>>>>>> is really just a backport of the normal subprocess module >>>>>>>>>>>>>>>> from Python >>>>>>>>>>>>>>>> 3.x) is not invasive for our code in ipautil.run(). Adding >>>>>>>>>>>>>>>> some sort of >>>>>>>>>>>>>>>> a thread that has to kill the spawned subprocess seems more >>>>>>>>>>>>>>>> risky (see >>>>>>>>>>>>>>>> the discussion about a race condition in the stackoverflow >>>>>>>>>>>>>>>> thread >>>>>>>>>>>>>>>> above). That said, I'm sure the thread/poll method can be >>>>>>>>>>>>>>>> made to work >>>>>>>>>>>>>>>> if you and others feel strongly that this is a better >>>>>>>>>>>>>>>> approach than >>>>>>>>>>>>>>>> adding a new dependency. >>>>>>>>>>>>>>> Why not use /usr/bin/timeout from coreutils? >>>>>>>>>>>>>> That sounds like a perfectly good idea. I wasn't aware of >>>>>>>>>>>>>> it's >>>>>>>>>>>>>> existence (or it's possible that I forgot about it). Thanks >>>>>>>>>>>>>> for the >>>>>>>>>>>>>> suggestion! I'll test out a reworked version of the patch. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Do you think that there is value in leaving the timeout >>>>>>>>>>>>>> capability >>>>>>>>>>>>>> centrally in ipautil.run()? We only need it for the call to >>>>>>>>>>>>>> 'ntpd' >>>>>>>>>>>>>> right now, but there might be a need for using a timeout for >>>>>>>>>>>>>> other >>>>>>>>>>>>>> commands in the future. The alternative is to just modify >>>>>>>>>>>>>> synconce_ntp() to use /usr/bin/timeout and leave ipautil.run() >>>>>>>>>>>>>> alone. >>>>>>>>>>>>> I think it would require a lot of research. One of the programs >>>>>>>>>>>>> spawned >>>>>>>>>>>>> by this is pkicreate which could take quite some time, and >>>>>>>>>>>>> spawning a >>>>>>>>>>>>> clone in particular. >>>>>>>>>>>>> >>>>>>>>>>>>> It is definitely an interesting idea but I think it is safest >>>>>>>>>>>>> for now to >>>>>>>>>>>>> limit it to just NTP for now. >>>>>>>>>>>> What I meant was that we would have an optional keyword >>>>>>>>>>>> "timeout" >>>>>>>>>>>> parameter to ipautil.run() that defaults to None, just like my >>>>>>>>>>>> subprocess32 approach. If a timeout is not passed in, we >>>>>>>>>>>> would use >>>>>>>>>>>> subprocess.Popen() to run the specified command just like we do >>>>>>>>>>>> today. >>>>>>>>>>>> We would only actually pass the timeout parameter to >>>>>>>>>>>> ipautil.run() in >>>>>>>>>>>> synconce_ntp() for now, so no other commands would have a >>>>>>>>>>>> timeout in >>>>>>>>>>>> effect. The capability would be available for other commands >>>>>>>>>>>> this way >>>>>>>>>>>> though. >>>>>>>>>>>> >>>>>>>>>>>> Let me propose a patch with this implementation, and if you >>>>>>>>>>>> don't like >>>>>>>>>>>> it, we can leave ipautil.run() alone and restrict the changes to >>>>>>>>>>>> synconce_ntp(). 
>>>>>>>>>>> An updated patch 0002 is attached that uses the approach >>>>>>>>>>> mentioned above. >>>>>>>>>> Looks good. Not to nitpick to death but... >>>>>>>>>> >>>>>>>>>> Can you add timeout to ipaplatform/base/paths.py as BIN_TIMEOUT = >>>>>>>>>> "/usr/bin/timeout" and reference that instead? It's for >>>>>>>>>> portability. >>>>>>>>> Sure. I was wondering if we should do something around a full >>>>>>>>> path. >>>>>>>>> >>>>>>>>>> And a question. I'm impatient. Should there be a notice that it >>>>>>>>>> will >>>>>>>>>> timeout after n seconds somewhere so people like me don't ^C >>>>>>>>>> after 2 >>>>>>>>>> seconds? Or is that just overkill and I need to learn patience? >>>>>>>>> Probably both. :) There's always going to be someone out there who >>>>>>>>> will >>>>>>>>> do ctrl-C, so I think printing out a notice is a good idea. I'll >>>>>>>>> add this. >>>>>>>>> >>>>>>>>>> Stylistically, should we prefer p.returncode is 15 or p.returncode >>>>>>>>>> == 15? >>>>>>>>> After some reading, it seems that '==' should be used. Small >>>>>>>>> integers >>>>>>>>> work with 'is', but '==' is the consistent way that equality of >>>>>>>>> integers >>>>>>>>> should be checked. I'll modify this. >>>>>>>> Another updated patch 0002 is attached that addresses Rob's review >>>>>>>> comments. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> -NGK >>>>>>>> >>>>>>> LGTM. Does someone else have time to test this? >>>>>>> >>>>>>> I also don't know if there is a policy on placement of new items in >>>>>>> paths.py. Things are all over the place and some have BIN_ prefix and >>>>>>> some don't. >>>>>> Yeah, I noticed this too. It didn't look like there was any >>>>>> organization. >>>>>> >>>>>> -NGK >>>>>>> rob >>>>>>> >>>>>> _______________________________________________ >>>>>> Freeipa-devel mailing list >>>>>> Freeipa-devel at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>> paths are (should be) ordered alphabetically by value, not by variable >>>>> name. >>>>> I see there are last 2 lines ordered incorrectly, but please try to >>>>> keep >>>>> order as I wrote above. >>>> >>>> OK. A new patch is attached that puts the path to 'timeout' in the >>>> proper location. Fixing up the order of other paths is unrelated, and >>>> should be handled in a separate patch. >>> >>> Bump. Does anyone else have any review feedback on this? It would be >>> nice to get this in soon since we currently have the potential of just >>> hanging when installing clients on F21+. >> >> I am ok with the approach, if the patches work. I agree it would be nice >> to have this fixed in F21 and F22 soon. >> >> Martin, could you please take a look? This one should be easy to test. >> >> Thanks, >> Martin > > I have tested the patches on F21 client and they work as expected. > I take that as an ACK. Before pushing the change, I just changed one print format from "%s" to "%d" given a number was printed. Pushed to: master: a58b77ca9cd3620201306258dd6bd05ea1c73c73 ipa-4-1: 80aeb445e2034776f08668bf04dfd711af477b25 Petr1, it would be nice to get this one built on F21+, to unblock Ipsilon project. 
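For anyone picking this up from the archives, the shape of the timeout handling discussed above is roughly the following. This is a simplified sketch rather than the pushed patch: the helper name run_with_timeout and the example command and values are made up, /usr/bin/timeout is spelled out here although the patch references it via paths.BIN_TIMEOUT, and the real code also prints a notice first so users know the sync may take a while:

    import subprocess

    def run_with_timeout(args, timeout=None):
        # prepend coreutils timeout(1) so the child process is killed if it
        # runs longer than the given number of seconds
        if timeout is not None:
            args = ["/usr/bin/timeout", str(timeout)] + args
        p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        # coreutils timeout exits with status 124 when the command timed out
        return p.returncode, stdout, stderr

    # hypothetical one-shot NTP sync, capped at 15 seconds
    rc, out, err = run_with_timeout(["/usr/sbin/ntpd", "-qgc", "/etc/ntp.conf"], timeout=15)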
From mbasti at redhat.com Mon Mar 16 16:01:25 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 16 Mar 2015 17:01:25 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0020] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5506D9F1.6050809@redhat.com> References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com> <5501A9EA.7040208@redhat.com> <5501BBB7.2040709@redhat.com> <5506D01C.9060606@redhat.com> <5506D9F1.6050809@redhat.com> Message-ID: <5506FE55.7060605@redhat.com> On 16/03/15 14:26, Martin Babinsky wrote: > On 03/16/2015 01:44 PM, Martin Basti wrote: >> On 12/03/15 17:15, Martin Babinsky wrote: >>> On 03/12/2015 03:59 PM, Martin Babinsky wrote: >>>> On 03/11/2015 03:13 PM, Martin Basti wrote: >>>>> On 11/03/15 13:00, Martin Babinsky wrote: >>>>>> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >>>>>> >>>>>> They are to be applied to master branch. I will rebase them for >>>>>> ipa-4-1 after the review. >>>>>> >>>>> Thank you for the patches. >>>>> >>>>> I have a few comments: >>>>> >>>>> IPA-4-1 >>>>> Replace simple bind with LDAPI is too big change for 4-1, we should >>>>> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches >>>>> should >>>>> go only into IPA master branch. >>>>> >>>>> You can do something like this: >>>>> --- a/ipaserver/install/service.py >>>>> +++ b/ipaserver/install/service.py >>>>> @@ -107,6 +107,10 @@ class Service(object): >>>>> if not self.realm: >>>>> raise errors.NotFound(reason="realm is missing >>>>> for >>>>> %s" % (self)) >>>>> conn = ipaldap.IPAdmin(ldapi=self.ldapi, >>>>> realm=self.realm) >>>>> + elif self.dm_password is not None: >>>>> + conn = ipaldap.IPAdmin(self.fqdn, port=389, >>>>> + cacert=paths.IPA_CA_CRT, >>>>> + start_tls=True) >>>>> else: >>>>> conn = ipaldap.IPAdmin(self.fqdn, port=389) >>>>> >>>>> >>>>> PATCH 0018: >>>>> 1) >>>>> please add there more chatty commit message about using LDAPI >>>>> >>>>> 2) >>>>> I do not like much idea of adding 'realm' kwarg into __init__ >>>>> method of >>>>> OpenDNSSECInstance >>>>> IIUC, it is because get_masters() method, which requires realm to use >>>>> LDAPI. >>>>> >>>>> You can just add ods.realm=, before call get_master() in >>>>> ipa-dns-install >>>>> if options.dnssec_master: >>>>> + ods.realm=api.env.realm >>>>> dnssec_masters = ods.get_masters() >>>>> (Honza will change it anyway during refactoring) >>>>> >>>>> PATCH 0019: >>>>> 1) >>>>> commit message deserves to be more chatty, can you explain there why >>>>> you >>>>> removed kerberos cache? >>>>> >>>>> Martin^2 >>>>> >>>> >>>> Attaching updated patches. >>>> >>>> Patch 0018 should go to both 4.1 and master branches. >>>> >>>> Patch 0019 should go only to master. >>>> >>>> >>>> >>> >>> One more update. >>> >>> Patch 0018 is for both 4.1 and master. >>> Patch 0019 is for master only. >>> >>> >>> >> Thank for patches >> Patch 0018: >> 1) >> Works for me but needs rebase on master >> >> Patch 0019: >> 1) >> Please rename the patch/commit message, the patch changes only >> ipa-dns-install connections not all DS operations >> >> 2) >> I have some troubles with applying patch, it needs rebase due 0018 >> >> >> -- >> Martin Basti >> > > Attaching updated patches: > > Patch 0018 is for ipa-4-1 branch. > Patches 0019 and 0020 are for master branch. > > I hope they will apply cleanly this time (they did for me). 
> ACK -- Martin Basti From mbabinsk at redhat.com Mon Mar 16 16:20:15 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 17:20:15 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <5506CE0A.20806@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <5506CE0A.20806@redhat.com> Message-ID: <550702BF.8000000@redhat.com> On 03/16/2015 01:35 PM, Jan Cholasta wrote: > Dne 16.3.2015 v 13:30 Martin Babinsky napsal(a): >> On 03/16/2015 12:15 PM, Martin Kosek wrote: >>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: >>>> Attaching the next iteration of patches. >>>> >>>> I have tried my best to reword the ipa-client-install man page bit >>>> about the >>>> new option. Any suggestions to further improve it are welcome. >>>> >>>> I have also slightly modified the 'kinit_keytab' function so that in >>>> Kerberos >>>> errors are reported for each attempt and the text of the last error >>>> is retained >>>> when finally raising exception. >>> >>> The approach looks very good. I think that my only concern with this >>> patch is >>> this part: >>> >>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >>> ... >>> + except krbV.Krb5Error as e: >>> + last_exc = str(e) >>> + root_logger.debug("Attempt %d/%d: failed: %s" >>> + % (attempt, attempts, last_exc)) >>> + time.sleep(1) >>> + >>> + root_logger.debug("Maximum number of attempts (%d) reached" >>> + % attempts) >>> + raise StandardError("Error initializing principal %s: %s" >>> + % (principal, last_exc)) >>> >>> The problem here is that this function will raise the super-generic >>> StandardError instead of the proper with all the context and >>> information about >>> the error that the caller can then process. >>> >>> I think that >>> >>> except krbV.Krb5Error as e: >>> if attempt == max_attempts: >>> log something >>> raise >>> >>> would be better. >>> >> >> Yes that seems reasonable. I'm just thinking whether we should re-raise >> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes >> more sense to me as we would not have to additionally import Krb5Error >> everywhere and it would also make the resulting errors more consistent. >> >> I am thinking about someting like this: >> >> except krbV.Krb5Error as e: >> if attempt == attempts: >> # log that we have reaches maximum number of attempts >> raise KerberosError(minor=str(e)) >> >> What do you think? >> > > NACK, don't use ipalib from ipapython in new code, we are trying to get > rid of this circular dependency. Krb5Error is OK in this case. > Ok attaching updated patches. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0015-4-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch Type: text/x-patch Size: 4180 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0016-4-ipa-client-install-try-to-get-host-TGT-several-times.patch Type: text/x-patch Size: 8821 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbabinsk-0017-4-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch Type: text/x-patch Size: 11866 bytes Desc: not available URL: From mbasti at redhat.com Mon Mar 16 16:23:12 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 16 Mar 2015 17:23:12 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <5506E983.4020807@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> Message-ID: <55070370.1010200@redhat.com> On 16/03/15 15:32, Milan Kubik wrote: > On 03/16/2015 12:03 PM, Milan Kubik wrote: >> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>> Hi, >>> >>> this is a patch with port of [1] to pytest. >>> >>> [1]: >>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>> >>> Cheers, >>> Milan >>> >>> >>> >> Added few more asserts in methods where the test could fail and cause >> other errors. >> >> > New version of the patch after brief discussion with Martin Basti. > Removed unnecessary variable assignments and separated a new test case. > > Hello, thank you for the patch. I have a few nitpicks: 1) You can remove this and use just hexlify(s) +def str_to_hex(s): + return ''.join("{:02x}".format(ord(c)) for c in s) 2) + def test_find_secret_key(self, p11): + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, label=u"???-aest") In tests before you tested the exact number of expected IDs returned by find_keys method, why not here? Martin^2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbabinsk at redhat.com Mon Mar 16 17:15:09 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 18:15:09 +0100 Subject: [Freeipa-devel] [PATCH 0021] show the exception message thrown by dogtag._parse_ca_status during install Message-ID: <55070F9D.9090002@redhat.com> https://fedorahosted.org/freeipa/ticket/4885 -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0021-1-show-the-exception-message-thrown-by-dogtag._parse_c.patch Type: text/x-patch Size: 1105 bytes Desc: not available URL: From mbabinsk at redhat.com Mon Mar 16 17:17:02 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 16 Mar 2015 18:17:02 +0100 Subject: [Freeipa-devel] [PATCH 0021] how the exception message raised by dogtag._parse_ca_status during install Message-ID: <5507100E.5040907@redhat.com> https://fedorahosted.org/freeipa/ticket/4885 -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0021-1-show-the-exception-message-thrown-by-dogtag._parse_c.patch Type: text/x-patch Size: 1105 bytes Desc: not available URL: From edewata at redhat.com Tue Mar 17 04:59:31 2015 From: edewata at redhat.com (Endi Sukma Dewata) Date: Tue, 17 Mar 2015 11:59:31 +0700 Subject: [Freeipa-devel] [PATCH] Password vault In-Reply-To: <5501E8BD.7000503@redhat.com> References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com> <54F96B22.9050507@redhat.com> <55004D5D.6060300@redhat.com> <5501E8BD.7000503@redhat.com> Message-ID: <5507B4B3.6060803@redhat.com> On 3/13/2015 2:27 AM, Endi Sukma Dewata wrote: > On 3/11/2015 9:12 PM, Endi Sukma Dewata wrote: >> Thanks for the review. New patch attached to be applied on top of all >> previous patches. Please see comments below. > > New patch #362-1 attached replacing #362. It fixed some issues in > handle_not_found(). New patch #363 attached. 
It adds supports for vault & vaultcontainer ID parameter. -- Endi S. Dewata -------------- next part -------------- >From e1d2a3a62e6d16c1c9b19f4cb19b900427ea5e1f Mon Sep 17 00:00:00 2001 From: "Endi S. Dewata" Date: Thu, 12 Mar 2015 09:21:02 -0400 Subject: [PATCH] Vault ID improvements. The vault plugin has been modified to accept a single vault ID in addition to separate name and parent ID attributes. The vault container has also been modified in the same way. New test cases have been added to verify this functionality. https://fedorahosted.org/freeipa/ticket/3872 --- ipalib/plugins/vault.py | 143 +++++++++++++++++-- ipalib/plugins/vaultcontainer.py | 137 +++++++++++++++++-- ipalib/plugins/vaultsecret.py | 10 +- ipatests/test_xmlrpc/test_vault_plugin.py | 151 +++++++++++++++++++++ ipatests/test_xmlrpc/test_vaultcontainer_plugin.py | 90 ++++++------ ipatests/test_xmlrpc/test_vaultsecret_plugin.py | 115 ++++++++++++++++ 6 files changed, 565 insertions(+), 81 deletions(-) diff --git a/ipalib/plugins/vault.py b/ipalib/plugins/vault.py index d47067758186601365e5924f5d13c7ab51ba66e5..38693d0710e000695cae21fb4db5dfb4c85b5c74 100644 --- a/ipalib/plugins/vault.py +++ b/ipalib/plugins/vault.py @@ -61,7 +61,7 @@ EXAMPLES: ipa vault-find """) + _(""" List shared vaults: - ipa vault-find --container-id /shared + ipa vault-find /shared """) + _(""" Add a standard vault: ipa vault-add MyVault @@ -171,8 +171,8 @@ class vault(LDAPObject): cli_name='vault_name', label=_('Vault name'), primary_key=True, - pattern='^[a-zA-Z0-9_.-]+$', - pattern_errmsg='may only include letters, numbers, _, ., and -', + pattern='^[a-zA-Z0-9_.-/]+$', + pattern_errmsg='may only include letters, numbers, _, ., -, and /', maxlength=255, ), Str('vault_id?', @@ -217,7 +217,7 @@ class vault(LDAPObject): # get vault ID from parameters name = keys[-1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_id = container_id + name dn = self.base_dn @@ -250,6 +250,81 @@ class vault(LDAPObject): return id + def split_id(self, id): + """ + Splits a vault ID into (vault name, container ID) tuple. + """ + + if not id: + return (None, None) + + # split ID into container ID and vault name + parts = id.rsplit(u'/', 1) + + if len(parts) == 2: + vault_name = parts[1] + container_id = u'%s/' % parts[0] + + else: + vault_name = parts[0] + container_id = None + + if not vault_name: + vault_name = None + + return (vault_name, container_id) + + def merge_id(self, vault_name, container_id): + """ + Merges a vault name and a container ID into a vault ID. + """ + + if not vault_name: + id = container_id + + elif vault_name.startswith('/') or not container_id: + id = vault_name + + else: + id = container_id + vault_name + + return id + + def normalize_params(self, *args, **options): + """ + Normalizes the vault ID in the parameters. + """ + + vault_id = self.parse_params(*args, **options) + (vault_name, container_id) = self.split_id(vault_id) + return self.update_params(vault_name, container_id, *args, **options) + + def parse_params(self, *args, **options): + """ + Extracts the vault name and container ID in the parameters. 
+ """ + + # get vault name and container ID from parameters + vault_name = args[0] + if type(vault_name) is tuple: + vault_name = vault_name[0] + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + return self.merge_id(vault_name, container_id) + + def update_params(self, new_vault_name, new_container_id, *args, **options): + """ + Stores vault name and container ID back into the parameters. + """ + + args_list = list(args) + args_list[0] = new_vault_name + args = tuple(args_list) + + options['container_id'] = new_container_id + + return (args, options) + def get_kra_id(self, id): """ Generates a client key ID to store/retrieve data in KRA. @@ -363,10 +438,14 @@ class vault_add(LDAPCreate): msg_summary = _('Added vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_add, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = options.get('ipavaulttype') data = options.get('data') @@ -549,7 +628,7 @@ class vault_add(LDAPCreate): def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) # set owner principal = getattr(context, 'principal') @@ -576,7 +655,7 @@ class vault_add(LDAPCreate): def handle_not_found(self, *args, **options): - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) raise errors.NotFound( reason=self.obj.parent_not_found_msg % { @@ -598,6 +677,10 @@ class vault_del(LDAPDelete): msg_summary = _('Deleted vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_del, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) @@ -638,10 +721,16 @@ class vault_find(LDAPSearch): '%(count)d vault matched', '%(count)d vaults matched', 0 ) + def params_2_args_options(self, **params): + (args, options) = super(vault_find, self).params_2_args_options(**params) + container_id = self.obj.parse_params(*args, **options) + container_id = self.api.Object.vaultcontainer.normalize_id(container_id) + return self.obj.update_params(None, container_id, *args, **options) + def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) (name, parent_id) = self.api.Object.vaultcontainer.split_id(container_id) base_dn = self.api.Object.vaultcontainer.get_dn(name, parent_id=parent_id) @@ -657,7 +746,7 @@ class vault_find(LDAPSearch): def handle_not_found(self, *args, **options): - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) # vault container is user's private container, ignore if 
container_id == self.api.Object.vaultcontainer.get_private_id(): @@ -684,6 +773,10 @@ class vault_mod(LDAPUpdate): msg_summary = _('Modified vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_mod, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -703,6 +796,10 @@ class vault_show(LDAPRetrieve): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_show, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -804,10 +901,14 @@ class vault_archive(LDAPRetrieve): msg_summary = _('Archived data into vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_archive, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -1118,10 +1219,14 @@ class vault_retrieve(LDAPRetrieve): msg_summary = _('Retrieved data from vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_retrieve, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -1396,6 +1501,10 @@ class vault_add_owner(LDAPAddMember): member_attributes = ['owner'] member_count_out = ('%i owner added.', '%i owners added.') + def params_2_args_options(self, **params): + (args, options) = super(vault_add_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1418,6 +1527,10 @@ class vault_remove_owner(LDAPRemoveMember): member_attributes = ['owner'] member_count_out = ('%i owner removed.', '%i owners removed.') + def params_2_args_options(self, **params): + (args, options) = super(vault_remove_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1437,6 +1550,10 @@ class vault_add_member(LDAPAddMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_add_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1456,6 +1573,10 @@ class vault_remove_member(LDAPRemoveMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_remove_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) diff --git 
a/ipalib/plugins/vaultcontainer.py b/ipalib/plugins/vaultcontainer.py index 27cb6fff3479335943bae59340c9afc773dfc004..5e8ee353ba2a625751fbfa9867366d98bdd5aea3 100644 --- a/ipalib/plugins/vaultcontainer.py +++ b/ipalib/plugins/vaultcontainer.py @@ -40,7 +40,10 @@ EXAMPLES: ipa vaultcontainer-find """) + _(""" List top-level vault containers: - ipa vaultcontainer-find --parent-id / + ipa vaultcontainer-find / +""") + _(""" + List shared vault containers: + ipa vaultcontainer-find /shared """) + _(""" Add a vault container: ipa vaultcontainer-add MyContainer @@ -104,8 +107,8 @@ class vaultcontainer(LDAPObject): cli_name='container_name', label=_('Container name'), primary_key=True, - pattern='^[a-zA-Z0-9_.-]+$', - pattern_errmsg='may only include letters, numbers, _, ., and -', + pattern='^[a-zA-Z0-9_.-/]+$', + pattern_errmsg='may only include letters, numbers, _, ., -, and /', maxlength=255, ), Str('container_id?', @@ -128,7 +131,7 @@ class vaultcontainer(LDAPObject): # get container ID from parameters name = keys[-1] - parent_id = self.normalize_id(options.get('parent_id')) + parent_id = self.absolute_id(options.get('parent_id')) container_id = parent_id if name: @@ -176,14 +179,21 @@ class vaultcontainer(LDAPObject): Normalizes container ID. """ + # make sure ID ends with slash + if id and not id.endswith(u'/'): + return id + u'/' + + return id + + def absolute_id(self, id): + """ + Generate absolute container ID. + """ + # if ID is empty, return user's private container ID if not id: return self.get_private_id() - # make sure ID ends with slash - if not id.endswith(u'/'): - id += u'/' - # if it's an absolute ID, do nothing if id.startswith(u'/'): return id @@ -203,8 +213,68 @@ class vaultcontainer(LDAPObject): # split ID into parent ID, container name, and empty string parts = id.rsplit(u'/', 2) - # return container name and parent ID - return (parts[1], parts[0] + u'/') + if len(parts) == 3: + container_name = parts[1] + parent_id = u'%s/' % parts[0] + + elif len(parts) == 2: + container_name = parts[0] + parent_id = None + + if not container_name: + container_name = None + + return (container_name, parent_id) + + def merge_id(self, container_name, parent_id): + """ + Merges a container name and a parent ID into a container ID. + """ + + if not container_name: + id = parent_id + + elif container_name.startswith('/') or not parent_id: + id = container_name + + else: + id = parent_id + container_name + + return self.normalize_id(id) + + def normalize_params(self, *args, **options): + """ + Normalizes the container ID in the parameters. + """ + + container_id = self.parse_params(*args, **options) + (container_name, parent_id) = self.split_id(container_id) + return self.update_params(container_name, parent_id, *args, **options) + + def parse_params(self, *args, **options): + """ + Extracts the container name and parent ID in the parameters. + """ + + container_name = args[0] + if type(container_name) is tuple: + container_name = container_name[0] + parent_id = self.normalize_id(options.get('parent_id')) + + return self.merge_id(container_name, parent_id) + + def update_params(self, new_container_name, new_parent_id, *args, **options): + """ + Stores container name and parent ID back into the parameters. 
+ """ + + args_list = list(args) + args_list[0] = new_container_name + args = tuple(args_list) + + options['parent_id'] = new_parent_id + + return (args, options) def create_entry(self, dn, owner=None): """ @@ -249,10 +319,14 @@ class vaultcontainer_add(LDAPCreate): msg_summary = _('Added vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) # set owner principal = getattr(context, 'principal') @@ -279,7 +353,7 @@ class vaultcontainer_add(LDAPCreate): def handle_not_found(self, *args, **options): - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) raise errors.NotFound( reason=self.obj.parent_not_found_msg % { @@ -306,6 +380,10 @@ class vaultcontainer_del(LDAPDelete): msg_summary = _('Deleted vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_del, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def pre_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) @@ -335,10 +413,16 @@ class vaultcontainer_find(LDAPSearch): '%(count)d vault container matched', '%(count)d vault containers matched', 0 ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_find, self).params_2_args_options(**params) + + parent_id = self.obj.parse_params(*args, **options) + return self.obj.update_params(None, parent_id, *args, **options) + def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) (name, grandparent_id) = self.obj.split_id(parent_id) base_dn = self.obj.get_dn(name, parent_id=grandparent_id) @@ -353,7 +437,7 @@ class vaultcontainer_find(LDAPSearch): def handle_not_found(self, *args, **options): - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) # parent is user's private container, ignore if parent_id == self.obj.get_private_id(): @@ -381,6 +465,10 @@ class vaultcontainer_mod(LDAPUpdate): msg_summary = _('Modified vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_mod, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -400,6 +488,10 @@ class vaultcontainer_show(LDAPRetrieve): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_show, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -422,6 +514,10 @@ class vaultcontainer_add_owner(LDAPAddMember): member_attributes = ['owner'] member_count_out = ('%i owner added.', '%i owners added.') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add_owner, self).params_2_args_options(**params) + return 
self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -444,6 +540,10 @@ class vaultcontainer_remove_owner(LDAPRemoveMember): member_attributes = ['owner'] member_count_out = ('%i owner removed.', '%i owners removed.') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_remove_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -463,6 +563,10 @@ class vaultcontainer_add_member(LDAPAddMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -475,7 +579,6 @@ class vaultcontainer_add_member(LDAPAddMember): class vaultcontainer_remove_member(LDAPRemoveMember): __doc__ = _('Remove members from a vault container.') - takes_options = LDAPRemoveMember.takes_options + ( Str('parent_id?', cli_name='parent_id', @@ -483,6 +586,10 @@ class vaultcontainer_remove_member(LDAPRemoveMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_remove_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) diff --git a/ipalib/plugins/vaultsecret.py b/ipalib/plugins/vaultsecret.py index 7f44f0816b571dac718fa02edfd187fe8666565e..de15b014ca285387c7a1730fca3e110664a3ecf2 100644 --- a/ipalib/plugins/vaultsecret.py +++ b/ipalib/plugins/vaultsecret.py @@ -147,7 +147,7 @@ class vaultsecret_add(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -362,7 +362,7 @@ class vaultsecret_del(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -539,7 +539,7 @@ class vaultsecret_find(LDAPSearch): def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -716,7 +716,7 @@ class vaultsecret_mod(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -946,7 +946,7 @@ class vaultsecret_show(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None diff --git a/ipatests/test_xmlrpc/test_vault_plugin.py 
b/ipatests/test_xmlrpc/test_vault_plugin.py index f3a280b40d5b6972e8755f63d46013cadaa68334..98e0b543d280d5bb36d5f9aae5c925019e79e962 100644 --- a/ipatests/test_xmlrpc/test_vault_plugin.py +++ b/ipatests/test_xmlrpc/test_vault_plugin.py @@ -34,6 +34,8 @@ asymmetric_vault = u'asymmetric_vault' escrowed_symmetric_vault = u'escrowed_symmetric_vault' escrowed_asymmetric_vault = u'escrowed_asymmetric_vault' +shared_test_vault = u'/shared/%s' % test_vault + password = u'password' public_key = """ @@ -158,6 +160,7 @@ class test_vault_plugin(Declarative): ('vault_del', [asymmetric_vault], {'continue': True}), ('vault_del', [escrowed_symmetric_vault], {'continue': True}), ('vault_del', [escrowed_asymmetric_vault], {'continue': True}), + ('vault_del', [shared_test_vault], {'continue': True}), ] tests = [ @@ -621,4 +624,152 @@ class test_vault_plugin(Declarative): }, }, + { + 'desc': 'Create test vault with absolute ID', + 'command': ( + 'vault_add', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Added vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Find test vaults with absolute ID', + 'command': ( + 'vault_find', + [u'/shared/'], + {}, + ), + 'expected': { + 'count': 1, + 'truncated': False, + 'summary': u'1 vault matched', + 'result': [ + { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'ipavaulttype': [u'standard'], + }, + ], + }, + }, + + { + 'desc': 'Show test vault with absolute ID', + 'command': ( + 'vault_show', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': None, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Modify test vault with absolute ID', + 'command': ( + 'vault_mod', + [shared_test_vault], + { + 'description': u'Test vault', + }, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Modified vault "%s"' % test_vault, + 'result': { + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Archive binary data with absolute ID', + 'command': ( + 'vault_archive', + [shared_test_vault], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Archived data into vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Retrieve binary data with absolute ID', + 'command': ( + 'vault_retrieve', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Retrieved data from vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, 
+ 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete test vault with absolute ID', + 'command': ( + 'vault_del', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': [test_vault], + 'summary': u'Deleted vault "%s"' % test_vault, + 'result': { + 'failed': (), + }, + }, + }, + ] diff --git a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py index 22e13769df19b40cd39a144df662bae8bbf53d9e..ca96bff4dcae8c62bc44245ba82c7bf4165754e5 100644 --- a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py @@ -24,9 +24,10 @@ Test the `ipalib/plugins/vaultcontainer.py` module. from ipalib import api, errors from xmlrpc_test import Declarative -private_container = u'private_container' -shared_container = u'shared_container' -service_container = u'service_container' +test_container = u'test_container' +private_container = test_container +shared_test_container = u'/shared/%s' % test_container +service_test_container = u'/services/%s' % test_container base_container = u'base_container' child_container = u'child_container' @@ -36,8 +37,8 @@ class test_vaultcontainer_plugin(Declarative): cleanup_commands = [ ('vaultcontainer_del', [private_container], {'continue': True}), - ('vaultcontainer_del', [shared_container], {'parent_id': u'/shared/', 'continue': True}), - ('vaultcontainer_del', [service_container], {'parent_id': u'/services/', 'continue': True}), + ('vaultcontainer_del', [shared_test_container], {'continue': True}), + ('vaultcontainer_del', [service_test_container], {'continue': True}), ('vaultcontainer_del', [base_container], {'force': True, 'continue': True}), ] @@ -200,19 +201,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Create shared container', 'command': ( 'vaultcontainer_add', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': shared_container, - 'summary': 'Added vault container "%s"' % shared_container, + 'value': test_container, + 'summary': 'Added vault container "%s"' % test_container, 'result': { - 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), 'objectclass': (u'ipaVaultContainer', u'top'), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'owner_user': [u'admin'], }, }, @@ -222,10 +221,8 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Find shared containers', 'command': ( 'vaultcontainer_find', - [], - { - 'parent_id': u'/shared/', - }, + [u'/shared/'], + { }, ), 'expected': { 'count': 1, @@ -233,9 +230,9 @@ class test_vaultcontainer_plugin(Declarative): 'summary': u'1 vault container matched', 'result': [ { - 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, }, ], }, @@ -245,18 +242,16 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Show shared container', 'command': ( 'vaultcontainer_show', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': shared_container, + 'value': test_container, 'summary': None, 'result': { - 'dn': 
u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'owner_user': [u'admin'], }, }, @@ -266,18 +261,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Modify shared container', 'command': ( 'vaultcontainer_mod', - [shared_container], + [shared_test_container], { - 'parent_id': u'/shared/', 'description': u'shared container', }, ), 'expected': { - 'value': shared_container, - 'summary': 'Modified vault container "%s"' % shared_container, + 'value': test_container, + 'summary': 'Modified vault container "%s"' % test_container, 'result': { - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'description': [u'shared container'], 'owner_user': [u'admin'], }, @@ -288,14 +282,12 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Delete shared container', 'command': ( 'vaultcontainer_del', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': [shared_container], - 'summary': u'Deleted vault container "%s"' % shared_container, + 'value': [test_container], + 'summary': u'Deleted vault container "%s"' % test_container, 'result': { 'failed': (), }, @@ -306,19 +298,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Create service container', 'command': ( 'vaultcontainer_add', - [service_container], - { - 'parent_id': u'/services/', - }, + [service_test_container], + {}, ), 'expected': { - 'value': service_container, - 'summary': 'Added vault container "%s"' % service_container, + 'value': test_container, + 'summary': 'Added vault container "%s"' % test_container, 'result': { - 'dn': u'cn=%s,cn=services,cn=vaults,%s' % (service_container, api.env.basedn), + 'dn': u'cn=%s,cn=services,cn=vaults,%s' % (test_container, api.env.basedn), 'objectclass': (u'ipaVaultContainer', u'top'), - 'cn': [service_container], - 'container_id': u'/services/%s/' % service_container, + 'cn': [test_container], + 'container_id': u'/services/%s/' % test_container, 'owner_user': [u'admin'], }, }, diff --git a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py index cbfd231633e7c3c000e57d52d85b83f44f71df3c..d2a4e92507fbabbcc356dc889738f151521e896c 100644 --- a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py @@ -25,6 +25,7 @@ from ipalib import api, errors from xmlrpc_test import Declarative, fuzzy_string test_vault = u'test_vault' +shared_test_vault = u'/shared/%s' % test_vault test_vaultsecret = u'test_vaultsecret' binary_data = '\x01\x02\x03\x04' text_data = u'secret' @@ -33,6 +34,7 @@ class test_vaultsecret_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), + ('vault_del', [shared_test_vault], {'continue': True}), ] tests = [ @@ -208,4 +210,117 @@ class test_vaultsecret_plugin(Declarative): }, }, + { + 'desc': 'Create shared test vault', + 'command': ( + 'vault_add', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': 'Added vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [test_vault], + 'vault_id': 
u'/shared/%s' % test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Create shared test vault secret with binary data', + 'command': ( + 'vaultsecret_add', + [shared_test_vault, test_vaultsecret], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': 'Added vault secret "%s"' % test_vaultsecret, + 'result': { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Find shared vault secrets', + 'command': ( + 'vaultsecret_find', + [shared_test_vault], + {}, + ), + 'expected': { + 'count': 1, + 'truncated': False, + 'summary': u'1 vault secret matched', + 'result': [ + { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + ], + }, + }, + + { + 'desc': 'Retrieve shared test vault secret', + 'command': ( + 'vaultsecret_show', + [shared_test_vault, test_vaultsecret], + {}, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': None, + 'result': { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Modify shared test vault secret', + 'command': ( + 'vaultsecret_mod', + [shared_test_vault, test_vaultsecret], + { + 'description': u'Test vault secret', + }, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': u'Modified vault secret "%s"' % test_vaultsecret, + 'result': { + 'secret_name': test_vaultsecret, + 'description': u'Test vault secret', + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete shared test vault secret', + 'command': ( + 'vaultsecret_del', + [shared_test_vault, test_vaultsecret], + {}, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': u'Deleted vault secret "%s"' % test_vaultsecret, + 'result': { + 'failed': (), + }, + }, + }, + ] -- 1.9.0 From jcholast at redhat.com Tue Mar 17 07:01:18 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 17 Mar 2015 08:01:18 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <5506B918.6000708@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> Message-ID: <5507D13E.7040107@redhat.com> Dne 16.3.2015 v 12:06 David Kupka napsal(a): > On 03/06/2015 07:30 PM, thierry bordaz wrote: >> On 02/19/2015 04:19 PM, Martin Basti wrote: >>> On 19/02/15 13:01, thierry bordaz wrote: >>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>> Hi, >>>>> >>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>> >>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>> stageuser-add. >>>>>>>>>> Stage >>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>> >>>>>>>>>> Thanks >>>>>>>>>> thierry >>>>>>>>> >>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; instead >>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. 
>>>>>>>>> >>>>>>>>> The stageuser help (docstring) is copied from the user plugin, and >>>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>>> should >>>>>>>>> rather explain what stageuser itself does. (And I don't very much >>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>>> user" >>>>>>>>> or "stageuser". >>>>>>>>> >>>>>>>>> A lot of the code is copied and pasted over from the users plugin. >>>>>>>>> Don't do that. Either import things (e.g. validate_nsaccountlock) >>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>> module. >>>>>>>>> >>>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>>> best to >>>>>>>>> create a common base class for user and stageuser; and similarly >>>>>>>>> for >>>>>>>>> the Command plugins. >>>>>>>>> >>>>>>>>> The default permissions need different names, and you don't need >>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>> script. >>>>>>>>> >>>>>>>> Hello, >>>>>>>> >>>>>>>> This modified patch is mainly moving common base class into a >>>>>>>> new >>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>> accounts. >>>>>>>> It also creates a better description of what are stage user, >>>>>>>> how >>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>> active/stage >>>>>>>> user managed permissions. >>>>>>>> >>>>>>>> thanks >>>>>>>> thierry >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Freeipa-devel mailing list >>>>>>>> Freeipa-devel at redhat.com >>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>> >>>>>>> >>>>>>> Thanks David for the reviews. Here the last patches >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Freeipa-devel mailing list >>>>>>> Freeipa-devel at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>> >>>>>> >>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>> lines so >>>>>> I'm attaching fixed version (and unchanged patch >>>>>> freeipa-tbordaz-0003-3 >>>>>> to keep them together). >>>>>> >>>>>> The ULC feature is still WIP but these patches look good to me and >>>>>> don't >>>>>> break anything as far as I tested. >>>>>> We should push them now to avoid further rebases. Thierry can then >>>>>> prepare other patches delivering the rest of ULC functionality. >>>>> >>>>> Few comments from just reading the patches: >>>>> >>>>> 1) I would name the base class "baseuser", "account" does not >>>>> necessarily mean user account. >>>>> >>>>> 2) This is very wrong: >>>>> >>>>> -class user_add(LDAPCreate): >>>>> +class user_add(user, LDAPCreate): >>>>> >>>>> You are creating a plugin which is both an object and an command. >>>>> >>>>> 3) This is purely subjective, but I don't like the name >>>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>>> IMHO we shouldn't do that. >>>>> >>>>> Honza >>>>> >>>> >>>> Thank you for the review. 
I am attaching the updates patches >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Freeipa-devel mailing list >>>> Freeipa-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>> Hello, >>> I'm getting errors during make rpms: >>> >>> if [ "" != "yes" ]; then \ >>> ./makeapi --validate; \ >>> ./makeaci --validate; \ >>> fi >>> >>> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >>> doc is not internationalized >>> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >>> doc is not internationalized >>> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >>> doc is not internationalized >>> 0 commands without doc, 3 commands whose doc is not i18n >>> Command baseuser_add in ipalib, not in API >>> Command baseuser_find in ipalib, not in API >>> Command baseuser_mod in ipalib, not in API >>> >>> There are one or more new commands defined. >>> Update API.txt and increment the minor version in VERSION. >>> >>> There are one or more documentation problems. >>> You must fix these before preceeding >>> >>> Issues probably caused by this: >>> 1) >>> You should not use the register decorator, if this class is just for >>> inheritance >>> @register() >>> class baseuser_add(LDAPCreate): >>> >>> @register() >>> class baseuser_mod(LDAPUpdate): >>> >>> @register() >>> class baseuser_find(LDAPSearch): >>> >>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>> >>> 2) >>> there might be an issue with >>> @register() >>> class baseuser(LDAPObject): >>> >>> the register decorator should not be there, I was warned by Petr^3 to >>> not use permission in parent class. The same permission should be >>> specified only in one place (for example user class), (otherwise they >>> will be generated twice??) I don't know more details about it. >>> >>> -- >>> Martin Basti >> >> Hello Martin, Jan, >> >> Thanks for your review. >> I changed the patch so that it does not register baseuser_*. Also >> increase the minor version because of new command. >> Finally I moved the managed_permission definition out of the parent >> baseuser class. >> >> >> >> >> > > Martin, could you please verify that the issues you encountered are fixed? > > Thanks! > You bumped wrong version variable: -IPA_VERSION_MINOR=1 +IPA_VERSION_MINOR=2 It should have been IPA_API_VERSION_MINOR (at the bottom of the file), including the last change comment below it. IMO baseuser should include superclasses for all the usual commands (add, mod, del, show, find) and stageuser/deleteuser commands should inherit from them. You don't need to override class properties like active_container_dn and takes_params on baseuser subclasses when they have the same value as in baseuser. Honza -- Jan Cholasta From mbasti at redhat.com Tue Mar 17 09:29:04 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 17 Mar 2015 10:29:04 +0100 Subject: [Freeipa-devel] [PATCH] 0041 Always reload StateFile before getting or modifying the, stored values. 
In-Reply-To: <5506D28D.9020305@redhat.com>
References: <5506D28D.9020305@redhat.com>
Message-ID: <5507F3E0.20407@redhat.com>

On 16/03/15 13:54, David Kupka wrote:
> https://fedorahosted.org/freeipa/ticket/4901

ACK, it works as expected

-- 
Martin Basti

From mkubik at redhat.com Tue Mar 17 09:38:54 2015
From: mkubik at redhat.com (Milan Kubik)
Date: Tue, 17 Mar 2015 10:38:54 +0100
Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github
In-Reply-To: <55070370.1010200@redhat.com>
References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com>
Message-ID: <5507F62E.9080004@redhat.com>

Hi,

On 03/16/2015 05:23 PM, Martin Basti wrote:
> On 16/03/15 15:32, Milan Kubik wrote:
>> On 03/16/2015 12:03 PM, Milan Kubik wrote:
>>> On 03/13/2015 02:59 PM, Milan Kubik wrote:
>>>> Hi,
>>>>
>>>> this is a patch with port of [1] to pytest.
>>>>
>>>> [1]:
>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py
>>>>
>>>> Cheers,
>>>> Milan
>>>>
>>>>
>>>>
>>> Added few more asserts in methods where the test could fail and
>>> cause other errors.
>>>
>>>
>> New version of the patch after brief discussion with Martin Basti.
>> Removed unnecessary variable assignments and separated a new test case.
>>
>>
> Hello,
>
> thank you for the patch.
> I have a few nitpicks:
> 1)
> You can remove this and use just hexlify(s)
> +def str_to_hex(s):
> + return ''.join("{:02x}".format(ord(c)) for c in s)

done

>
> 2)
> + def test_find_secret_key(self, p11):
> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY,
> label=u"???-aest")
>
> In tests before you tested the exact number of expected IDs returned
> by find_keys method, why not here?

Lack of attention. Fixed the assert in `test_search_for_master_key` which does the same thing.
Merged `test_find_secret_key` with `test_search_for_master_key` where it belongs.

>
> Martin^2

Milan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mkubik-0001-4-ipatests-port-of-p11helper-test-from-github.patch
Type: text/x-patch
Size: 11533 bytes
Desc: not available
URL: 

From pspacek at redhat.com Tue Mar 17 11:09:04 2015
From: pspacek at redhat.com (Petr Spacek)
Date: Tue, 17 Mar 2015 12:09:04 +0100
Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code
In-Reply-To: <550702BF.8000000@redhat.com>
References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <5506CE0A.20806@redhat.com> <550702BF.8000000@redhat.com>
Message-ID: <55080B50.9060308@redhat.com>

On 16.3.2015 17:20, Martin Babinsky wrote:
> On 03/16/2015 01:35 PM, Jan Cholasta wrote:
>> Dne 16.3.2015 v 13:30 Martin Babinsky napsal(a):
>>> On 03/16/2015 12:15 PM, Martin Kosek wrote:
>>>> On 03/13/2015 05:37 PM, Martin Babinsky wrote:
>>>>> Attaching the next iteration of patches.

Very good! I hopefully have last two nitpicks :-) See below.
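For orientation, a minimal usage sketch of the renamed helper under review: the
function name, parameter order and the StandardError-on-failure behaviour are
taken from the hunk quoted below, while the keytab path, ccache path, principal
and retry count are made-up example values, not something the patch prescribes.

    # Illustrative only: exercise kinit_keytab() as declared in the quoted hunk.
    from ipapython import ipautil

    keytab = '/etc/krb5.keytab'                      # example keytab path
    ccache = '/tmp/example_ccache'                   # example ccache path
    principal = 'host/ipa.example.com@EXAMPLE.COM'   # example host principal

    # Retry the keytab-based kinit a few times before giving up; per the
    # patch docstring, repeated failures surface as StandardError.
    ipautil.kinit_keytab(keytab, ccache, principal, attempts=3)
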
> diff --git a/ipapython/ipautil.py b/ipapython/ipautil.py > index 4116d974e620341119b56fad3cff1bda48af3bab..cd03e9fd17b60b8b7324d0ccd436a10f7556baf0 100644 > --- a/ipapython/ipautil.py > +++ b/ipapython/ipautil.py > @@ -1175,27 +1175,61 @@ def wait_for_open_socket(socket_name, timeout=0): > else: > raise e > > -def kinit_hostprincipal(keytab, ccachedir, principal): > + > +def kinit_keytab(keytab, ccache_path, principal, attempts=1): > """ > - Given a ccache directory and a principal kinit as that user. > + Given a ccache_path , keytab file and a principal kinit as that user. > + > + The optional parameter 'attempts' specifies how many times the credential > + initialization should be attempted before giving up and raising > + StandardError. > > This blindly overwrites the current CCNAME so if you need to save > it do so before calling this function. > > + This function is also not thread-safe since it modifies environment > + variables. > + > Thus far this is used to kinit as the local host. This note can be deleted because it is used elsewhere too. > """ > - try: > - ccache_file = 'FILE:%s/ccache' % ccachedir > - krbcontext = krbV.default_context() > - ktab = krbV.Keytab(name=keytab, context=krbcontext) > - princ = krbV.Principal(name=principal, context=krbcontext) > - os.environ['KRB5CCNAME'] = ccache_file > - ccache = krbV.CCache(name=ccache_file, context=krbcontext, primary_principal=princ) > - ccache.init(princ) > - ccache.init_creds_keytab(keytab=ktab, principal=princ) > - return ccache_file > - except krbV.Krb5Error, e: > - raise StandardError('Error initializing principal %s in %s: %s' % (principal, keytab, str(e))) > + root_logger.debug("Initializing principal %s using keytab %s" > + % (principal, keytab)) I'm sorry for nitpicking but it would be nice to log ccache_file too. Krb5 libs return quite weird errors when CC cache is not accessible so it helps to have the path at hand. -- Petr^2 Spacek From pvoborni at redhat.com Tue Mar 17 11:25:35 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 17 Mar 2015 12:25:35 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <5506EF9A.3050806@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> <5501FA67.3020504@redhat.com> <5502AA40.2090108@redhat.com> <5506D2E2.8020109@redhat.com> <5506EF9A.3050806@redhat.com> Message-ID: <55080F2F.4090606@redhat.com> On 03/16/2015 03:58 PM, Martin Kosek wrote: > On 03/16/2015 01:56 PM, Martin Babinsky wrote: >> On 03/13/2015 10:13 AM, Martin Kosek wrote: >>> On 03/12/2015 09:43 PM, Nathan Kinder wrote: >> >> I have tested the patches on F21 client and they work as expected. >> > > I take that as an ACK. Before pushing the change, I just changed one print > format from "%s" to "%d" given a number was printed. > > Pushed to: > master: a58b77ca9cd3620201306258dd6bd05ea1c73c73 > ipa-4-1: 80aeb445e2034776f08668bf04dfd711af477b25 > > Petr1, it would be nice to get this one built on F21+, to unblock Ipsilon project. 
> F21 update: - https://admin.fedoraproject.org/updates/freeipa-4.1.3-3.fc21 F22 update: - https://admin.fedoraproject.org/updates/freeipa-4.1.3-3.fc22 -- Petr Vobornik From edewata at redhat.com Tue Mar 17 11:34:43 2015 From: edewata at redhat.com (Endi Sukma Dewata) Date: Tue, 17 Mar 2015 18:34:43 +0700 Subject: [Freeipa-devel] [PATCH] Password vault In-Reply-To: <5507B4B3.6060803@redhat.com> References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com> <54F96B22.9050507@redhat.com> <55004D5D.6060300@redhat.com> <5501E8BD.7000503@redhat.com> <5507B4B3.6060803@redhat.com> Message-ID: <55081153.7090307@redhat.com> On 3/17/2015 11:59 AM, Endi Sukma Dewata wrote: > On 3/13/2015 2:27 AM, Endi Sukma Dewata wrote: >> On 3/11/2015 9:12 PM, Endi Sukma Dewata wrote: >>> Thanks for the review. New patch attached to be applied on top of all >>> previous patches. Please see comments below. >> >> New patch #362-1 attached replacing #362. It fixed some issues in >> handle_not_found(). > > New patch #363 attached. It adds supports for vault & vaultcontainer ID > parameter. Attached are patch #363-1 replacing #363, and #364 providing vault copy functionality. -- Endi S. Dewata -------------- next part -------------- >From cd5ebe18edd3583ba044ba76e30ad40e9346d890 Mon Sep 17 00:00:00 2001 From: "Endi S. Dewata" Date: Thu, 12 Mar 2015 09:21:02 -0400 Subject: [PATCH] Vault ID improvements. The vault plugin has been modified to accept a single vault ID in addition to separate name and parent ID attributes. The vault container has also been modified in the same way. New test cases have been added to verify this functionality. https://fedorahosted.org/freeipa/ticket/3872 --- API.txt | 50 +++---- ipalib/plugins/vault.py | 143 +++++++++++++++++-- ipalib/plugins/vaultcontainer.py | 137 +++++++++++++++++-- ipalib/plugins/vaultsecret.py | 10 +- ipatests/test_xmlrpc/test_vault_plugin.py | 151 +++++++++++++++++++++ ipatests/test_xmlrpc/test_vaultcontainer_plugin.py | 90 ++++++------ ipatests/test_xmlrpc/test_vaultsecret_plugin.py | 115 ++++++++++++++++ 7 files changed, 590 insertions(+), 106 deletions(-) diff --git a/API.txt b/API.txt index 3a741755ab3e15e0175599a16a090b04d46d6be8..3f78493d986a292538dcca68133824bfb591149b 100644 --- a/API.txt +++ b/API.txt @@ -4515,7 +4515,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_add args: 1,20,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4541,7 +4541,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_add_member args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, 
cli_name='groups', csv=True) @@ -4554,7 +4554,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vault_add_owner args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) @@ -4567,7 +4567,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vault_archive args: 1,15,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Bytes('data?', cli_name='data') @@ -4588,7 +4588,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_del args: 1,3,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Str('container_id?', cli_name='container_id') option: Flag('continue', autofill=True, cli_name='continue', default=False) option: Str('version?', exclude='webui') @@ -4599,7 +4599,7 @@ command: vault_find args: 1,15,4 arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('cn', attribute=True, autofill=False, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=False) +option: Str('cn', attribute=True, autofill=False, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=False) option: Str('container_id?', cli_name='container_id') option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Bytes('ipaescrowpublickey', attribute=True, autofill=False, cli_name='escrow_public_key', multivalue=False, query=True, required=False) @@ -4619,7 +4619,7 @@ output: Output('summary', (, ), None) output: Output('truncated', , None) command: vault_mod args: 1,15,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') 
option: Str('container_id?', cli_name='container_id') @@ -4640,7 +4640,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_remove_member args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) @@ -4653,7 +4653,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vault_remove_owner args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) @@ -4666,7 +4666,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vault_retrieve args: 1,16,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Bytes('escrow_private_key?', cli_name='escrow_private_key') @@ -4688,7 +4688,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_show args: 1,6,3 -arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4705,7 +4705,7 @@ option: Str('version?', exclude='webui') output: Output('result', None, None) command: vaultcontainer_add args: 1,9,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id', attribute=False, cli_name='container_id', multivalue=False, required=False) @@ 
-4720,7 +4720,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultcontainer_add_member args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4733,7 +4733,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vaultcontainer_add_owner args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4746,7 +4746,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vaultcontainer_del args: 1,4,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=True, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('continue', autofill=True, cli_name='continue', default=False) option: Flag('force?', autofill=True, default=False) option: Str('parent_id?', cli_name='parent_id') @@ -4758,7 +4758,7 @@ command: vaultcontainer_find args: 1,11,4 arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') -option: Str('cn', attribute=True, autofill=False, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=False) +option: Str('cn', attribute=True, autofill=False, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=False) option: Str('container_id', attribute=False, autofill=False, cli_name='container_id', multivalue=False, query=True, required=False) option: Str('description', attribute=True, autofill=False, cli_name='desc', multivalue=False, query=True, required=False) option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4774,7 +4774,7 @@ output: Output('summary', (, ), None) output: Output('truncated', , None) command: vaultcontainer_mod args: 1,11,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: 
Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id', attribute=False, autofill=False, cli_name='container_id', multivalue=False, required=False) @@ -4791,7 +4791,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultcontainer_remove_member args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4804,7 +4804,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vaultcontainer_remove_owner args: 1,7,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('group*', alwaysask=True, cli_name='groups', csv=True) option: Flag('no_members', autofill=True, default=False, exclude='webui') @@ -4817,7 +4817,7 @@ output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vaultcontainer_show args: 1,6,3 -arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('cn', attribute=True, cli_name='container_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Flag('no_members', autofill=True, default=False, exclude='webui') option: Str('parent_id?', cli_name='parent_id') @@ -4829,7 +4829,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultsecret_add args: 2,12,3 -arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4848,7 +4848,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultsecret_del args: 2,8,3 -arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('vaultcn', cli_name='vault', maxlength=255, 
multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4863,7 +4863,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultsecret_find args: 2,12,4 -arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('criteria?', noextrawhitespace=False) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4883,7 +4883,7 @@ output: Output('summary', (, ), None) output: Output('truncated', , None) command: vaultsecret_mod args: 2,12,3 -arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4902,7 +4902,7 @@ output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultsecret_show args: 2,11,3 -arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) +arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') diff --git a/ipalib/plugins/vault.py b/ipalib/plugins/vault.py index d47067758186601365e5924f5d13c7ab51ba66e5..38693d0710e000695cae21fb4db5dfb4c85b5c74 100644 --- a/ipalib/plugins/vault.py +++ b/ipalib/plugins/vault.py @@ -61,7 +61,7 @@ EXAMPLES: ipa vault-find """) + _(""" List shared vaults: - ipa vault-find --container-id /shared + ipa vault-find /shared """) + _(""" Add a standard vault: ipa vault-add MyVault @@ -171,8 +171,8 @@ class vault(LDAPObject): cli_name='vault_name', label=_('Vault name'), primary_key=True, - pattern='^[a-zA-Z0-9_.-]+$', - pattern_errmsg='may only include letters, numbers, _, ., and -', + pattern='^[a-zA-Z0-9_.-/]+$', + pattern_errmsg='may only include letters, numbers, _, ., -, and /', maxlength=255, ), Str('vault_id?', @@ -217,7 +217,7 @@ class vault(LDAPObject): # get vault ID from parameters name = keys[-1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_id = container_id + name dn = 
self.base_dn @@ -250,6 +250,81 @@ class vault(LDAPObject): return id + def split_id(self, id): + """ + Splits a vault ID into (vault name, container ID) tuple. + """ + + if not id: + return (None, None) + + # split ID into container ID and vault name + parts = id.rsplit(u'/', 1) + + if len(parts) == 2: + vault_name = parts[1] + container_id = u'%s/' % parts[0] + + else: + vault_name = parts[0] + container_id = None + + if not vault_name: + vault_name = None + + return (vault_name, container_id) + + def merge_id(self, vault_name, container_id): + """ + Merges a vault name and a container ID into a vault ID. + """ + + if not vault_name: + id = container_id + + elif vault_name.startswith('/') or not container_id: + id = vault_name + + else: + id = container_id + vault_name + + return id + + def normalize_params(self, *args, **options): + """ + Normalizes the vault ID in the parameters. + """ + + vault_id = self.parse_params(*args, **options) + (vault_name, container_id) = self.split_id(vault_id) + return self.update_params(vault_name, container_id, *args, **options) + + def parse_params(self, *args, **options): + """ + Extracts the vault name and container ID in the parameters. + """ + + # get vault name and container ID from parameters + vault_name = args[0] + if type(vault_name) is tuple: + vault_name = vault_name[0] + container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + + return self.merge_id(vault_name, container_id) + + def update_params(self, new_vault_name, new_container_id, *args, **options): + """ + Stores vault name and container ID back into the parameters. + """ + + args_list = list(args) + args_list[0] = new_vault_name + args = tuple(args_list) + + options['container_id'] = new_container_id + + return (args, options) + def get_kra_id(self, id): """ Generates a client key ID to store/retrieve data in KRA. 
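As an informal, standalone sketch of the vault ID split/merge semantics introduced in the hunk above (illustration only, not part of the patch; function names here are made up):

    # Mirrors the intended split_id()/merge_id() behaviour for vault IDs.
    def split_vault_id(vault_id):
        """Split a vault ID into a (vault name, container ID) tuple."""
        if not vault_id:
            return (None, None)
        parts = vault_id.rsplit(u'/', 1)
        if len(parts) == 2:
            name, container = parts[1], u'%s/' % parts[0]
        else:
            name, container = parts[0], None
        return (name or None, container)

    def merge_vault_id(name, container):
        """Merge a vault name and a container ID back into a vault ID."""
        if not name:
            return container
        if name.startswith(u'/') or not container:
            return name
        return container + name

    # Expected behaviour:
    assert split_vault_id(u'/shared/MyVault') == (u'MyVault', u'/shared/')
    assert split_vault_id(u'MyVault') == (u'MyVault', None)
    assert merge_vault_id(u'MyVault', u'/shared/') == u'/shared/MyVault'
    assert merge_vault_id(u'/shared/MyVault', None) == u'/shared/MyVault'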
@@ -363,10 +438,14 @@ class vault_add(LDAPCreate): msg_summary = _('Added vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_add, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = options.get('ipavaulttype') data = options.get('data') @@ -549,7 +628,7 @@ class vault_add(LDAPCreate): def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) # set owner principal = getattr(context, 'principal') @@ -576,7 +655,7 @@ class vault_add(LDAPCreate): def handle_not_found(self, *args, **options): - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) raise errors.NotFound( reason=self.obj.parent_not_found_msg % { @@ -598,6 +677,10 @@ class vault_del(LDAPDelete): msg_summary = _('Deleted vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_del, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) @@ -638,10 +721,16 @@ class vault_find(LDAPSearch): '%(count)d vault matched', '%(count)d vaults matched', 0 ) + def params_2_args_options(self, **params): + (args, options) = super(vault_find, self).params_2_args_options(**params) + container_id = self.obj.parse_params(*args, **options) + container_id = self.api.Object.vaultcontainer.normalize_id(container_id) + return self.obj.update_params(None, container_id, *args, **options) + def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) (name, parent_id) = self.api.Object.vaultcontainer.split_id(container_id) base_dn = self.api.Object.vaultcontainer.get_dn(name, parent_id=parent_id) @@ -657,7 +746,7 @@ class vault_find(LDAPSearch): def handle_not_found(self, *args, **options): - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) # vault container is user's private container, ignore if container_id == self.api.Object.vaultcontainer.get_private_id(): @@ -684,6 +773,10 @@ class vault_mod(LDAPUpdate): msg_summary = _('Modified vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_mod, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -703,6 +796,10 @@ class vault_show(LDAPRetrieve): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_show, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, 
entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -804,10 +901,14 @@ class vault_archive(LDAPRetrieve): msg_summary = _('Archived data into vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_archive, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -1118,10 +1219,14 @@ class vault_retrieve(LDAPRetrieve): msg_summary = _('Retrieved data from vault "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vault_retrieve, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -1396,6 +1501,10 @@ class vault_add_owner(LDAPAddMember): member_attributes = ['owner'] member_count_out = ('%i owner added.', '%i owners added.') + def params_2_args_options(self, **params): + (args, options) = super(vault_add_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1418,6 +1527,10 @@ class vault_remove_owner(LDAPRemoveMember): member_attributes = ['owner'] member_count_out = ('%i owner removed.', '%i owners removed.') + def params_2_args_options(self, **params): + (args, options) = super(vault_remove_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1437,6 +1550,10 @@ class vault_add_member(LDAPAddMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_add_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -1456,6 +1573,10 @@ class vault_remove_member(LDAPRemoveMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vault_remove_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) diff --git a/ipalib/plugins/vaultcontainer.py b/ipalib/plugins/vaultcontainer.py index 27cb6fff3479335943bae59340c9afc773dfc004..5e8ee353ba2a625751fbfa9867366d98bdd5aea3 100644 --- a/ipalib/plugins/vaultcontainer.py +++ b/ipalib/plugins/vaultcontainer.py @@ -40,7 +40,10 @@ EXAMPLES: ipa vaultcontainer-find """) + _(""" List top-level vault containers: - ipa vaultcontainer-find --parent-id / + ipa vaultcontainer-find / +""") + _(""" + List shared vault containers: + ipa vaultcontainer-find /shared """) + _(""" Add a vault container: ipa vaultcontainer-add MyContainer @@ -104,8 +107,8 @@ class vaultcontainer(LDAPObject): cli_name='container_name', label=_('Container name'), primary_key=True, - pattern='^[a-zA-Z0-9_.-]+$', - 
pattern_errmsg='may only include letters, numbers, _, ., and -', + pattern='^[a-zA-Z0-9_.-/]+$', + pattern_errmsg='may only include letters, numbers, _, ., -, and /', maxlength=255, ), Str('container_id?', @@ -128,7 +131,7 @@ class vaultcontainer(LDAPObject): # get container ID from parameters name = keys[-1] - parent_id = self.normalize_id(options.get('parent_id')) + parent_id = self.absolute_id(options.get('parent_id')) container_id = parent_id if name: @@ -176,14 +179,21 @@ class vaultcontainer(LDAPObject): Normalizes container ID. """ + # make sure ID ends with slash + if id and not id.endswith(u'/'): + return id + u'/' + + return id + + def absolute_id(self, id): + """ + Generate absolute container ID. + """ + # if ID is empty, return user's private container ID if not id: return self.get_private_id() - # make sure ID ends with slash - if not id.endswith(u'/'): - id += u'/' - # if it's an absolute ID, do nothing if id.startswith(u'/'): return id @@ -203,8 +213,68 @@ class vaultcontainer(LDAPObject): # split ID into parent ID, container name, and empty string parts = id.rsplit(u'/', 2) - # return container name and parent ID - return (parts[1], parts[0] + u'/') + if len(parts) == 3: + container_name = parts[1] + parent_id = u'%s/' % parts[0] + + elif len(parts) == 2: + container_name = parts[0] + parent_id = None + + if not container_name: + container_name = None + + return (container_name, parent_id) + + def merge_id(self, container_name, parent_id): + """ + Merges a container name and a parent ID into a container ID. + """ + + if not container_name: + id = parent_id + + elif container_name.startswith('/') or not parent_id: + id = container_name + + else: + id = parent_id + container_name + + return self.normalize_id(id) + + def normalize_params(self, *args, **options): + """ + Normalizes the container ID in the parameters. + """ + + container_id = self.parse_params(*args, **options) + (container_name, parent_id) = self.split_id(container_id) + return self.update_params(container_name, parent_id, *args, **options) + + def parse_params(self, *args, **options): + """ + Extracts the container name and parent ID in the parameters. + """ + + container_name = args[0] + if type(container_name) is tuple: + container_name = container_name[0] + parent_id = self.normalize_id(options.get('parent_id')) + + return self.merge_id(container_name, parent_id) + + def update_params(self, new_container_name, new_parent_id, *args, **options): + """ + Stores container name and parent ID back into the parameters. 
+ """ + + args_list = list(args) + args_list[0] = new_container_name + args = tuple(args_list) + + options['parent_id'] = new_parent_id + + return (args, options) def create_entry(self, dn, owner=None): """ @@ -249,10 +319,14 @@ class vaultcontainer_add(LDAPCreate): msg_summary = _('Added vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def pre_callback(self, ldap, dn, entry_attrs, attrs_list, *keys, **options): assert isinstance(dn, DN) - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) # set owner principal = getattr(context, 'principal') @@ -279,7 +353,7 @@ class vaultcontainer_add(LDAPCreate): def handle_not_found(self, *args, **options): - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) raise errors.NotFound( reason=self.obj.parent_not_found_msg % { @@ -306,6 +380,10 @@ class vaultcontainer_del(LDAPDelete): msg_summary = _('Deleted vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_del, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def pre_callback(self, ldap, dn, *keys, **options): assert isinstance(dn, DN) @@ -335,10 +413,16 @@ class vaultcontainer_find(LDAPSearch): '%(count)d vault container matched', '%(count)d vault containers matched', 0 ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_find, self).params_2_args_options(**params) + + parent_id = self.obj.parse_params(*args, **options) + return self.obj.update_params(None, parent_id, *args, **options) + def pre_callback(self, ldap, filter, attrs_list, base_dn, scope, *keys, **options): assert isinstance(base_dn, DN) - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) (name, grandparent_id) = self.obj.split_id(parent_id) base_dn = self.obj.get_dn(name, parent_id=grandparent_id) @@ -353,7 +437,7 @@ class vaultcontainer_find(LDAPSearch): def handle_not_found(self, *args, **options): - parent_id = self.obj.normalize_id(options.get('parent_id')) + parent_id = self.obj.absolute_id(options.get('parent_id')) # parent is user's private container, ignore if parent_id == self.obj.get_private_id(): @@ -381,6 +465,10 @@ class vaultcontainer_mod(LDAPUpdate): msg_summary = _('Modified vault container "%(value)s"') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_mod, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -400,6 +488,10 @@ class vaultcontainer_show(LDAPRetrieve): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_show, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -422,6 +514,10 @@ class vaultcontainer_add_owner(LDAPAddMember): member_attributes = ['owner'] member_count_out = ('%i owner added.', '%i owners added.') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add_owner, self).params_2_args_options(**params) + return 
self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -444,6 +540,10 @@ class vaultcontainer_remove_owner(LDAPRemoveMember): member_attributes = ['owner'] member_count_out = ('%i owner removed.', '%i owners removed.') + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_remove_owner, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -463,6 +563,10 @@ class vaultcontainer_add_member(LDAPAddMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_add_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) @@ -475,7 +579,6 @@ class vaultcontainer_add_member(LDAPAddMember): class vaultcontainer_remove_member(LDAPRemoveMember): __doc__ = _('Remove members from a vault container.') - takes_options = LDAPRemoveMember.takes_options + ( Str('parent_id?', cli_name='parent_id', @@ -483,6 +586,10 @@ class vaultcontainer_remove_member(LDAPRemoveMember): ), ) + def params_2_args_options(self, **params): + (args, options) = super(vaultcontainer_remove_member, self).params_2_args_options(**params) + return self.obj.normalize_params(*args, **options) + def post_callback(self, ldap, completed, failed, dn, entry_attrs, *keys, **options): assert isinstance(dn, DN) diff --git a/ipalib/plugins/vaultsecret.py b/ipalib/plugins/vaultsecret.py index 7f44f0816b571dac718fa02edfd187fe8666565e..de15b014ca285387c7a1730fca3e110664a3ecf2 100644 --- a/ipalib/plugins/vaultsecret.py +++ b/ipalib/plugins/vaultsecret.py @@ -147,7 +147,7 @@ class vaultsecret_add(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -362,7 +362,7 @@ class vaultsecret_del(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -539,7 +539,7 @@ class vaultsecret_find(LDAPSearch): def forward(self, *args, **options): vault_name = args[0] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -716,7 +716,7 @@ class vaultsecret_mod(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None @@ -946,7 +946,7 @@ class vaultsecret_show(LDAPRetrieve): vault_name = args[0] secret_name = args[1] - container_id = self.api.Object.vaultcontainer.normalize_id(options.get('container_id')) + container_id = self.api.Object.vaultcontainer.absolute_id(options.get('container_id')) vault_type = 'standard' salt = None diff --git a/ipatests/test_xmlrpc/test_vault_plugin.py 
b/ipatests/test_xmlrpc/test_vault_plugin.py index f3a280b40d5b6972e8755f63d46013cadaa68334..98e0b543d280d5bb36d5f9aae5c925019e79e962 100644 --- a/ipatests/test_xmlrpc/test_vault_plugin.py +++ b/ipatests/test_xmlrpc/test_vault_plugin.py @@ -34,6 +34,8 @@ asymmetric_vault = u'asymmetric_vault' escrowed_symmetric_vault = u'escrowed_symmetric_vault' escrowed_asymmetric_vault = u'escrowed_asymmetric_vault' +shared_test_vault = u'/shared/%s' % test_vault + password = u'password' public_key = """ @@ -158,6 +160,7 @@ class test_vault_plugin(Declarative): ('vault_del', [asymmetric_vault], {'continue': True}), ('vault_del', [escrowed_symmetric_vault], {'continue': True}), ('vault_del', [escrowed_asymmetric_vault], {'continue': True}), + ('vault_del', [shared_test_vault], {'continue': True}), ] tests = [ @@ -621,4 +624,152 @@ class test_vault_plugin(Declarative): }, }, + { + 'desc': 'Create test vault with absolute ID', + 'command': ( + 'vault_add', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Added vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Find test vaults with absolute ID', + 'command': ( + 'vault_find', + [u'/shared/'], + {}, + ), + 'expected': { + 'count': 1, + 'truncated': False, + 'summary': u'1 vault matched', + 'result': [ + { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'ipavaulttype': [u'standard'], + }, + ], + }, + }, + + { + 'desc': 'Show test vault with absolute ID', + 'command': ( + 'vault_show', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': None, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Modify test vault with absolute ID', + 'command': ( + 'vault_mod', + [shared_test_vault], + { + 'description': u'Test vault', + }, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Modified vault "%s"' % test_vault, + 'result': { + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Archive binary data with absolute ID', + 'command': ( + 'vault_archive', + [shared_test_vault], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Archived data into vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Retrieve binary data with absolute ID', + 'command': ( + 'vault_retrieve', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': u'Retrieved data from vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'cn': [test_vault], + 'vault_id': shared_test_vault, + 'description': [u'Test vault'], + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, 
+ 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete test vault with absolute ID', + 'command': ( + 'vault_del', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': [test_vault], + 'summary': u'Deleted vault "%s"' % test_vault, + 'result': { + 'failed': (), + }, + }, + }, + ] diff --git a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py index 22e13769df19b40cd39a144df662bae8bbf53d9e..ca96bff4dcae8c62bc44245ba82c7bf4165754e5 100644 --- a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py @@ -24,9 +24,10 @@ Test the `ipalib/plugins/vaultcontainer.py` module. from ipalib import api, errors from xmlrpc_test import Declarative -private_container = u'private_container' -shared_container = u'shared_container' -service_container = u'service_container' +test_container = u'test_container' +private_container = test_container +shared_test_container = u'/shared/%s' % test_container +service_test_container = u'/services/%s' % test_container base_container = u'base_container' child_container = u'child_container' @@ -36,8 +37,8 @@ class test_vaultcontainer_plugin(Declarative): cleanup_commands = [ ('vaultcontainer_del', [private_container], {'continue': True}), - ('vaultcontainer_del', [shared_container], {'parent_id': u'/shared/', 'continue': True}), - ('vaultcontainer_del', [service_container], {'parent_id': u'/services/', 'continue': True}), + ('vaultcontainer_del', [shared_test_container], {'continue': True}), + ('vaultcontainer_del', [service_test_container], {'continue': True}), ('vaultcontainer_del', [base_container], {'force': True, 'continue': True}), ] @@ -200,19 +201,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Create shared container', 'command': ( 'vaultcontainer_add', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': shared_container, - 'summary': 'Added vault container "%s"' % shared_container, + 'value': test_container, + 'summary': 'Added vault container "%s"' % test_container, 'result': { - 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), 'objectclass': (u'ipaVaultContainer', u'top'), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'owner_user': [u'admin'], }, }, @@ -222,10 +221,8 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Find shared containers', 'command': ( 'vaultcontainer_find', - [], - { - 'parent_id': u'/shared/', - }, + [u'/shared/'], + { }, ), 'expected': { 'count': 1, @@ -233,9 +230,9 @@ class test_vaultcontainer_plugin(Declarative): 'summary': u'1 vault container matched', 'result': [ { - 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, }, ], }, @@ -245,18 +242,16 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Show shared container', 'command': ( 'vaultcontainer_show', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': shared_container, + 'value': test_container, 'summary': None, 'result': { - 'dn': 
u'cn=%s,cn=shared,cn=vaults,%s' % (shared_container, api.env.basedn), - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_container, api.env.basedn), + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'owner_user': [u'admin'], }, }, @@ -266,18 +261,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Modify shared container', 'command': ( 'vaultcontainer_mod', - [shared_container], + [shared_test_container], { - 'parent_id': u'/shared/', 'description': u'shared container', }, ), 'expected': { - 'value': shared_container, - 'summary': 'Modified vault container "%s"' % shared_container, + 'value': test_container, + 'summary': 'Modified vault container "%s"' % test_container, 'result': { - 'cn': [shared_container], - 'container_id': u'/shared/%s/' % shared_container, + 'cn': [test_container], + 'container_id': u'/shared/%s/' % test_container, 'description': [u'shared container'], 'owner_user': [u'admin'], }, @@ -288,14 +282,12 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Delete shared container', 'command': ( 'vaultcontainer_del', - [shared_container], - { - 'parent_id': u'/shared/', - }, + [shared_test_container], + {}, ), 'expected': { - 'value': [shared_container], - 'summary': u'Deleted vault container "%s"' % shared_container, + 'value': [test_container], + 'summary': u'Deleted vault container "%s"' % test_container, 'result': { 'failed': (), }, @@ -306,19 +298,17 @@ class test_vaultcontainer_plugin(Declarative): 'desc': 'Create service container', 'command': ( 'vaultcontainer_add', - [service_container], - { - 'parent_id': u'/services/', - }, + [service_test_container], + {}, ), 'expected': { - 'value': service_container, - 'summary': 'Added vault container "%s"' % service_container, + 'value': test_container, + 'summary': 'Added vault container "%s"' % test_container, 'result': { - 'dn': u'cn=%s,cn=services,cn=vaults,%s' % (service_container, api.env.basedn), + 'dn': u'cn=%s,cn=services,cn=vaults,%s' % (test_container, api.env.basedn), 'objectclass': (u'ipaVaultContainer', u'top'), - 'cn': [service_container], - 'container_id': u'/services/%s/' % service_container, + 'cn': [test_container], + 'container_id': u'/services/%s/' % test_container, 'owner_user': [u'admin'], }, }, diff --git a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py index cbfd231633e7c3c000e57d52d85b83f44f71df3c..d2a4e92507fbabbcc356dc889738f151521e896c 100644 --- a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py @@ -25,6 +25,7 @@ from ipalib import api, errors from xmlrpc_test import Declarative, fuzzy_string test_vault = u'test_vault' +shared_test_vault = u'/shared/%s' % test_vault test_vaultsecret = u'test_vaultsecret' binary_data = '\x01\x02\x03\x04' text_data = u'secret' @@ -33,6 +34,7 @@ class test_vaultsecret_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), + ('vault_del', [shared_test_vault], {'continue': True}), ] tests = [ @@ -208,4 +210,117 @@ class test_vaultsecret_plugin(Declarative): }, }, + { + 'desc': 'Create shared test vault', + 'command': ( + 'vault_add', + [shared_test_vault], + {}, + ), + 'expected': { + 'value': test_vault, + 'summary': 'Added vault "%s"' % test_vault, + 'result': { + 'dn': u'cn=%s,cn=shared,cn=vaults,%s' % (test_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [test_vault], + 'vault_id': 
u'/shared/%s' % test_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Create shared test vault secret with binary data', + 'command': ( + 'vaultsecret_add', + [shared_test_vault, test_vaultsecret], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': 'Added vault secret "%s"' % test_vaultsecret, + 'result': { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Find shared vault secrets', + 'command': ( + 'vaultsecret_find', + [shared_test_vault], + {}, + ), + 'expected': { + 'count': 1, + 'truncated': False, + 'summary': u'1 vault secret matched', + 'result': [ + { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + ], + }, + }, + + { + 'desc': 'Retrieve shared test vault secret', + 'command': ( + 'vaultsecret_show', + [shared_test_vault, test_vaultsecret], + {}, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': None, + 'result': { + 'secret_name': test_vaultsecret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Modify shared test vault secret', + 'command': ( + 'vaultsecret_mod', + [shared_test_vault, test_vaultsecret], + { + 'description': u'Test vault secret', + }, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': u'Modified vault secret "%s"' % test_vaultsecret, + 'result': { + 'secret_name': test_vaultsecret, + 'description': u'Test vault secret', + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete shared test vault secret', + 'command': ( + 'vaultsecret_del', + [shared_test_vault, test_vaultsecret], + {}, + ), + 'expected': { + 'value': test_vaultsecret, + 'summary': u'Deleted vault secret "%s"' % test_vaultsecret, + 'result': { + 'failed': (), + }, + }, + }, + ] -- 1.9.0 -------------- next part -------------- >From 300ec72c4201e8effdda1ec48853b6c8f4f83f1b Mon Sep 17 00:00:00 2001 From: "Endi S. Dewata" Date: Mon, 16 Mar 2015 05:08:56 -0400 Subject: [PATCH] Vault copy functionality. The vault and vaultsecret plugins have been modified to provide a functionality to copy data from another vault or vaultsecret. New test cases have been added to verify this functionality. 
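A rough usage sketch of the copy options this patch adds (vault and secret names below are hypothetical; a 'standard' source vault is assumed, so no source password or private key is needed, and the ipalib api is assumed to be already bootstrapped and connected):

    from ipalib import api

    # Create a new vault pre-populated with the data of an existing shared vault:
    api.Command.vault_add(u'CopyOfMyVault', source_vault_id=u'/shared/MyVault')

    # Add a secret to a vault by copying a named secret from another vault:
    api.Command.vaultsecret_add(u'MyVault', u'copied_secret',
                                source_vault_id=u'/shared/OtherVault',
                                source_secret_id=u'original_secret')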
https://fedorahosted.org/freeipa/ticket/3872 --- API.txt | 32 +- ipalib/plugins/vault.py | 274 ++++++- ipalib/plugins/vaultsecret.py | 441 ++++++++--- ipatests/test_xmlrpc/test_vault_plugin.py | 836 ++++++++++++++++++++- ipatests/test_xmlrpc/test_vaultcontainer_plugin.py | 1 + ipatests/test_xmlrpc/test_vaultsecret_plugin.py | 401 ++++++++-- 6 files changed, 1780 insertions(+), 205 deletions(-) diff --git a/API.txt b/API.txt index 3f78493d986a292538dcca68133824bfb591149b..db488517c963244a7bfcdfbb86715a71a4cc3b73 100644 --- a/API.txt +++ b/API.txt @@ -4514,7 +4514,7 @@ output: Output('result', , None) output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vault_add -args: 1,20,3 +args: 1,26,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, required=True) option: Str('addattr*', cli_name='addattr', exclude='webui') option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') @@ -4533,6 +4533,12 @@ option: Str('password_file?', cli_name='password_file') option: Str('public_key_file?', cli_name='public_key_file') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Str('setattr*', cli_name='setattr', exclude='webui') +option: Str('source_password?', cli_name='source_password') +option: Str('source_password_file?', cli_name='source_password_file') +option: Bytes('source_private_key?', cli_name='source_private_key') +option: Str('source_private_key_file?', cli_name='source_private_key_file') +option: Str('source_secret_id?', cli_name='source_secret_id') +option: Str('source_vault_id?', cli_name='source_vault_id') option: Str('text?', cli_name='text') option: Str('vault_id', attribute=False, cli_name='vault_id', multivalue=False, required=False) option: Str('version?', exclude='webui') @@ -4566,7 +4572,7 @@ output: Output('completed', , None) output: Output('failed', , None) output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) command: vault_archive -args: 1,15,3 +args: 1,21,3 arg: Str('cn', attribute=True, cli_name='vault_name', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') option: Str('container_id?', cli_name='container_id') @@ -4580,6 +4586,12 @@ option: Str('password_file?', cli_name='password_file') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') option: Flag('rights', autofill=True, default=False) option: Str('session_key?', cli_name='session_key') +option: Str('source_password?', cli_name='source_password') +option: Str('source_password_file?', cli_name='source_password_file') +option: Bytes('source_private_key?', cli_name='source_private_key') +option: Str('source_private_key_file?', cli_name='source_private_key_file') +option: Str('source_secret_id?', cli_name='source_secret_id') +option: Str('source_vault_id?', cli_name='source_vault_id') option: Str('text?', cli_name='text') option: Str('vault_data?', cli_name='vault_data') option: Str('version?', exclude='webui') @@ -4828,7 +4840,7 @@ output: Entry('result', , Gettext('A dictionary representing an LDA output: Output('summary', (, ), None) output: PrimaryKey('value', None, None) command: vaultsecret_add -args: 2,12,3 +args: 2,18,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', 
primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') @@ -4841,6 +4853,12 @@ option: Str('password_file?', cli_name='password_file') option: Bytes('private_key?', cli_name='private_key') option: Str('private_key_file?', cli_name='private_key_file') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('source_password?', cli_name='source_password') +option: Str('source_password_file?', cli_name='source_password_file') +option: Bytes('source_private_key?', cli_name='source_private_key') +option: Str('source_private_key_file?', cli_name='source_private_key_file') +option: Str('source_secret_id?', cli_name='source_secret_id') +option: Str('source_vault_id?', cli_name='source_vault_id') option: Str('text?', cli_name='text') option: Str('version?', exclude='webui') output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) @@ -4882,7 +4900,7 @@ output: ListOfEntries('result', (, ), Gettext('A list output: Output('summary', (, ), None) output: Output('truncated', , None) command: vaultsecret_mod -args: 2,12,3 +args: 2,18,3 arg: Str('vaultcn', cli_name='vault', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-/]+$', primary_key=True, query=True, required=True) arg: Str('secret_name', attribute=True, cli_name='secret', maxlength=255, multivalue=False, pattern='^[a-zA-Z0-9_.-]+$', primary_key=True, query=True, required=True) option: Flag('all', autofill=True, cli_name='all', default=False, exclude='webui') @@ -4895,6 +4913,12 @@ option: Str('password_file?', cli_name='password_file') option: Bytes('private_key?', cli_name='private_key') option: Str('private_key_file?', cli_name='private_key_file') option: Flag('raw', autofill=True, cli_name='raw', default=False, exclude='webui') +option: Str('source_password?', cli_name='source_password') +option: Str('source_password_file?', cli_name='source_password_file') +option: Bytes('source_private_key?', cli_name='source_private_key') +option: Str('source_private_key_file?', cli_name='source_private_key_file') +option: Str('source_secret_id?', cli_name='source_secret_id') +option: Str('source_vault_id?', cli_name='source_vault_id') option: Str('text?', cli_name='text') option: Str('version?', exclude='webui') output: Entry('result', , Gettext('A dictionary representing an LDAP entry', domain='ipa', localedir=None)) diff --git a/ipalib/plugins/vault.py b/ipalib/plugins/vault.py index 38693d0710e000695cae21fb4db5dfb4c85b5c74..19309037bd670db822690dd66d03839e575ad500 100644 --- a/ipalib/plugins/vault.py +++ b/ipalib/plugins/vault.py @@ -434,6 +434,30 @@ class vault_add(LDAPCreate): cli_name='escrow_public_key_file', doc=_('File containing the escrow public key'), ), + Str('source_vault_id?', + cli_name='source_vault_id', + doc=_('Source vault ID'), + ), + Str('source_secret_id?', + cli_name='source_secret_id', + doc=_('Source secret ID'), + ), + Str('source_password?', + cli_name='source_password', + doc=_('Source vault password'), + ), + Str('source_password_file?', + cli_name='source_password_file', + doc=_('File containing the source vault password'), + ), + Bytes('source_private_key?', + cli_name='source_private_key', + doc=_('Source vault private key'), + ), + Str('source_private_key_file?', + 
cli_name='source_private_key_file', + doc=_('File containing the source vault private key'), + ), ) msg_summary = _('Added vault "%(value)s"') @@ -457,6 +481,12 @@ class vault_add(LDAPCreate): public_key_file = options.get('public_key_file') escrow_public_key = options.get('ipaescrowpublickey') escrow_public_key_file = options.get('escrow_public_key_file') + source_vault_id = options.get('source_vault_id') + source_secret_id = options.get('source_secret_id') + source_password = options.get('source_password') + source_password_file = options.get('source_password_file') + source_private_key = options.get('source_private_key') + source_private_key_file = options.get('source_private_key_file') # don't send these parameters to server if 'data' in options: @@ -473,24 +503,127 @@ class vault_add(LDAPCreate): del options['public_key_file'] if 'escrow_public_key_file' in options: del options['escrow_public_key_file'] + if 'source_vault_id' in options: + del options['source_vault_id'] + if 'source_secret_id' in options: + del options['source_secret_id'] + if 'source_password' in options: + del options['source_password'] + if 'source_password_file' in options: + del options['source_password_file'] + if 'source_private_key' in options: + del options['source_private_key'] + if 'source_private_key_file' in options: + del options['source_private_key_file'] # get data if data: - if text or input_file: + if text or input_file or source_vault_id: raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) elif text: - if input_file: + if input_file or source_vault_id: raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) data = text.encode('utf-8') elif input_file: + if source_vault_id: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) + with open(input_file, 'rb') as f: data = f.read() + elif source_vault_id: + + source_response = self.api.Command.vault_show(source_vault_id) + source_result = source_response['result'] + + if source_result.has_key('ipavaulttype'): + source_vault_type = source_result['ipavaulttype'][0] + + if source_vault_type == 'standard': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + elif source_vault_type == 'symmetric': + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault password + if source_password: + pass + + elif source_password_file: + with open(source_password_file) as f: + source_password = unicode(f.read().rstrip('\n')) + + else: + source_password = unicode(getpass.getpass('Source password: ')) + + elif source_vault_type == 'asymmetric': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s 
vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault private key + if source_private_key: + pass + + elif source_private_key_file: + with open(source_private_key_file, 'rb') as f: + source_private_key = f.read() + + else: + raise errors.ValidationError(name='source_private_key', + error=_('Missing source vault private key')) + + else: + raise errors.ValidationError(name='source_vault_type', + error=_('Invalid source vault type')) + + source_response = self.api.Command.vault_retrieve( + source_vault_id, + password=source_password, + private_key=source_private_key) + + if source_secret_id: + source_json_data = self.api.Object.vaultsecret.parse_response(source_response) + source_secrets = source_json_data['secrets'] + source_secret = self.obj.Object.vaultsecret.find(source_secrets, source_secret_id) + data = base64.b64decode(source_secret['data']) + + else: + data = source_response['result']['data'] + else: data = '' @@ -897,6 +1030,30 @@ class vault_archive(LDAPRetrieve): cli_name='encryption_key', doc=_('Base-64 encoded encryption key'), ), + Str('source_vault_id?', + cli_name='source_vault_id', + doc=_('Source vault ID'), + ), + Str('source_secret_id?', + cli_name='source_secret_id', + doc=_('Source secret ID'), + ), + Str('source_password?', + cli_name='source_password', + doc=_('Source vault password'), + ), + Str('source_password_file?', + cli_name='source_password_file', + doc=_('File containing the source vault password'), + ), + Bytes('source_private_key?', + cli_name='source_private_key', + doc=_('Source vault private key'), + ), + Str('source_private_key_file?', + cli_name='source_private_key_file', + doc=_('File containing the source vault private key'), + ), ) msg_summary = _('Archived data into vault "%(value)s"') @@ -937,6 +1094,12 @@ class vault_archive(LDAPRetrieve): password = options.get('password') password_file = options.get('password_file') encryption_key = options.get('encryption_key') + source_vault_id = options.get('source_vault_id') + source_secret_id = options.get('source_secret_id') + source_password = options.get('source_password') + source_password_file = options.get('source_password_file') + source_private_key = options.get('source_private_key') + source_private_key_file = options.get('source_private_key_file') # don't send these parameters to server if 'data' in options: @@ -951,24 +1114,127 @@ class vault_archive(LDAPRetrieve): del options['password_file'] if 'encryption_key' in options: del options['encryption_key'] + if 'source_vault_id' in options: + del options['source_vault_id'] + if 'source_secret_id' in options: + del options['source_secret_id'] + if 'source_password' in options: + del options['source_password'] + if 'source_password_file' in options: + del options['source_password_file'] + if 'source_private_key' in options: + del options['source_private_key'] + if 'source_private_key_file' in options: + del options['source_private_key_file'] # get data if data: - if text or input_file: + if text or input_file or source_vault_id: raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) elif text: - if input_file: + if input_file or source_vault_id: raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) data = text.encode('utf-8') elif input_file: + if source_vault_id: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified 
multiple times')) + with open(input_file, 'rb') as f: data = f.read() + elif source_vault_id: + + source_response = self.api.Command.vault_show(source_vault_id) + source_result = source_response['result'] + + if source_result.has_key('ipavaulttype'): + source_vault_type = source_result['ipavaulttype'][0] + + if source_vault_type == 'standard': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + elif source_vault_type == 'symmetric': + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault password + if source_password: + pass + + elif source_password_file: + with open(source_password_file) as f: + source_password = unicode(f.read().rstrip('\n')) + + else: + source_password = unicode(getpass.getpass('Source password: ')) + + elif source_vault_type == 'asymmetric': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault private key + if source_private_key: + pass + + elif source_private_key_file: + with open(source_private_key_file, 'rb') as f: + source_private_key = f.read() + + else: + raise errors.ValidationError(name='source_private_key', + error=_('Missing source vault private key')) + + else: + raise errors.ValidationError(name='source_vault_type', + error=_('Invalid source vault type')) + + source_response = self.api.Command.vault_retrieve( + source_vault_id, + password=source_password, + private_key=source_private_key) + + if source_secret_id: + source_json_data = self.api.Object.vaultsecret.parse_response(source_response) + source_secrets = source_json_data['secrets'] + source_secret = self.obj.Object.vaultsecret.find(source_secrets, source_secret_id) + data = base64.b64decode(source_secret['data']) + + else: + data = source_response['result']['data'] + else: data = '' diff --git a/ipalib/plugins/vaultsecret.py b/ipalib/plugins/vaultsecret.py index de15b014ca285387c7a1730fca3e110664a3ecf2..02f035f2de8b614b5b4019e65cede984f12854b9 100644 --- a/ipalib/plugins/vaultsecret.py +++ b/ipalib/plugins/vaultsecret.py @@ -97,6 +97,32 @@ class vaultsecret(LDAPObject): ), ) + def find(self, secrets, secret_name): + """ + Finds a secret with the given name in a list of secrets. + Raises an exception if the secret is not found. + """ + + for secret in secrets: + if secret['secret_name'] == secret_name: + return secret + + raise errors.NotFound(reason=_('%s: vault secret not found' % secret_name)) + + def parse_response(self, response): + """ + Returns JSON data from vault retrieval response. 
+ """ + + vault_data = response['result']['data'] + + if vault_data: + return json.loads(vault_data) + + return { + 'secrets': [] + } + @register() class vaultsecret_add(LDAPRetrieve): @@ -139,6 +165,30 @@ class vaultsecret_add(LDAPRetrieve): cli_name='private_key_file', doc=_('File containing the vault private key'), ), + Str('source_secret_id?', + cli_name='source_secret_id', + doc=_('Source secret ID'), + ), + Str('source_vault_id?', + cli_name='source_vault_id', + doc=_('Source vault ID'), + ), + Str('source_password?', + cli_name='source_password', + doc=_('Source vault password'), + ), + Str('source_password_file?', + cli_name='source_password_file', + doc=_('File containing the source vault password'), + ), + Bytes('source_private_key?', + cli_name='source_private_key', + doc=_('Source vault private key'), + ), + Str('source_private_key_file?', + cli_name='source_private_key_file', + doc=_('File containing the source vault private key'), + ), ) msg_summary = _('Added vault secret "%(value)s"') @@ -170,6 +220,12 @@ class vaultsecret_add(LDAPRetrieve): password_file = options.get('password_file') private_key = options.get('private_key') private_key_file = options.get('private_key_file') + source_secret_id = options.get('source_secret_id') + source_vault_id = options.get('source_vault_id') + source_password = options.get('source_password') + source_password_file = options.get('source_password_file') + source_private_key = options.get('source_private_key') + source_private_key_file = options.get('source_private_key_file') # don't send these parameters to server if 'data' in options: @@ -186,8 +242,18 @@ class vaultsecret_add(LDAPRetrieve): del options['private_key'] if 'private_key_file' in options: del options['private_key_file'] - if 'source_secret' in options: - del options['source_secret'] + if 'source_secret_id' in options: + del options['source_secret_id'] + if 'source_vault_id' in options: + del options['source_vault_id'] + if 'source_password' in options: + del options['source_password'] + if 'source_password_file' in options: + del options['source_password_file'] + if 'source_private_key' in options: + del options['source_private_key'] + if 'source_private_key_file' in options: + del options['source_private_key_file'] # type-specific initialization if vault_type == 'standard': @@ -239,7 +305,7 @@ class vaultsecret_add(LDAPRetrieve): raise errors.ValidationError(name='password_file', error=_('Invalid parameter for %s vault' % vault_type)) - # get vault public key + # get vault private key if private_key: pass @@ -262,42 +328,130 @@ class vaultsecret_add(LDAPRetrieve): password=password, private_key=private_key) - vault_data = response['result']['data'] - - if not vault_data: - json_data = { - 'secrets': [] - } - - else: - json_data = json.loads(vault_data) + json_data = self.obj.parse_response(response) secrets = json_data['secrets'] # get data if data: - if text or input_file: + if text or input_file or source_secret_id: raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) elif text: - if input_file: + if input_file or source_secret_id : raise errors.MutuallyExclusiveError( reason=_('Input data specified multiple times')) data = text.encode('utf-8') elif input_file: + if source_secret_id : + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) + with open(input_file, 'rb') as f: data = f.read() + elif source_secret_id: + + if source_vault_id: + + source_response = self.api.Command.vault_show(source_vault_id) + 
source_result = source_response['result'] + + if source_result.has_key('ipavaulttype'): + source_vault_type = source_result['ipavaulttype'][0] + + if source_vault_type == 'standard': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + elif source_vault_type == 'symmetric': + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault password + if source_password: + pass + + elif source_password_file: + with open(source_password_file) as f: + source_password = unicode(f.read().rstrip('\n')) + + else: + source_password = unicode(getpass.getpass('Source password: ')) + + elif source_vault_type == 'asymmetric': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault private key + if source_private_key: + pass + + elif source_private_key_file: + with open(source_private_key_file, 'rb') as f: + source_private_key = f.read() + + else: + raise errors.ValidationError(name='source_private_key', + error=_('Missing source vault private key')) + + else: + raise errors.ValidationError(name='source_vault_type', + error=_('Invalid source vault type')) + + source_response = self.api.Command.vault_retrieve( + source_vault_id, + password=source_password, + private_key=source_private_key) + + source_json_data = self.obj.parse_response(source_response) + + source_secrets = source_json_data['secrets'] + + else: + source_secrets = secrets + + source_secret = self.obj.find(source_secrets, source_secret_id) + data = base64.b64decode(source_secret['data']) + else: data = '' # add new secret - for secret in secrets: - if secret['secret_name'] == secret_name: - raise errors.DuplicateEntry(message=_('vault secret with name "%s" already exists' % secret_name)) + try: + self.obj.find(secrets, secret_name) + raise errors.DuplicateEntry(message=_('vault secret with name "%s" already exists' % secret_name)) + except errors.NotFound: + pass # store encoded data for storage secret = { @@ -442,7 +596,7 @@ class vaultsecret_del(LDAPRetrieve): raise errors.ValidationError(name='password_file', error=_('Invalid parameter for %s vault' % vault_type)) - # get vault public key + # get vault private key if private_key: pass @@ -465,27 +619,12 @@ class vaultsecret_del(LDAPRetrieve): password=password, private_key=private_key) - vault_data = response['result']['data'] - - if not vault_data: - json_data = { - 'secrets': [] - } - - else: - json_data = json.loads(vault_data) + json_data = self.obj.parse_response(response) secrets = json_data['secrets'] # find 
the secret - secret = None - for s in secrets: - if s['secret_name'] == secret_name: - secret = s - break - - if not secret: - raise errors.NotFound(reason=_('%s: vault secret not found' % secret_name)) + secret = self.obj.find(secrets, secret_name) # delete secret secrets.remove(secret) @@ -619,7 +758,7 @@ class vaultsecret_find(LDAPSearch): raise errors.ValidationError(name='password_file', error=_('Invalid parameter for %s vault' % vault_type)) - # get vault public key + # get vault private key if private_key: pass @@ -641,15 +780,7 @@ class vaultsecret_find(LDAPSearch): password=password, private_key=private_key) - vault_data = response['result']['data'] - - if not vault_data: - json_data = { - 'secrets': [] - } - - else: - json_data = json.loads(vault_data) + json_data = self.obj.parse_response(response) secrets = json_data['secrets'] @@ -708,6 +839,30 @@ class vaultsecret_mod(LDAPRetrieve): cli_name='private_key_file', doc=_('File containing the vault private key'), ), + Str('source_secret_id?', + cli_name='source_secret_id', + doc=_('Source secret ID'), + ), + Str('source_vault_id?', + cli_name='source_vault_id', + doc=_('Source vault ID'), + ), + Str('source_password?', + cli_name='source_password', + doc=_('Source vault password'), + ), + Str('source_password_file?', + cli_name='source_password_file', + doc=_('File containing the source vault password'), + ), + Bytes('source_private_key?', + cli_name='source_private_key', + doc=_('Source vault private key'), + ), + Str('source_private_key_file?', + cli_name='source_private_key_file', + doc=_('File containing the source vault private key'), + ), ) msg_summary = _('Modified vault secret "%(value)s"') @@ -739,6 +894,12 @@ class vaultsecret_mod(LDAPRetrieve): password_file = options.get('password_file') private_key = options.get('private_key') private_key_file = options.get('private_key_file') + source_secret_id = options.get('source_secret_id') + source_vault_id = options.get('source_vault_id') + source_password = options.get('source_password') + source_password_file = options.get('source_password_file') + source_private_key = options.get('source_private_key') + source_private_key_file = options.get('source_private_key_file') # don't send these parameters to server if 'data' in options: @@ -755,6 +916,18 @@ class vaultsecret_mod(LDAPRetrieve): del options['private_key'] if 'private_key_file' in options: del options['private_key_file'] + if 'source_secret_id' in options: + del options['source_secret_id'] + if 'source_vault_id' in options: + del options['source_vault_id'] + if 'source_password' in options: + del options['source_password'] + if 'source_password_file' in options: + del options['source_password_file'] + if 'source_private_key' in options: + del options['source_private_key'] + if 'source_private_key_file' in options: + del options['source_private_key_file'] # type-specific initialization if vault_type == 'standard': @@ -806,7 +979,7 @@ class vaultsecret_mod(LDAPRetrieve): raise errors.ValidationError(name='password_file', error=_('Invalid parameter for %s vault' % vault_type)) - # get vault public key + # get vault private key if private_key: pass @@ -822,26 +995,6 @@ class vaultsecret_mod(LDAPRetrieve): raise errors.ValidationError(name='vault_type', error=_('Invalid vault type')) - # get data - if data: - if text or input_file: - raise errors.MutuallyExclusiveError( - reason=_('Input data specified multiple times')) - - elif text: - if input_file: - raise errors.MutuallyExclusiveError( - reason=_('Input data specified 
multiple times')) - - data = text.encode('utf-8') - - elif input_file: - with open(input_file, 'rb') as f: - data = f.read() - - else: - pass - # retrieve secrets response = self.api.Command.vault_retrieve( vault_name, @@ -849,27 +1002,126 @@ class vaultsecret_mod(LDAPRetrieve): password=password, private_key=private_key) - vault_data = response['result']['data'] - - if not vault_data: - json_data = { - 'secrets': [] - } - - else: - json_data = json.loads(vault_data) + json_data = self.obj.parse_response(response) secrets = json_data['secrets'] # find the secret - secret = None - for s in secrets: - if s['secret_name'] == secret_name: - secret = s - break - - if not secret: - raise errors.NotFound(reason=_('%s: vault secret not found' % secret_name)) + secret = self.obj.find(secrets, secret_name) + + # get data + if data: + if text or input_file or source_secret_id: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) + + elif text: + if input_file or source_secret_id: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) + + data = text.encode('utf-8') + + elif input_file: + if source_secret_id: + raise errors.MutuallyExclusiveError( + reason=_('Input data specified multiple times')) + + with open(input_file, 'rb') as f: + data = f.read() + + elif source_secret_id: + + if source_vault_id: + + source_response = self.api.Command.vault_show(source_vault_id) + source_result = source_response['result'] + + if source_result.has_key('ipavaulttype'): + source_vault_type = source_result['ipavaulttype'][0] + + if source_vault_type == 'standard': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + elif source_vault_type == 'symmetric': + + if source_private_key: + raise errors.ValidationError(name='source_private_key', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_private_key_file: + raise errors.ValidationError(name='source_private_key_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault password + if source_password: + pass + + elif source_password_file: + with open(source_password_file) as f: + source_password = unicode(f.read().rstrip('\n')) + + else: + source_password = unicode(getpass.getpass('Source password: ')) + + elif source_vault_type == 'asymmetric': + + if source_password: + raise errors.ValidationError(name='source_password', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + if source_password_file: + raise errors.ValidationError(name='source_password_file', + error=_('Invalid parameter for %s vault' % source_vault_type)) + + # get source vault private key + if source_private_key: + pass + + elif source_private_key_file: + with open(source_private_key_file, 'rb') as f: + source_private_key = f.read() + + else: + raise errors.ValidationError(name='source_private_key', + error=_('Missing source vault private key')) + + else: + raise 
errors.ValidationError(name='source_vault_type', + error=_('Invalid source vault type')) + + source_response = self.api.Command.vault_retrieve( + source_vault_id, + password=source_password, + private_key=source_private_key) + + source_json_data = self.obj.parse_response(source_response) + + source_secrets = source_json_data['secrets'] + + else: + source_secrets = secrets + + source_secret = self.obj.find(source_secrets, source_secret_id) + data = base64.b64decode(source_secret['data']) + + else: + pass # modify the secret if description: @@ -1035,7 +1287,7 @@ class vaultsecret_show(LDAPRetrieve): raise errors.ValidationError(name='password_file', error=_('Invalid parameter for %s vault' % vault_type)) - # get vault public key + # get vault private key if private_key: pass @@ -1058,28 +1310,11 @@ class vaultsecret_show(LDAPRetrieve): password=password, private_key=private_key) - vault_data = response['result']['data'] - - if not vault_data: - json_data = { - 'secrets': [] - } - - else: - json_data = json.loads(vault_data) + json_data = self.obj.parse_response(response) secrets = json_data['secrets'] - secret = None - - # find the secret - for s in secrets: - if s['secret_name'] == secret_name: - secret = s - break - - if not secret: - raise errors.NotFound(reason=_('%s: vault secret not found' % secret_name)) + secret = self.obj.find(secrets, secret_name) # decode data for response secret['data'] = base64.b64decode(secret['data']) diff --git a/ipatests/test_xmlrpc/test_vault_plugin.py b/ipatests/test_xmlrpc/test_vault_plugin.py index 98e0b543d280d5bb36d5f9aae5c925019e79e962..b4d415d5cf71fdd1bb47d320aa2d76964e0f6309 100644 --- a/ipatests/test_xmlrpc/test_vault_plugin.py +++ b/ipatests/test_xmlrpc/test_vault_plugin.py @@ -25,11 +25,26 @@ from ipalib import api, errors from xmlrpc_test import Declarative, fuzzy_string test_vault = u'test_vault' + binary_data = '\x01\x02\x03\x04' text_data = u'secret' +test_secret = u'test_secret' +standard_secrets_vault = u'standard_secrets_vault' +symmetric_secrets_vault = u'symmetric_secrets_vault' +asymmetric_secrets_vault = u'asymmetric_secrets_vault' + +standard_vault = u'standard_vault' +standard_vault_copy = u'standard_vault_copy' +standard_vault_copy2 = u'standard_vault_copy2' + symmetric_vault = u'symmetric_vault' +symmetric_vault_copy = u'symmetric_vault_copy' +symmetric_vault_copy2 = u'symmetric_vault_copy2' + asymmetric_vault = u'asymmetric_vault' +asymmetric_vault_copy = u'asymmetric_vault_copy' +asymmetric_vault_copy2 = u'asymmetric_vault_copy2' escrowed_symmetric_vault = u'escrowed_symmetric_vault' escrowed_asymmetric_vault = u'escrowed_asymmetric_vault' @@ -37,6 +52,7 @@ escrowed_asymmetric_vault = u'escrowed_asymmetric_vault' shared_test_vault = u'/shared/%s' % test_vault password = u'password' +other_password = u'other_password' public_key = """ -----BEGIN PUBLIC KEY----- @@ -80,7 +96,7 @@ kUlCMj24a8XsShzYTWBIyW2ngvGe3pQ9PfjkUdm0LGZjYITCBvgOKw== -----END RSA PRIVATE KEY----- """ -escrow_public_key = """ +other_public_key = """ -----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv7E/QLVyKjrgDctZ50U7 rmtL7Ks1QLoccp9WvZJ6WI1rYd0fX5FySS4dI6QTNZc6qww8NeNuZtkoxT9m1wkk @@ -92,7 +108,7 @@ TwIDAQAB -----END PUBLIC KEY----- """ -escrow_private_key = """ +other_private_key = """ -----BEGIN RSA PRIVATE KEY----- MIIEpgIBAAKCAQEAv7E/QLVyKjrgDctZ50U7rmtL7Ks1QLoccp9WvZJ6WI1rYd0f X5FySS4dI6QTNZc6qww8NeNuZtkoxT9m1wkkRl/3wK7fWNLenH/+VHOaTQc20exg @@ -122,42 +138,22 @@ veCYju6ok4ZWnMiH8MR1jgC39RWtjJZwynCuPXUP2/vZkoVf1tCZyz7dSm8TdS/2 -----END 
RSA PRIVATE KEY----- """ -wrong_private_key = """ ------BEGIN RSA PRIVATE KEY----- -MIIEpAIBAAKCAQEAuQVrFCTFzc8EQ1TPR/fB2vZqcNFn7KVFYN5UuGSLm8JwdSyX -aJHFdgmdWS5NHRQS7l4kd3/mn1pk4BuhmC1tO+tPM1gunCtFvgB5fBC6WJzytBEC -y/ikvSIyOQ33/KCLdTQ/oFTIp3f5dUO62a3gjgQ9Y0Z9krRXYiB5uxMyHrr/zuxL -qRqP7pQYIkROr5VqGUgFUjFcovLvzP2sabobEnctCKm+nIE4s31QR3Kqb4l31qmX -TkbiGkoEkwoqHDuu0eH50bJwRFjjx3h+5U0bmoff1ipi+Aan5gZKoOOlnkc9d2dZ -gPBHyth6+9N3V8jrjs+VMJ8j2GzI9pnVxD93awIDAQABAoIBAC5neny54FaHBmWw -vrApJpi5Vubmzm7e4LCz8oGwzgcJ1FS/E1ZpwSGitbEpWLPjVgAs4m6KSJhM/qHq -rDPTqOLvWJTjGAWZIMvPqKiCNYqGCqU44v+vY/n/uqLuqvTUe0WxDggzW4QNJibQ -DuwLnRdhXtgoVNRXoNb+mClgXiCwjgX8j+A2lb6H4iQsQOLjLr4ifH13CqDgJk3k -2BjI1wOeesxooNk3I7ZKdVL1iV/qaeysLvtWoEmoQvrn6Fbo4HAR04tQQXEG7GzM -kZ+RVmHyn5b4tCMqnzubM+IYf7k9cJfgZIyaVxO8NSfwaJkWbQiGwzvUebRiHwPs -cK+F4bkCgYEA9VKtHX6Kr9Ht6Ho0q5eZAzRrOqgB2Uj4duFAI40bQ1q9+p+VSoKR -IqU6I16HO7mbgjlOySVg5RVIcp+9DUoNoK2KlGFswX6Q0f1J/v0CDavQoqOcpyFh -F9Egtx+5VNhknDZ0GTNWNm6bTWZUHK6ZICXs/mZkn4rHdId2Xe0Ls38CgYEAwRLg -GXSctadY0Qowy2YLe+I2EjsdXgE82TvOorytjFhOPVB/GsCOJ3IjyHJyGdpkm/85 -U0SUoeNXQHIgiMGG+xxQ/ErJhpHxiaQKHFow48x3k8R7tlO68Mfnl+BZ9goRgxio -rFwiea2MtCl2j8szh9UJ1z57aG1L0PfUctd9QhUCgYEA3O2T0ZgANc6MvmwvushP -mD9AwhZDc/bvK8A3Ds0o3EOAC5Bj1jI3mkfKT8f1aagBkAkkFql+1U+RawjILIug -Mi+XOYFze94LddDxLp2Tl9Q/k/hcP3ckBVrkZ4Y+VVZ7ZOL1Myy0W1jIq6+X2Cy0 -4erFv2VfAP7uGNdVlcjAXOkCgYABT5x/77/Ep/89ZCFSsD2xuKZ/VzFq2v1LyFEt -37QZ+NuHJQ3H47jTYb4GdWh67nWybXg5LYUI2F9WS7AW3aGKAPY30FYv+Lu4IIoF -CUO9uDyznyjr4wOo8OKMsHRL7GOUDU3P5cxCIUCMVJ++eDXAXVz0vjLeUaerIpOp -t/bcxQKBgQC4ehPJQH8WWG1FnfXh62AJBgb+yUDmp+4bTRtr/GENF9W62GN+g7or -RQqtdO1WxoXsC2qbKT4ArwOpV877KlWuLfiJMe5JS8vU/n711DCFIIfkXpeXPUVc -MszdQuc/FTSJ2DYsIwx7qq5c8mtargOjWRgZU22IgY9PKeIcitQjqw== ------END RSA PRIVATE KEY----- -""" - class test_vault_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), + ('vault_del', [standard_secrets_vault], {'continue': True}), + ('vault_del', [symmetric_secrets_vault], {'continue': True}), + ('vault_del', [asymmetric_secrets_vault], {'continue': True}), + ('vault_del', [standard_vault], {'continue': True}), + ('vault_del', [standard_vault_copy], {'continue': True}), + ('vault_del', [standard_vault_copy2], {'continue': True}), ('vault_del', [symmetric_vault], {'continue': True}), + ('vault_del', [symmetric_vault_copy], {'continue': True}), + ('vault_del', [symmetric_vault_copy2], {'continue': True}), ('vault_del', [asymmetric_vault], {'continue': True}), + ('vault_del', [asymmetric_vault_copy], {'continue': True}), + ('vault_del', [asymmetric_vault_copy2], {'continue': True}), ('vault_del', [escrowed_symmetric_vault], {'continue': True}), ('vault_del', [escrowed_asymmetric_vault], {'continue': True}), ('vault_del', [shared_test_vault], {'continue': True}), @@ -383,6 +379,120 @@ class test_vault_plugin(Declarative): }, { + 'desc': 'Create standard vault', + 'command': ( + 'vault_add', + [standard_vault], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': standard_vault, + 'summary': 'Added vault "%s"' % standard_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [standard_vault], + 'vault_id': u'/users/admin/%s' % standard_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Create a copy of standard vault', + 'command': ( + 'vault_add', + [standard_vault_copy], + { + 'source_vault_id': standard_vault, + }, + ), + 'expected': { + 'value': standard_vault_copy, 
+ 'summary': 'Added vault "%s"' % standard_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [standard_vault_copy], + 'vault_id': u'/users/admin/%s' % standard_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of standard vault', + 'command': ( + 'vault_retrieve', + [standard_vault_copy], + {}, + ), + 'expected': { + 'value': standard_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % standard_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy, api.env.basedn), + 'cn': [standard_vault_copy], + 'vault_id': u'/users/admin/%s' % standard_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of standard vault', + 'command': ( + 'vault_archive', + [standard_vault_copy], + { + 'source_vault_id': standard_vault, + }, + ), + 'expected': { + 'value': standard_vault_copy, + 'summary': u'Archived data into vault "%s"' % standard_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy, api.env.basedn), + 'cn': [standard_vault_copy], + 'vault_id': u'/users/admin/%s' % standard_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of standard vault', + 'command': ( + 'vault_retrieve', + [standard_vault_copy], + {}, + ), + 'expected': { + 'value': standard_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % standard_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy, api.env.basedn), + 'cn': [standard_vault_copy], + 'vault_id': u'/users/admin/%s' % standard_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { 'desc': 'Create symmetric vault', 'command': ( 'vault_add', @@ -440,13 +550,117 @@ class test_vault_plugin(Declarative): 'vault_retrieve', [symmetric_vault], { - 'password': u'wrong', + 'password': other_password, }, ), 'expected': errors.AuthenticationError(message=u'Invalid credentials'), }, { + 'desc': 'Create a copy of symmetric vault', + 'command': ( + 'vault_add', + [symmetric_vault_copy], + { + 'ipavaulttype': u'symmetric', + 'password': other_password, + 'source_vault_id': symmetric_vault, + 'source_password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy, + 'summary': 'Added vault "%s"' % symmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [symmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of symmetric vault', + 'command': ( + 'vault_retrieve', + [symmetric_vault_copy], + { + 'password': other_password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % symmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy, api.env.basedn), + 'cn': [symmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % 
symmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of symmetric vault', + 'command': ( + 'vault_archive', + [symmetric_vault_copy], + { + 'password': other_password, + 'source_vault_id': symmetric_vault, + 'source_password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy, + 'summary': u'Archived data into vault "%s"' % symmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy, api.env.basedn), + 'cn': [symmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of symmetric vault', + 'command': ( + 'vault_retrieve', + [symmetric_vault_copy], + { + 'password': other_password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % symmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy, api.env.basedn), + 'cn': [symmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { 'desc': 'Create asymmetric vault', 'command': ( 'vault_add', @@ -506,13 +720,120 @@ class test_vault_plugin(Declarative): 'vault_retrieve', [asymmetric_vault], { - 'private_key': wrong_private_key, + 'private_key': other_private_key, }, ), 'expected': errors.AuthenticationError(message=u'Invalid credentials'), }, { + 'desc': 'Create a copy of asymmetric vault', + 'command': ( + 'vault_add', + [asymmetric_vault_copy], + { + 'ipavaulttype': u'asymmetric', + 'ipapublickey': other_public_key, + 'source_vault_id': asymmetric_vault, + 'source_private_key': private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy, + 'summary': 'Added vault "%s"' % asymmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [asymmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [other_public_key], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of asymmetric vault', + 'command': ( + 'vault_retrieve', + [asymmetric_vault_copy], + { + 'private_key': other_private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % asymmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy, api.env.basedn), + 'cn': [asymmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [other_public_key], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of asymmetric vault', + 'command': ( + 'vault_archive', + [asymmetric_vault_copy], + { + 'source_vault_id': asymmetric_vault, + 'source_private_key': private_key, + }, + ), + 'expected': { + 
'value': asymmetric_vault_copy, + 'summary': u'Archived data into vault "%s"' % asymmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy, api.env.basedn), + 'cn': [asymmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [other_public_key], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of asymmetric vault', + 'command': ( + 'vault_retrieve', + [asymmetric_vault_copy], + { + 'private_key': other_private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy, + 'summary': u'Retrieved data from vault "%s"' % asymmetric_vault_copy, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy, api.env.basedn), + 'cn': [asymmetric_vault_copy], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [other_public_key], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { 'desc': 'Create escrowed symmetric vault', 'command': ( 'vault_add', @@ -520,7 +841,7 @@ class test_vault_plugin(Declarative): { 'ipavaulttype': u'symmetric', 'password': password, - 'ipaescrowpublickey': escrow_public_key, + 'ipaescrowpublickey': other_public_key, 'data': binary_data, }, ), @@ -546,7 +867,7 @@ class test_vault_plugin(Declarative): 'vault_retrieve', [escrowed_symmetric_vault], { - 'escrow_private_key': escrow_private_key, + 'escrow_private_key': other_private_key, }, ), 'expected': { @@ -575,7 +896,7 @@ class test_vault_plugin(Declarative): { 'ipavaulttype': u'asymmetric', 'ipapublickey': public_key, - 'ipaescrowpublickey': escrow_public_key, + 'ipaescrowpublickey': other_public_key, 'data': binary_data, }, ), @@ -602,7 +923,7 @@ class test_vault_plugin(Declarative): 'vault_retrieve', [escrowed_asymmetric_vault], { - 'escrow_private_key': escrow_private_key, + 'escrow_private_key': other_private_key, }, ), 'expected': { @@ -772,4 +1093,443 @@ class test_vault_plugin(Declarative): }, }, + { + 'desc': 'Create standard secrets vault', + 'command': ( + 'vault_add', + [standard_secrets_vault], + {}, + ), + 'expected': { + 'value': standard_secrets_vault, + 'summary': 'Added vault "%s"' % standard_secrets_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_secrets_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [standard_secrets_vault], + 'vault_id': u'/users/admin/%s' % standard_secrets_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Create secret in standard vault', + 'command': ( + 'vaultsecret_add', + [standard_secrets_vault, test_secret], + { + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, + 'result': { + 'secret_name': test_secret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create symmetric secrets vault', + 'command': ( + 'vault_add', + [symmetric_secrets_vault], + { + 'ipavaulttype': u'symmetric', + 'password': password, + }, + ), + 'expected': { + 'value': symmetric_secrets_vault, + 'summary': 'Added vault "%s"' % symmetric_secrets_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_secrets_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [symmetric_secrets_vault], + 
'vault_id': u'/users/admin/%s' % symmetric_secrets_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Create secret in symmetric vault', + 'command': ( + 'vaultsecret_add', + [symmetric_secrets_vault, test_secret], + { + 'data': binary_data, + 'password': password, + }, + ), + 'expected': { + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, + 'result': { + 'secret_name': test_secret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create asymmetric secrets vault', + 'command': ( + 'vault_add', + [asymmetric_secrets_vault], + { + 'ipavaulttype': u'asymmetric', + 'ipapublickey': public_key, + }, + ), + 'expected': { + 'value': asymmetric_secrets_vault, + 'summary': 'Added vault "%s"' % asymmetric_secrets_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_secrets_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [asymmetric_secrets_vault], + 'vault_id': u'/users/admin/%s' % asymmetric_secrets_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + }, + }, + }, + + { + 'desc': 'Create secret in asymmetric vault', + 'command': ( + 'vaultsecret_add', + [asymmetric_secrets_vault, test_secret], + { + 'private_key': private_key, + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, + 'result': { + 'secret_name': test_secret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from standard vault', + 'command': ( + 'vault_add', + [standard_vault_copy2], + { + 'source_vault_id': standard_secrets_vault, + 'source_secret_id': test_secret, + }, + ), + 'expected': { + 'value': standard_vault_copy2, + 'summary': 'Added vault "%s"' % standard_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy2, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [standard_vault_copy2], + 'vault_id': u'/users/admin/%s' % standard_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of secret from standard vault', + 'command': ( + 'vault_retrieve', + [standard_vault_copy2], + {}, + ), + 'expected': { + 'value': standard_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % standard_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy2, api.env.basedn), + 'cn': [standard_vault_copy2], + 'vault_id': u'/users/admin/%s' % standard_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of secret from standard vault', + 'command': ( + 'vault_archive', + [standard_vault_copy2], + { + 'source_vault_id': standard_secrets_vault, + 'source_secret_id': test_secret, + }, + ), + 'expected': { + 'value': standard_vault_copy2, + 'summary': u'Archived data into vault "%s"' % standard_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy2, api.env.basedn), + 'cn': [standard_vault_copy2], + 'vault_id': u'/users/admin/%s' % standard_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of secret from standard vault', + 'command': ( + 
'vault_retrieve', + [standard_vault_copy2], + {}, + ), + 'expected': { + 'value': standard_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % standard_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (standard_vault_copy2, api.env.basedn), + 'cn': [standard_vault_copy2], + 'vault_id': u'/users/admin/%s' % standard_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'standard'], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from symmetric vault', + 'command': ( + 'vault_add', + [symmetric_vault_copy2], + { + 'ipavaulttype': u'symmetric', + 'password': password, + 'source_vault_id': symmetric_secrets_vault, + 'source_secret_id': test_secret, + 'source_password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy2, + 'summary': 'Added vault "%s"' % symmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy2, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [symmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of secret from symmetric vault', + 'command': ( + 'vault_retrieve', + [symmetric_vault_copy2], + { + 'password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % symmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy2, api.env.basedn), + 'cn': [symmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of secret from symmetric vault', + 'command': ( + 'vault_archive', + [symmetric_vault_copy2], + { + 'password': password, + 'source_vault_id': symmetric_secrets_vault, + 'source_secret_id': test_secret, + 'source_password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy2, + 'summary': u'Archived data into vault "%s"' % symmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy2, api.env.basedn), + 'cn': [symmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of secret from symmetric vault', + 'command': ( + 'vault_retrieve', + [symmetric_vault_copy2], + { + 'password': password, + }, + ), + 'expected': { + 'value': symmetric_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % symmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault_copy2, api.env.basedn), + 'cn': [symmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % symmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from asymmetric vault', + 'command': ( + 'vault_add', + [asymmetric_vault_copy2], + { + 'ipavaulttype': u'asymmetric', + 'ipapublickey': public_key, + 
'source_vault_id': asymmetric_secrets_vault, + 'source_secret_id': test_secret, + 'source_private_key': private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy2, + 'summary': 'Added vault "%s"' % asymmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy2, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [asymmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + }, + }, + }, + + { + 'desc': 'Verify the copy creation of secret from asymmetric vault', + 'command': ( + 'vault_retrieve', + [asymmetric_vault_copy2], + { + 'private_key': private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % asymmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy2, api.env.basedn), + 'cn': [asymmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Archive a copy of secret from asymmetric vault', + 'command': ( + 'vault_archive', + [asymmetric_vault_copy2], + { + 'source_vault_id': asymmetric_secrets_vault, + 'source_secret_id': test_secret, + 'source_private_key': private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy2, + 'summary': u'Archived data into vault "%s"' % asymmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy2, api.env.basedn), + 'cn': [asymmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + }, + }, + }, + + { + 'desc': 'Verify the copy archival of secret from asymmetric vault', + 'command': ( + 'vault_retrieve', + [asymmetric_vault_copy2], + { + 'private_key': private_key, + }, + ), + 'expected': { + 'value': asymmetric_vault_copy2, + 'summary': u'Retrieved data from vault "%s"' % asymmetric_vault_copy2, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault_copy2, api.env.basedn), + 'cn': [asymmetric_vault_copy2], + 'vault_id': u'/users/admin/%s' % asymmetric_vault_copy2, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + 'nonce': fuzzy_string, + 'vault_data': fuzzy_string, + 'data': binary_data, + }, + }, + }, + ] diff --git a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py index ca96bff4dcae8c62bc44245ba82c7bf4165754e5..26a52b857f943a371da80dd3181e1cdec55bb3ee 100644 --- a/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py +++ b/ipatests/test_xmlrpc/test_vaultcontainer_plugin.py @@ -313,6 +313,7 @@ class test_vaultcontainer_plugin(Declarative): }, }, }, + { 'desc': 'Create base container', 'command': ( diff --git a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py index d2a4e92507fbabbcc356dc889738f151521e896c..de08d306cfb292058acc4abd8708afcae117eaf8 100644 --- a/ipatests/test_xmlrpc/test_vaultsecret_plugin.py +++ 
b/ipatests/test_xmlrpc/test_vaultsecret_plugin.py @@ -25,15 +25,66 @@ from ipalib import api, errors from xmlrpc_test import Declarative, fuzzy_string test_vault = u'test_vault' +symmetric_vault = u'symmetric_vault' +asymmetric_vault = u'asymmetric_vault' shared_test_vault = u'/shared/%s' % test_vault -test_vaultsecret = u'test_vaultsecret' + +test_secret = u'test_secret' +test_secret_copy = u'test_secret_copy' + binary_data = '\x01\x02\x03\x04' text_data = u'secret' +password = u'password' + +public_key = """ +-----BEGIN PUBLIC KEY----- +MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnT61EFxUOQgCJdM0tmw/ +pRRPDPGchTClnU1eBtiQD3ItKYf1+weMGwGOSJXPtkto7NlE7Qs8WHAr0UjyeBDe +k/zeB6nSVdk47OdaW1AHrJL+44r238Jbm/+7VO5lTu6Z4N5p0VqoWNLi0Uh/CkqB +tsxXaaAgjMp0AGq2U/aO/akeEYWQOYIdqUKVgAEKX5MmIA8tmbmoYIQ+B4Q3vX7N +otG4eR6c2o9Fyjd+M4Gai5Ce0fSrigRvxAYi8xpRkQ5yQn5gf4WVrn+UKTfOIjLO +pVThop+Xivcre3SpI0kt6oZPhBw9i8gbMnqifVmGFpVdhq+QVBqp+MVJvTbhRPG6 +3wIDAQAB +-----END PUBLIC KEY----- +""" + +private_key = """ +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEAnT61EFxUOQgCJdM0tmw/pRRPDPGchTClnU1eBtiQD3ItKYf1 ++weMGwGOSJXPtkto7NlE7Qs8WHAr0UjyeBDek/zeB6nSVdk47OdaW1AHrJL+44r2 +38Jbm/+7VO5lTu6Z4N5p0VqoWNLi0Uh/CkqBtsxXaaAgjMp0AGq2U/aO/akeEYWQ +OYIdqUKVgAEKX5MmIA8tmbmoYIQ+B4Q3vX7NotG4eR6c2o9Fyjd+M4Gai5Ce0fSr +igRvxAYi8xpRkQ5yQn5gf4WVrn+UKTfOIjLOpVThop+Xivcre3SpI0kt6oZPhBw9 +i8gbMnqifVmGFpVdhq+QVBqp+MVJvTbhRPG63wIDAQABAoIBAQCD2bXnfxPcMnvi +jaPwpvoDCPF0EBBHmk/0g5ApO2Qon3uBDJFUqbJwXrCY6o2d9MOJfnGONlKmcYA8 +X+d4h+SqwGjIkjxdYeSauS+Jy6Rzr1ptH/P8EjPQrfG9uJxYQDflV3nxYwwwVrx7 +8kccMPdteRB+8Bb7FzOHufMimmayCNFETnVT5CKH2PrYoPB+fr0itCipWOenDp33 +e73OV+K9U3rclmtHaoRxGohqByKfQRUkipjw4m+T3qfZZc5eN77RGW8J+oL1GVom +fwtiH7N1HVte0Dmd13nhiASg355kjqRPcIMPsRHvXkOpgg5HRUTKG5elqAyvvm27 +Fzj1YdeRAoGBAMnE61+FYh8qCyEGe8r6RGjO8iuoyk1t+0gBWbmILLBiRnj4K8Tc +k7HBG/pg3XCNbCuRwiLg8tk3VAAXzn6o+IJr3QnKbNCGa1lKfYU4mt11sBEyuL5V +NpZcZ8IiPhMlGyDA9cFbTMKOE08RqbOIdxOmTizFt0R5sYZAwOjEvBIZAoGBAMeC +N/P0bdrScFZGeS51wEdiWme/CO0IyGoqU6saI8L0dbmMJquiaAeIEjIKLqxH1RON +axhsyk97e0PCcc5QK62Utf50UUAbL/v7CpIG+qdSRYDO4bVHSCkwF32N3pYh/iVU +EsEBEkZiJi0dWa/0asDbsACutxcHda3RI5pi7oO3AoGAcbGNs/CUHt1xEfX2UaT+ +YVSjb2iYPlNH8gYYygvqqqVl8opdF3v3mYUoP8jPXrnCBzcF/uNk1HNx2O+RQxvx +lIQ1NGwlLsdfvBvWaPhBg6LqSHadVVrs/IMrUGA9PEp/Y9B3arIIqeSnCrn4Nxsh +higDCwWKRIKSPwVD7qXVGBkCgYEAu5/CASIRIeYgEXMLSd8hKcDcJo8o1MoauIT/ +1Hyrvw9pm0qrn2QHk3WrLvYWeJzBTTcEzZ6aEG+fN9UodA8/VGnzUc6QDsrCsKWh +hj0cArlDdeSZrYLQ4TNCFCiUePqU6QQM8weP6TMqlejxTKF+t8qi1bF5rCWuzP1P +D0UU7DcCgYAUvmEGckugS+FTatop8S/rmkcQ4Bf5M/YCZfsySavucDiHcBt0QtXt +Swh0XdDsYS3W1yj2XqqsQ7R58KNaffCHjjulWFzb5IiuSvvdxzWtiXHisOpO36MJ +kUlCMj24a8XsShzYTWBIyW2ngvGe3pQ9PfjkUdm0LGZjYITCBvgOKw== +-----END RSA PRIVATE KEY----- +""" + class test_vaultsecret_plugin(Declarative): cleanup_commands = [ ('vault_del', [test_vault], {'continue': True}), + ('vault_del', [symmetric_vault], {'continue': True}), + ('vault_del', [asymmetric_vault], {'continue': True}), ('vault_del', [shared_test_vault], {'continue': True}), ] @@ -61,19 +112,19 @@ class test_vaultsecret_plugin(Declarative): }, { - 'desc': 'Create test vault secret with binary data', + 'desc': 'Create secret with binary data', 'command': ( 'vaultsecret_add', - [test_vault, test_vaultsecret], + [test_vault, test_secret], { 'data': binary_data, }, ), 'expected': { - 'value': test_vaultsecret, - 'summary': 'Added vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 
'data': binary_data, }, }, @@ -83,10 +134,10 @@ class test_vaultsecret_plugin(Declarative): 'desc': 'Create duplicate vault secret', 'command': ( 'vaultsecret_add', - [test_vault, test_vaultsecret], + [test_vault, test_secret], {}, ), - 'expected': errors.DuplicateEntry(message=u'vault secret with name "%s" already exists' % test_vaultsecret), + 'expected': errors.DuplicateEntry(message=u'vault secret with name "%s" already exists' % test_secret), }, { @@ -102,7 +153,7 @@ class test_vaultsecret_plugin(Declarative): 'summary': u'1 vault secret matched', 'result': [ { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': binary_data, }, ], @@ -110,52 +161,271 @@ class test_vaultsecret_plugin(Declarative): }, { - 'desc': 'Retrieve test vault secret', + 'desc': 'Retrieve secret', 'command': ( 'vaultsecret_show', - [test_vault, test_vaultsecret], + [test_vault, test_secret], {}, ), 'expected': { - 'value': test_vaultsecret, + 'value': test_secret, 'summary': None, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': binary_data, }, }, }, { - 'desc': 'Modify test vault secret', + 'desc': 'Modify secret', 'command': ( 'vaultsecret_mod', - [test_vault, test_vaultsecret], + [test_vault, test_secret], { - 'description': u'Test vault secret', + 'description': u'secret', }, ), 'expected': { - 'value': test_vaultsecret, - 'summary': u'Modified vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': u'Modified vault secret "%s"' % test_secret, 'result': { - 'secret_name': test_vaultsecret, - 'description': u'Test vault secret', + 'secret_name': test_secret, + 'description': u'secret', 'data': binary_data, }, }, }, { - 'desc': 'Delete test vault secret', + 'desc': 'Create symmetric vault', + 'command': ( + 'vault_add', + [symmetric_vault], + { + 'ipavaulttype': u'symmetric', + 'password': password, + }, + ), + 'expected': { + 'value': symmetric_vault, + 'summary': 'Added vault "%s"' % symmetric_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (symmetric_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [symmetric_vault], + 'vault_id': u'/users/admin/%s' % symmetric_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'symmetric'], + 'ipavaultsalt': [fuzzy_string], + }, + }, + }, + + { + 'desc': 'Create secret in symmetric vault', + 'command': ( + 'vaultsecret_add', + [symmetric_vault, test_secret], + { + 'password': password, + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, + 'result': { + 'secret_name': test_secret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from a standard vault', + 'command': ( + 'vaultsecret_add', + [symmetric_vault, test_secret_copy], + { + 'source_vault_id': test_vault, + 'source_secret_id': test_secret, + 'password': password, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Added vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Update a copy of secret from a standard vault', + 'command': ( + 'vaultsecret_mod', + [symmetric_vault, test_secret_copy], + { + 'source_vault_id': test_vault, + 'source_secret_id': test_secret, + 'password': password, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Modified vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + 
}, + + { + 'desc': 'Create asymmetric vault', + 'command': ( + 'vault_add', + [asymmetric_vault], + { + 'ipavaulttype': u'asymmetric', + 'ipapublickey': public_key, + }, + ), + 'expected': { + 'value': asymmetric_vault, + 'summary': 'Added vault "%s"' % asymmetric_vault, + 'result': { + 'dn': u'cn=%s,cn=admin,cn=users,cn=vaults,%s' % (asymmetric_vault, api.env.basedn), + 'objectclass': (u'ipaVault', u'top'), + 'cn': [asymmetric_vault], + 'vault_id': u'/users/admin/%s' % asymmetric_vault, + 'owner_user': [u'admin'], + 'ipavaulttype': [u'asymmetric'], + 'ipavaultsalt': [fuzzy_string], + 'ipapublickey': [public_key], + }, + }, + }, + + { + 'desc': 'Create secret in asymmetric vault', + 'command': ( + 'vaultsecret_add', + [asymmetric_vault, test_secret], + { + 'private_key': private_key, + 'data': binary_data, + }, + ), + 'expected': { + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, + 'result': { + 'secret_name': test_secret, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from a symmetric vault', + 'command': ( + 'vaultsecret_add', + [asymmetric_vault, test_secret_copy], + { + 'private_key': private_key, + 'source_vault_id': symmetric_vault, + 'source_secret_id': test_secret, + 'source_password': password, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Added vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Update a copy of secret from a symmetric vault', + 'command': ( + 'vaultsecret_mod', + [asymmetric_vault, test_secret_copy], + { + 'private_key': private_key, + 'source_vault_id': symmetric_vault, + 'source_secret_id': test_secret, + 'source_password': password, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Modified vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Create a copy of secret from an asymmetric vault', + 'command': ( + 'vaultsecret_add', + [test_vault, test_secret_copy], + { + 'source_vault_id': asymmetric_vault, + 'source_secret_id': test_secret, + 'source_private_key': private_key, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Added vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Update a copy of secret from an asymmetric vault', + 'command': ( + 'vaultsecret_mod', + [test_vault, test_secret_copy], + { + 'source_vault_id': asymmetric_vault, + 'source_secret_id': test_secret, + 'source_private_key': private_key, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Modified vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete secret', 'command': ( 'vaultsecret_del', - [test_vault, test_vaultsecret], + [test_vault, test_secret], {}, ), 'expected': { - 'value': test_vaultsecret, - 'summary': u'Deleted vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': u'Deleted vault secret "%s"' % test_secret, 'result': { 'failed': (), }, @@ -166,45 +436,45 @@ class test_vaultsecret_plugin(Declarative): 'desc': 'Delete non-existent vault secret', 'command': ( 'vaultsecret_del', - [test_vault, test_vaultsecret], + [test_vault, test_secret], {}, ), - 'expected': errors.NotFound(reason=u'%s: vault secret not found' % test_vaultsecret), + 'expected': 
errors.NotFound(reason=u'%s: vault secret not found' % test_secret), }, { - 'desc': 'Create test vault secret with text data', + 'desc': 'Create secret with text data', 'command': ( 'vaultsecret_add', - [test_vault, test_vaultsecret], + [test_vault, test_secret], { 'text': text_data, }, ), 'expected': { - 'value': test_vaultsecret, - 'summary': 'Added vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': text_data.encode('utf-8'), }, }, }, { - 'desc': 'Retrieve test vault secret as text', + 'desc': 'Retrieve secret as text', 'command': ( 'vaultsecret_show', - [test_vault, test_vaultsecret], + [test_vault, test_secret], { 'show_text': True, }, ), 'expected': { - 'value': test_vaultsecret, + 'value': test_secret, 'summary': None, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'text': text_data, }, }, @@ -232,19 +502,19 @@ class test_vaultsecret_plugin(Declarative): }, { - 'desc': 'Create shared test vault secret with binary data', + 'desc': 'Create shared secret with binary data', 'command': ( 'vaultsecret_add', - [shared_test_vault, test_vaultsecret], + [shared_test_vault, test_secret], { 'data': binary_data, }, ), 'expected': { - 'value': test_vaultsecret, - 'summary': 'Added vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': 'Added vault secret "%s"' % test_secret, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': binary_data, }, }, @@ -263,7 +533,7 @@ class test_vaultsecret_plugin(Declarative): 'summary': u'1 vault secret matched', 'result': [ { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': binary_data, }, ], @@ -271,52 +541,71 @@ class test_vaultsecret_plugin(Declarative): }, { - 'desc': 'Retrieve shared test vault secret', + 'desc': 'Retrieve shared secret', 'command': ( 'vaultsecret_show', - [shared_test_vault, test_vaultsecret], + [shared_test_vault, test_secret], {}, ), 'expected': { - 'value': test_vaultsecret, + 'value': test_secret, 'summary': None, 'result': { - 'secret_name': test_vaultsecret, + 'secret_name': test_secret, 'data': binary_data, }, }, }, { - 'desc': 'Modify shared test vault secret', + 'desc': 'Modify shared secret', 'command': ( 'vaultsecret_mod', - [shared_test_vault, test_vaultsecret], + [shared_test_vault, test_secret], { - 'description': u'Test vault secret', + 'description': u'secret', }, ), 'expected': { - 'value': test_vaultsecret, - 'summary': u'Modified vault secret "%s"' % test_vaultsecret, + 'value': test_secret, + 'summary': u'Modified vault secret "%s"' % test_secret, 'result': { - 'secret_name': test_vaultsecret, - 'description': u'Test vault secret', + 'secret_name': test_secret, + 'description': u'secret', 'data': binary_data, }, }, }, { - 'desc': 'Delete shared test vault secret', + 'desc': 'Create a copy of secret in the same vault', + 'command': ( + 'vaultsecret_add', + [shared_test_vault, test_secret_copy], + { + 'source_secret_id': test_secret, + }, + ), + 'expected': { + 'value': test_secret_copy, + 'summary': 'Added vault secret "%s"' % test_secret_copy, + 'result': { + 'secret_name': test_secret_copy, + 'data': binary_data, + }, + }, + }, + + { + 'desc': 'Delete shared secret', 'command': ( 'vaultsecret_del', - [shared_test_vault, test_vaultsecret], + [shared_test_vault, test_secret], {}, ), 'expected': { - 'value': test_vaultsecret, - 'summary': u'Deleted vault secret "%s"' % 
test_vaultsecret,
+                'value': test_secret,
+                'summary': u'Deleted vault secret "%s"' % test_secret,
                 'result': {
                     'failed': (),
                 },

-- 
1.9.0


From mbasti at redhat.com Tue Mar 17 12:07:52 2015
From: mbasti at redhat.com (Martin Basti)
Date: Tue, 17 Mar 2015 13:07:52 +0100
Subject: [Freeipa-devel] [PATCH 0208] Remove --test option from upgrade
In-Reply-To: <5501AC6C.8000603@redhat.com>
References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com>
Message-ID: <55081918.3000307@redhat.com>

On 12/03/15 16:10, David Kupka wrote:
> On 03/06/2015 06:00 PM, Martin Basti wrote:
>> Upgrade plugins which modify LDAP data directly should not be executed
>> in --test mode.
>>
>> This patch is a workaround, to ensure update with --test option will not
>> modify any LDAP data.
>>
>> https://fedorahosted.org/freeipa/ticket/3448
>>
>> Patch attached.
>>
>
> Ideally we want to fix all plugins to dry-run the upgrade not just
> skip when there is '--test' option but it is a good first step.
> Works for me, ACK.
>
We had a long discussion and decided to remove this option from the upgrade
entirely. Reasons:

* users are not supposed to use this option to test whether an upgrade will
  succeed; it cannot guarantee that
* the option is not used for development either, as it cannot catch all
  issues with an upgrade; using snapshots is a better approach

The attached patch removes the option.

-- 
Martin Basti
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbasti-0208.3-Server-Upgrade-remove-test-option.patch
Type: text/x-patch
Size: 16895 bytes
Desc: not available
URL: 

From mbabinsk at redhat.com Tue Mar 17 15:51:14 2015
From: mbabinsk at redhat.com (Martin Babinsky)
Date: Tue, 17 Mar 2015 16:51:14 +0100
Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code
In-Reply-To: <55080B50.9060308@redhat.com>
References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <5506CE0A.20806@redhat.com> <550702BF.8000000@redhat.com> <55080B50.9060308@redhat.com>
Message-ID: <55084D72.2060707@redhat.com>

On 03/17/2015 12:09 PM, Petr Spacek wrote:
> On 16.3.2015 17:20, Martin Babinsky wrote:
>> On 03/16/2015 01:35 PM, Jan Cholasta wrote:
>>> Dne 16.3.2015 v 13:30 Martin Babinsky napsal(a):
>>>> On 03/16/2015 12:15 PM, Martin Kosek wrote:
>>>>> On 03/13/2015 05:37 PM, Martin Babinsky wrote:
>>>>>> Attaching the next iteration of patches.
>
> Very good! I hopefully have last two nitpicks :-) See below.
>
>> diff --git a/ipapython/ipautil.py b/ipapython/ipautil.py
>> index 4116d974e620341119b56fad3cff1bda48af3bab..cd03e9fd17b60b8b7324d0ccd436a10f7556baf0 100644
>> --- a/ipapython/ipautil.py
>> +++ b/ipapython/ipautil.py
>> @@ -1175,27 +1175,61 @@ def wait_for_open_socket(socket_name, timeout=0):
>>          else:
>>              raise e
>>
>> -def kinit_hostprincipal(keytab, ccachedir, principal):
>> +
>> +def kinit_keytab(keytab, ccache_path, principal, attempts=1):
>>      """
>> -    Given a ccache directory and a principal kinit as that user.
>> +    Given a ccache_path , keytab file and a principal kinit as that user.
>> +
>> +    The optional parameter 'attempts' specifies how many times the credential
>> +    initialization should be attempted before giving up and raising
>> +    StandardError.
>>
>>      This blindly overwrites the current CCNAME so if you need to save
>>      it do so before calling this function.
>>
>> +    This function is also not thread-safe since it modifies environment
>> +    variables.
>> +
>>      Thus far this is used to kinit as the local host.
> This note can be deleted because it is used elsewhere too.
>>      """
>> -    try:
>> -        ccache_file = 'FILE:%s/ccache' % ccachedir
>> -        krbcontext = krbV.default_context()
>> -        ktab = krbV.Keytab(name=keytab, context=krbcontext)
>> -        princ = krbV.Principal(name=principal, context=krbcontext)
>> -        os.environ['KRB5CCNAME'] = ccache_file
>> -        ccache = krbV.CCache(name=ccache_file, context=krbcontext, primary_principal=princ)
>> -        ccache.init(princ)
>> -        ccache.init_creds_keytab(keytab=ktab, principal=princ)
>> -        return ccache_file
>> -    except krbV.Krb5Error, e:
>> -        raise StandardError('Error initializing principal %s in %s: %s' % (principal, keytab, str(e)))
>> +    root_logger.debug("Initializing principal %s using keytab %s"
>> +                      % (principal, keytab))
> I'm sorry for nitpicking but it would be nice to log ccache_file too. Krb5
> libs return quite weird errors when CC cache is not accessible so it helps to
> have the path at hand.
>
Attaching updated patches.

-- 
Martin^3 Babinsky
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0015-5-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch
Type: text/x-patch
Size: 4231 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0016-4-ipa-client-install-try-to-get-host-TGT-several-times.patch
Type: text/x-patch
Size: 8821 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0017-4-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch
Type: text/x-patch
Size: 11866 bytes
Desc: not available
URL: 

From mbabinsk at redhat.com Tue Mar 17 16:51:04 2015
From: mbabinsk at redhat.com (Martin Babinsky)
Date: Tue, 17 Mar 2015 17:51:04 +0100
Subject: [Freeipa-devel] [PATCH 0021] fix improper handling of boolean option during KRA install
Message-ID: <55085B78.1040201@redhat.com>

https://fedorahosted.org/freeipa/ticket/4530

-- 
Martin^3 Babinsky
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0021-1-fix-improper-handling-of-boolean-option.patch
Type: text/x-patch
Size: 983 bytes
Desc: not available
URL: 

From simo at redhat.com Tue Mar 17 17:00:38 2015
From: simo at redhat.com (Simo Sorce)
Date: Tue, 17 Mar 2015 13:00:38 -0400
Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code
In-Reply-To: <5506CCCB.3020003@redhat.com>
References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com>
Message-ID: <1426611638.2981.106.camel@willson.usersys.redhat.com>

On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote:
> On 03/16/2015 12:15 PM, Martin Kosek wrote:
> > On 03/13/2015 05:37 PM, Martin Babinsky wrote:
> >> Attaching the next iteration of patches.
> >>
> >> I have tried my best to reword the ipa-client-install man page bit about the
> >> new option. Any suggestions to further improve it are welcome.
> >>
> >> I have also slightly modified the 'kinit_keytab' function so that in Kerberos
> >> errors are reported for each attempt and the text of the last error is retained
> >> when finally raising exception.
> >
> > The approach looks very good.
> > I think that my only concern with this patch is
> > this part:
> >
> > +            ccache.init_creds_keytab(keytab=ktab, principal=princ)
> > ...
> > +        except krbV.Krb5Error as e:
> > +            last_exc = str(e)
> > +            root_logger.debug("Attempt %d/%d: failed: %s"
> > +                              % (attempt, attempts, last_exc))
> > +            time.sleep(1)
> > +
> > +    root_logger.debug("Maximum number of attempts (%d) reached"
> > +                      % attempts)
> > +    raise StandardError("Error initializing principal %s: %s"
> > +                        % (principal, last_exc))
> >
> > The problem here is that this function will raise the super-generic
> > StandardError instead of the proper with all the context and information about
> > the error that the caller can then process.
> >
> > I think that
> >
> > except krbV.Krb5Error as e:
> >     if attempt == max_attempts:
> >         log something
> >         raise
> >
> > would be better.
> >
>
> Yes that seems reasonable. I'm just thinking whether we should re-raise
> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes
> more sense to me as we would not have to additionally import Krb5Error
> everywhere and it would also make the resulting errors more consistent.
>
> I am thinking about someting like this:
>
> except krbV.Krb5Error as e:
>     if attempt == attempts:
>         # log that we have reaches maximum number of attempts
>         raise KerberosError(minor=str(e))
>
> What do you think?

Are you retrying on any error? Please do *not* do that; if you retry many
times on an error that indicates the password is wrong, you may end up
locking an administrative account.

If you want to retry, you should do it only for very specific timeout errors.

Simo.

-- 
Simo Sorce * Red Hat, Inc * New York


From mbabinsk at redhat.com Wed Mar 18 09:54:28 2015
From: mbabinsk at redhat.com (Martin Babinsky)
Date: Wed, 18 Mar 2015 10:54:28 +0100
Subject: [Freeipa-devel] [PATCH 0022] migrate-ds: proper treatment of unsuccessful migrations
Message-ID: <55094B54.7070807@redhat.com>

This is a proper fix to both https://fedorahosted.org/freeipa/ticket/4846
and https://fedorahosted.org/freeipa/ticket/4952.

To do this I had to throw out some unused parameters from the
_update_default_group function (particularly the pesky pkey parameter that
caused bug #4846 to pop out).

I did not test the patch because my VMs are behaving strangely today and I
couldn't connect to them, so please test this patch thoroughly.

-- 
Martin^3 Babinsky
-------------- next part --------------
A non-text attachment was scrubbed...
Name: freeipa-mbabinsk-0022-1-migrate-ds-print-out-failed-attempts-when-no-users-g.patch
Type: text/x-patch
Size: 2686 bytes
Desc: not available
URL: 

From sbose at redhat.com Wed Mar 18 09:58:51 2015
From: sbose at redhat.com (Sumit Bose)
Date: Wed, 18 Mar 2015 10:58:51 +0100
Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context
In-Reply-To: <20150313141710.GF19530@hendrix.arn.redhat.com>
References: <20150304173522.GT3271@p.redhat.com> <20150313105509.GJ13715@p.redhat.com> <20150313141710.GF19530@hendrix.arn.redhat.com>
Message-ID: <20150318095851.GG9952@p.redhat.com>

On Fri, Mar 13, 2015 at 03:17:10PM +0100, Jakub Hrozek wrote:
> On Fri, Mar 13, 2015 at 11:55:09AM +0100, Sumit Bose wrote:
> > On Wed, Mar 04, 2015 at 06:35:22PM +0100, Sumit Bose wrote:
> > > Hi,
> > >
> > > this patch series improves error reporting of the extdom plugin
> > > especially on the client side. Currently there is only SSSD ticket
> > > https://fedorahosted.org/sssd/ticket/2463 . Shall I create a
> > > corresponding FreeIPA ticket as well?
> > > > > > In the third patch I already added a handful of new error messages. > > > Suggestions for more messages are welcome. > > > > > > bye, > > > Sumit > > > > Rebased versions attached. > > > > bye, > > Sumit > > The patches look good and work fine. I admit I cheated a bit and > modified the code to return a failure. Then I saw on the client: > > [sssd[be[ipa.example.com]]] [ipa_s2n_exop_send] (0x0400): Executing extended operation > [sssd[be[ipa.example.com]]] [ipa_s2n_exop_done] (0x0040): ldap_extended_operation result: Operations > error(1), Failed to create fully qualified name. > [sssd[be[ipa.example.com]]] [ipa_s2n_exop_done] (0x0040): ldap_extended_operation failed, server logs might contain more details. > [sssd[be[ipa.example.com]]] [ipa_s2n_get_user_done] (0x0040): s2n exop request failed. > > I just saw one typo: > > > @@ -918,6 +934,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, > > ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); > > if (ret != 0 || !(id_type == SSS_ID_TYPE_UID > > || id_type == SSS_ID_TYPE_BOTH)) { > > + set_err_msg(req, "Failed ot read original data"); > ~~~~~~~~~ > > if (ret == ENOENT) { > > ret = LDAP_NO_SUCH_OBJECT; > > } else { > > And a compilation warning caused by previous patches. > > So ACK provided the typo is fixed prior to pushing the patch. > Please find attached a new version where the typo is fixed. bye, Sumit -------------- next part -------------- From 9638b0a61144a481a0a2c8757d4e3a10e1f44b12 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:52:10 +0100 Subject: [PATCH 137/139] extdom: add err_msg member to request context --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c | 1 + daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 5 ++++- 3 files changed, 6 insertions(+), 1 deletion(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index d4c851169ddadc869a59c53075f9fc7f33321085..421f6c6ea625aba2db7e9ffc84115b3647673699 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -116,6 +116,7 @@ struct extdom_req { gid_t gid; } posix_gid; } data; + char *err_msg; }; struct extdom_res { diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 47bcb179f04e08c64d92f55809b84f2d59622344..c2fd42f13fca97587ddc4c12b560e590462f121b 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -356,6 +356,7 @@ void free_req_data(struct extdom_req *req) break; } + free(req->err_msg); free(req); } diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index dc7785877fc321ddaa5b6967d1c1b06cb454bbbf..708d0e4a2fc9da4f87a24a49c945587049f7280f 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -149,12 +149,15 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) rc = LDAP_SUCCESS; done: - free_req_data(req); + if (req->err_msg != NULL) { + err_msg = req->err_msg; + } if (err_msg != NULL) { LOG("%s", err_msg); } slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); ber_bvfree(ret_val); + free_req_data(req); return 
SLAPI_PLUGIN_EXTENDED_SENT_RESULT; } -- 2.1.0 -------------- next part -------------- From b52bc91cf9e10a779446bf399f138eb3929cfb55 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 2 Feb 2015 00:53:06 +0100 Subject: [PATCH 138/139] extdom: add add_err_msg() with test --- .../ipa-extdom-extop/ipa_extdom.h | 1 + .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 43 ++++++++++++++++++++++ .../ipa-extdom-extop/ipa_extdom_common.c | 23 ++++++++++++ 3 files changed, 67 insertions(+) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 421f6c6ea625aba2db7e9ffc84115b3647673699..0d5d55d2fb0ece95466b0225b145a4edeef18efa 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,4 +185,5 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index d5bacd7e8c9dc0a71eea70162406c7e5b67384ad..586b58b0fd4c7610e9cb4643b6dae04f9d22b8ab 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,6 +213,47 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +void extdom_req_setup(void **state) +{ + struct extdom_req *req; + + req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(req); + + *state = req; +} + +void extdom_req_teardown(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + + free_req_data(req); +} + +void test_set_err_msg(void **state) +{ + struct extdom_req *req; + + req = (struct extdom_req *) *state; + assert_null(req->err_msg); + + set_err_msg(NULL, NULL); + assert_null(req->err_msg); + + set_err_msg(req, NULL); + assert_null(req->err_msg); + + set_err_msg(req, "Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); + + set_err_msg(req, "2nd Test [%s][%d].", "ABCD", 1234); + assert_non_null(req->err_msg); + assert_string_equal(req->err_msg, "Test [ABCD][1234]."); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -220,6 +261,8 @@ int main(int argc, const char *argv[]) unit_test(test_getpwuid_r_wrapper), unit_test(test_getgrnam_r_wrapper), unit_test(test_getgrgid_r_wrapper), + unit_test_setup_teardown(test_set_err_msg, + extdom_req_setup, extdom_req_teardown), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index c2fd42f13fca97587ddc4c12b560e590462f121b..e05c005da4efcdbbee386f9e73ef3ef889e1a3c2 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -229,6 +229,29 @@ done: return ret; } +void set_err_msg(struct extdom_req *req, const char *format, ...) +{ + int ret; + va_list ap; + + if (req == NULL) { + return; + } + + if (format == NULL || req->err_msg != NULL) { + /* Do not override an existing error message. 
*/ + return; + } + va_start(ap, format); + + ret = vasprintf(&req->err_msg, format, ap); + if (ret == -1) { + req->err_msg = strdup("vasprintf failed.\n"); + } + + va_end(ap); +} + int parse_request_data(struct berval *req_val, struct extdom_req **_req) { BerElement *ber = NULL; -- 2.1.0 -------------- next part -------------- From 6e6cbc67f084ed745685f04f5b7a50bf65147c95 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 4 Mar 2015 13:37:50 +0100 Subject: [PATCH 139/139] extdom: add selected error messages --- .../ipa-extdom-extop/ipa_extdom_common.c | 51 ++++++++++++++++------ 1 file changed, 38 insertions(+), 13 deletions(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index e05c005da4efcdbbee386f9e73ef3ef889e1a3c2..4452d456bcabc211a0ca5814d24247c43cf95a91 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -300,26 +300,34 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) * } */ + req = calloc(sizeof(struct extdom_req), 1); + if (req == NULL) { + /* Since we return req even in the case of an error we make sure is is + * always safe to call free_req_data() on the returned data. */ + *_req = NULL; + return LDAP_OPERATIONS_ERROR; + } + + *_req = req; + if (req_val == NULL || req_val->bv_val == NULL || req_val->bv_len == 0) { + set_err_msg(req, "Missing request data"); return LDAP_PROTOCOL_ERROR; } ber = ber_init(req_val); if (ber == NULL) { + set_err_msg(req, "Cannot initialize BER struct"); return LDAP_PROTOCOL_ERROR; } tag = ber_scanf(ber, "{ee", &input_type, &request_type); if (tag == LBER_ERROR) { ber_free(ber, 1); + set_err_msg(req, "Cannot read input and request type"); return LDAP_PROTOCOL_ERROR; } - req = calloc(sizeof(struct extdom_req), 1); - if (req == NULL) { - return LDAP_OPERATIONS_ERROR; - } - req->input_type = input_type; req->request_type = request_type; @@ -343,17 +351,15 @@ int parse_request_data(struct berval *req_val, struct extdom_req **_req) break; default: ber_free(ber, 1); - free(req); + set_err_msg(req, "Unknown input type"); return LDAP_PROTOCOL_ERROR; } ber_free(ber, 1); if (tag == LBER_ERROR) { - free(req); + set_err_msg(req, "Failed to decode BER data"); return LDAP_PROTOCOL_ERROR; } - *_req = req; - return LDAP_SUCCESS; } @@ -715,6 +721,7 @@ static int pack_ber_name(const char *domain_name, const char *name, } static int handle_uid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, uid_t uid, const char *domain_name, struct berval **berval) { @@ -738,6 +745,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by UID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -760,6 +768,7 @@ static int handle_uid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -785,6 +794,7 @@ done: } static int handle_gid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, gid_t gid, const char *domain_name, struct berval **berval) { @@ -807,6 +817,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = 
LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by GID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -829,6 +840,7 @@ static int handle_gid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -852,6 +864,7 @@ done: } static int handle_sid_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *sid, struct berval **berval) { @@ -872,6 +885,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup name by SID"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -879,6 +893,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, sep = strchr(fq_name, SSSD_DOMAIN_SEPARATOR); if (sep == NULL) { + set_err_msg(req, "Failed to split fully qualified name"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -886,6 +901,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, object_name = strndup(fq_name, (sep - fq_name)); domain_name = strdup(sep + 1); if (object_name == NULL || domain_name == NULL) { + set_err_msg(req, "Missing name or domain"); ret = LDAP_OPERATIONS_ERROR; goto done; } @@ -918,6 +934,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -950,6 +967,7 @@ static int handle_sid_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(grp.gr_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_GID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -980,6 +998,7 @@ done: } static int handle_name_request(struct ipa_extdom_ctx *ctx, + struct extdom_req *req, enum request_types request_type, const char *name, const char *domain_name, struct berval **berval) @@ -998,6 +1017,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, domain_name); if (ret == -1) { ret = LDAP_OPERATIONS_ERROR; + set_err_msg(req, "Failed to create fully qualified name"); fq_name = NULL; /* content is undefined according to asprintf(3) */ goto done; @@ -1009,6 +1029,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to lookup SID by name"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1028,6 +1049,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, ret = sss_nss_getorigbyname(pwd.pw_name, &kv_list, &id_type); if (ret != 0 || !(id_type == SSS_ID_TYPE_UID || id_type == SSS_ID_TYPE_BOTH)) { + set_err_msg(req, "Failed to read original data"); if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { @@ -1068,6 +1090,7 @@ static int handle_name_request(struct ipa_extdom_ctx *ctx, if (ret == ENOENT) { ret = LDAP_NO_SUCH_OBJECT; } else { + set_err_msg(req, "Failed to read original data"); ret = LDAP_OPERATIONS_ERROR; } goto done; @@ -1097,27 +1120,29 @@ int handle_request(struct ipa_extdom_ctx *ctx, struct extdom_req *req, switch (req->input_type) { case INP_POSIX_UID: - ret = handle_uid_request(ctx, req->request_type, + 
ret = handle_uid_request(ctx, req, req->request_type, req->data.posix_uid.uid, req->data.posix_uid.domain_name, berval); break; case INP_POSIX_GID: - ret = handle_gid_request(ctx, req->request_type, + ret = handle_gid_request(ctx, req, req->request_type, req->data.posix_gid.gid, req->data.posix_uid.domain_name, berval); break; case INP_SID: - ret = handle_sid_request(ctx, req->request_type, req->data.sid, berval); + ret = handle_sid_request(ctx, req, req->request_type, req->data.sid, + berval); break; case INP_NAME: - ret = handle_name_request(ctx, req->request_type, + ret = handle_name_request(ctx, req, req->request_type, req->data.name.object_name, req->data.name.domain_name, berval); break; default: + set_err_msg(req, "Unknown input type"); ret = LDAP_PROTOCOL_ERROR; goto done; } -- 2.1.0 From sbose at redhat.com Wed Mar 18 10:01:35 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 18 Mar 2015 11:01:35 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150313141455.GE19530@hendrix.arn.redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> Message-ID: <20150318100135.GH9952@p.redhat.com> On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: > On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: > > On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > > > Hi, > > > > > > this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > > > which converts the check-based tests of the extdom plugin to cmocka. > > > > > > bye, > > > Sumit > > > > Rebased version attached. > > > > bye, > > Sumit > > The test itself is fine, but did freeipa consider moving to cmocka-1.0+ > to avoid warnings like: > ipa_extdom_cmocka_tests.c: In function ?main?: > ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated > (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] > return run_tests(tests); > > But I'm fine with ACKing this patch, the conversion should be done > separately. yes, I found it more flexible to use the old style now because it works with all versions of cmocka. When I converted the remaining check based test to cmocka I will provide a patch which will switch all to the new version. bye, Sumit > > -- > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code From jhrozek at redhat.com Wed Mar 18 10:22:14 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Wed, 18 Mar 2015 11:22:14 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150318100135.GH9952@p.redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> <20150318100135.GH9952@p.redhat.com> Message-ID: <20150318102214.GI4854@hendrix.redhat.com> On Wed, Mar 18, 2015 at 11:01:35AM +0100, Sumit Bose wrote: > On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: > > On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: > > > On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > > > > Hi, > > > > > > > > this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > > > > which converts the check-based tests of the extdom plugin to cmocka. > > > > > > > > bye, > > > > Sumit > > > > > > Rebased version attached. 
> > > > > > bye, > > > Sumit > > > > The test itself is fine, but did freeipa consider moving to cmocka-1.0+ > > to avoid warnings like: > > ipa_extdom_cmocka_tests.c: In function ?main?: > > ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated > > (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] > > return run_tests(tests); > > > > But I'm fine with ACKing this patch, the conversion should be done > > separately. > > yes, I found it more flexible to use the old style now because it works > with all versions of cmocka. When I converted the remaining check based > test to cmocka I will provide a patch which will switch all to the new > version. > > bye, > Sumit Sure, ACK (sorry, from my point of view it was obvious I was OK with pushing even the previous version. Hopefully it's even clearer now :-)) From jhrozek at redhat.com Wed Mar 18 10:23:04 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Wed, 18 Mar 2015 11:23:04 +0100 Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context In-Reply-To: <20150318095851.GG9952@p.redhat.com> References: <20150304173522.GT3271@p.redhat.com> <20150313105509.GJ13715@p.redhat.com> <20150313141710.GF19530@hendrix.arn.redhat.com> <20150318095851.GG9952@p.redhat.com> Message-ID: <20150318102304.GJ4854@hendrix.redhat.com> On Wed, Mar 18, 2015 at 10:58:51AM +0100, Sumit Bose wrote: > Please find attached a new version where the typo is fixed. > > bye, > Sumit ACK I think the IPA gatekeepers shoudl feel free to just fix these trivial errors before pushing in the future. From jhrozek at redhat.com Wed Mar 18 10:25:14 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Wed, 18 Mar 2015 11:25:14 +0100 Subject: [Freeipa-devel] [PATCH] extop: For printf formatting warning Message-ID: <20150318102514.GK4854@hendrix.redhat.com> I could swear I sent the patch last time when I was reviewing Sumit's patches but apparently not. It's better to use %zu instead of %d for size_t formatting with recent compilers. 
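A minimal standalone illustration of the formatting point above (not part of the attached patch). It also anticipates the caveat raised later in this thread: printf-style wrappers that are not C99-conformant, such as the NSPR-backed slapd LOG macro, may not honor the 'z' length modifier, in which case an explicit cast is the portable fallback.

    /* illustration only, not part of the attached patch */
    #include <stdio.h>

    int main(void)
    {
        size_t max_nss_buf_size = 1024 * 1024;

        /* C99 printf: %zu is the matching conversion for size_t. */
        printf("Maximal nss buffer size set to [%zu]!\n", max_nss_buf_size);

        /* Fallback for printf-style wrappers without the 'z' modifier:
         * cast explicitly to a type the wrapper does understand. */
        printf("Maximal nss buffer size set to [%lu]!\n",
               (unsigned long) max_nss_buf_size);

        return 0;
    }
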
-------------- next part -------------- >From a088e8c8a9bd29b4c22f1579f2c3705652bf2730 Mon Sep 17 00:00:00 2001 From: Jakub Hrozek Date: Wed, 18 Mar 2015 11:20:38 +0100 Subject: [PATCH] extop: For printf formatting warning --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c index 708d0e4a2fc9da4f87a24a49c945587049f7280f..bc25e7643cdebe0eadc0cee4dcba3a392fdc33be 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c @@ -200,7 +200,7 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) if (ctx->max_nss_buf_size == 0) { ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; } - LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); + LOG("Maximal nss buffer size set to [%zu]!\n", ctx->max_nss_buf_size); ret = 0; -- 2.1.0 From sbose at redhat.com Wed Mar 18 10:39:15 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 18 Mar 2015 11:39:15 +0100 Subject: [Freeipa-devel] [PATCH] extop: For printf formatting warning In-Reply-To: <20150318102514.GK4854@hendrix.redhat.com> References: <20150318102514.GK4854@hendrix.redhat.com> Message-ID: <20150318103915.GI9952@p.redhat.com> On Wed, Mar 18, 2015 at 11:25:14AM +0100, Jakub Hrozek wrote: > I could swear I sent the patch last time when I was reviewing Sumit's > patches but apparently not. > > It's better to use %zu instead of %d for size_t formatting with recent > compilers. > >From a088e8c8a9bd29b4c22f1579f2c3705652bf2730 Mon Sep 17 00:00:00 2001 > From: Jakub Hrozek > Date: Wed, 18 Mar 2015 11:20:38 +0100 > Subject: [PATCH] extop: For printf formatting warning > > --- > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > index 708d0e4a2fc9da4f87a24a49c945587049f7280f..bc25e7643cdebe0eadc0cee4dcba3a392fdc33be 100644 > --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > @@ -200,7 +200,7 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) > if (ctx->max_nss_buf_size == 0) { > ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; > } > - LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); > + LOG("Maximal nss buffer size set to [%zu]!\n", ctx->max_nss_buf_size); I tried this some time ago and found the here not the glibc printf version is used but I guess some NSPR implementation which does not support the z specifier. So I would assum that this is not working as expected. Have you tried to trigger the error message or called LOG unconditionally with '%zu' ? 
bye, Sumit > > ret = 0; > > -- > 2.1.0 > > -- > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code From lkrispen at redhat.com Wed Mar 18 11:18:55 2015 From: lkrispen at redhat.com (Ludwig Krispenz) Date: Wed, 18 Mar 2015 12:18:55 +0100 Subject: [Freeipa-devel] topology plugin - again need for input Message-ID: <55095F1F.60205@redhat.com> Hi, I need your feedback on a problem with implementing the topology plugin: marking an replication agreement, this seems to be a never ending story We want o mark an agreement when it is creqated by the plugin or put under control of the plugin by raising the domain level. The first idea was to rename the agreement, but this failed because DS does not support MODRDN on the cn=config backend and on second thought using a naming convetion on the rdn of the agreement entry seems to be not the best idea. The next approach was to use an attribute in the the agreement itself, and I just used description, which is multivalued and I added a description value "managed agreement ....". This works, but didn't get Simo's blessing and we agreed just to add a new objectclass "ipaReplTopoManagedAgreement", which could be used without extenting the core replication schema. I think this is the best solution, but unfortunately it fails. replication code is called when an agreement is modified and it accepts only modifications for a defined set of replication agreement attributes - other mods are rejected with UNWILLING_TO_PERORM. I think we could enhance DS to accept a wider range of changes to the replication agreement (it already does it for winsync agreements), but this would add a new dependency on a specific DS version where this change is included. Do you think this dependency is acceptable (topology plugin is targeted to 4.2) ? or do we need to find another clever solution or use the not so nice "description" way ? Thanks, Ludwig From jhrozek at redhat.com Wed Mar 18 11:33:57 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Wed, 18 Mar 2015 12:33:57 +0100 Subject: [Freeipa-devel] [PATCH] extop: For printf formatting warning In-Reply-To: <20150318103915.GI9952@p.redhat.com> References: <20150318102514.GK4854@hendrix.redhat.com> <20150318103915.GI9952@p.redhat.com> Message-ID: <20150318113357.GL4854@hendrix.redhat.com> On Wed, Mar 18, 2015 at 11:39:15AM +0100, Sumit Bose wrote: > On Wed, Mar 18, 2015 at 11:25:14AM +0100, Jakub Hrozek wrote: > > I could swear I sent the patch last time when I was reviewing Sumit's > > patches but apparently not. > > > > It's better to use %zu instead of %d for size_t formatting with recent > > compilers. 
> > > >From a088e8c8a9bd29b4c22f1579f2c3705652bf2730 Mon Sep 17 00:00:00 2001 > > From: Jakub Hrozek > > Date: Wed, 18 Mar 2015 11:20:38 +0100 > > Subject: [PATCH] extop: For printf formatting warning > > > > --- > > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 2 +- > > 1 file changed, 1 insertion(+), 1 deletion(-) > > > > diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > > index 708d0e4a2fc9da4f87a24a49c945587049f7280f..bc25e7643cdebe0eadc0cee4dcba3a392fdc33be 100644 > > --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > > +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > > @@ -200,7 +200,7 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) > > if (ctx->max_nss_buf_size == 0) { > > ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; > > } > > - LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); > > + LOG("Maximal nss buffer size set to [%zu]!\n", ctx->max_nss_buf_size); > > I tried this some time ago and found the here not the glibc printf > version is used but I guess some NSPR implementation which does not > support the z specifier. So I would assum that this is not working as > expected. Have you tried to trigger the error message or called LOG > unconditionally with '%zu' ? No, I only tried compiling the code. I haven't expected non-standard printf to be used. sorry. Then what about casting max_nss_buf_size to something large that the NSPR implementation can handle (unsigned long?) From tbabej at redhat.com Wed Mar 18 11:37:48 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 18 Mar 2015 12:37:48 +0100 Subject: [Freeipa-devel] [PATCHES 0018-0020] ipa-dns-install: Use LDAPI for all DS connections In-Reply-To: <5506FE55.7060605@redhat.com> References: <55002E43.7050601@redhat.com> <55004D92.4080700@redhat.com> <5501A9EA.7040208@redhat.com> <5501BBB7.2040709@redhat.com> <5506D01C.9060606@redhat.com> <5506D9F1.6050809@redhat.com> <5506FE55.7060605@redhat.com> Message-ID: <5509638C.5050908@redhat.com> On 03/16/2015 05:01 PM, Martin Basti wrote: > On 16/03/15 14:26, Martin Babinsky wrote: >> On 03/16/2015 01:44 PM, Martin Basti wrote: >>> On 12/03/15 17:15, Martin Babinsky wrote: >>>> On 03/12/2015 03:59 PM, Martin Babinsky wrote: >>>>> On 03/11/2015 03:13 PM, Martin Basti wrote: >>>>>> On 11/03/15 13:00, Martin Babinsky wrote: >>>>>>> These patches solve https://fedorahosted.org/freeipa/ticket/4933. >>>>>>> >>>>>>> They are to be applied to master branch. I will rebase them for >>>>>>> ipa-4-1 after the review. >>>>>>> >>>>>> Thank you for the patches. >>>>>> >>>>>> I have a few comments: >>>>>> >>>>>> IPA-4-1 >>>>>> Replace simple bind with LDAPI is too big change for 4-1, we should >>>>>> start TLS if possible to avoid MINSSF>0 error. The LDAPI patches >>>>>> should >>>>>> go only into IPA master branch. 
>>>>>> >>>>>> You can do something like this: >>>>>> --- a/ipaserver/install/service.py >>>>>> +++ b/ipaserver/install/service.py >>>>>> @@ -107,6 +107,10 @@ class Service(object): >>>>>> if not self.realm: >>>>>> raise errors.NotFound(reason="realm is missing >>>>>> for >>>>>> %s" % (self)) >>>>>> conn = ipaldap.IPAdmin(ldapi=self.ldapi, >>>>>> realm=self.realm) >>>>>> + elif self.dm_password is not None: >>>>>> + conn = ipaldap.IPAdmin(self.fqdn, port=389, >>>>>> + cacert=paths.IPA_CA_CRT, >>>>>> + start_tls=True) >>>>>> else: >>>>>> conn = ipaldap.IPAdmin(self.fqdn, port=389) >>>>>> >>>>>> >>>>>> PATCH 0018: >>>>>> 1) >>>>>> please add there more chatty commit message about using LDAPI >>>>>> >>>>>> 2) >>>>>> I do not like much idea of adding 'realm' kwarg into __init__ >>>>>> method of >>>>>> OpenDNSSECInstance >>>>>> IIUC, it is because get_masters() method, which requires realm to >>>>>> use >>>>>> LDAPI. >>>>>> >>>>>> You can just add ods.realm=, before call get_master() in >>>>>> ipa-dns-install >>>>>> if options.dnssec_master: >>>>>> + ods.realm=api.env.realm >>>>>> dnssec_masters = ods.get_masters() >>>>>> (Honza will change it anyway during refactoring) >>>>>> >>>>>> PATCH 0019: >>>>>> 1) >>>>>> commit message deserves to be more chatty, can you explain there why >>>>>> you >>>>>> removed kerberos cache? >>>>>> >>>>>> Martin^2 >>>>>> >>>>> >>>>> Attaching updated patches. >>>>> >>>>> Patch 0018 should go to both 4.1 and master branches. >>>>> >>>>> Patch 0019 should go only to master. >>>>> >>>>> >>>>> >>>> >>>> One more update. >>>> >>>> Patch 0018 is for both 4.1 and master. >>>> Patch 0019 is for master only. >>>> >>>> >>>> >>> Thank for patches >>> Patch 0018: >>> 1) >>> Works for me but needs rebase on master >>> >>> Patch 0019: >>> 1) >>> Please rename the patch/commit message, the patch changes only >>> ipa-dns-install connections not all DS operations >>> >>> 2) >>> I have some troubles with applying patch, it needs rebase due 0018 >>> >>> >>> -- >>> Martin Basti >>> >> >> Attaching updated patches: >> >> Patch 0018 is for ipa-4-1 branch. >> Patches 0019 and 0020 are for master branch. >> >> I hope they will apply cleanly this time (they did for me). >> > ACK > Pushed to ipa-4-1 and master. Please do not bump the base patch number for different (rebased) versions of the same patch. 
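A consolidated sketch of the connection selection discussed in the thread above, based on the quoted ipaserver/install/service.py diff: LDAPI with autobind on master (4.2+), a STARTTLS-protected simple bind as the 4.1 fallback so a MINSSF > 0 setting does not reject the connection, and plain LDAP otherwise. The helper name make_admin_conn is illustrative; IPAdmin, paths.IPA_CA_CRT and the keyword arguments are taken from the quoted diff, so treat this as a sketch of the approach rather than the committed code.

    from ipalib import errors
    from ipaplatform.paths import paths
    from ipapython import ipaldap

    def make_admin_conn(fqdn, realm=None, use_ldapi=False, dm_password=None):
        # Pick the most secure connection type the caller can support.
        if use_ldapi:
            # LDAPI over the UNIX socket with autobind; needs the realm
            # to locate the socket, but no Directory Manager password.
            if not realm:
                raise errors.NotFound(reason="realm is missing for %s" % fqdn)
            return ipaldap.IPAdmin(ldapi=True, realm=realm)
        elif dm_password is not None:
            # Simple bind protected by STARTTLS, so it still works when
            # the server enforces MINSSF > 0; the caller then binds with
            # dm_password as before.
            return ipaldap.IPAdmin(fqdn, port=389,
                                   cacert=paths.IPA_CA_CRT,
                                   start_tls=True)
        # Last resort: plain connection on port 389, as in the old code.
        return ipaldap.IPAdmin(fqdn, port=389)
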
From mkosek at redhat.com Wed Mar 18 11:42:43 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 18 Mar 2015 12:42:43 +0100 Subject: [Freeipa-devel] [PATCHES 134-136] extdom: handle ERANGE return code for getXXYYY_r() In-Reply-To: <54FDA4DA.4040409@redhat.com> References: <20150302174507.GK3271@p.redhat.com> <20150304141755.GB25455@redhat.com> <20150304171453.GS3271@p.redhat.com> <20150305081636.GX3271@p.redhat.com> <20150305103353.GA3271@p.redhat.com> <20150306120829.GV25455@redhat.com> <54FDA4DA.4040409@redhat.com> Message-ID: <550964B3.4020708@redhat.com> On 03/09/2015 02:49 PM, Tomas Babej wrote: > > On 03/06/2015 01:08 PM, Alexander Bokovoy wrote: >> On Thu, 05 Mar 2015, Sumit Bose wrote: >>> On Thu, Mar 05, 2015 at 09:16:36AM +0100, Sumit Bose wrote: >>>> On Wed, Mar 04, 2015 at 06:14:53PM +0100, Sumit Bose wrote: >>>> > On Wed, Mar 04, 2015 at 04:17:55PM +0200, Alexander Bokovoy wrote: >>>> > > On Mon, 02 Mar 2015, Sumit Bose wrote: >>>> > > >diff --git >>>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>>> > > >index >>>> 20fdd62b20f28f5384cf83b8be5819f721c6c3db..84aeb28066f25f05a89d0c2d42e8b060e2399501 >>>> 100644 >>>> > > >--- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>>> > > >+++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c >>>> > > >@@ -49,6 +49,220 @@ >>>> > > > >>>> > > >#define MAX(a,b) (((a)>(b))?(a):(b)) >>>> > > >#define SSSD_DOMAIN_SEPARATOR '@' >>>> > > >+#define MAX_BUF (1024*1024*1024) >>>> > > >+ >>>> > > >+ >>>> > > >+ >>>> > > >+static int get_buffer(size_t *_buf_len, char **_buf) >>>> > > >+{ >>>> > > >+ long pw_max; >>>> > > >+ long gr_max; >>>> > > >+ size_t buf_len; >>>> > > >+ char *buf; >>>> > > >+ >>>> > > >+ pw_max = sysconf(_SC_GETPW_R_SIZE_MAX); >>>> > > >+ gr_max = sysconf(_SC_GETGR_R_SIZE_MAX); >>>> > > >+ >>>> > > >+ if (pw_max == -1 && gr_max == -1) { >>>> > > >+ buf_len = 16384; >>>> > > >+ } else { >>>> > > >+ buf_len = MAX(pw_max, gr_max); >>>> > > >+ } >>>> > > Here you'd get buf_len equal to 1024 by default on Linux which is too >>>> > > low for our use case. I think it would be beneficial to add one more >>>> > > MAX(buf_len, 16384): >>>> > > - if (pw_max == -1 && gr_max == -1) { >>>> > > - buf_len = 16384; >>>> > > - } else { >>>> > > - buf_len = MAX(pw_max, gr_max); >>>> > > - } >>>> > > + buf_len = MAX(16384, MAX(pw_max, gr_max)); >>>> > > >>>> > > with MAX(MAX(),..) you also get rid of if() statement as resulting >>>> > > rvalue would be guaranteed to be positive. >>>> > >>>> > done >>>> > >>>> > > >>>> > > The rest is going along the common lines but would it be better to >>>> > > allocate memory once per LDAP client request rather than always ask for >>>> > > it per each NSS call? You can guarantee a sequential use of the buffer >>>> > > within the LDAP client request processing so there is no problem with >>>> > > locks but having this memory re-allocated on subsequent >>>> > > getpwnam()/getpwuid()/... calls within the same request processing seems >>>> > > suboptimal to me. >>>> > >>>> > ok, makes sense, I moved get_buffer() back to the callers. >>>> > >>>> > New version attached. >>>> >>>> Please ignore this patch, I will send a revised version soon. >>> >>> Please find attached a revised version which properly reports missing >>> objects and out-of-memory cases and makes sure buf and buf_len are in >>> sync. >> ACK to patches 0135 and 0136. This concludes the review, thanks! 
>> > > Pushed to master: c15a407cbfaed163a933ab137eed16387efe25d2 As per https://fedorahosted.org/freeipa/ticket/4908, we will need this in 4.1.x also: Pushed to ipa-4-1: ec7a55a05647c4abad4c2a1bb5b5094f1e1eec55 Martin From tbabej at redhat.com Wed Mar 18 11:42:52 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 18 Mar 2015 12:42:52 +0100 Subject: [Freeipa-devel] [PATCH] 0041 Always reload StateFile before getting or modifying the, stored values. In-Reply-To: <5507F3E0.20407@redhat.com> References: <5506D28D.9020305@redhat.com> <5507F3E0.20407@redhat.com> Message-ID: <550964BC.80905@redhat.com> On 03/17/2015 10:29 AM, Martin Basti wrote: > On 16/03/15 13:54, David Kupka wrote: >> https://fedorahosted.org/freeipa/ticket/4901 > ACK, it works as expected > Pushed to master: 082c55fb9cf87263f1f585a1adeda464a9d7328a From tbabej at redhat.com Wed Mar 18 11:49:15 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 18 Mar 2015 12:49:15 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150312125817.GD3878@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <55018A0E.70708@redhat.com> <20150312125318.GC3878@redhat.com> <20150312125817.GD3878@redhat.com> Message-ID: <5509663B.2070704@redhat.com> On 03/12/2015 01:58 PM, Alexander Bokovoy wrote: > On Thu, 12 Mar 2015, Alexander Bokovoy wrote: >> On Thu, 12 Mar 2015, Petr Vobornik wrote: >>> On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: >>>> On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>>> On (05/03/15 16:20), Petr Vobornik wrote: >>>>>> On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>>> On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>>> On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>>> ehlo, >>>>>>>>> >>>>>>>>> Please review attached patches and fix freeipa in fedora 22 ASAP. >>>>>>>>> >>>>>>>>> I think the most critical is 1st patch >>>>>>>>> >>>>>>>>> sh$ git grep "SSSDConfig" | grep import >>>>>>>>> install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>>> ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>>> ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>>>> >>>>>>>>> BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 >>>>>>>>> (2013-04-02) >>>>>>>>> but it was not explicitely required. >>>>>>>>> >>>>>>>>> The latest python3 changes in sssd (fedora 22) is just a >>>>>>>>> result of >>>>>>>>> negligent >>>>>>>>> packaging of freeipa. >>>>>>>>> >>>>>>>>> LS >>>>>>>>> >>>>>>>> >>>>>>>> Fedora 22 was amended. >>>>>>>> >>>>>>>> Patch 1: ACK >>>>>>>> >>>>>>>> Patch 2: ACK >>>>>>>> >>>>>>>> Patch3: >>>>>>>> the package name is libsss_nss_idmap-python not >>>>>>>> python-libsss_nss_idmap >>>>>>>> which already is required in adtrust package >>>>>>> In sssd upstream we decided to rename package >>>>>>> libsss_nss_idmap-python to >>>>>>> python-libsss_nss_idmap according to new rpm python guidelines. >>>>>>> The python3 version has alredy correct name. >>>>>>> >>>>>>> We will rename package in downstream with next major release >>>>>>> (1.13). >>>>>>> Of course it we will add "Provides: libsss_nss_idmap-python". >>>>>>> >>>>>>> We can push 3rd patch later or I can update 3rd patch. >>>>>>> What do you prefer? >>>>>>> >>>>>>> Than you very much for review. 
>>>>>>> >>>>>>> LS >>>>>>> >>>>>> >>>>>> Patch 3 should be updated to not forget the remaining change in >>>>>> ipa-python >>>>>> package. >>>>>> >>>>>> It then should be updated downstream and master when 1.13 is >>>>>> released in >>>>>> Fedora, or in master sooner if SSSD 1.13 becomes the minimal version >>>>>> required >>>>>> by master. >>>>> >>>>> Fixed. >>>>> >>>>> BTW Why ther is a pylint comment for some sssd modules >>>>> I did not kave any pylint problems after removing comment. >>>>> >>>>> ipalib/plugins/trust.py:32: import pysss_murmur #pylint: >>>>> disable=F0401 >>>>> ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: >>>>> disable=F0401 >>>>> >>>>> >>>>> And why are these modules optional (try except) >>>> Because they are needed to properly load in the case trust subpackages >>>> are not installed, to generate proper messages to users who will try >>>> these commands, like 'ipa trust-add' while the infrastructure is >>>> not in >>>> place. >>>> >>>> pylint is dumb for such cases. >>>> >>>> >>> >>> Alexander, the point was not to require python_nss_idmap and >>> python-sss-murmur on ipa clients? >> Pylint is not used on ipa clients. The import statements do protection >> against failed import and that's what we use on the client side. >> >>> If so python-sss-murmur should be required only by trust-ad package >>> and not python package (patch2). And patch 3 (adding >>> libsss_nss_idmap-python to python package) should not be used. >> We already have dependencies in trust-ad subpackage: >> %package server-trust-ad >> Summary: Virtual package to install packages required for Active >> Directory trusts >> Group: System Environment/Base >> Requires: %{name}-server = %version-%release >> Requires: m2crypto >> Requires: samba-python >> Requires: samba >= %{samba_version} >> Requires: samba-winbind >> Requires: libsss_idmap >> Requires: libsss_nss_idmap-python >> >> However, we don't ship the original plugins in this package because >> otherwise you wouldn't be able to use 'ipa trust*' from any machine >> other than those where trust-ad subpackage is installed. That's why we >> use import statements and catch the import exceptions. > Sent too early. > > ... and python-sss-murmur shoudl be required by trust-ad subpackage, > yes. > So what is the resolution here? Can we push patches 1 & 2? Tomas From mkosek at redhat.com Wed Mar 18 11:53:04 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 18 Mar 2015 12:53:04 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150318102214.GI4854@hendrix.redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> <20150318100135.GH9952@p.redhat.com> <20150318102214.GI4854@hendrix.redhat.com> Message-ID: <55096720.7060504@redhat.com> On 03/18/2015 11:22 AM, Jakub Hrozek wrote: > On Wed, Mar 18, 2015 at 11:01:35AM +0100, Sumit Bose wrote: >> On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: >>> On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: >>>> On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: >>>>> Hi, >>>>> >>>>> this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 >>>>> which converts the check-based tests of the extdom plugin to cmocka. >>>>> >>>>> bye, >>>>> Sumit >>>> >>>> Rebased version attached. 
>>>> >>>> bye, >>>> Sumit >>> >>> The test itself is fine, but did freeipa consider moving to cmocka-1.0+ >>> to avoid warnings like: >>> ipa_extdom_cmocka_tests.c: In function ?main?: >>> ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated >>> (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] >>> return run_tests(tests); >>> >>> But I'm fine with ACKing this patch, the conversion should be done >>> separately. >> >> yes, I found it more flexible to use the old style now because it works >> with all versions of cmocka. When I converted the remaining check based >> test to cmocka I will provide a patch which will switch all to the new >> version. >> >> bye, >> Sumit > > Sure, ACK > > (sorry, from my point of view it was obvious I was OK with pushing even > the previous version. Hopefully it's even clearer now :-)) > Thanks for this work guys! I would push it, but there is a conflict in ipa_extdom. From tbabej at redhat.com Wed Mar 18 11:58:46 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 18 Mar 2015 12:58:46 +0100 Subject: [Freeipa-devel] [PATCHES 137-139] extdom: add err_msg member to request context In-Reply-To: <20150318102304.GJ4854@hendrix.redhat.com> References: <20150304173522.GT3271@p.redhat.com> <20150313105509.GJ13715@p.redhat.com> <20150313141710.GF19530@hendrix.arn.redhat.com> <20150318095851.GG9952@p.redhat.com> <20150318102304.GJ4854@hendrix.redhat.com> Message-ID: <55096876.1020503@redhat.com> On 03/18/2015 11:23 AM, Jakub Hrozek wrote: > On Wed, Mar 18, 2015 at 10:58:51AM +0100, Sumit Bose wrote: >> Please find attached a new version where the typo is fixed. >> >> bye, >> Sumit > ACK > > I think the IPA gatekeepers shoudl feel free to just fix these trivial > errors before pushing in the future. > Pushed to master: 6cc6a3ceec98bf2ee4f9dd776d70bd3390acb384 From abokovoy at redhat.com Wed Mar 18 12:10:09 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Wed, 18 Mar 2015 14:10:09 +0200 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <5509663B.2070704@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <55018A0E.70708@redhat.com> <20150312125318.GC3878@redhat.com> <20150312125817.GD3878@redhat.com> <5509663B.2070704@redhat.com> Message-ID: <20150318121009.GG3878@redhat.com> On Wed, 18 Mar 2015, Tomas Babej wrote: > > >On 03/12/2015 01:58 PM, Alexander Bokovoy wrote: >>On Thu, 12 Mar 2015, Alexander Bokovoy wrote: >>>On Thu, 12 Mar 2015, Petr Vobornik wrote: >>>>On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: >>>>>On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>>>>On (05/03/15 16:20), Petr Vobornik wrote: >>>>>>>On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>>>>On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>>>>On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>>>>ehlo, >>>>>>>>>> >>>>>>>>>>Please review attached patches and fix freeipa in fedora 22 ASAP. 
>>>>>>>>>> >>>>>>>>>>I think the most critical is 1st patch >>>>>>>>>> >>>>>>>>>>sh$ git grep "SSSDConfig" | grep import >>>>>>>>>>install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>>>>ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>>>>ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>>>>> >>>>>>>>>>BTW package python-sssdconfig is provides since sssd-1.10.0alpha1 >>>>>>>>>>(2013-04-02) >>>>>>>>>>but it was not explicitely required. >>>>>>>>>> >>>>>>>>>>The latest python3 changes in sssd (fedora 22) is >>>>>>>>>>just a result of >>>>>>>>>>negligent >>>>>>>>>>packaging of freeipa. >>>>>>>>>> >>>>>>>>>>LS >>>>>>>>>> >>>>>>>>> >>>>>>>>>Fedora 22 was amended. >>>>>>>>> >>>>>>>>>Patch 1: ACK >>>>>>>>> >>>>>>>>>Patch 2: ACK >>>>>>>>> >>>>>>>>>Patch3: >>>>>>>>>the package name is libsss_nss_idmap-python not >>>>>>>>>python-libsss_nss_idmap >>>>>>>>>which already is required in adtrust package >>>>>>>>In sssd upstream we decided to rename package >>>>>>>>libsss_nss_idmap-python to >>>>>>>>python-libsss_nss_idmap according to new rpm python guidelines. >>>>>>>>The python3 version has alredy correct name. >>>>>>>> >>>>>>>>We will rename package in downstream with next major >>>>>>>>release (1.13). >>>>>>>>Of course it we will add "Provides: libsss_nss_idmap-python". >>>>>>>> >>>>>>>>We can push 3rd patch later or I can update 3rd patch. >>>>>>>>What do you prefer? >>>>>>>> >>>>>>>>Than you very much for review. >>>>>>>> >>>>>>>>LS >>>>>>>> >>>>>>> >>>>>>>Patch 3 should be updated to not forget the remaining change in >>>>>>>ipa-python >>>>>>>package. >>>>>>> >>>>>>>It then should be updated downstream and master when 1.13 >>>>>>>is released in >>>>>>>Fedora, or in master sooner if SSSD 1.13 becomes the minimal version >>>>>>>required >>>>>>>by master. >>>>>> >>>>>>Fixed. >>>>>> >>>>>>BTW Why ther is a pylint comment for some sssd modules >>>>>>I did not kave any pylint problems after removing comment. >>>>>> >>>>>>ipalib/plugins/trust.py:32: import pysss_murmur #pylint: >>>>>>disable=F0401 >>>>>>ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: >>>>>>disable=F0401 >>>>>> >>>>>> >>>>>>And why are these modules optional (try except) >>>>>Because they are needed to properly load in the case trust subpackages >>>>>are not installed, to generate proper messages to users who will try >>>>>these commands, like 'ipa trust-add' while the infrastructure >>>>>is not in >>>>>place. >>>>> >>>>>pylint is dumb for such cases. >>>>> >>>>> >>>> >>>>Alexander, the point was not to require python_nss_idmap and >>>>python-sss-murmur on ipa clients? >>>Pylint is not used on ipa clients. The import statements do protection >>>against failed import and that's what we use on the client side. >>> >>>>If so python-sss-murmur should be required only by trust-ad >>>>package and not python package (patch2). And patch 3 (adding >>>>libsss_nss_idmap-python to python package) should not be used. 
>>>We already have dependencies in trust-ad subpackage: >>>%package server-trust-ad >>>Summary: Virtual package to install packages required for Active >>>Directory trusts >>>Group: System Environment/Base >>>Requires: %{name}-server = %version-%release >>>Requires: m2crypto >>>Requires: samba-python >>>Requires: samba >= %{samba_version} >>>Requires: samba-winbind >>>Requires: libsss_idmap >>>Requires: libsss_nss_idmap-python >>> >>>However, we don't ship the original plugins in this package because >>>otherwise you wouldn't be able to use 'ipa trust*' from any machine >>>other than those where trust-ad subpackage is installed. That's why we >>>use import statements and catch the import exceptions. >>Sent too early. >> >>... and python-sss-murmur shoudl be required by trust-ad subpackage, >>yes. >> > >So what is the resolution here? Can we push patches 1 & 2? ACK to patches 1 and 2. -- / Alexander Bokovoy From mbasti at redhat.com Wed Mar 18 12:11:33 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 18 Mar 2015 13:11:33 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <5501BF0B.8080503@redhat.com> References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com> <5501AF32.6040503@redhat.com> <5501B8FC.4030408@redhat.com> <5501BA15.2010803@redhat.com> <5501BF0B.8080503@redhat.com> Message-ID: <55096B75.6010304@redhat.com> On 12/03/15 17:30, Martin Basti wrote: > On 12/03/15 17:08, Rob Crittenden wrote: >> Martin Basti wrote: >>> On 12/03/15 16:22, Rob Crittenden wrote: >>>> David Kupka wrote: >>>>> On 03/06/2015 04:52 PM, Martin Basti wrote: >>>>>> This upgrade step is not used anymore. >>>>>> >>>>>> Required by: https://fedorahosted.org/freeipa/ticket/4904 >>>>>> >>>>>> Patch attached. >>>>>> >>>>>> >>>>>> >>>>> Looks and works good to me, ACK. >>>> Is this going away because one can simply create an update file that >>>> exists alphabetically before the schema update? If so then ACK. >>>> >>>> rob >>> No this never works, and will not work without changes in DS, I was >>> discussing this with DS guys. If you add new replica to schema, the >>> schema has to be there before data replication. >>> >>> Martin >>> >> That's a rather narrow case though. You could make changes that only >> affect existing schema, or something in cn=config. >> >> rob > Let summarize this: > * It is unused code > * we have schema update to modify schema (is there any extra > requirement to modify schema before schema update? I though the schema > update replace old schema with new) > * it is not usable on new replicas (why to modify up to date schema?, > why to modify new configuration?) > * we can not use this to update data > * only way how we can us this is to change non-replicating data, on > current server. > > However, might there be really need to update cn=config before schema > update? > > Martin > IMO this patch can be pushed. It removes the unused and broken code. To implement this feature we need design it in proper way first. Is there any objections? 
Martin^2 -- Martin Basti From tbabej at redhat.com Wed Mar 18 12:15:26 2015 From: tbabej at redhat.com (Tomas Babej) Date: Wed, 18 Mar 2015 13:15:26 +0100 Subject: [Freeipa-devel] [PATCHES] SPEC: Require python2 version of sssd bindings In-Reply-To: <20150318121009.GG3878@redhat.com> References: <20150227205039.GB2327@mail.corp.redhat.com> <54F80B99.6030104@redhat.com> <20150305102328.GA18226@mail.corp.redhat.com> <54F87448.4050703@redhat.com> <20150306140505.GA2345@mail.corp.redhat.com> <20150306141318.GX25455@redhat.com> <55018A0E.70708@redhat.com> <20150312125318.GC3878@redhat.com> <20150312125817.GD3878@redhat.com> <5509663B.2070704@redhat.com> <20150318121009.GG3878@redhat.com> Message-ID: <55096C5E.4040700@redhat.com> On 03/18/2015 01:10 PM, Alexander Bokovoy wrote: > On Wed, 18 Mar 2015, Tomas Babej wrote: >> >> >> On 03/12/2015 01:58 PM, Alexander Bokovoy wrote: >>> On Thu, 12 Mar 2015, Alexander Bokovoy wrote: >>>> On Thu, 12 Mar 2015, Petr Vobornik wrote: >>>>> On 03/06/2015 03:13 PM, Alexander Bokovoy wrote: >>>>>> On Fri, 06 Mar 2015, Lukas Slebodnik wrote: >>>>>>> On (05/03/15 16:20), Petr Vobornik wrote: >>>>>>>> On 03/05/2015 11:23 AM, Lukas Slebodnik wrote: >>>>>>>>> On (05/03/15 08:54), Petr Vobornik wrote: >>>>>>>>>> On 02/27/2015 09:50 PM, Lukas Slebodnik wrote: >>>>>>>>>>> ehlo, >>>>>>>>>>> >>>>>>>>>>> Please review attached patches and fix freeipa in fedora 22 >>>>>>>>>>> ASAP. >>>>>>>>>>> >>>>>>>>>>> I think the most critical is 1st patch >>>>>>>>>>> >>>>>>>>>>> sh$ git grep "SSSDConfig" | grep import >>>>>>>>>>> install/tools/ipa-upgradeconfig:import SSSDConfig >>>>>>>>>>> ipa-client/ipa-install/ipa-client-automount:import SSSDConfig >>>>>>>>>>> ipa-client/ipa-install/ipa-client-install: import SSSDConfig >>>>>>>>>>> >>>>>>>>>>> BTW package python-sssdconfig is provides since >>>>>>>>>>> sssd-1.10.0alpha1 >>>>>>>>>>> (2013-04-02) >>>>>>>>>>> but it was not explicitely required. >>>>>>>>>>> >>>>>>>>>>> The latest python3 changes in sssd (fedora 22) is just a >>>>>>>>>>> result of >>>>>>>>>>> negligent >>>>>>>>>>> packaging of freeipa. >>>>>>>>>>> >>>>>>>>>>> LS >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Fedora 22 was amended. >>>>>>>>>> >>>>>>>>>> Patch 1: ACK >>>>>>>>>> >>>>>>>>>> Patch 2: ACK >>>>>>>>>> >>>>>>>>>> Patch3: >>>>>>>>>> the package name is libsss_nss_idmap-python not >>>>>>>>>> python-libsss_nss_idmap >>>>>>>>>> which already is required in adtrust package >>>>>>>>> In sssd upstream we decided to rename package >>>>>>>>> libsss_nss_idmap-python to >>>>>>>>> python-libsss_nss_idmap according to new rpm python guidelines. >>>>>>>>> The python3 version has alredy correct name. >>>>>>>>> >>>>>>>>> We will rename package in downstream with next major release >>>>>>>>> (1.13). >>>>>>>>> Of course it we will add "Provides: libsss_nss_idmap-python". >>>>>>>>> >>>>>>>>> We can push 3rd patch later or I can update 3rd patch. >>>>>>>>> What do you prefer? >>>>>>>>> >>>>>>>>> Than you very much for review. >>>>>>>>> >>>>>>>>> LS >>>>>>>>> >>>>>>>> >>>>>>>> Patch 3 should be updated to not forget the remaining change in >>>>>>>> ipa-python >>>>>>>> package. >>>>>>>> >>>>>>>> It then should be updated downstream and master when 1.13 is >>>>>>>> released in >>>>>>>> Fedora, or in master sooner if SSSD 1.13 becomes the minimal >>>>>>>> version >>>>>>>> required >>>>>>>> by master. >>>>>>> >>>>>>> Fixed. >>>>>>> >>>>>>> BTW Why ther is a pylint comment for some sssd modules >>>>>>> I did not kave any pylint problems after removing comment. 
>>>>>>> >>>>>>> ipalib/plugins/trust.py:32: import pysss_murmur #pylint: >>>>>>> disable=F0401 >>>>>>> ipalib/plugins/trust.py:38: import pysss_nss_idmap #pylint: >>>>>>> disable=F0401 >>>>>>> >>>>>>> >>>>>>> And why are these modules optional (try except) >>>>>> Because they are needed to properly load in the case trust >>>>>> subpackages >>>>>> are not installed, to generate proper messages to users who will try >>>>>> these commands, like 'ipa trust-add' while the infrastructure is >>>>>> not in >>>>>> place. >>>>>> >>>>>> pylint is dumb for such cases. >>>>>> >>>>>> >>>>> >>>>> Alexander, the point was not to require python_nss_idmap and >>>>> python-sss-murmur on ipa clients? >>>> Pylint is not used on ipa clients. The import statements do protection >>>> against failed import and that's what we use on the client side. >>>> >>>>> If so python-sss-murmur should be required only by trust-ad >>>>> package and not python package (patch2). And patch 3 (adding >>>>> libsss_nss_idmap-python to python package) should not be used. >>>> We already have dependencies in trust-ad subpackage: >>>> %package server-trust-ad >>>> Summary: Virtual package to install packages required for Active >>>> Directory trusts >>>> Group: System Environment/Base >>>> Requires: %{name}-server = %version-%release >>>> Requires: m2crypto >>>> Requires: samba-python >>>> Requires: samba >= %{samba_version} >>>> Requires: samba-winbind >>>> Requires: libsss_idmap >>>> Requires: libsss_nss_idmap-python >>>> >>>> However, we don't ship the original plugins in this package because >>>> otherwise you wouldn't be able to use 'ipa trust*' from any machine >>>> other than those where trust-ad subpackage is installed. That's why we >>>> use import statements and catch the import exceptions. >>> Sent too early. >>> >>> ... and python-sss-murmur shoudl be required by trust-ad subpackage, >>> yes. >>> >> >> So what is the resolution here? Can we push patches 1 & 2? > ACK to patches 1 and 2. Pushed to master: 6ce47d86db77373b15bb62a92b22dd2accc74b37 From sbose at redhat.com Wed Mar 18 12:32:58 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 18 Mar 2015 13:32:58 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <55096720.7060504@redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> <20150318100135.GH9952@p.redhat.com> <20150318102214.GI4854@hendrix.redhat.com> <55096720.7060504@redhat.com> Message-ID: <20150318123258.GK9952@p.redhat.com> On Wed, Mar 18, 2015 at 12:53:04PM +0100, Martin Kosek wrote: > On 03/18/2015 11:22 AM, Jakub Hrozek wrote: > > On Wed, Mar 18, 2015 at 11:01:35AM +0100, Sumit Bose wrote: > >> On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: > >>> On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: > >>>> On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > >>>>> Hi, > >>>>> > >>>>> this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > >>>>> which converts the check-based tests of the extdom plugin to cmocka. > >>>>> > >>>>> bye, > >>>>> Sumit > >>>> > >>>> Rebased version attached. > >>>> > >>>> bye, > >>>> Sumit > >>> > >>> The test itself is fine, but did freeipa consider moving to cmocka-1.0+ > >>> to avoid warnings like: > >>> ipa_extdom_cmocka_tests.c: In function ?main?: > >>> ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? 
is deprecated > >>> (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] > >>> return run_tests(tests); > >>> > >>> But I'm fine with ACKing this patch, the conversion should be done > >>> separately. > >> > >> yes, I found it more flexible to use the old style now because it works > >> with all versions of cmocka. When I converted the remaining check based > >> test to cmocka I will provide a patch which will switch all to the new > >> version. > >> > >> bye, > >> Sumit > > > > Sure, ACK > > > > (sorry, from my point of view it was obvious I was OK with pushing even > > the previous version. Hopefully it's even clearer now :-)) > > > > Thanks for this work guys! I would push it, but there is a conflict in ipa_extdom. Rebased version attached. bye, Sumit -------------- next part -------------- From cedaaba614dad266e5f5e90eafedf2b173b96c12 Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Mon, 9 Feb 2015 18:12:01 +0100 Subject: [PATCH] extdom: migrate check-based test to cmocka Besides moving the existing tests to cmocka two new tests are added which were missing from the old tests. Related to https://fedorahosted.org/freeipa/ticket/4922 --- .../ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 20 -- .../ipa-extdom-extop/ipa_extdom.h | 14 ++ .../ipa-extdom-extop/ipa_extdom_cmocka_tests.c | 156 +++++++++++++++- .../ipa-extdom-extop/ipa_extdom_common.c | 28 +-- .../ipa-extdom-extop/ipa_extdom_tests.c | 203 --------------------- 5 files changed, 176 insertions(+), 245 deletions(-) delete mode 100644 daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index a1679812ef3c5de8c6e18433cbb991a99ad0b6c8..9c2fa1c6a5f95ba06b33c0a5b560939863a88f0e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -38,11 +38,6 @@ libipa_extdom_extop_la_LIBADD = \ TESTS = check_PROGRAMS = -if HAVE_CHECK -TESTS += extdom_tests -check_PROGRAMS += extdom_tests -endif - if HAVE_CMOCKA if HAVE_NSS_WRAPPER TESTS_ENVIRONMENT = . 
./test_data/test_setup.sh; @@ -51,21 +46,6 @@ check_PROGRAMS += extdom_cmocka_tests endif endif -extdom_tests_SOURCES = \ - ipa_extdom_tests.c \ - ipa_extdom_common.c \ - $(NULL) -extdom_tests_CFLAGS = $(CHECK_CFLAGS) -extdom_tests_LDFLAGS = \ - -rpath $(shell pkg-config --libs-only-L dirsrv | sed -e 's/-L//') \ - $(NULL) -extdom_tests_LDADD = \ - $(CHECK_LIBS) \ - $(LDAP_LIBS) \ - $(DIRSRV_LIBS) \ - $(SSSNSSIDMAP_LIBS) \ - $(NULL) - extdom_cmocka_tests_SOURCES = \ ipa_extdom_cmocka_tests.c \ ipa_extdom_common.c \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h index 0d5d55d2fb0ece95466b0225b145a4edeef18efa..65dd43ea35726db6231386a0fcbba9be1bd71412 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom.h @@ -185,5 +185,19 @@ int getgrnam_r_wrapper(size_t buf_max, const char *name, struct group *grp, char **_buf, size_t *_buf_len); int getgrgid_r_wrapper(size_t buf_max, gid_t gid, struct group *grp, char **_buf, size_t *_buf_len); +int pack_ber_sid(const char *sid, struct berval **berval); +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval); +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval); +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval); void set_err_msg(struct extdom_req *req, const char *format, ...); #endif /* _IPA_EXTDOM_H_ */ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c index 586b58b0fd4c7610e9cb4643b6dae04f9d22b8ab..42d588d08a96f8a26345f85aade9523e05f6f56e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_cmocka_tests.c @@ -213,30 +213,46 @@ void test_getgrgid_r_wrapper(void **state) free(buf); } +struct test_data { + struct extdom_req *req; + struct ipa_extdom_ctx *ctx; +}; + void extdom_req_setup(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = calloc(sizeof(struct extdom_req), 1); - assert_non_null(req); + test_data = calloc(sizeof(struct test_data), 1); + assert_non_null(test_data); - *state = req; + test_data->req = calloc(sizeof(struct extdom_req), 1); + assert_non_null(test_data->req); + + test_data->ctx = calloc(sizeof(struct ipa_extdom_ctx), 1); + assert_non_null(test_data->req); + + *state = test_data; } void extdom_req_teardown(void **state) { - struct extdom_req *req; + struct test_data *test_data; - req = (struct extdom_req *) *state; + test_data = (struct test_data *) *state; - free_req_data(req); + free_req_data(test_data->req); + free(test_data->ctx); + free(test_data); } void test_set_err_msg(void **state) { struct extdom_req *req; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + req = test_data->req; - req = (struct extdom_req *) *state; assert_null(req->err_msg); set_err_msg(NULL, NULL); @@ -254,6 +270,127 @@ void test_set_err_msg(void **state) assert_string_equal(req->err_msg, "Test [ABCD][1234]."); } +#define TEST_SID "S-1-2-3-4" +#define TEST_DOMAIN_NAME 
"DOMAIN" + +char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ + 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74}; +char res_uid[] = {0x30, 0x1c, 0x0a, 0x01, 0x03, 0x30, 0x17, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ + 0x74, 0x02, 0x02, 0x30, 0x39, 0x02, 0x03, 0x00, 0xd4, 0x31}; +char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ + 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ + 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ + 0xd4, 0x31}; + +void test_encode(void **state) +{ + int ret; + struct berval *resp_val; + struct ipa_extdom_ctx *ctx; + struct test_data *test_data; + + test_data = (struct test_data *) *state; + ctx = test_data->ctx; + + ctx->max_nss_buf_size = (128*1024*1024); + + ret = pack_ber_sid(TEST_SID, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_sid), resp_val->bv_len); + assert_memory_equal(res_sid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_name(TEST_DOMAIN_NAME, "test", &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_nam), resp_val->bv_len); + assert_memory_equal(res_nam, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_user(ctx, RESP_USER, TEST_DOMAIN_NAME, "test", 12345, 54321, + NULL, NULL, NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_uid), resp_val->bv_len); + assert_memory_equal(res_uid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); + + ret = pack_ber_group(RESP_GROUP, TEST_DOMAIN_NAME, "test_group", 54321, + NULL, NULL, &resp_val); + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(sizeof(res_gid), resp_val->bv_len); + assert_memory_equal(res_gid, resp_val->bv_val, resp_val->bv_len); + ber_bvfree(resp_val); +} + +char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ + 0x53, 0x2d, 0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; +char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ + 0x74, 0x65, 0x73, 0x74}; +char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ + 0x30, 0x39}; +char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ + 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ + 0x00, 0xd4, 0x31}; + +void test_decode(void **state) +{ + struct berval req_val; + struct extdom_req *req; + int ret; + + req_val.bv_val = req_sid; + req_val.bv_len = sizeof(req_sid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_SID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.sid, "S-1-2-3-4"); + free_req_data(req); + + req_val.bv_val = req_nam; + req_val.bv_len = sizeof(req_nam); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_NAME); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.name.domain_name, "DOMAIN"); + assert_string_equal(req->data.name.object_name, "test"); + free_req_data(req); + + req_val.bv_val = req_uid; + req_val.bv_len = 
sizeof(req_uid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_UID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_uid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_uid.uid, 12345); + free_req_data(req); + + req_val.bv_val = req_gid; + req_val.bv_len = sizeof(req_gid); + + ret = parse_request_data(&req_val, &req); + + assert_int_equal(ret, LDAP_SUCCESS); + assert_int_equal(req->input_type, INP_POSIX_GID); + assert_int_equal(req->request_type, REQ_SIMPLE); + assert_string_equal(req->data.posix_gid.domain_name, "DOMAIN"); + assert_int_equal(req->data.posix_gid.gid, 54321); + free_req_data(req); +} + int main(int argc, const char *argv[]) { const UnitTest tests[] = { @@ -263,6 +400,9 @@ int main(int argc, const char *argv[]) unit_test(test_getgrgid_r_wrapper), unit_test_setup_teardown(test_set_err_msg, extdom_req_setup, extdom_req_teardown), + unit_test_setup_teardown(test_encode, + extdom_req_setup, extdom_req_teardown), + unit_test(test_decode), }; return run_tests(tests); diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 4452d456bcabc211a0ca5814d24247c43cf95a91..2c08e56d65f8a1a9f33a29f1749936606f738ec5 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -464,7 +464,7 @@ static int add_kv_list(BerElement *ber, struct sss_nss_kv *kv_list) return LDAP_SUCCESS; } -static int pack_ber_sid(const char *sid, struct berval **berval) +int pack_ber_sid(const char *sid, struct berval **berval) { BerElement *ber = NULL; int ret; @@ -491,13 +491,13 @@ static int pack_ber_sid(const char *sid, struct berval **berval) #define SSSD_SYSDB_SID_STR "objectSIDString" -static int pack_ber_user(struct ipa_extdom_ctx *ctx, - enum response_types response_type, - const char *domain_name, const char *user_name, - uid_t uid, gid_t gid, - const char *gecos, const char *homedir, - const char *shell, struct sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_user(struct ipa_extdom_ctx *ctx, + enum response_types response_type, + const char *domain_name, const char *user_name, + uid_t uid, gid_t gid, + const char *gecos, const char *homedir, + const char *shell, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -610,10 +610,10 @@ done: return ret; } -static int pack_ber_group(enum response_types response_type, - const char *domain_name, const char *group_name, - gid_t gid, char **members, struct sss_nss_kv *kv_list, - struct berval **berval) +int pack_ber_group(enum response_types response_type, + const char *domain_name, const char *group_name, + gid_t gid, char **members, struct sss_nss_kv *kv_list, + struct berval **berval) { BerElement *ber = NULL; int ret; @@ -694,8 +694,8 @@ done: return ret; } -static int pack_ber_name(const char *domain_name, const char *name, - struct berval **berval) +int pack_ber_name(const char *domain_name, const char *name, + struct berval **berval) { BerElement *ber = NULL; int ret; diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c deleted file mode 100644 index 1467e256619f827310408d558d48c580118d9a32..0000000000000000000000000000000000000000 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_tests.c +++ /dev/null 
@@ -1,203 +0,0 @@ -/** BEGIN COPYRIGHT BLOCK - * This program is free software; you can redistribute it and/or modify - * it under the terms of the GNU General Public License as published by - * the Free Software Foundation, either version 3 of the License, or - * (at your option) any later version. - * - * This program is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - * GNU General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program. If not, see . - * - * Additional permission under GPLv3 section 7: - * - * In the following paragraph, "GPL" means the GNU General Public - * License, version 3 or any later version, and "Non-GPL Code" means - * code that is governed neither by the GPL nor a license - * compatible with the GPL. - * - * You may link the code of this Program with Non-GPL Code and convey - * linked combinations including the two, provided that such Non-GPL - * Code only links to the code of this Program through those well - * defined interfaces identified in the file named EXCEPTION found in - * the source code files (the "Approved Interfaces"). The files of - * Non-GPL Code may instantiate templates or use macros or inline - * functions from the Approved Interfaces without causing the resulting - * work to be covered by the GPL. Only the copyright holders of this - * Program may make changes or additions to the list of Approved - * Interfaces. - * - * Authors: - * Sumit Bose - * - * Copyright (C) 2011 Red Hat, Inc. - * All rights reserved. - * END COPYRIGHT BLOCK **/ - -#include - -#include "ipa_extdom.h" -#include "util.h" - -char req_sid[] = {0x30, 0x11, 0x0a, 0x01, 0x01, 0x0a, 0x01, 0x01, 0x04, 0x09, \ - 0x53, 0x2d, 0x31, 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char req_nam[] = {0x30, 0x16, 0x0a, 0x01, 0x02, 0x0a, 0x01, 0x01, 0x30, 0x0e, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, \ - 0x74, 0x65, 0x73, 0x74}; -char req_uid[] = {0x30, 0x14, 0x0a, 0x01, 0x03, 0x0a, 0x01, 0x01, 0x30, 0x0c, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x02, \ - 0x30, 0x39}; -char req_gid[] = {0x30, 0x15, 0x0a, 0x01, 0x04, 0x0a, 0x01, 0x01, 0x30, 0x0d, \ - 0x04, 0x06, 0x44, 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x02, 0x03, \ - 0x00, 0xd4, 0x31}; - -char res_sid[] = {0x30, 0x0e, 0x0a, 0x01, 0x01, 0x04, 0x09, 0x53, 0x2d, 0x31, \ - 0x2d, 0x32, 0x2d, 0x33, 0x2d, 0x34}; -char res_nam[] = {0x30, 0x13, 0x0a, 0x01, 0x02, 0x30, 0x0e, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74}; -char res_uid[] = {0x30, 0x17, 0x0a, 0x01, 0x03, 0x30, 0x12, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x04, 0x74, 0x65, 0x73, \ - 0x74, 0x02, 0x02, 0x30, 0x39}; -char res_gid[] = {0x30, 0x1e, 0x0a, 0x01, 0x04, 0x30, 0x19, 0x04, 0x06, 0x44, \ - 0x4f, 0x4d, 0x41, 0x49, 0x4e, 0x04, 0x0a, 0x74, 0x65, 0x73, \ - 0x74, 0x5f, 0x67, 0x72, 0x6f, 0x75, 0x70, 0x02, 0x03, 0x00, \ - 0xd4, 0x31}; - -#define TEST_SID "S-1-2-3-4" -#define TEST_DOMAIN_NAME "DOMAIN" - -START_TEST(test_encode) -{ - int ret; - struct extdom_res res; - struct berval *resp_val; - - res.response_type = RESP_SID; - res.data.sid = TEST_SID; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_sid) == resp_val->bv_len && - memcmp(res_sid, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER 
blob."); - ber_bvfree(resp_val); - - res.response_type = RESP_NAME; - res.data.name.domain_name = TEST_DOMAIN_NAME; - res.data.name.object_name = "test"; - - ret = pack_response(&res, &resp_val); - - fail_unless(ret == LDAP_SUCCESS, "pack_response() failed."); - fail_unless(sizeof(res_nam) == resp_val->bv_len && - memcmp(res_nam, resp_val->bv_val, resp_val->bv_len) == 0, - "Unexpected BER blob."); - ber_bvfree(resp_val); -} -END_TEST - -START_TEST(test_decode) -{ - struct berval req_val; - struct extdom_req *req; - int ret; - - req_val.bv_val = req_sid; - req_val.bv_len = sizeof(req_sid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, "parse_request_data() failed."); - fail_unless(req->input_type == INP_SID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.sid, "S-1-2-3-4") == 0, - "parse_request_data() returned unexpected sid"); - free_req_data(req); - - req_val.bv_val = req_nam; - req_val.bv_len = sizeof(req_nam); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_NAME, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.name.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(strcmp(req->data.name.object_name, "test") == 0, - "parse_request_data() returned unexpected object name"); - free_req_data(req); - - req_val.bv_val = req_uid; - req_val.bv_len = sizeof(req_uid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_UID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_uid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_uid.uid == 12345, - "parse_request_data() returned unexpected uid [%d]", - req->data.posix_uid.uid); - free_req_data(req); - - req_val.bv_val = req_gid; - req_val.bv_len = sizeof(req_gid); - - ret = parse_request_data(&req_val, &req); - - fail_unless(ret == LDAP_SUCCESS, - "parse_request_data() failed."); - fail_unless(req->input_type == INP_POSIX_GID, - "parse_request_data() returned unexpected input type"); - fail_unless(req->request_type == REQ_SIMPLE, - "parse_request_data() returned unexpected request type"); - fail_unless(strcmp(req->data.posix_gid.domain_name, "DOMAIN") == 0, - "parse_request_data() returned unexpected domain name"); - fail_unless(req->data.posix_gid.gid == 54321, - "parse_request_data() returned unexpected gid [%d]", - req->data.posix_gid.gid); - free_req_data(req); -} -END_TEST - -Suite * ipa_extdom_suite(void) -{ - Suite *s = suite_create("IPA extdom"); - - TCase *tc_core = tcase_create("Core"); - tcase_add_test(tc_core, test_decode); - tcase_add_test(tc_core, test_encode); - /* TODO: add test for create_response() */ - suite_add_tcase(s, tc_core); - - return s; -} - -int main(void) -{ - int number_failed; - - Suite *s = ipa_extdom_suite (); - SRunner *sr = srunner_create (s); - srunner_run_all (sr, CK_VERBOSE); - number_failed = 
srunner_ntests_failed (sr); - srunner_free (sr); - - return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE; -} -- 2.1.0 From mkosek at redhat.com Wed Mar 18 12:36:44 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 18 Mar 2015 13:36:44 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <20150318123258.GK9952@p.redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> <20150318100135.GH9952@p.redhat.com> <20150318102214.GI4854@hendrix.redhat.com> <55096720.7060504@redhat.com> <20150318123258.GK9952@p.redhat.com> Message-ID: <5509715C.80100@redhat.com> On 03/18/2015 01:32 PM, Sumit Bose wrote: > On Wed, Mar 18, 2015 at 12:53:04PM +0100, Martin Kosek wrote: >> On 03/18/2015 11:22 AM, Jakub Hrozek wrote: >>> On Wed, Mar 18, 2015 at 11:01:35AM +0100, Sumit Bose wrote: >>>> On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: >>>>> On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: >>>>>> On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: >>>>>>> Hi, >>>>>>> >>>>>>> this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 >>>>>>> which converts the check-based tests of the extdom plugin to cmocka. >>>>>>> >>>>>>> bye, >>>>>>> Sumit >>>>>> >>>>>> Rebased version attached. >>>>>> >>>>>> bye, >>>>>> Sumit >>>>> >>>>> The test itself is fine, but did freeipa consider moving to cmocka-1.0+ >>>>> to avoid warnings like: >>>>> ipa_extdom_cmocka_tests.c: In function ?main?: >>>>> ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated >>>>> (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] >>>>> return run_tests(tests); >>>>> >>>>> But I'm fine with ACKing this patch, the conversion should be done >>>>> separately. >>>> >>>> yes, I found it more flexible to use the old style now because it works >>>> with all versions of cmocka. When I converted the remaining check based >>>> test to cmocka I will provide a patch which will switch all to the new >>>> version. >>>> >>>> bye, >>>> Sumit >>> >>> Sure, ACK >>> >>> (sorry, from my point of view it was obvious I was OK with pushing even >>> the previous version. Hopefully it's even clearer now :-)) >>> >> >> Thanks for this work guys! I would push it, but there is a conflict in ipa_extdom. > > Rebased version attached. > > bye, > Sumit > Thanks! Pushed to master: d0d79ada379ab809c26a7dc0bbc88b47ab85f744 What are your next steps for https://fedorahosted.org/freeipa/ticket/4922, replace "BuildRequires: check" with cmocka BuildRequires in the FreeIPA referential spec file? From mkosek at redhat.com Wed Mar 18 12:42:10 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 18 Mar 2015 13:42:10 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <55096B75.6010304@redhat.com> References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com> <5501AF32.6040503@redhat.com> <5501B8FC.4030408@redhat.com> <5501BA15.2010803@redhat.com> <5501BF0B.8080503@redhat.com> <55096B75.6010304@redhat.com> Message-ID: <550972A2.3080005@redhat.com> On 03/18/2015 01:11 PM, Martin Basti wrote: > On 12/03/15 17:30, Martin Basti wrote: >> On 12/03/15 17:08, Rob Crittenden wrote: >>> Martin Basti wrote: >>>> On 12/03/15 16:22, Rob Crittenden wrote: >>>>> David Kupka wrote: >>>>>> On 03/06/2015 04:52 PM, Martin Basti wrote: >>>>>>> This upgrade step is not used anymore. 
>>>>>>> >>>>>>> Required by: https://fedorahosted.org/freeipa/ticket/4904 >>>>>>> >>>>>>> Patch attached. >>>>>>> >>>>>>> >>>>>>> >>>>>> Looks and works good to me, ACK. >>>>> Is this going away because one can simply create an update file that >>>>> exists alphabetically before the schema update? If so then ACK. >>>>> >>>>> rob >>>> No this never works, and will not work without changes in DS, I was >>>> discussing this with DS guys. If you add new replica to schema, the >>>> schema has to be there before data replication. >>>> >>>> Martin >>>> >>> That's a rather narrow case though. You could make changes that only >>> affect existing schema, or something in cn=config. >>> >>> rob >> Let summarize this: >> * It is unused code >> * we have schema update to modify schema (is there any extra requirement to >> modify schema before schema update? I though the schema update replace old >> schema with new) >> * it is not usable on new replicas (why to modify up to date schema?, why to >> modify new configuration?) >> * we can not use this to update data >> * only way how we can us this is to change non-replicating data, on current >> server. >> >> However, might there be really need to update cn=config before schema update? >> >> Martin >> > IMO this patch can be pushed. > > It removes the unused and broken code. To implement this feature we need design > it in proper way first. > > Is there any objections? Works for me, if it was broken anyway and there is no use case for it, yet. From sbose at redhat.com Wed Mar 18 12:58:50 2015 From: sbose at redhat.com (Sumit Bose) Date: Wed, 18 Mar 2015 13:58:50 +0100 Subject: [Freeipa-devel] [PATCH 140] extdom: migrate check-based test to cmocka In-Reply-To: <5509715C.80100@redhat.com> References: <20150304174205.GU3271@p.redhat.com> <20150313105646.GK13715@p.redhat.com> <20150313141455.GE19530@hendrix.arn.redhat.com> <20150318100135.GH9952@p.redhat.com> <20150318102214.GI4854@hendrix.redhat.com> <55096720.7060504@redhat.com> <20150318123258.GK9952@p.redhat.com> <5509715C.80100@redhat.com> Message-ID: <20150318125850.GL9952@p.redhat.com> On Wed, Mar 18, 2015 at 01:36:44PM +0100, Martin Kosek wrote: > On 03/18/2015 01:32 PM, Sumit Bose wrote: > > On Wed, Mar 18, 2015 at 12:53:04PM +0100, Martin Kosek wrote: > >> On 03/18/2015 11:22 AM, Jakub Hrozek wrote: > >>> On Wed, Mar 18, 2015 at 11:01:35AM +0100, Sumit Bose wrote: > >>>> On Fri, Mar 13, 2015 at 03:14:55PM +0100, Jakub Hrozek wrote: > >>>>> On Fri, Mar 13, 2015 at 11:56:46AM +0100, Sumit Bose wrote: > >>>>>> On Wed, Mar 04, 2015 at 06:42:05PM +0100, Sumit Bose wrote: > >>>>>>> Hi, > >>>>>>> > >>>>>>> this is the first patch for https://fedorahosted.org/freeipa/ticket/4922 > >>>>>>> which converts the check-based tests of the extdom plugin to cmocka. > >>>>>>> > >>>>>>> bye, > >>>>>>> Sumit > >>>>>> > >>>>>> Rebased version attached. > >>>>>> > >>>>>> bye, > >>>>>> Sumit > >>>>> > >>>>> The test itself is fine, but did freeipa consider moving to cmocka-1.0+ > >>>>> to avoid warnings like: > >>>>> ipa_extdom_cmocka_tests.c: In function ?main?: > >>>>> ipa_extdom_cmocka_tests.c:408:5: warning: ?_run_tests? is deprecated > >>>>> (declared at /usr/include/cmocka.h:2001) [-Wdeprecated-declarations] > >>>>> return run_tests(tests); > >>>>> > >>>>> But I'm fine with ACKing this patch, the conversion should be done > >>>>> separately. > >>>> > >>>> yes, I found it more flexible to use the old style now because it works > >>>> with all versions of cmocka. 
When I converted the remaining check based > >>>> test to cmocka I will provide a patch which will switch all to the new > >>>> version. > >>>> > >>>> bye, > >>>> Sumit > >>> > >>> Sure, ACK > >>> > >>> (sorry, from my point of view it was obvious I was OK with pushing even > >>> the previous version. Hopefully it's even clearer now :-)) > >>> > >> > >> Thanks for this work guys! I would push it, but there is a conflict in ipa_extdom. > > > > Rebased version attached. > > > > bye, > > Sumit > > > > Thanks! Pushed to master: d0d79ada379ab809c26a7dc0bbc88b47ab85f744 > > What are your next steps for https://fedorahosted.org/freeipa/ticket/4922, > replace "BuildRequires: check" with cmocka BuildRequires in the FreeIPA > referential spec file? yes, the next patch here would be to migrate ipa-kdb/tests/ipa_kdb_tests.c to cmocka and replace the BuildRequires. As a last step, if no related issues have shown up on the SSSD side, make the tests compile with cmocka 1.0 without warnings and add the version to the BuildRequires. bye, Sumit From simo at redhat.com Wed Mar 18 13:28:25 2015 From: simo at redhat.com (Simo Sorce) Date: Wed, 18 Mar 2015 09:28:25 -0400 Subject: [Freeipa-devel] topology plugin - again need for input In-Reply-To: <55095F1F.60205@redhat.com> References: <55095F1F.60205@redhat.com> Message-ID: <1426685305.2981.110.camel@willson.usersys.redhat.com> On Wed, 2015-03-18 at 12:18 +0100, Ludwig Krispenz wrote: > Hi, > > I need your feedback on a problem with implementing the topology plugin: > marking an replication agreement, this seems to be a never ending story > > We want o mark an agreement when it is creqated by the plugin or put > under control of the plugin by raising the domain level. > The first idea was to rename the agreement, but this failed because DS > does not support MODRDN on the cn=config backend and on second thought > using a naming convetion on the rdn of the agreement entry seems to be > not the best idea. > The next approach was to use an attribute in the the agreement itself, > and I just used description, which is multivalued and I added a > description value "managed agreement ....". > This works, but didn't get Simo's blessing and we agreed just to add a > new objectclass "ipaReplTopoManagedAgreement", which could be used > without extenting the core replication schema. > I think this is the best solution, but unfortunately it fails. > replication code is called when an agreement is modified and it accepts > only modifications for a defined set of replication agreement attributes > - other mods are rejected with UNWILLING_TO_PERORM. > > I think we could enhance DS to accept a wider range of changes to the > replication agreement (it already does it for winsync agreements), but > this would add a new dependency on a specific DS version where this > change is included. > > > Do you think this dependency is acceptable (topology plugin is targeted > to 4.2) ? or do we need to find another clever solution or use the not > so nice "description" way ? A dependency on a specific version of DS is just fine IMO. Simo. 
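To make the plan concrete: once DS accepts additional attributes on replication agreement entries, marking an agreement as managed reduces to a single LDAP modify of the agreement entry under cn=config. Below is a rough python-ldap sketch; the agreement DN and credentials are placeholders, the objectclass name is the one proposed in this thread, and with current DS this same modify is exactly what gets rejected with UNWILLING_TO_PERFORM:

import ldap

# Placeholder DN and credentials; real agreement DNs depend on the deployment.
AGREEMENT_DN = ("cn=meToreplica2.example.com,cn=replica,"
                "cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config")

conn = ldap.initialize("ldap://localhost")
conn.simple_bind_s("cn=Directory Manager", "Secret123")

# Add the auxiliary objectclass that marks the agreement as managed by the
# topology plugin.  Today DS refuses this because only a fixed set of
# replication attributes may be modified on agreement entries.
conn.modify_s(AGREEMENT_DN, [
    (ldap.MOD_ADD, "objectClass", [b"ipaReplTopoManagedAgreement"]),
])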
-- Simo Sorce * Red Hat, Inc * New York From lkrispen at redhat.com Wed Mar 18 13:38:21 2015 From: lkrispen at redhat.com (Ludwig Krispenz) Date: Wed, 18 Mar 2015 14:38:21 +0100 Subject: [Freeipa-devel] topology plugin - again need for input In-Reply-To: <1426685305.2981.110.camel@willson.usersys.redhat.com> References: <55095F1F.60205@redhat.com> <1426685305.2981.110.camel@willson.usersys.redhat.com> Message-ID: <55097FCD.4060607@redhat.com> On 03/18/2015 02:28 PM, Simo Sorce wrote: > On Wed, 2015-03-18 at 12:18 +0100, Ludwig Krispenz wrote: >> Hi, >> >> I need your feedback on a problem with implementing the topology plugin: >> marking an replication agreement, this seems to be a never ending story >> >> We want o mark an agreement when it is creqated by the plugin or put >> under control of the plugin by raising the domain level. >> The first idea was to rename the agreement, but this failed because DS >> does not support MODRDN on the cn=config backend and on second thought >> using a naming convetion on the rdn of the agreement entry seems to be >> not the best idea. >> The next approach was to use an attribute in the the agreement itself, >> and I just used description, which is multivalued and I added a >> description value "managed agreement ....". >> This works, but didn't get Simo's blessing and we agreed just to add a >> new objectclass "ipaReplTopoManagedAgreement", which could be used >> without extenting the core replication schema. >> I think this is the best solution, but unfortunately it fails. >> replication code is called when an agreement is modified and it accepts >> only modifications for a defined set of replication agreement attributes >> - other mods are rejected with UNWILLING_TO_PERORM. >> >> I think we could enhance DS to accept a wider range of changes to the >> replication agreement (it already does it for winsync agreements), but >> this would add a new dependency on a specific DS version where this >> change is included. >> >> >> Do you think this dependency is acceptable (topology plugin is targeted >> to 4.2) ? or do we need to find another clever solution or use the not >> so nice "description" way ? > A dependency on a specific version of DS is just fine IMO. thanks, I'll enhance DS then Ludwig > > Simo. > From jcholast at redhat.com Wed Mar 18 17:01:47 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 18 Mar 2015 18:01:47 +0100 Subject: [Freeipa-devel] [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client In-Reply-To: <54FECE02.1070407@redhat.com> References: <20150304174747.GV3271@p.redhat.com> <20150305062828.GJ25455@redhat.com> <54FECE02.1070407@redhat.com> Message-ID: <5509AF7B.6030401@redhat.com> Dne 10.3.2015 v 11:57 Tomas Babej napsal(a): > > On 03/05/2015 07:28 AM, Alexander Bokovoy wrote: >> On Wed, 04 Mar 2015, Sumit Bose wrote: >>> Hi, >>> >>> with this patch the extdom plugin will properly indicate to a client if >>> the search object does not exist instead of returning a generic error. >>> This is important for the client to act accordingly and improve >>> debugging possibilities. 
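For a client, the visible difference made by this extdom change is that python-ldap (like any LDAP client library) maps the two result codes to distinct exceptions, so "the requested object does not exist" can be told apart from "the server failed to process the request". A hedged sketch of that handling — the request-building helper and the OID constant are stand-ins for illustration, not the real extdom request encoding:

import ldap
from ldap.extop import ExtendedRequest

EXTDOM_OID = "2.16.840.1.113730.3.8.10.4"  # placeholder value for this sketch

def build_extdom_request():
    # Hypothetical helper: a real request is a BER-encoded sequence like the
    # test vectors in the cmocka patch earlier in this thread.
    return ExtendedRequest(EXTDOM_OID, b"...")

conn = ldap.initialize("ldap://ipa.example.com")
try:
    conn.extop_s(build_extdom_request())
except ldap.NO_SUCH_OBJECT:
    # With this patch: the looked-up user/group/SID simply does not exist.
    print("object not found")
except ldap.OPERATIONS_ERROR:
    # Previously returned for both cases; now reserved for real failures.
    print("server failed to handle the request")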
>>> >>> bye, >>> Sumit >> >>> From 3895fa21524efc3a22bfb36b1a9aa34277b8dd46 Mon Sep 17 00:00:00 2001 >>> From: Sumit Bose >>> Date: Wed, 4 Mar 2015 13:39:04 +0100 >>> Subject: [PATCH] extdom: return LDAP_NO_SUCH_OBJECT to the client >>> >>> --- >>> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 8 >>> ++++++-- >>> 1 file changed, 6 insertions(+), 2 deletions(-) >>> >>> diff --git >>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> index >>> a70ed20f1816a7e00385edae8a81dd5dad9e9362..a040f2beba073d856053429face2f464347b2524 >>> 100644 >>> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>> @@ -123,8 +123,12 @@ static int ipa_extdom_extop(Slapi_PBlock *pb) >>> >>> ret = handle_request(ctx, req, &ret_val); >>> if (ret != LDAP_SUCCESS) { >>> - rc = LDAP_OPERATIONS_ERROR; >>> - err_msg = "Failed to handle the request.\n"; >>> + if (ret == LDAP_NO_SUCH_OBJECT) { >>> + rc = LDAP_NO_SUCH_OBJECT; >>> + } else { >>> + rc = LDAP_OPERATIONS_ERROR; >>> + err_msg = "Failed to handle the request.\n"; >>> + } >>> goto done; >>> } >>> >>> -- >>> 2.1.0 >>> >> ACK. >> > > Pushed to master: 024463804c0c73e89ed76e709a838762a8302f04 > and to ipa-4-1: c55632374d3b41e23521461667da1699a7264947 -- Jan Cholasta From jcholast at redhat.com Wed Mar 18 17:03:37 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 18 Mar 2015 18:03:37 +0100 Subject: [Freeipa-devel] [PATCH 142] extdom: fix memory leak In-Reply-To: <54FED22D.5030704@redhat.com> References: <20150304175122.GW3271@p.redhat.com> <20150305063438.GK25455@redhat.com> <54F7FC45.5050009@redhat.com> <20150305070012.GL25455@redhat.com> <54FECEA1.9090000@redhat.com> <20150310111020.GE7307@p.redhat.com> <54FED22D.5030704@redhat.com> Message-ID: <5509AFE9.8020406@redhat.com> Dne 10.3.2015 v 12:14 Tomas Babej napsal(a): > > On 03/10/2015 12:10 PM, Sumit Bose wrote: >> On Tue, Mar 10, 2015 at 11:59:45AM +0100, Tomas Babej wrote: >>> On 03/05/2015 08:00 AM, Alexander Bokovoy wrote: >>>> On Wed, 04 Mar 2015, Nathan Kinder wrote: >>>>> >>>>> On 03/04/2015 10:34 PM, Alexander Bokovoy wrote: >>>>>> On Wed, 04 Mar 2015, Sumit Bose wrote: >>>>>>> Hi, >>>>>>> >>>>>>> while running 389ds with valgrind to see if my other patches >>>>>>> introduced >>>>>>> a memory leak I found an older one which is fixed by this patch. 
>>>>>>> >>>>>>> bye, >>>>>>> Sumit >>>>>> >From bb02cdc135fecc1766b17edd61554dbde9bccd0b Mon Sep 17 00:00:00 >>>>>> 2001 >>>>>>> From: Sumit Bose >>>>>>> Date: Wed, 4 Mar 2015 17:53:08 +0100 >>>>>>> Subject: [PATCH] extdom: fix memory leak >>>>>>> >>>>>>> --- >>>>>>> daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 1 + >>>>>>> 1 file changed, 1 insertion(+) >>>>>>> >>>>>>> diff --git >>>>>>> a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>>> b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>>> index >>>>>>> a040f2beba073d856053429face2f464347b2524..708d0e4a2fc9da4f87a24a49c945587049f7280f >>>>>>> >>>>>>> >>>>>>> 100644 >>>>>>> --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>>> +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >>>>>>> @@ -156,6 +156,7 @@ done: >>>>>>> LOG("%s", err_msg); >>>>>>> } >>>>>>> slapi_send_ldap_result(pb, rc, NULL, err_msg, 0, NULL); >>>>>>> + ber_bvfree(ret_val); >>>>>>> free_req_data(req); >>>>>>> return SLAPI_PLUGIN_EXTENDED_SENT_RESULT; >>>>>>> } >>>>>> I can see in 389-ds code that it actually tries to remove the >>>>>> value in >>>>>> the end of extended operation handling: >>>>> This below code snippet is freeing the extended operation request >>>>> value >>>>> (SLAPI_EXT_OP_REQ_VALUE), not the return value (SLAPI_EXT_OP_RET_VAL). >>>>> >>>>> If you look at check_and_send_extended_result() in the 389-ds code, >>>>> you'll see where the extended operation return value is sent, and it >>>>> doesn't perform a free. It is up to the plug-in to perform the >>>>> free. A >>>>> good example of this in the 389-ds code is in the >>>>> passwd_modify_extop() >>>>> function. >>>>> >>>>> >>>>> Sumit's code looks good to me. ACK. >>>> Argh. Sorry for confusion of RET vs REQ. Morning, I need coffee! >>>> >>>> ACK. >>>> >>>>> -NGK >>>>> >>>>>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_OID, extoid ); >>>>>> slapi_pblock_set( pb, SLAPI_EXT_OP_REQ_VALUE, &extval ); >>>>>> slapi_pblock_set( pb, SLAPI_REQUESTOR_ISROOT, >>>>>> &pb->pb_op->o_isroot); >>>>>> >>>>>> rc = plugin_call_exop_plugins( pb, extoid ); >>>>>> >>>>>> if ( SLAPI_PLUGIN_EXTENDED_SENT_RESULT != rc ) { >>>>>> if ( SLAPI_PLUGIN_EXTENDED_NOT_HANDLED == rc ) { >>>>>> lderr = LDAP_PROTOCOL_ERROR; /* no plugin >>>>>> handled the op */ >>>>>> errmsg = "unsupported extended operation"; >>>>>> } else { >>>>>> errmsg = NULL; >>>>>> lderr = rc; >>>>>> } >>>>>> send_ldap_result( pb, lderr, NULL, errmsg, 0, NULL ); >>>>>> } >>>>>> free_and_return: >>>>>> if (extoid) >>>>>> slapi_ch_free((void **)&extoid); >>>>>> if (extval.bv_val) >>>>>> slapi_ch_free((void **)&extval.bv_val); >>>>>> return; >>>>>> >>>>>> >>> The patch does not apply to the current master branch. >>> >>> Sumit, can you send a updated version? >> sure, new version attached. 
>> >> bye, >> Sumit >> >>> Thanks, >>> Tomas > Thanks, > > Pushed to master: 8dac096ae3a294dc55b32b69b873013fd687e945 > and to ipa-4-1: 179be3c222a9d27a147d5c0ff4be45e7def9b2d5 -- Jan Cholasta From tbordaz at redhat.com Wed Mar 18 18:39:48 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Wed, 18 Mar 2015 19:39:48 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <5507D13E.7040107@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> <5507D13E.7040107@redhat.com> Message-ID: <5509C674.90104@redhat.com> On 03/17/2015 08:01 AM, Jan Cholasta wrote: > Dne 16.3.2015 v 12:06 David Kupka napsal(a): >> On 03/06/2015 07:30 PM, thierry bordaz wrote: >>> On 02/19/2015 04:19 PM, Martin Basti wrote: >>>> On 19/02/15 13:01, thierry bordaz wrote: >>>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>>> Hi, >>>>>> >>>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>>> >>>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>>> stageuser-add. >>>>>>>>>>> Stage >>>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>>> >>>>>>>>>>> Thanks >>>>>>>>>>> thierry >>>>>>>>>> >>>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; >>>>>>>>>> instead >>>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>>>>> >>>>>>>>>> The stageuser help (docstring) is copied from the user >>>>>>>>>> plugin, and >>>>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>>>> should >>>>>>>>>> rather explain what stageuser itself does. (And I don't very >>>>>>>>>> much >>>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>>>> user" >>>>>>>>>> or "stageuser". >>>>>>>>>> >>>>>>>>>> A lot of the code is copied and pasted over from the users >>>>>>>>>> plugin. >>>>>>>>>> Don't do that. Either import things (e.g. >>>>>>>>>> validate_nsaccountlock) >>>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>>> module. >>>>>>>>>> >>>>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>>>> best to >>>>>>>>>> create a common base class for user and stageuser; and similarly >>>>>>>>>> for >>>>>>>>>> the Command plugins. >>>>>>>>>> >>>>>>>>>> The default permissions need different names, and you don't need >>>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>>> script. >>>>>>>>>> >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> This modified patch is mainly moving common base class into a >>>>>>>>> new >>>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>>> accounts. 
>>>>>>>>> It also creates a better description of what are stage user, >>>>>>>>> how >>>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>>> active/stage >>>>>>>>> user managed permissions. >>>>>>>>> >>>>>>>>> thanks >>>>>>>>> thierry >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Freeipa-devel mailing list >>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>> >>>>>>>> >>>>>>>> Thanks David for the reviews. Here the last patches >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Freeipa-devel mailing list >>>>>>>> Freeipa-devel at redhat.com >>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>> >>>>>>> >>>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>>> lines so >>>>>>> I'm attaching fixed version (and unchanged patch >>>>>>> freeipa-tbordaz-0003-3 >>>>>>> to keep them together). >>>>>>> >>>>>>> The ULC feature is still WIP but these patches look good to me and >>>>>>> don't >>>>>>> break anything as far as I tested. >>>>>>> We should push them now to avoid further rebases. Thierry can then >>>>>>> prepare other patches delivering the rest of ULC functionality. >>>>>> >>>>>> Few comments from just reading the patches: >>>>>> >>>>>> 1) I would name the base class "baseuser", "account" does not >>>>>> necessarily mean user account. >>>>>> >>>>>> 2) This is very wrong: >>>>>> >>>>>> -class user_add(LDAPCreate): >>>>>> +class user_add(user, LDAPCreate): >>>>>> >>>>>> You are creating a plugin which is both an object and an command. >>>>>> >>>>>> 3) This is purely subjective, but I don't like the name >>>>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>>>> IMHO we shouldn't do that. >>>>>> >>>>>> Honza >>>>>> >>>>> >>>>> Thank you for the review. I am attaching the updates patches >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Freeipa-devel mailing list >>>>> Freeipa-devel at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>> Hello, >>>> I'm getting errors during make rpms: >>>> >>>> if [ "" != "yes" ]; then \ >>>> ./makeapi --validate; \ >>>> ./makeaci --validate; \ >>>> fi >>>> >>>> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >>>> doc is not internationalized >>>> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >>>> doc is not internationalized >>>> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >>>> doc is not internationalized >>>> 0 commands without doc, 3 commands whose doc is not i18n >>>> Command baseuser_add in ipalib, not in API >>>> Command baseuser_find in ipalib, not in API >>>> Command baseuser_mod in ipalib, not in API >>>> >>>> There are one or more new commands defined. >>>> Update API.txt and increment the minor version in VERSION. >>>> >>>> There are one or more documentation problems. 
>>>> You must fix these before preceeding >>>> >>>> Issues probably caused by this: >>>> 1) >>>> You should not use the register decorator, if this class is just for >>>> inheritance >>>> @register() >>>> class baseuser_add(LDAPCreate): >>>> >>>> @register() >>>> class baseuser_mod(LDAPUpdate): >>>> >>>> @register() >>>> class baseuser_find(LDAPSearch): >>>> >>>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>>> >>>> 2) >>>> there might be an issue with >>>> @register() >>>> class baseuser(LDAPObject): >>>> >>>> the register decorator should not be there, I was warned by Petr^3 to >>>> not use permission in parent class. The same permission should be >>>> specified only in one place (for example user class), (otherwise they >>>> will be generated twice??) I don't know more details about it. >>>> >>>> -- >>>> Martin Basti >>> >>> Hello Martin, Jan, >>> >>> Thanks for your review. >>> I changed the patch so that it does not register baseuser_*. Also >>> increase the minor version because of new command. >>> Finally I moved the managed_permission definition out of the parent >>> baseuser class. >>> >>> >>> >>> >>> >> >> Martin, could you please verify that the issues you encountered are >> fixed? >> >> Thanks! >> > > You bumped wrong version variable: > > -IPA_VERSION_MINOR=1 > +IPA_VERSION_MINOR=2 > > It should have been IPA_API_VERSION_MINOR (at the bottom of the file), > including the last change comment below it. > > > IMO baseuser should include superclasses for all the usual commands > (add, mod, del, show, find) and stageuser/deleteuser commands should > inherit from them. > > > You don't need to override class properties like active_container_dn > and takes_params on baseuser subclasses when they have the same value > as in baseuser. > > > Honza > Hello Honza, Thanks for the review. I did the modifications you recommended within that attached patches * Change version * create the baseuser_* plugins commands and use them in the user/stageuser plugin commands * Do not redefine the class properties in the subclasses. Thanks thierry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0002-User-life-cycle-stageuser-add-verb.patch Type: text/x-patch Size: 60937 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-User-Life-Cycle-Exclude-subtree-for-ipaUniqueID-gene.patch Type: text/x-patch Size: 2588 bytes Desc: not available URL: From mkoci at redhat.com Wed Mar 18 19:18:10 2015 From: mkoci at redhat.com (Martin Koci) Date: Wed, 18 Mar 2015 20:18:10 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow Message-ID: <1426706290.3265.10.camel@redhat.com> Hi, working with Test Plans for 4.2 features I'd like to outline workflow for test plans. The main aim is to have something documented and more clear. So I'd like to start with track ticket options. For better tracking and managing tickets we could consider 3 new fields in the track ticket. - First field for tracking link for test plan/test case which we can refer to. We would call this field just "Test". - The second field for tracking tester (QE). Let's called this field "QA Contact". This field should inform about contributor (QE). That means - you can track unassigned bugs (tracks), assigned but not reviewed (the work hasn't started yet), and finished ("-", "+" - see below). 
According to this you can also see who is overloaded and who can help to the others. - The third one for tracking states or flags for test coverage. Something like "QE Test Coverage" with four states/flags: * Review is needed - " " (review is required for this issue) Empty "QE Test Coverage" should mean "Hey, I need to be reviewed and considered whether some test is needed or not". * Test is not needed - "-" (Test is not required for this issue) * Test exists - "+" (Test already exists - QE done) * Test in progress - "?" (Test is required and QE {will} works on test{s}) I can imagine some naming instead of flags +,-,?, ,. Any ideas? In the track ticket should be described the issue or link to design page to get all necessary information for coverage. QE changes state "QE Test Coverage" (Test in progress - "?") and creates test plan on wiki [1] and add this link to "Test" field in the track. Then QE informs freeipa-devel list about test plan if it's OK providing the link to test plan. If it's OK QE will start work on particular test cases. When QE is done then changes state "QE Test Coverage" to "+" (test exists). As well I'd like to propose the possibility that ticket will not be closed until QE is done with test? Hope it makes sense to you. Can I get your thoughts on this, please? Thanks, /koca *[1] - http://www.freeipa.org/page/Main_Page From jcholast at redhat.com Thu Mar 19 06:37:11 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 19 Mar 2015 07:37:11 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <5509C674.90104@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> <5507D13E.7040107@redhat.com> <5509C674.90104@redhat.com> Message-ID: <550A6E97.9010103@redhat.com> Dne 18.3.2015 v 19:39 thierry bordaz napsal(a): > On 03/17/2015 08:01 AM, Jan Cholasta wrote: >> Dne 16.3.2015 v 12:06 David Kupka napsal(a): >>> On 03/06/2015 07:30 PM, thierry bordaz wrote: >>>> On 02/19/2015 04:19 PM, Martin Basti wrote: >>>>> On 19/02/15 13:01, thierry bordaz wrote: >>>>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>>>> Hi, >>>>>>> >>>>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>>>> Hi, >>>>>>>>>>>> >>>>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>>>> >>>>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>>>> stageuser-add. >>>>>>>>>>>> Stage >>>>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>>>> >>>>>>>>>>>> Thanks >>>>>>>>>>>> thierry >>>>>>>>>>> >>>>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; >>>>>>>>>>> instead >>>>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>>>>>> >>>>>>>>>>> The stageuser help (docstring) is copied from the user >>>>>>>>>>> plugin, and >>>>>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>>>>> should >>>>>>>>>>> rather explain what stageuser itself does. 
(And I don't very >>>>>>>>>>> much >>>>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>>>>> user" >>>>>>>>>>> or "stageuser". >>>>>>>>>>> >>>>>>>>>>> A lot of the code is copied and pasted over from the users >>>>>>>>>>> plugin. >>>>>>>>>>> Don't do that. Either import things (e.g. >>>>>>>>>>> validate_nsaccountlock) >>>>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>>>> module. >>>>>>>>>>> >>>>>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>>>>> best to >>>>>>>>>>> create a common base class for user and stageuser; and similarly >>>>>>>>>>> for >>>>>>>>>>> the Command plugins. >>>>>>>>>>> >>>>>>>>>>> The default permissions need different names, and you don't need >>>>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>>>> script. >>>>>>>>>>> >>>>>>>>>> Hello, >>>>>>>>>> >>>>>>>>>> This modified patch is mainly moving common base class into a >>>>>>>>>> new >>>>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>>>> accounts. >>>>>>>>>> It also creates a better description of what are stage user, >>>>>>>>>> how >>>>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>>>> active/stage >>>>>>>>>> user managed permissions. >>>>>>>>>> >>>>>>>>>> thanks >>>>>>>>>> thierry >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>> >>>>>>>>> >>>>>>>>> Thanks David for the reviews. Here the last patches >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Freeipa-devel mailing list >>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>> >>>>>>>> >>>>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>>>> lines so >>>>>>>> I'm attaching fixed version (and unchanged patch >>>>>>>> freeipa-tbordaz-0003-3 >>>>>>>> to keep them together). >>>>>>>> >>>>>>>> The ULC feature is still WIP but these patches look good to me and >>>>>>>> don't >>>>>>>> break anything as far as I tested. >>>>>>>> We should push them now to avoid further rebases. Thierry can then >>>>>>>> prepare other patches delivering the rest of ULC functionality. >>>>>>> >>>>>>> Few comments from just reading the patches: >>>>>>> >>>>>>> 1) I would name the base class "baseuser", "account" does not >>>>>>> necessarily mean user account. >>>>>>> >>>>>>> 2) This is very wrong: >>>>>>> >>>>>>> -class user_add(LDAPCreate): >>>>>>> +class user_add(user, LDAPCreate): >>>>>>> >>>>>>> You are creating a plugin which is both an object and an command. >>>>>>> >>>>>>> 3) This is purely subjective, but I don't like the name >>>>>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>>>>> IMHO we shouldn't do that. >>>>>>> >>>>>>> Honza >>>>>>> >>>>>> >>>>>> Thank you for the review. 
I am attaching the updates patches >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Freeipa-devel mailing list >>>>>> Freeipa-devel at redhat.com >>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>> Hello, >>>>> I'm getting errors during make rpms: >>>>> >>>>> if [ "" != "yes" ]; then \ >>>>> ./makeapi --validate; \ >>>>> ./makeaci --validate; \ >>>>> fi >>>>> >>>>> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >>>>> doc is not internationalized >>>>> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >>>>> doc is not internationalized >>>>> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >>>>> doc is not internationalized >>>>> 0 commands without doc, 3 commands whose doc is not i18n >>>>> Command baseuser_add in ipalib, not in API >>>>> Command baseuser_find in ipalib, not in API >>>>> Command baseuser_mod in ipalib, not in API >>>>> >>>>> There are one or more new commands defined. >>>>> Update API.txt and increment the minor version in VERSION. >>>>> >>>>> There are one or more documentation problems. >>>>> You must fix these before preceeding >>>>> >>>>> Issues probably caused by this: >>>>> 1) >>>>> You should not use the register decorator, if this class is just for >>>>> inheritance >>>>> @register() >>>>> class baseuser_add(LDAPCreate): >>>>> >>>>> @register() >>>>> class baseuser_mod(LDAPUpdate): >>>>> >>>>> @register() >>>>> class baseuser_find(LDAPSearch): >>>>> >>>>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>>>> >>>>> 2) >>>>> there might be an issue with >>>>> @register() >>>>> class baseuser(LDAPObject): >>>>> >>>>> the register decorator should not be there, I was warned by Petr^3 to >>>>> not use permission in parent class. The same permission should be >>>>> specified only in one place (for example user class), (otherwise they >>>>> will be generated twice??) I don't know more details about it. >>>>> >>>>> -- >>>>> Martin Basti >>>> >>>> Hello Martin, Jan, >>>> >>>> Thanks for your review. >>>> I changed the patch so that it does not register baseuser_*. Also >>>> increase the minor version because of new command. >>>> Finally I moved the managed_permission definition out of the parent >>>> baseuser class. >>>> >>>> >>>> >>>> >>>> >>> >>> Martin, could you please verify that the issues you encountered are >>> fixed? >>> >>> Thanks! >>> >> >> You bumped wrong version variable: >> >> -IPA_VERSION_MINOR=1 >> +IPA_VERSION_MINOR=2 >> >> It should have been IPA_API_VERSION_MINOR (at the bottom of the file), >> including the last change comment below it. >> >> >> IMO baseuser should include superclasses for all the usual commands >> (add, mod, del, show, find) and stageuser/deleteuser commands should >> inherit from them. >> >> >> You don't need to override class properties like active_container_dn >> and takes_params on baseuser subclasses when they have the same value >> as in baseuser. >> >> >> Honza >> > Hello Honza, > > Thanks for the review. I did the modifications you recommended > within that attached patches > > * Change version Please also update the comment below (e.g. "# Last change: tbordaz - Add stageuser_add command") > * create the baseuser_* plugins commands and use them in the > user/stageuser plugin commands > * Do not redefine the class properties in the subclasses. 
There are still some in baseuser command classes: +class baseuser_add(LDAPCreate): + """ + Prototype command plugin to be implemented by real plugin + """ + active_container_dn = api.env.container_user + has_output_params = LDAPCreate.has_output_params You don't need to set active_container_dn here, you only need to set it in baseuser. Then in stageuser_add and other subclasses you use "self.obj.active_container_dn" instead of "self.active_container_dn". You also don't need to override has_output_params if you are not changing its value - you are inheriting from LDAPCreate, so baseuser_add.has_output_params implicitly has the same value as LDAPCreate.has_output_params. > > Thanks > thierry > -- Jan Cholasta From mkosek at redhat.com Thu Mar 19 07:57:31 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 19 Mar 2015 08:57:31 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <1426706290.3265.10.camel@redhat.com> References: <1426706290.3265.10.camel@redhat.com> Message-ID: <550A816B.2020501@redhat.com> On 03/18/2015 08:18 PM, Martin Koci wrote: > Hi, > working with Test Plans for 4.2 features I'd like to outline workflow > for test plans. The main aim is to have something documented and more > clear. > So I'd like to start with track ticket options. For better tracking and > managing tickets we could consider 3 new fields in the track ticket. track --> Trac > > - First field for tracking link for test plan/test case which we can > refer to. We would call this field just "Test". I would name it "Test Case", if it would link to Test Case on wiki/other tool. With "Test", I would rather understand it to be a link to the actual tests. > - The second field for tracking tester (QE). Let's called this field "QA > Contact". This field should inform about contributor (QE). That means - > you can track unassigned bugs (tracks), assigned but not reviewed (the > work hasn't started yet), and finished ("-", "+" - see below). According > to this you can also see who is overloaded and who can help to the > others. Right now, we have following person-related fields (in form name - label): reporter - Reported by reviewer - Patch review by So for consistency sake, what about: tester - Test by > - The third one for tracking states or flags for test coverage. > Something like "QE Test Coverage" with four states/flags: > > * Review is needed - " " (review is required for this issue) > Empty "QE Test Coverage" should mean "Hey, I need to be reviewed and > considered whether some test is needed or not". > * Test is not needed - "-" (Test is not required for this issue) > * Test exists - "+" (Test already exists - QE done) > * Test in progress - "?" (Test is required and QE {will} works on > test{s}) > > I can imagine some naming instead of flags +,-,?, ,. Any ideas? Right, this is the most important one as we will use it which RFEs/tickets we want to cover with tests and which not. The states make sense (we come up with them together anyway), +/-/?/ / makes sense, alternatively we can do: Test Coverage: yes/no/in progress/ / > In the track ticket should be described the issue or link to design page > to get all necessary information for coverage. QE changes state "QE Test > Coverage" (Test in progress - "?") and creates test plan on wiki [1] and > add this link to "Test" field in the track. Then QE informs > freeipa-devel list about test plan if it's OK providing the link to test > plan. If it's OK QE will start work on particular test cases. 
When QE is > done then changes state "QE Test Coverage" to "+" (test exists). Makes sense for big features, yes. What about smaller features or bug fixes? Is a test plan also required, or would it be sufficient to just add one or two tests to the XMLRPC tests? > As well I'd like to propose the possibility that ticket will not be > closed until QE is done with test? Still not sure, the requirement is that the test would need to be finished before GA, as the ticket must be in the milestone where the code is released. I personally do not see a problem with closing the ticket and leaving it with "Test Coverage: in progress", which would then be changed to "yes" when the test is done. It is easy to filter tickets with unfinished tests, even if they are closed. > Hope it makes sense to you. > Can I get your thoughts on this, please? > > Thanks, > /koca > *[1] - http://www.freeipa.org/page/Main_Page > > From pspacek at redhat.com Thu Mar 19 08:25:51 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 19 Mar 2015 09:25:51 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <550A816B.2020501@redhat.com> References: <1426706290.3265.10.camel@redhat.com> <550A816B.2020501@redhat.com> Message-ID: <550A880F.3040303@redhat.com> Hello, I do not have much to add to the process itself. After a first reading it seems pretty heavyweight, but let's try it, it can be refined at any time :-) On 19.3.2015 08:57, Martin Kosek wrote: > Test Coverage: yes/no/in progress/ / I'm very much in favor of more descriptive names like: covered/not necessary/test in works/undecided It would allow random passers-by (like me :-) to understand what is going on without having to study some internal processes. -- Petr^2 Spacek From mkosek at redhat.com Thu Mar 19 09:11:28 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 19 Mar 2015 10:11:28 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <550A880F.3040303@redhat.com> References: <1426706290.3265.10.camel@redhat.com> <550A816B.2020501@redhat.com> <550A880F.3040303@redhat.com> Message-ID: <550A92C0.50704@redhat.com> On 03/19/2015 09:25 AM, Petr Spacek wrote: > Hello, > > I do not have much to add to the process itself. After a first reading it seems > pretty heavyweight, but let's try it, it can be refined at any time :-) Right, but then we would need to migrate the data about test completion and so on - which is more work. So it is much better to define something working now than to change it a couple of months later. We were already trying to invent something as lightweight as possible, and this was the minimal set of new fields we came up with to be able to track the test coverage and plans. If you have another proposal for how to track it better, I would love to hear it, really :-) From jcholast at redhat.com Thu Mar 19 09:20:59 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 19 Mar 2015 10:20:59 +0100 Subject: [Freeipa-devel] [PATCHES 404-407] client-install: Do not crash on invalid CA certificate in LDAP Message-ID: <550A94FB.80004@redhat.com> Hi, the attached patches fix . Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-404-certstore-Make-certificate-retrieval-more-robust.patch Type: text/x-patch Size: 4636 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: freeipa-jcholast-405-client-install-Do-not-crash-on-invalid-CA-certificat.patch Type: text/x-patch Size: 2376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-406-client-Fix-ca_is_enabled-calls.patch Type: text/x-patch Size: 1986 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-jcholast-407-upload_cacrt-Fix-empty-cACertificate-in-cn-CAcert.patch Type: text/x-patch Size: 4141 bytes Desc: not available URL: From pspacek at redhat.com Thu Mar 19 10:45:03 2015 From: pspacek at redhat.com (Petr Spacek) Date: Thu, 19 Mar 2015 11:45:03 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <550A92C0.50704@redhat.com> References: <1426706290.3265.10.camel@redhat.com> <550A816B.2020501@redhat.com> <550A880F.3040303@redhat.com> <550A92C0.50704@redhat.com> Message-ID: <550AA8AF.8030905@redhat.com> On 19.3.2015 10:11, Martin Kosek wrote: > On 03/19/2015 09:25 AM, Petr Spacek wrote: >> Hello, >> >> I do not much to add to the process itself. After first reading it seems >> pretty heavyweight but let's try it, it can be refined at any time :-) > > Right, but then we would need to migrate the data about test completion and so > on - which is more work. So it is much better to define some working now, than > to change it couple months later. > > We were already trying to invent something as much lightweight as possible, > this was the minimum new fields we come for to be able to track the test > coverage and plans. If you have another proposal how to track it better, I > would love to hear it, really :-) Sure. For me the main question is when *designing of tests* should start and how it is synchronized with feature design. Is it done in parallel? Or sequentially? When the feedback from test designers flows back? Isn't it too late? Let's discuss ticket workflow like this: new -> design functionality&tests -> write code&tests -> test run -> closed IMHO we should have tests *designed* before we start to implement the final version of the functionality. It may be too late to find out that interface design is flawed (e.g. from user's point of view) when the feature is fully implemented and test phase is reached. Designing/writing tests early could discover things like poor interface design sooner, when it is still easy to change interfaces. Currently we have 'design' reviews before the implementation starts but actually designing tests at the same time would attract more eyes/brains to the feature design phase. We may call it 'first usability review' if we wish :-) In my mind, test designers should be first feature users (even virtually) so the early feedback is crucial. Note that this approach does not preclude experimental/quick&dirty prototyping as part of the design phase but it has to be clear that prototype might (and should!) be thrown away if the first idea wasn't the best one. If this is too radical: To me it seems kind on unnatural to separate testing from overall bug state. Equivalent of ON_QA state in Bugzilla seems more natural to me as it is kind of weird to claim that ticket is closed/finished before full testing cycle is finished. I.e. the ticket could have states like: new -> assigned -> qe -> closed "qe" state can be easily skipped if no testing is (deemed to be) necessary. Then there is the question if we actually need to separate field for QE state and Test case field. 
Test case could behave in the same way as Bugzilla link field: - empty field - undecided - 0 (or string "not necessary" or something) - test case is deemed unnecessary - non-zero link - apparently, a test case exists It would be more consistent with what we have for Bugzilla links. -- Petr^2 Spacek From dkupka at redhat.com Thu Mar 19 11:20:32 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 19 Mar 2015 12:20:32 +0100 Subject: [Freeipa-devel] [PATCHES 0204-0207, 0211] Server upgrade: Make LDAP data upgrade deterministic In-Reply-To: <5502EF5C.4070203@redhat.com> References: <54F9CCB9.4070600@redhat.com> <5501AF09.1090202@redhat.com> <5502EF5C.4070203@redhat.com> Message-ID: <550AB100.5050208@redhat.com> On 03/13/2015 03:08 PM, Martin Basti wrote: > On 12/03/15 16:21, Rob Crittenden wrote: >> Martin Basti wrote: >>> The patchset ensure, the upgrade order will respect ordering of entries >>> in *.update files. >>> >>> Required for: https://fedorahosted.org/freeipa/ticket/4904 >>> >>> Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 >>> >>> Required patch mbasti-0203 >>> >>> Patches attached. >>> >>> >>> >> Just reading the patches, untested. >> >> I think ordered should default to True in the update() method of >> ldapupdater to keep in spirit with the design. >> >> Otherwise LGTM that it implements what was designed. >> >> rob >> >> > New patch that switch default value for ordered to True attached. > > > Looks and works as expected, ACK. -- David Kupka From dkupka at redhat.com Thu Mar 19 11:25:24 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 19 Mar 2015 12:25:24 +0100 Subject: [Freeipa-devel] [PATCH 0208] Remove --test option from upgrade In-Reply-To: <55081918.3000307@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <55081918.3000307@redhat.com> Message-ID: <550AB224.2040709@redhat.com> On 03/17/2015 01:07 PM, Martin Basti wrote: > On 12/03/15 16:10, David Kupka wrote: >> On 03/06/2015 06:00 PM, Martin Basti wrote: >>> Upgrade plugins which modify LDAP data directly should not be executed >>> in --test mode. >>> >>> This patch is a workaround, to ensure update with --test option will not >>> modify any LDAP data. >>> >>> https://fedorahosted.org/freeipa/ticket/3448 >>> >>> Patch attached. >>> >>> >>> >> >> Ideally we want to fix all plugins to dry-run the upgrade not just >> skip when there is '--test' option but it is a good first step. >> Works for me, ACK. >> > > We had long discussion, and we decided to remove this option from upgrade. > > Reasons: > * users are not supposed to use this option to test if upgrade will be > successful, it can not guarantee it. > * option is not used for developing, as it can not catch all issues with > upgrade, using snapshots is better > > Attached patch removes the option. > Works for me, ACK. -- David Kupka From mkosek at redhat.com Thu Mar 19 11:33:12 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 19 Mar 2015 12:33:12 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <550AA8AF.8030905@redhat.com> References: <1426706290.3265.10.camel@redhat.com> <550A816B.2020501@redhat.com> <550A880F.3040303@redhat.com> <550A92C0.50704@redhat.com> <550AA8AF.8030905@redhat.com> Message-ID: <550AB3F8.200@redhat.com> On 03/19/2015 11:45 AM, Petr Spacek wrote: > On 19.3.2015 10:11, Martin Kosek wrote: >> On 03/19/2015 09:25 AM, Petr Spacek wrote: >>> Hello, >>> >>> I do not much to add to the process itself. 
After first reading it seems >>> pretty heavyweight but let's try it, it can be refined at any time :-) >> >> Right, but then we would need to migrate the data about test completion and so >> on - which is more work. So it is much better to define some working now, than >> to change it couple months later. >> >> We were already trying to invent something as much lightweight as possible, >> this was the minimum new fields we come for to be able to track the test >> coverage and plans. If you have another proposal how to track it better, I >> would love to hear it, really :-) > > Sure. For me the main question is when *designing of tests* should start and > how it is synchronized with feature design. Is it done in parallel? Or > sequentially? When the feedback from test designers flows back? Isn't it too late? > > Let's discuss ticket workflow like this: > new -> design functionality&tests -> write code&tests -> test run -> closed > > IMHO we should have tests *designed* before we start to implement the final > version of the functionality. It may be too late to find out that interface > design is flawed (e.g. from user's point of view) when the feature is fully > implemented and test phase is reached. > > Designing/writing tests early could discover things like poor interface design > sooner, when it is still easy to change interfaces. Currently we have 'design' > reviews before the implementation starts but actually designing tests at the > same time would attract more eyes/brains to the feature design phase. We may > call it 'first usability review' if we wish :-) > > In my mind, test designers should be first feature users (even virtually) so > the early feedback is crucial. > > Note that this approach does not preclude experimental/quick&dirty prototyping > as part of the design phase but it has to be clear that prototype might (and > should!) be thrown away if the first idea wasn't the best one. Yes! This is exactly why this QE team was created - to be able to test as early as possible, review designs with QE eyes as early as possible. > If this is too radical: > > To me it seems kind on unnatural to separate testing from overall bug state. > Equivalent of ON_QA state in Bugzilla seems more natural to me as it is kind > of weird to claim that ticket is closed/finished before full testing cycle is > finished. > > I.e. the ticket could have states like: > new -> assigned -> qe -> closed > "qe" state can be easily skipped if no testing is (deemed to be) necessary. This is an alternate approach yes. Trac has workflow plugin that should be able to add it. But wouldn't that workflow actually support the classic waterfall approach and not the more agile approach with testing more or less in parallel with the work on the code? The point is that a RFE may be still in development and also in QE state in parallel - thus the field. > > Then there is the question if we actually need to separate field for QE state > and Test case field. Test case could behave in the same way as Bugzilla link > field: > - empty field - undecided > - 0 (or string "not necessary" or something) - test case is deemed unnecessary > - non-zero link - apparently, a test case exists > > It would be more consistent with what we have for Bugzilla links. 
The metadata we come up should be able to supply at least following queries: - which tickets (RFEs/bugs) are covered with tests in a specific milestone, what are the test cases - who, from QE team, is working on which tickets - list of tickets where we want the tests and which are for grabs by QE engineer I am not sure if this can be covered just with the extra QE phase and Test Case link. From tbabej at redhat.com Thu Mar 19 11:36:00 2015 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 19 Mar 2015 12:36:00 +0100 Subject: [Freeipa-devel] [PATCH 0203] Remove unused PRE_SCHEMA upgrade In-Reply-To: <550972A2.3080005@redhat.com> References: <54F9CD2F.3050900@redhat.com> <5501AC4A.60900@redhat.com> <5501AF32.6040503@redhat.com> <5501B8FC.4030408@redhat.com> <5501BA15.2010803@redhat.com> <5501BF0B.8080503@redhat.com> <55096B75.6010304@redhat.com> <550972A2.3080005@redhat.com> Message-ID: <550AB4A0.8000701@redhat.com> On 03/18/2015 01:42 PM, Martin Kosek wrote: > On 03/18/2015 01:11 PM, Martin Basti wrote: >> On 12/03/15 17:30, Martin Basti wrote: >>> On 12/03/15 17:08, Rob Crittenden wrote: >>>> Martin Basti wrote: >>>>> On 12/03/15 16:22, Rob Crittenden wrote: >>>>>> David Kupka wrote: >>>>>>> On 03/06/2015 04:52 PM, Martin Basti wrote: >>>>>>>> This upgrade step is not used anymore. >>>>>>>> >>>>>>>> Required by: https://fedorahosted.org/freeipa/ticket/4904 >>>>>>>> >>>>>>>> Patch attached. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> Looks and works good to me, ACK. >>>>>> Is this going away because one can simply create an update file that >>>>>> exists alphabetically before the schema update? If so then ACK. >>>>>> >>>>>> rob >>>>> No this never works, and will not work without changes in DS, I was >>>>> discussing this with DS guys. If you add new replica to schema, the >>>>> schema has to be there before data replication. >>>>> >>>>> Martin >>>>> >>>> That's a rather narrow case though. You could make changes that only >>>> affect existing schema, or something in cn=config. >>>> >>>> rob >>> Let summarize this: >>> * It is unused code >>> * we have schema update to modify schema (is there any extra requirement to >>> modify schema before schema update? I though the schema update replace old >>> schema with new) >>> * it is not usable on new replicas (why to modify up to date schema?, why to >>> modify new configuration?) >>> * we can not use this to update data >>> * only way how we can us this is to change non-replicating data, on current >>> server. >>> >>> However, might there be really need to update cn=config before schema update? >>> >>> Martin >>> >> IMO this patch can be pushed. >> >> It removes the unused and broken code. To implement this feature we need design >> it in proper way first. >> >> Is there any objections? > Works for me, if it was broken anyway and there is no use case for it, yet. 
> Pushed to master: d3f5d5d1ff5a730d5c268456015fee36a7dc5bff From tbabej at redhat.com Thu Mar 19 11:38:29 2015 From: tbabej at redhat.com (Tomas Babej) Date: Thu, 19 Mar 2015 12:38:29 +0100 Subject: [Freeipa-devel] [PATCHES 0204-0207, 0211] Server upgrade: Make LDAP data upgrade deterministic In-Reply-To: <550AB100.5050208@redhat.com> References: <54F9CCB9.4070600@redhat.com> <5501AF09.1090202@redhat.com> <5502EF5C.4070203@redhat.com> <550AB100.5050208@redhat.com> Message-ID: <550AB535.1020608@redhat.com> On 03/19/2015 12:20 PM, David Kupka wrote: > On 03/13/2015 03:08 PM, Martin Basti wrote: >> On 12/03/15 16:21, Rob Crittenden wrote: >>> Martin Basti wrote: >>>> The patchset ensure, the upgrade order will respect ordering of >>>> entries >>>> in *.update files. >>>> >>>> Required for: https://fedorahosted.org/freeipa/ticket/4904 >>>> >>>> Patch 205 also fixes https://fedorahosted.org/freeipa/ticket/3560 >>>> >>>> Required patch mbasti-0203 >>>> >>>> Patches attached. >>>> >>>> >>>> >>> Just reading the patches, untested. >>> >>> I think ordered should default to True in the update() method of >>> ldapupdater to keep in spirit with the design. >>> >>> Otherwise LGTM that it implements what was designed. >>> >>> rob >>> >>> >> New patch that switch default value for ordered to True attached. >> >> >> > Looks and works as expected, ACK. > Pushed to master: a42fcfc18bb94fbf97ec310dbb920e045b0473a5 From tbordaz at redhat.com Thu Mar 19 12:07:52 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Thu, 19 Mar 2015 13:07:52 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <550A6E97.9010103@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> <5507D13E.7040107@redhat.com> <5509C674.90104@redhat.com> <550A6E97.9010103@redhat.com> Message-ID: <550ABC18.8090009@redhat.com> On 03/19/2015 07:37 AM, Jan Cholasta wrote: > Dne 18.3.2015 v 19:39 thierry bordaz napsal(a): >> On 03/17/2015 08:01 AM, Jan Cholasta wrote: >>> Dne 16.3.2015 v 12:06 David Kupka napsal(a): >>>> On 03/06/2015 07:30 PM, thierry bordaz wrote: >>>>> On 02/19/2015 04:19 PM, Martin Basti wrote: >>>>>> On 19/02/15 13:01, thierry bordaz wrote: >>>>>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>>>>> Hi, >>>>>>>>>>>>> >>>>>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>>>>> >>>>>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>>>>> stageuser-add. >>>>>>>>>>>>> Stage >>>>>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>>>>> >>>>>>>>>>>>> Thanks >>>>>>>>>>>>> thierry >>>>>>>>>>>> >>>>>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; >>>>>>>>>>>> instead >>>>>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. 
>>>>>>>>>>>> >>>>>>>>>>>> The stageuser help (docstring) is copied from the user >>>>>>>>>>>> plugin, and >>>>>>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>>>>>> should >>>>>>>>>>>> rather explain what stageuser itself does. (And I don't very >>>>>>>>>>>> much >>>>>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>>>>>> user" >>>>>>>>>>>> or "stageuser". >>>>>>>>>>>> >>>>>>>>>>>> A lot of the code is copied and pasted over from the users >>>>>>>>>>>> plugin. >>>>>>>>>>>> Don't do that. Either import things (e.g. >>>>>>>>>>>> validate_nsaccountlock) >>>>>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>>>>> module. >>>>>>>>>>>> >>>>>>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>>>>>> best to >>>>>>>>>>>> create a common base class for user and stageuser; and >>>>>>>>>>>> similarly >>>>>>>>>>>> for >>>>>>>>>>>> the Command plugins. >>>>>>>>>>>> >>>>>>>>>>>> The default permissions need different names, and you don't >>>>>>>>>>>> need >>>>>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>>>>> script. >>>>>>>>>>>> >>>>>>>>>>> Hello, >>>>>>>>>>> >>>>>>>>>>> This modified patch is mainly moving common base class >>>>>>>>>>> into a >>>>>>>>>>> new >>>>>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>>>>> accounts. >>>>>>>>>>> It also creates a better description of what are stage >>>>>>>>>>> user, >>>>>>>>>>> how >>>>>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>>>>> active/stage >>>>>>>>>>> user managed permissions. >>>>>>>>>>> >>>>>>>>>>> thanks >>>>>>>>>>> thierry >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Thanks David for the reviews. Here the last patches >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>> >>>>>>>>> >>>>>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>>>>> lines so >>>>>>>>> I'm attaching fixed version (and unchanged patch >>>>>>>>> freeipa-tbordaz-0003-3 >>>>>>>>> to keep them together). >>>>>>>>> >>>>>>>>> The ULC feature is still WIP but these patches look good to me >>>>>>>>> and >>>>>>>>> don't >>>>>>>>> break anything as far as I tested. >>>>>>>>> We should push them now to avoid further rebases. Thierry can >>>>>>>>> then >>>>>>>>> prepare other patches delivering the rest of ULC functionality. >>>>>>>> >>>>>>>> Few comments from just reading the patches: >>>>>>>> >>>>>>>> 1) I would name the base class "baseuser", "account" does not >>>>>>>> necessarily mean user account. >>>>>>>> >>>>>>>> 2) This is very wrong: >>>>>>>> >>>>>>>> -class user_add(LDAPCreate): >>>>>>>> +class user_add(user, LDAPCreate): >>>>>>>> >>>>>>>> You are creating a plugin which is both an object and an command. >>>>>>>> >>>>>>>> 3) This is purely subjective, but I don't like the name >>>>>>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>>>>>> IMHO we shouldn't do that. 
>>>>>>>> >>>>>>>> Honza >>>>>>>> >>>>>>> >>>>>>> Thank you for the review. I am attaching the updates patches >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Freeipa-devel mailing list >>>>>>> Freeipa-devel at redhat.com >>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>> Hello, >>>>>> I'm getting errors during make rpms: >>>>>> >>>>>> if [ "" != "yes" ]; then \ >>>>>> ./makeapi --validate; \ >>>>>> ./makeaci --validate; \ >>>>>> fi >>>>>> >>>>>> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >>>>>> doc is not internationalized >>>>>> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >>>>>> doc is not internationalized >>>>>> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >>>>>> doc is not internationalized >>>>>> 0 commands without doc, 3 commands whose doc is not i18n >>>>>> Command baseuser_add in ipalib, not in API >>>>>> Command baseuser_find in ipalib, not in API >>>>>> Command baseuser_mod in ipalib, not in API >>>>>> >>>>>> There are one or more new commands defined. >>>>>> Update API.txt and increment the minor version in VERSION. >>>>>> >>>>>> There are one or more documentation problems. >>>>>> You must fix these before preceeding >>>>>> >>>>>> Issues probably caused by this: >>>>>> 1) >>>>>> You should not use the register decorator, if this class is just for >>>>>> inheritance >>>>>> @register() >>>>>> class baseuser_add(LDAPCreate): >>>>>> >>>>>> @register() >>>>>> class baseuser_mod(LDAPUpdate): >>>>>> >>>>>> @register() >>>>>> class baseuser_find(LDAPSearch): >>>>>> >>>>>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>>>>> >>>>>> 2) >>>>>> there might be an issue with >>>>>> @register() >>>>>> class baseuser(LDAPObject): >>>>>> >>>>>> the register decorator should not be there, I was warned by >>>>>> Petr^3 to >>>>>> not use permission in parent class. The same permission should be >>>>>> specified only in one place (for example user class), (otherwise >>>>>> they >>>>>> will be generated twice??) I don't know more details about it. >>>>>> >>>>>> -- >>>>>> Martin Basti >>>>> >>>>> Hello Martin, Jan, >>>>> >>>>> Thanks for your review. >>>>> I changed the patch so that it does not register baseuser_*. Also >>>>> increase the minor version because of new command. >>>>> Finally I moved the managed_permission definition out of the parent >>>>> baseuser class. >>>>> >>>>> >>>>> >>>>> >>>>> >>>> >>>> Martin, could you please verify that the issues you encountered are >>>> fixed? >>>> >>>> Thanks! >>>> >>> >>> You bumped wrong version variable: >>> >>> -IPA_VERSION_MINOR=1 >>> +IPA_VERSION_MINOR=2 >>> >>> It should have been IPA_API_VERSION_MINOR (at the bottom of the file), >>> including the last change comment below it. >>> >>> >>> IMO baseuser should include superclasses for all the usual commands >>> (add, mod, del, show, find) and stageuser/deleteuser commands should >>> inherit from them. >>> >>> >>> You don't need to override class properties like active_container_dn >>> and takes_params on baseuser subclasses when they have the same value >>> as in baseuser. >>> >>> >>> Honza >>> >> Hello Honza, >> >> Thanks for the review. I did the modifications you recommended >> within that attached patches >> >> * Change version > > Please also update the comment below (e.g. 
"# Last change: tbordaz - > Add stageuser_add command") > >> * create the baseuser_* plugins commands and use them in the >> user/stageuser plugin commands >> * Do not redefine the class properties in the subclasses. > > There are still some in baseuser command classes: > > +class baseuser_add(LDAPCreate): > + """ > + Prototype command plugin to be implemented by real plugin > + """ > + active_container_dn = api.env.container_user > + has_output_params = LDAPCreate.has_output_params > > You don't need to set active_container_dn here, you only need to set > it in baseuser. Then in stageuser_add and other subclasses you use > "self.obj.active_container_dn" instead of "self.active_container_dn". > > You also don't need to override has_output_params if you are not > changing its value - you are inheriting from LDAPCreate, so > baseuser_add.has_output_params implicitly has the same value as > LDAPCreate.has_output_params. > >> >> Thanks >> thierry >> > Hello Honza, Thanks for your patience .. :-) I understand my mistake. Just a question, in a plugin command (user_add), is 'self.obj' referring to the plugin object (like 'user') ? updated patches (with the appropriate naming and patch versioning). thanks theirry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbordaz-0002-User-Life-Cycle-Exclude-subtree-for-ipaUniqueID-gene.patch Type: text/x-patch Size: 2588 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbordaz-0003-6-User-life-cycle-stageuser-add-verb.patch Type: text/x-patch Size: 60620 bytes Desc: not available URL: From dkupka at redhat.com Thu Mar 19 14:24:17 2015 From: dkupka at redhat.com (David Kupka) Date: Thu, 19 Mar 2015 15:24:17 +0100 Subject: [Freeipa-devel] [PATCHES 404-407] client-install: Do not crash on invalid CA certificate in LDAP In-Reply-To: <550A94FB.80004@redhat.com> References: <550A94FB.80004@redhat.com> Message-ID: <550ADC11.3020309@redhat.com> On 03/19/2015 10:20 AM, Jan Cholasta wrote: > Hi, > > the attached patches fix . > > Honza > > > Hi! Thanks for the patches. Both, client installer and ldap updater, now deal quite robustly with invalid or missing certificate, ACK. Please rebase patch 407 before pushing to master. -- David Kupka From jcholast at redhat.com Thu Mar 19 14:41:35 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 19 Mar 2015 15:41:35 +0100 Subject: [Freeipa-devel] [PATCHES 404-407] client-install: Do not crash on invalid CA certificate in LDAP In-Reply-To: <550ADC11.3020309@redhat.com> References: <550A94FB.80004@redhat.com> <550ADC11.3020309@redhat.com> Message-ID: <550AE01F.2000708@redhat.com> Dne 19.3.2015 v 15:24 David Kupka napsal(a): > On 03/19/2015 10:20 AM, Jan Cholasta wrote: >> Hi, >> >> the attached patches fix . >> >> Honza >> >> >> > > Hi! > Thanks for the patches. Both, client installer and ldap updater, now > deal quite robustly with invalid or missing certificate, ACK. > > Please rebase patch 407 before pushing to master. > Thanks. 
Pushed to: master: fa500686075e38b687732f1c9443dbca81b5d9f4 ipa-4-1: f0a49b962c268c32db6179c60017fc04826af179 -- Jan Cholasta From pspacek at redhat.com Fri Mar 20 12:04:24 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 20 Mar 2015 13:04:24 +0100 Subject: [Freeipa-devel] [PROPOSAL] FreeIPA Test Plan Workflow In-Reply-To: <550AB3F8.200@redhat.com> References: <1426706290.3265.10.camel@redhat.com> <550A816B.2020501@redhat.com> <550A880F.3040303@redhat.com> <550A92C0.50704@redhat.com> <550AA8AF.8030905@redhat.com> <550AB3F8.200@redhat.com> Message-ID: <550C0CC8.4040305@redhat.com> On 19.3.2015 12:33, Martin Kosek wrote: > On 03/19/2015 11:45 AM, Petr Spacek wrote: >> On 19.3.2015 10:11, Martin Kosek wrote: >>> On 03/19/2015 09:25 AM, Petr Spacek wrote: >>>> Hello, >>>> >>>> I do not much to add to the process itself. After first reading it seems >>>> pretty heavyweight but let's try it, it can be refined at any time :-) >>> >>> Right, but then we would need to migrate the data about test completion and so >>> on - which is more work. So it is much better to define some working now, than >>> to change it couple months later. >>> >>> We were already trying to invent something as much lightweight as possible, >>> this was the minimum new fields we come for to be able to track the test >>> coverage and plans. If you have another proposal how to track it better, I >>> would love to hear it, really :-) >> >> Sure. For me the main question is when *designing of tests* should start and >> how it is synchronized with feature design. Is it done in parallel? Or >> sequentially? When the feedback from test designers flows back? Isn't it too late? >> >> Let's discuss ticket workflow like this: >> new -> design functionality&tests -> write code&tests -> test run -> closed >> >> IMHO we should have tests *designed* before we start to implement the final >> version of the functionality. It may be too late to find out that interface >> design is flawed (e.g. from user's point of view) when the feature is fully >> implemented and test phase is reached. >> >> Designing/writing tests early could discover things like poor interface design >> sooner, when it is still easy to change interfaces. Currently we have 'design' >> reviews before the implementation starts but actually designing tests at the >> same time would attract more eyes/brains to the feature design phase. We may >> call it 'first usability review' if we wish :-) >> >> In my mind, test designers should be first feature users (even virtually) so >> the early feedback is crucial. >> >> Note that this approach does not preclude experimental/quick&dirty prototyping >> as part of the design phase but it has to be clear that prototype might (and >> should!) be thrown away if the first idea wasn't the best one. > > Yes! This is exactly why this QE team was created - to be able to test as early > as possible, review designs with QE eyes as early as possible. Great, in that case we can ignore the next section completely (it was meant as fallback). >> If this is too radical: /snip/ >> Then there is the question if we actually need to separate field for QE state >> and Test case field. Test case could behave in the same way as Bugzilla link >> field: >> - empty field - undecided >> - 0 (or string "not necessary" or something) - test case is deemed unnecessary >> - non-zero link - apparently, a test case exists >> >> It would be more consistent with what we have for Bugzilla links. 
> > The metadata we come up should be able to supply at least following queries: > - which tickets (RFEs/bugs) are covered with tests in a specific milestone, > what are the test cases > - who, from QE team, is working on which tickets > - list of tickets where we want the tests and which are for grabs by QE engineer > > I am not sure if this can be covered just with the extra QE phase and Test Case > link. Okay, it might be easier with more explicit fields as proposed. -- Petr^2 Spacek From slaz at seznam.cz Fri Mar 20 12:30:59 2015 From: slaz at seznam.cz (Stanislav Láznička) Date: Fri, 20 Mar 2015 13:30:59 +0100 Subject: [Freeipa-devel] Time-based account policies Message-ID: <550C1303.7090402@seznam.cz> Hi! I went through the last week's thread on Time-Based Policies, discussed some parts I wasn't very sure about with Martin, and would like to make a summary of it, followed by some further questions on the topic. The mail is a bit longer than I thought it would be. Sorry about that. It seems I am very bad at fast and brief responses. So to summarize - the best way to go seems to be to implement both UTC and client-side local time support. A very nice idea is to store it in the format (time, timezone). Now if I understand this right, the timezone here would be the Olson database timezone stored with the host/hostgroup, possibly overridden by a service/servicegroup timezone. It would help the administrator decide the correct UTC time by adjusting the time displayed in the UI/CLI. The administrator should also be able to set this timezone in the UI/CLI settings of the HBAC rule (or maybe they just set it there instead of storing timezone info with host/hostgroup etc. objects?). As for the local time - the timezone in the tuple (time, timezone) would only say "Local Time", which can't be found in the Olson database, and it means the time record from the tuple should be compared to the client's time settings (/etc/localtime?). What would then be left to decide is whether to use the iCalendar time format as the internal one, or use some other, custom format. Such a format should not only be suitable for reoccurring events, but should also be able to handle exceptions in those events, such as holidays. It should also be easy to import iCalendar events into this format to support events from other sources. The iCalendar format is rather heavy and if we wanted to use it as a whole, it may be necessary to include third party libraries in the SSSD project along with their possible vulnerabilities. There is a possibility of using only the part of the iCalendar format that handles reoccurring events and exceptions. An example from Alexander can be found here: https://github.com/libical/libical/blob/master/src/test/icalrecur_test.c. That may or may not work perfectly well, given that the libical library is a complex thing and we'd be taking just a part of it. The other time format option is to simplify/edit the language that was used in the past (described with regular expressions here: https://git.fedorahosted.org/cgit/sssd.git/commit/?id=1ce240367a2144500187ccd3c0d32c975d8d346a). There was probably a reason to move away from that language (the possibility of parsing those iCalendar events, I guess) and it may or may not be possible to fix the language so that the problems with it are dealt with. I was also inspired by the language of the time part of Bind Rules from 389 Directory Server and created a possible language that might be usable.
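Going back to the libical/iCalendar recurrence option for a moment: an allow-window like "Mon-Fri 08:00-16:00" is essentially a recurrence rule for the window start plus a duration, which is what the evaluating side (SSSD) would have to answer "is access allowed right now?" against. A small sketch with python-dateutil, just to make that concrete (illustrative only; the real consumer would be SSSD in C, and EXDATE-style exceptions are left out):

from datetime import datetime
from dateutil import rrule, tz

prague = tz.gettz('Europe/Prague')

# "Mon-Fri, 08:00-16:00": recur the window *start* weekly on weekdays and
# keep the window length (8 hours) separately.
window_starts = rrule.rrulestr(
    'FREQ=WEEKLY;BYDAY=MO,TU,WE,TH,FR',
    dtstart=datetime(2015, 3, 23, 8, 0, tzinfo=prague),
)
WINDOW_SECONDS = 8 * 3600

def access_allowed(when):
    """True if 'when' (an aware datetime) falls inside an allowed window."""
    start = window_starts.before(when, inc=True)
    return (start is not None
            and (when - start).total_seconds() <= WINDOW_SECONDS)

print(access_allowed(datetime(2015, 3, 25, 10, 30, tzinfo=prague)))   # True (Wednesday morning)
print(access_allowed(datetime(2015, 3, 28, 10, 30, tzinfo=prague)))   # False (Saturday)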
I don't yet have the regular expression describing the language in a human-friendly form, but I got a finite automaton picture that describes it (see https://www.debuggex.com/i/_Pg9KOp2TgXuus8K.png). Basically, you have the parentheses form of rules from Bind Rules, separated with either the "and/or" or the "except" keyword. "except" should only appear once. You would set the time ranges with the "-" operator, e.g. timeofday=0800-1545 (this operator is not shown in the picture). The parentheses could also be nested, but that can't of course be described by a finite automaton. The "except" keyword should still appear only once, though. If any time-describing part is missing, it means that access is allowed (e.g. (timeofday=0800-1600 dayofweek=Mon-Fri) allows access from 8:00 to 16:00 Monday through Friday, any week in the month, any month in the year). This language should be able to do everything the old language did, supports reoccurring events nicely and also allows exceptions, so importing iCalendar events might be possible. However, its downfall is that it's not very space-efficient and absolute time ranges may not be so easily described (e.g. time from 18:00 20.3.2015 to 23:00 24.12.2015 would be quite a string). MY QUESTIONS: Do we store the timezone information with the host/hostgroup, service/servicegroup object or do we just have the admin pick the timezone themselves? Which time format would be best for both FreeIPA and SSSD? Thank you very much for your insights! Standa From mkosek at redhat.com Fri Mar 20 13:13:31 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 20 Mar 2015 14:13:31 +0100 Subject: [Freeipa-devel] Designing better API compatibility Message-ID: <550C1CFB.3020402@redhat.com> Hi guys, I would like to resurrect the discussion we had during DevConf.cz time, about API compatibility in the FreeIPA server. So right now, we maintain the backward compatibility, old clients can talk to newer servers. Forward compatibility is not maintained. Unfortunately, this is not very helpful in real deployments, where the server will often be some RHEL/CentOS system and the client may be the newest Fedora - with newer API than the server. This is the toughest part we need to solve. There are 3 main areas we wanted to attack with respect to compatibility: 1) API publishing and/or API browser This is mostly documentation/interactive browser to see the supported API of the server. It should not be difficult, it would just consume the metadata already generated by the server. Ticket: https://fedorahosted.org/freeipa/ticket/3129 2) Forward compatibility of the direct API consumers Until now, to keep newer clients working against an older server, we are using the following trick in the ipa-client-install: https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install#n1649
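The trick referenced above presumably amounts to pinning calls to an old API version that every supported server still accepts. A rough sketch of that idea with ipalib (assumptions: a configured client, a valid Kerberos ticket, the rpcclient backend name as in 4.x, and '2.0' standing in for "oldest version that still has everything this call needs"; the real code behind the URL above differs in detail):

from ipalib import api

api.bootstrap(context='cli')      # assumes /etc/ipa/default.conf is in place
api.finalize()
api.Backend.rpcclient.connect()   # assumes a valid Kerberos ticket

# Send an explicitly old API version instead of the client's (possibly
# newer) default, so that an older server does not reject the request.
result = api.Command['ping'](version=u'2.0')
print(result['summary'])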
We would only need to have separate client only plugins, basically implementing interactive_promt_callback from existing server side plugins. Tickets: https://fedorahosted.org/freeipa/ticket/4739, https://fedorahosted.org/freeipa/ticket/4768 Now, question is what we can do in 4.2. I do not think we can manage to rewrite "ipa" command in the thin client, but we should do at least some portion of 1) and 3). I could not decipher that from our Devconf.cz notes. To me, the simplest way of fixing forward compatibility seems to be following steps (besides not making API backwards incompatible - i.e. what we do already): - keep sending API version from client to server - server should not refuse newer API versions - only raise error when an unknown option or unknown command is used When plugins change the behavior, they should check for client version and base it's action on it (sort of the capabilities we already have). This is the simple way, it would work well with the global API number and thin client also. So this is the simple version. Simo, Nathaniel (and others), I know you proposed versions the commands themselves, but I am now not sure how exactly you wanted to do it. What exactly would it mean for the typical extension of our API - adding new parameter and how user command extensions should be treated (command parameter added). Thank you! -- Martin Kosek Supervisor, Software Engineering - Identity Management Team Red Hat Inc. From ssorce at redhat.com Fri Mar 20 13:19:16 2015 From: ssorce at redhat.com (Simo Sorce) Date: Fri, 20 Mar 2015 09:19:16 -0400 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <550C1CFB.3020402@redhat.com> References: <550C1CFB.3020402@redhat.com> Message-ID: <1426857556.2981.144.camel@willson.usersys.redhat.com> On Fri, 2015-03-20 at 14:13 +0100, Martin Kosek wrote: > Hi guys, > > I would like to resurrect the discussion we had during DevConf.cz time, about > API compatibility in the FreeIPA server. > > So right now, we maintain the backward compatibility, old clients can talk to > newer servers. Forward compatibility is not maintained. Unfortunately, this is > not very helpful in real deployments, where the server will often be some > RHEL/CentOS system and the client may be the newest Fedora - with newer API > than the server. This is the toughest part we need to solve. > > There 3 main areas we wanted to attack with respect to compatibility: > > 1) API publishing and/or API browser > This is mostly documentation/interactive browser to see the supported API of > the server. It should not be difficult, it would just consume the metadata > already generated by the server. > > Ticket: https://fedorahosted.org/freeipa/ticket/3129 > > 2) Forward compatibility of the direct API consumers > Until now, to keep newer clients working against older server, we are using the > following trick in the ipa-client-install: > > https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install#n1649 > > It mostly works, one just needs to know the minimal version that needs to be > supported. It would be more user friendly, however, if this check is done on > the server automatically, without user having to research it. This applies both > for ipalib python lib consumer and for direct JSON-RPC consumers. 
> > Ticket: https://fedorahosted.org/freeipa/ticket/4739 > > 3) Forward compatibility of the "ipa" client tool > There are different approaches how to fix this, the generally accepted idea was > to implement very thin client, which would download and cache metadata from the > server on the client and generate the CLI from it. We would only need to have > separate client only plugins, basically implementing interactive_promt_callback > from existing server side plugins. > > Tickets: https://fedorahosted.org/freeipa/ticket/4739, > https://fedorahosted.org/freeipa/ticket/4768 > > > Now, question is what we can do in 4.2. I do not think we can manage to rewrite > "ipa" command in the thin client, but we should do at least some portion of 1) > and 3). > > I could not decipher that from our Devconf.cz notes. To me, the simplest way of > fixing forward compatibility seems to be following steps (besides not making > API backwards incompatible - i.e. what we do already): > > - keep sending API version from client to server > - server should not refuse newer API versions > - only raise error when an unknown option or unknown command is used This is not sufficient, older 3.3 and 4.x servers can't be changed and we MUST be compatible with those. Basically the plan MUST work with already released servers, this is a constraint that cannot be releaxed, please work within this limitations. > When plugins change the behavior, they should check for client version and base > it's action on it (sort of the capabilities we already have). This is the > simple way, it would work well with the global API number and thin client also. Long term we need to provide versioned APIs and it is better if the client falls back to known good old APIs, because the server certainly can't fall forward. > So this is the simple version. Simo, Nathaniel (and others), I know you > proposed versions the commands themselves, but I am now not sure how exactly > you wanted to do it. What exactly would it mean for the typical extension of > our API - adding new parameter and how user command extensions should be > treated (command parameter added). I'll let Nathaniel weight in on the technical side as he had better ideas IIRC. >From my POV the client needs to find out what calls the server can support and use appropriate ones. At the same time newer servers must support older call versions so that older clients can still work. Simo. From mkosek at redhat.com Fri Mar 20 13:38:12 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 20 Mar 2015 14:38:12 +0100 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <1426857556.2981.144.camel@willson.usersys.redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> Message-ID: <550C22C4.2000908@redhat.com> On 03/20/2015 02:19 PM, Simo Sorce wrote: > On Fri, 2015-03-20 at 14:13 +0100, Martin Kosek wrote: >> Hi guys, >> >> I would like to resurrect the discussion we had during DevConf.cz time, about >> API compatibility in the FreeIPA server. >> >> So right now, we maintain the backward compatibility, old clients can talk to >> newer servers. Forward compatibility is not maintained. Unfortunately, this is >> not very helpful in real deployments, where the server will often be some >> RHEL/CentOS system and the client may be the newest Fedora - with newer API >> than the server. This is the toughest part we need to solve. 
>> >> There 3 main areas we wanted to attack with respect to compatibility: >> >> 1) API publishing and/or API browser >> This is mostly documentation/interactive browser to see the supported API of >> the server. It should not be difficult, it would just consume the metadata >> already generated by the server. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/3129 >> >> 2) Forward compatibility of the direct API consumers >> Until now, to keep newer clients working against older server, we are using the >> following trick in the ipa-client-install: >> >> https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install#n1649 >> >> It mostly works, one just needs to know the minimal version that needs to be >> supported. It would be more user friendly, however, if this check is done on >> the server automatically, without user having to research it. This applies both >> for ipalib python lib consumer and for direct JSON-RPC consumers. >> >> Ticket: https://fedorahosted.org/freeipa/ticket/4739 >> >> 3) Forward compatibility of the "ipa" client tool >> There are different approaches how to fix this, the generally accepted idea was >> to implement very thin client, which would download and cache metadata from the >> server on the client and generate the CLI from it. We would only need to have >> separate client only plugins, basically implementing interactive_promt_callback >> from existing server side plugins. >> >> Tickets: https://fedorahosted.org/freeipa/ticket/4739, >> https://fedorahosted.org/freeipa/ticket/4768 >> >> >> Now, question is what we can do in 4.2. I do not think we can manage to rewrite >> "ipa" command in the thin client, but we should do at least some portion of 1) >> and 3). >> >> I could not decipher that from our Devconf.cz notes. To me, the simplest way of >> fixing forward compatibility seems to be following steps (besides not making >> API backwards incompatible - i.e. what we do already): >> >> - keep sending API version from client to server >> - server should not refuse newer API versions >> - only raise error when an unknown option or unknown command is used > > This is not sufficient, older 3.3 and 4.x servers can't be changed and > we MUST be compatible with those. > Basically the plan MUST work with already released servers, this is a > constraint that cannot be releaxed, please work within this limitations. Correct. I see 2 approaches here: a) Thin client, which simply downloads metadata from the (old) server and won't use unsupported commands/parameters b) Not-so-thin client that knows the minimal API versions of commands/parameters (can be annotated in the code), that would ping the server first to identify it's version, validate that the chosen set of commands/parameters is supported on that server and then send the commands with that version. 
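A rough sketch of what approach b) could look like from a client script's point of view -- ask the server what it speaks, then decide what to send. The ipalib calls are as in 4.x; parsing the ping summary and the 2.109 threshold are illustrative assumptions, not a stable interface:

import re
from ipalib import api

api.bootstrap(context='cli')      # assumes a configured client and a TGT
api.finalize()
api.Backend.rpcclient.connect()

# Ask the server what API version it implements; ping's summary currently
# ends with something like "API version 2.109".
summary = api.Command['ping']()['summary']
match = re.search(r'API version (\d+)\.(\d+)', summary)
server_api = tuple(int(p) for p in match.groups()) if match else (2, 0)

# Per-call decision: only use an option or command if the server is new enough.
NEEDED = (2, 109)    # illustrative threshold for some hypothetical new option
if server_api >= NEEDED:
    print('server speaks %d.%d: safe to send the newer form of the call' % server_api)
else:
    print('server speaks %d.%d: falling back to the baseline call' % server_api)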
From ssorce at redhat.com Fri Mar 20 13:58:56 2015 From: ssorce at redhat.com (Simo Sorce) Date: Fri, 20 Mar 2015 09:58:56 -0400 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <550C22C4.2000908@redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C22C4.2000908@redhat.com> Message-ID: <1426859936.2981.146.camel@willson.usersys.redhat.com> On Fri, 2015-03-20 at 14:38 +0100, Martin Kosek wrote: > On 03/20/2015 02:19 PM, Simo Sorce wrote: > > On Fri, 2015-03-20 at 14:13 +0100, Martin Kosek wrote: > >> Hi guys, > >> > >> I would like to resurrect the discussion we had during DevConf.cz time, about > >> API compatibility in the FreeIPA server. > >> > >> So right now, we maintain the backward compatibility, old clients can talk to > >> newer servers. Forward compatibility is not maintained. Unfortunately, this is > >> not very helpful in real deployments, where the server will often be some > >> RHEL/CentOS system and the client may be the newest Fedora - with newer API > >> than the server. This is the toughest part we need to solve. > >> > >> There 3 main areas we wanted to attack with respect to compatibility: > >> > >> 1) API publishing and/or API browser > >> This is mostly documentation/interactive browser to see the supported API of > >> the server. It should not be difficult, it would just consume the metadata > >> already generated by the server. > >> > >> Ticket: https://fedorahosted.org/freeipa/ticket/3129 > >> > >> 2) Forward compatibility of the direct API consumers > >> Until now, to keep newer clients working against older server, we are using the > >> following trick in the ipa-client-install: > >> > >> https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install#n1649 > >> > >> It mostly works, one just needs to know the minimal version that needs to be > >> supported. It would be more user friendly, however, if this check is done on > >> the server automatically, without user having to research it. This applies both > >> for ipalib python lib consumer and for direct JSON-RPC consumers. > >> > >> Ticket: https://fedorahosted.org/freeipa/ticket/4739 > >> > >> 3) Forward compatibility of the "ipa" client tool > >> There are different approaches how to fix this, the generally accepted idea was > >> to implement very thin client, which would download and cache metadata from the > >> server on the client and generate the CLI from it. We would only need to have > >> separate client only plugins, basically implementing interactive_promt_callback > >> from existing server side plugins. > >> > >> Tickets: https://fedorahosted.org/freeipa/ticket/4739, > >> https://fedorahosted.org/freeipa/ticket/4768 > >> > >> > >> Now, question is what we can do in 4.2. I do not think we can manage to rewrite > >> "ipa" command in the thin client, but we should do at least some portion of 1) > >> and 3). > >> > >> I could not decipher that from our Devconf.cz notes. To me, the simplest way of > >> fixing forward compatibility seems to be following steps (besides not making > >> API backwards incompatible - i.e. what we do already): > >> > >> - keep sending API version from client to server > >> - server should not refuse newer API versions > >> - only raise error when an unknown option or unknown command is used > > > > This is not sufficient, older 3.3 and 4.x servers can't be changed and > > we MUST be compatible with those. 
> > Basically the plan MUST work with already released servers, this is a > > constraint that cannot be releaxed, please work within this limitations. > > Correct. I see 2 approaches here: > > a) Thin client, which simply downloads metadata from the (old) server and won't > use unsupported commands/parameters > b) Not-so-thin client that knows the minimal API versions of > commands/parameters (can be annotated in the code), that would ping the server > first to identify it's version, validate that the chosen set of > commands/parameters is supported on that server and then send the commands with > that version. If we have a recognizable error the client can take an optimistic approach, send the command normally, if it gets an error that the server does not understand it, it checks the version in the reply and falls back to an older "baseline" version of the command (if possible) or bails out with an error. Simo. From pspacek at redhat.com Fri Mar 20 14:30:24 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 20 Mar 2015 15:30:24 +0100 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <1426857556.2981.144.camel@willson.usersys.redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> Message-ID: <550C2F00.2040004@redhat.com> On 20.3.2015 14:19, Simo Sorce wrote: > This is not sufficient, older 3.3 and 4.x servers can't be changed and > we MUST be compatible with those. > Basically the plan MUST work with already released servers, this is a > constraint that cannot be releaxed, please work within this limitations. Currently new clients do not work with older servers, right? Maybe we should do one more (last!) release like that and do a big cut after that. It would make the design so much easier if the new (supposedly thin) client does not need to support ancient servers which had only 'fat' clients. -- Petr^2 Spacek From npmccallum at redhat.com Fri Mar 20 14:51:12 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Fri, 20 Mar 2015 10:51:12 -0400 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <1426859936.2981.146.camel@willson.usersys.redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C22C4.2000908@redhat.com> <1426859936.2981.146.camel@willson.usersys.redhat.com> Message-ID: <1426863072.2504.8.camel@redhat.com> On Fri, 2015-03-20 at 09:58 -0400, Simo Sorce wrote: > On Fri, 2015-03-20 at 14:38 +0100, Martin Kosek wrote: > > On 03/20/2015 02:19 PM, Simo Sorce wrote: > > > On Fri, 2015-03-20 at 14:13 +0100, Martin Kosek wrote: > > > > Hi guys, > > > > > > > > I would like to resurrect the discussion we had during > > > > DevConf.cz time, about > > > > API compatibility in the FreeIPA server. > > > > > > > > So right now, we maintain the backward compatibility, old > > > > clients can talk to > > > > newer servers. Forward compatibility is not maintained. > > > > Unfortunately, this is > > > > not very helpful in real deployments, where the server will > > > > often be some > > > > RHEL/CentOS system and the client may be the newest Fedora - > > > > with newer API > > > > than the server. This is the toughest part we need to solve. > > > > > > > > There 3 main areas we wanted to attack with respect to > > > > compatibility: > > > > > > > > 1) API publishing and/or API browser > > > > This is mostly documentation/interactive browser to see the > > > > supported API of > > > > the server. 
It should not be difficult, it would just consume > > > > the metadata > > > > already generated by the server. > > > > > > > > Ticket: https://fedorahosted.org/freeipa/ticket/3129 > > > > > > > > 2) Forward compatibility of the direct API consumers > > > > Until now, to keep newer clients working against older server, > > > > we are using the > > > > following trick in the ipa-client-install: > > > > > > > > https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install# > > > > n1649 > > > > > > > > It mostly works, one just needs to know the minimal version > > > > that needs to be > > > > supported. It would be more user friendly, however, if this > > > > check is done on > > > > the server automatically, without user having to research it. > > > > This applies both > > > > for ipalib python lib consumer and for direct JSON-RPC > > > > consumers. > > > > > > > > Ticket: https://fedorahosted.org/freeipa/ticket/4739 > > > > > > > > 3) Forward compatibility of the "ipa" client tool > > > > There are different approaches how to fix this, the generally > > > > accepted idea was > > > > to implement very thin client, which would download and cache > > > > metadata from the > > > > server on the client and generate the CLI from it. We would > > > > only need to have > > > > separate client only plugins, basically implementing > > > > interactive_promt_callback > > > > from existing server side plugins. > > > > > > > > Tickets: https://fedorahosted.org/freeipa/ticket/4739, > > > > https://fedorahosted.org/freeipa/ticket/4768 > > > > > > > > > > > > Now, question is what we can do in 4.2. I do not think we can > > > > manage to rewrite > > > > "ipa" command in the thin client, but we should do at least > > > > some portion of 1) > > > > and 3). > > > > > > > > I could not decipher that from our Devconf.cz notes. To me, > > > > the simplest way of > > > > fixing forward compatibility seems to be following steps > > > > (besides not making > > > > API backwards incompatible - i.e. what we do already): > > > > > > > > - keep sending API version from client to server > > > > - server should not refuse newer API versions > > > > - only raise error when an unknown option or unknown command > > > > is used > > > > > > This is not sufficient, older 3.3 and 4.x servers can't be > > > changed and we MUST be compatible with those. > > > Basically the plan MUST work with already released servers, this > > > is a constraint that cannot be releaxed, please work within this > > > limitations. > > > > Correct. I see 2 approaches here: > > > > a) Thin client, which simply downloads metadata from the (old) > > server and won't > > use unsupported commands/parameters > > b) Not-so-thin client that knows the minimal API versions of > > commands/parameters (can be annotated in the code), that would > > ping the server > > first to identify it's version, validate that the chosen set of > > commands/parameters is supported on that server and then send the > > commands with > > that version. > > If we have a recognizable error the client can take an optimistic > approach, send the command normally, if it gets an error that the > server does not understand it, it checks the version in the reply > and falls back to an older "baseline" version of the command (if > possible) or bails out with an error. My understanding was that: 1. We already publish all the information necessary to implement a thin client, and have for some time. 2. 
Thus, the thin client would work on both new and old versions since it just simply translates from user input into JSON/XML. 3. Only plugins with specific client behavior would need to be ported to the thin client. A prime example of this is otptoken-add-yubikey. My preference is solidly for implementing the thin client first. Once we have decoupled the client from the current plugin framework, server- side changes can be made in isolation. This decoupling is the move that is essentially necessary to provide proper API versioning. And if this can't land for 4.2, land it in the next release. I'd rather do API-stability correctly and a release later than rushed with compromises. We have to live with this forever. Nathaniel From pspacek at redhat.com Fri Mar 20 15:16:05 2015 From: pspacek at redhat.com (Petr Spacek) Date: Fri, 20 Mar 2015 16:16:05 +0100 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <1426863072.2504.8.camel@redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C22C4.2000908@redhat.com> <1426859936.2981.146.camel@willson.usersys.redhat.com> <1426863072.2504.8.camel@redhat.com> Message-ID: <550C39B5.5030006@redhat.com> On 20.3.2015 15:51, Nathaniel McCallum wrote: > On Fri, 2015-03-20 at 09:58 -0400, Simo Sorce wrote: >> On Fri, 2015-03-20 at 14:38 +0100, Martin Kosek wrote: >>> On 03/20/2015 02:19 PM, Simo Sorce wrote: >>>> On Fri, 2015-03-20 at 14:13 +0100, Martin Kosek wrote: >>>>> Hi guys, >>>>> >>>>> I would like to resurrect the discussion we had during >>>>> DevConf.cz time, about >>>>> API compatibility in the FreeIPA server. >>>>> >>>>> So right now, we maintain the backward compatibility, old >>>>> clients can talk to >>>>> newer servers. Forward compatibility is not maintained. >>>>> Unfortunately, this is >>>>> not very helpful in real deployments, where the server will >>>>> often be some >>>>> RHEL/CentOS system and the client may be the newest Fedora - >>>>> with newer API >>>>> than the server. This is the toughest part we need to solve. >>>>> >>>>> There 3 main areas we wanted to attack with respect to >>>>> compatibility: >>>>> >>>>> 1) API publishing and/or API browser >>>>> This is mostly documentation/interactive browser to see the >>>>> supported API of >>>>> the server. It should not be difficult, it would just consume >>>>> the metadata >>>>> already generated by the server. >>>>> >>>>> Ticket: https://fedorahosted.org/freeipa/ticket/3129 >>>>> >>>>> 2) Forward compatibility of the direct API consumers >>>>> Until now, to keep newer clients working against older server, >>>>> we are using the >>>>> following trick in the ipa-client-install: >>>>> >>>>> https://git.fedorahosted.org/cgit/freeipa.git/tree/ipa-client/ipa-install/ipa-client-install# >>>>> n1649 >>>>> >>>>> It mostly works, one just needs to know the minimal version >>>>> that needs to be >>>>> supported. It would be more user friendly, however, if this >>>>> check is done on >>>>> the server automatically, without user having to research it. >>>>> This applies both >>>>> for ipalib python lib consumer and for direct JSON-RPC >>>>> consumers. >>>>> >>>>> Ticket: https://fedorahosted.org/freeipa/ticket/4739 >>>>> >>>>> 3) Forward compatibility of the "ipa" client tool >>>>> There are different approaches how to fix this, the generally >>>>> accepted idea was >>>>> to implement very thin client, which would download and cache >>>>> metadata from the >>>>> server on the client and generate the CLI from it. 
We would >>>>> only need to have >>>>> separate client only plugins, basically implementing >>>>> interactive_promt_callback >>>>> from existing server side plugins. >>>>> >>>>> Tickets: https://fedorahosted.org/freeipa/ticket/4739, >>>>> https://fedorahosted.org/freeipa/ticket/4768 >>>>> >>>>> >>>>> Now, question is what we can do in 4.2. I do not think we can >>>>> manage to rewrite >>>>> "ipa" command in the thin client, but we should do at least >>>>> some portion of 1) >>>>> and 3). >>>>> >>>>> I could not decipher that from our Devconf.cz notes. To me, >>>>> the simplest way of >>>>> fixing forward compatibility seems to be following steps >>>>> (besides not making >>>>> API backwards incompatible - i.e. what we do already): >>>>> >>>>> - keep sending API version from client to server >>>>> - server should not refuse newer API versions >>>>> - only raise error when an unknown option or unknown command >>>>> is used >>>> >>>> This is not sufficient, older 3.3 and 4.x servers can't be >>>> changed and we MUST be compatible with those. >>>> Basically the plan MUST work with already released servers, this >>>> is a constraint that cannot be releaxed, please work within this >>>> limitations. >>> >>> Correct. I see 2 approaches here: >>> >>> a) Thin client, which simply downloads metadata from the (old) >>> server and won't >>> use unsupported commands/parameters >>> b) Not-so-thin client that knows the minimal API versions of >>> commands/parameters (can be annotated in the code), that would >>> ping the server >>> first to identify it's version, validate that the chosen set of >>> commands/parameters is supported on that server and then send the >>> commands with >>> that version. >> >> If we have a recognizable error the client can take an optimistic >> approach, send the command normally, if it gets an error that the >> server does not understand it, it checks the version in the reply >> and falls back to an older "baseline" version of the command (if >> possible) or bails out with an error. > > My understanding was that: > > 1. We already publish all the information necessary to implement a > thin client, and have for some time. We certainly have *some* data but real thin client will most likely require some changes. Some information like return types and so on are missing. > 2. Thus, the thin client would work on both new and old versions since > it just simply translates from user input into JSON/XML. > > 3. Only plugins with specific client behavior would need to be ported > to the thin client. A prime example of this is otptoken-add-yubikey. > > My preference is solidly for implementing the thin client first. Once > we have decoupled the client from the current plugin framework, server- > side changes can be made in isolation. This decoupling is the move > that is essentially necessary to provide proper API versioning. And if > this can't land for 4.2, land it in the next release. I'd rather do > API-stability correctly and a release later than rushed with > compromises. We have to live with this forever. 
+ all votes I have :-) -- Petr^2 Spacek From simo at redhat.com Fri Mar 20 15:35:46 2015 From: simo at redhat.com (Simo Sorce) Date: Fri, 20 Mar 2015 11:35:46 -0400 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <550C2F00.2040004@redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C2F00.2040004@redhat.com> Message-ID: <1426865746.2981.148.camel@willson.usersys.redhat.com> On Fri, 2015-03-20 at 15:30 +0100, Petr Spacek wrote: > On 20.3.2015 14:19, Simo Sorce wrote: > > This is not sufficient, older 3.3 and 4.x servers can't be changed and > > we MUST be compatible with those. > > Basically the plan MUST work with already released servers, this is a > > constraint that cannot be releaxed, please work within this limitations. > > Currently new clients do not work with older servers, right? > > Maybe we should do one more (last!) release like that and do a big cut after > that. It would make the design so much easier if the new (supposedly thin) > client does not need to support ancient servers which had only 'fat' clients. People are using 3.3 and 4.1 now, we want to support them too, whatever it takes. For future clients we can do whatever fancy automated thin client and what not, and in time drop support for very old releases, but 3.3 and < 4.2 are going to be around for quite a while and we need to support those server in the clients. Simo. -- Simo Sorce * Red Hat, Inc * New York From rcritten at redhat.com Fri Mar 20 15:46:12 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 20 Mar 2015 11:46:12 -0400 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <1426865746.2981.148.camel@willson.usersys.redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C2F00.2040004@redhat.com> <1426865746.2981.148.camel@willson.usersys.redhat.com> Message-ID: <550C40C4.1060300@redhat.com> Simo Sorce wrote: > On Fri, 2015-03-20 at 15:30 +0100, Petr Spacek wrote: >> On 20.3.2015 14:19, Simo Sorce wrote: >>> This is not sufficient, older 3.3 and 4.x servers can't be changed and >>> we MUST be compatible with those. >>> Basically the plan MUST work with already released servers, this is a >>> constraint that cannot be releaxed, please work within this limitations. >> >> Currently new clients do not work with older servers, right? >> >> Maybe we should do one more (last!) release like that and do a big cut after >> that. It would make the design so much easier if the new (supposedly thin) >> client does not need to support ancient servers which had only 'fat' clients. > > People are using 3.3 and 4.1 now, we want to support them too, whatever > it takes. > For future clients we can do whatever fancy automated thin client and > what not, and in time drop support for very old releases, but 3.3 and < > 4.2 are going to be around for quite a while and we need to support > those server in the clients. I think 3.0 should be the baseline. It will around another few years at least. 
rob From pvoborni at redhat.com Fri Mar 20 16:00:15 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 20 Mar 2015 17:00:15 +0100 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <550C39B5.5030006@redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C22C4.2000908@redhat.com> <1426859936.2981.146.camel@willson.usersys.redhat.com> <1426863072.2504.8.camel@redhat.com> <550C39B5.5030006@redhat.com> Message-ID: <550C440F.9030907@redhat.com> On 03/20/2015 04:16 PM, Petr Spacek wrote: > On 20.3.2015 15:51, Nathaniel McCallum wrote: >> On Fri, 2015-03-20 at 09:58 -0400, Simo Sorce wrote: >>> On Fri, 2015-03-20 at 14:38 +0100, Martin Kosek wrote: >>>> >>>> Correct. I see 2 approaches here: >>>> >>>> a) Thin client, which simply downloads metadata from the (old) >>>> server and won't >>>> use unsupported commands/parameters >>>> b) Not-so-thin client that knows the minimal API versions of >>>> commands/parameters (can be annotated in the code), that would >>>> ping the server >>>> first to identify it's version, validate that the chosen set of >>>> commands/parameters is supported on that server and then send the >>>> commands with >>>> that version. >>> >>> If we have a recognizable error the client can take an optimistic >>> approach, send the command normally, if it gets an error that the >>> server does not understand it, it checks the version in the reply >>> and falls back to an older "baseline" version of the command (if >>> possible) or bails out with an error. >> >> My understanding was that: >> >> 1. We already publish all the information necessary to implement a >> thin client, and have for some time. > We certainly have *some* data but real thin client will most likely require > some changes. Some information like return types and so on are missing. > >> 2. Thus, the thin client would work on both new and old versions since >> it just simply translates from user input into JSON/XML. >> >> 3. Only plugins with specific client behavior would need to be ported >> to the thin client. A prime example of this is otptoken-add-yubikey. >> >> My preference is solidly for implementing the thin client first. Once >> we have decoupled the client from the current plugin framework, server- >> side changes can be made in isolation. This decoupling is the move >> that is essentially necessary to provide proper API versioning. And if >> this can't land for 4.2, land it in the next release. I'd rather do >> API-stability correctly and a release later than rushed with >> compromises. We have to live with this forever. > + all votes I have :-) > +1 -- Petr Vobornik From mkosek at redhat.com Fri Mar 20 16:48:16 2015 From: mkosek at redhat.com (Martin Kosek) Date: Fri, 20 Mar 2015 17:48:16 +0100 Subject: [Freeipa-devel] Designing better API compatibility In-Reply-To: <550C40C4.1060300@redhat.com> References: <550C1CFB.3020402@redhat.com> <1426857556.2981.144.camel@willson.usersys.redhat.com> <550C2F00.2040004@redhat.com> <1426865746.2981.148.camel@willson.usersys.redhat.com> <550C40C4.1060300@redhat.com> Message-ID: <550C4F50.7000500@redhat.com> On 03/20/2015 04:46 PM, Rob Crittenden wrote: > Simo Sorce wrote: >> On Fri, 2015-03-20 at 15:30 +0100, Petr Spacek wrote: >>> On 20.3.2015 14:19, Simo Sorce wrote: >>>> This is not sufficient, older 3.3 and 4.x servers can't be changed and >>>> we MUST be compatible with those. 
>>>> Basically the plan MUST work with already released servers, this is a >>>> constraint that cannot be releaxed, please work within this limitations. >>> >>> Currently new clients do not work with older servers, right? >>> >>> Maybe we should do one more (last!) release like that and do a big cut after >>> that. It would make the design so much easier if the new (supposedly thin) >>> client does not need to support ancient servers which had only 'fat' clients. >> >> People are using 3.3 and 4.1 now, we want to support them too, whatever >> it takes. >> For future clients we can do whatever fancy automated thin client and >> what not, and in time drop support for very old releases, but 3.3 and < >> 4.2 are going to be around for quite a while and we need to support >> those server in the clients. > > I think 3.0 should be the baseline. It will around another few years at > least. > > rob > Yes, 3.0 is base line. It is the version shipped in RHEL/CentOS-6 and until that dies*, we should be friendly with it. * Looking at https://access.redhat.com/support/policy/updates/errata, this looks like year ~2020 :-) From alanwevans at gmail.com Fri Mar 20 22:22:18 2015 From: alanwevans at gmail.com (Alan Evans) Date: Fri, 20 Mar 2015 16:22:18 -0600 Subject: [Freeipa-devel] OT - git help Message-ID: I have been working on a 3.3.3 install on CentOS that I've made some changes to. I have a patch showing the changes I've made to 3.3.3. I would like to submit them as a patch to freeipa (presumably for master) but my git-fu is not strong. How can I bring the patch forward to master? Sure I could work through it manually I suppose. Git should have all the intermediate steps to get from 3.3.3 -> master so I figure there has to be a git way to do this. Throughts? Regards, -Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From pspacek at redhat.com Mon Mar 23 07:11:13 2015 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 23 Mar 2015 08:11:13 +0100 Subject: [Freeipa-devel] OT - git help In-Reply-To: References: Message-ID: <550FBC91.3030707@redhat.com> On 20.3.2015 23:22, Alan Evans wrote: > I have been working on a 3.3.3 install on CentOS that I've made some > changes to. I have a patch showing the changes I've made to 3.3.3. I > would like to submit them as a patch to freeipa (presumably for master) but > my git-fu is not strong. > > How can I bring the patch forward to master? Sure I could work through it > manually I suppose. Git should have all the intermediate steps to get from > 3.3.3 -> master so I figure there has to be a git way to do this. > > Throughts? You are probably looking for command "git rebase". For further information please see: http://mettadore.com/2011/05/06/a-simple-git-rebase-workflow-explained/ http://git-scm.com/book/be/v2/Git-Branching-Rebasing $ man git rebase Also, you might find this link handy: http://www.freeipa.org/page/Contribute/Code Please let us know if you encounter any problems. Thank you for your time! -- Petr^2 Spacek From jcholast at redhat.com Mon Mar 23 08:03:53 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Mar 2015 09:03:53 +0100 Subject: [Freeipa-devel] New installer PoC Message-ID: <550FC8E9.4020502@redhat.com> Hi, the attached patch contains a new PoC installer for httpd. Design goals: 1) Make code related to any particular configuration change co-located, be it install/uninstall/upgrade. 2) Get rid of code duplicates. 3) Use the same code path for install and upgrade. 
4) Provide metadata for parameters from which option parsers etc. can be generated. 5) Make installers plugable. This is not really apparent from the patch, since it only implements installer for a single component, but I plan to make the whole thing extensible by plugins. Honza -- Jan Cholasta -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-install-New-installer-PoC.patch Type: text/x-patch Size: 53955 bytes Desc: not available URL: From tbabej at redhat.com Mon Mar 23 08:48:11 2015 From: tbabej at redhat.com (Tomas Babej) Date: Mon, 23 Mar 2015 09:48:11 +0100 Subject: [Freeipa-devel] [PATCH 0208] Remove --test option from upgrade In-Reply-To: <550AB224.2040709@redhat.com> References: <54F9DD12.2050008@redhat.com> <5501AC6C.8000603@redhat.com> <55081918.3000307@redhat.com> <550AB224.2040709@redhat.com> Message-ID: <550FD34B.30405@redhat.com> On 03/19/2015 12:25 PM, David Kupka wrote: > On 03/17/2015 01:07 PM, Martin Basti wrote: >> On 12/03/15 16:10, David Kupka wrote: >>> On 03/06/2015 06:00 PM, Martin Basti wrote: >>>> Upgrade plugins which modify LDAP data directly should not be executed >>>> in --test mode. >>>> >>>> This patch is a workaround, to ensure update with --test option >>>> will not >>>> modify any LDAP data. >>>> >>>> https://fedorahosted.org/freeipa/ticket/3448 >>>> >>>> Patch attached. >>>> >>>> >>>> >>> >>> Ideally we want to fix all plugins to dry-run the upgrade not just >>> skip when there is '--test' option but it is a good first step. >>> Works for me, ACK. >>> >> >> We had long discussion, and we decided to remove this option from >> upgrade. >> >> Reasons: >> * users are not supposed to use this option to test if upgrade will be >> successful, it can not guarantee it. >> * option is not used for developing, as it can not catch all issues with >> upgrade, using snapshots is better >> >> Attached patch removes the option. >> > > Works for me, ACK. Pushed to master: c3d441ae0314bd3669a75b3f33d544d9ca09d197 Tomas From mbabinsk at redhat.com Mon Mar 23 08:52:23 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 23 Mar 2015 09:52:23 +0100 Subject: [Freeipa-devel] [PATCH 0022] migrate-ds: proper treatment of unsuccessful migrations In-Reply-To: <55094B54.7070807@redhat.com> References: <55094B54.7070807@redhat.com> Message-ID: <550FD447.9050707@redhat.com> On 03/18/2015 10:54 AM, Martin Babinsky wrote: > This is a proper fix to both > https://fedorahosted.org/freeipa/ticket/4846 and > https://fedorahosted.org/freeipa/ticket/4952. > > To do this I had to throw out some unused parameters from > _update_default_group function (particularly the pesky pkey causing bug > #4846 to pop out). > > I did not test the patch because my VMs are behaving strangely today and > I couldn't connect to them. So please test this patch thoroughly. > > > Silly me did not realize that textui.print_entry1 actually deletes the dict values as it prints them so the error is always raised when applying patch. Attaching updated version that should work as expected. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbabinsk-0022-2-migrate-ds-print-out-failed-attempts-when-no-users-g.patch Type: text/x-patch Size: 3266 bytes Desc: not available URL: From jcholast at redhat.com Mon Mar 23 09:10:51 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Mar 2015 10:10:51 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <550C1303.7090402@seznam.cz> References: <550C1303.7090402@seznam.cz> Message-ID: <550FD89B.8050506@redhat.com> Hi, Dne 20.3.2015 v 13:30 Stanislav L?zni?ka napsal(a): > Hi! > > I went through the last week's thread on Time-Based Policies, discussed > some parts I wasn't very sure about with Martin, and would like to make > a summary of it, followed by some further questions on the topic. The > mail is a bit longer than I thought it would be. Sorry about that. It > seems I am very bad at fast and brief responses. > > So to summarize - the best way to go seems to be to implement both UTC > and client-side local time support. A very nice idea is to store it in > the format (time, timezone). Now if I understand this right, the > timezone here would be the Olson database timezone stored with the > host/hostgroup, possibly overridden by service/servicegroup timezone. It > would help the administrator decide the correct UTC time by adjusting > the time displayed in UI/CLI. The administrator should also be able to > set this timezone in the UI/CLI settings of the HBAC rule (or maybe they > just set it instead of storing timezone info with host/hostgroup etc. > objects?). > > As for the local time - timezone in the tuple (time, timezone) would > only say "Local Time", which can't be found in Olson's and it means the > time record from the tuple should be compared to the client's time > settings (/etc/localtime ?). Let me just do a braindump here: I would think timezone, or rather location (timezone + holidays) should be stored with user objects (but not in group objects, as users can be members of multiple groups, which could cause conflicts). When a user goes on a bussiness trip etc., an administrator/manager would need to change their location accordingly to reflect that. For hosts, timezone information is already available on the host (/etc/localtime). I'm not sure if there is a 1-1 mapping between timezone and holidays, but if there is not, we probably need to store location with host objects as well. If we do that, maybe we can make SSSD update local timezone on the host using the information stored with the host object to allow roaming (think user laptops). It might be a good idea to store optional owner information with hosts, so that when the owner moves to a different location, the host is assumed to be at that location as well (again think user laptops). Given the above, HBAC rules could contain (time, anchor), where anchor is "UTC", "user local time" or "host local time". > > What would then be left to decide is whether to use the iCalendar time > format as internal, or use some other own created format. Such a format > should not only be suitable for reoccurring events, but should also be > able to handle exceptions in those events, such as holidays. It should > also be easy to import iCalendar events to this format to support events > from other sources. > > The iCalendar format is rather heavy and if we wanted to use it as a > whole, it may be necessary to include third party libraries in the SSSD > project along with their possible vulnerabilities. 
There is a > possibility of using only part of the iCalendar format that handles > reoccurring events and exceptions. Example from Alexander to be found > here: > https://github.com/libical/libical/blob/master/src/test/icalrecur_test.c. That > may or may not work perfectly well, given the libical library is a > complex thing and we'd be taking just a part of it. > > The other time format option is to simplify/edit the language that was > used in the past (described with regular expressions here: > https://git.fedorahosted.org/cgit/sssd.git/commit/?id=1ce240367a2144500187ccd3c0d32c975d8d346a). > There was probably a reason to step out of the language (the possibility > of parsing those iCalendar events, I guess) and it may or may not be > possible to fix the language so that the problems with it are dealt with. > > I was also inspired by the language of the time part of Bind Rules from > 389 Directory Server and created a possible language that might be > usable. I don't have yet the regular expression describing the language > in a human-friendly form but I got a finite automaton picture that > describes it (see https://www.debuggex.com/i/_Pg9KOp2TgXuus8K.png). > Basically, you have the parentheses form of rules from Bind Rules, > separated with either "and/or" or "except" keyword. "except" should only > appear once. You would set the time ranges with the "-" operator, e.g. > timeofday=0800-1545 (this operator is not shown in the picture). The > parenthesis could also be nested but that can't of course be described > by a finite automaton. The "except" keyword should still appear only > once, though. If any time-describing part is missing, it means that > access is allowed (e.g. (timeofday=0800-1600 dayofweek=Mon-Fri) allows > access from 8:00 to 16:00 Monday through Friday any week in the month, > any month in the year). This language should be able to do everything > the old language did, supports reoccurring events nicely and also allows > exceptions, so importing iCalendar events might be possible. However, > its downfall is that it's not very space-efficient and absolute time > ranges may not be so easily be described (e.g. time from 18:00 20.3.2015 > to 23:00 24.12.2015 would be quite a string). > > MY QUESTIONS: Do we store the timezone information with the > host/hostgroup, service/servicegroup object or do we just have the admin > pick the timezone themselves? Which time format would be best for both > FreeIPA and SSSD? > > Thank you very much for you insights! > Standa > Honza -- Jan Cholasta From jcholast at redhat.com Mon Mar 23 10:07:39 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 23 Mar 2015 11:07:39 +0100 Subject: [Freeipa-devel] [PATCH] Password vault In-Reply-To: <55004D5D.6060300@redhat.com> References: <54E1AF55.3060409@redhat.com> <54EBEB55.6010306@redhat.com> <54F96B22.9050507@redhat.com> <55004D5D.6060300@redhat.com> Message-ID: <550FE5EB.1070606@redhat.com> Dne 11.3.2015 v 15:12 Endi Sukma Dewata napsal(a): > Thanks for the review. New patch attached to be applied on top of all > previous patches. Please see comments below. Thanks. I have replied to some of your comments below. > > On 3/6/2015 3:53 PM, Jan Cholasta wrote: >> Patch 353: >> >> 1) Please follow PEP8 in new code. 
>> >> The pep8 tool reports these errors in existing files: >> >> ./ipalib/constants.py:98:80: E501 line too long (84 > 79 characters) >> ./ipalib/plugins/baseldap.py:1527:80: E501 line too long (81 > 79 >> characters) >> ./ipalib/plugins/user.py:915:80: E501 line too long (80 > 79 characters) >> >> as well as many errors in the files this patch adds. > > For some reason pylint keeps crashing during build so I cannot run it > for all files. I'm fixing the errors that I can see. If you see other > errors please let me know while I'm still trying to figure out the problem. Well, I did not use pylint, but pep8: > > Is there an existing ticket for fixing PEP8 errors? Let's use that for > fixing the errors in the existing code. There is no ticket, but we still follow PEP8 in new code, so please do that. It shouldn't be too hard. > ... > >> 3) The container_vault config option should be renamed to >> container_vaultcontainer, as it is used in the vaultcontainer plugin, >> not the vault plugin. > > It was named container_vault because it defines the DN for of the > subtree that contains all vault-related entries. I moved the base_dn > variable from vaultcontainer object to the vault object for clarity. That does not make much sense to me. Vault objects are contained in their respective vaultcontainer objects, not directly in cn=vaults. > >> 4) The vault object should be child of the vaultcontainer object. >> >> Not only is this correct from the object model perspective, but it would >> also make all the container_id hacks go away. > > It's a bit difficult because it will affect how the container & vault > ID's are represented on the CLI. Yes, but the API should be done right (without hacks) first. You can tune the CLI after that if you want. > > In the design the container ID would be a single value like this: > > $ ipa vault-add /services/server.example.com/HTTP > > And if the vault ID is relative (without initial slash), it will be > appended to the user's private container (i.e. /users//): > > $ ipa vault-add PrivateVault > > The implementation is not complete yet. Currently it accepts this format: > > $ ipa vault-add [--container ] > > and I'm still planning to add this: > > $ ipa vault-add > > If the vault must be a child of vaultcontainer, and the vaultcontainer > must be a child of a vaultcontainer, does it mean the vault ID would > have to be split into separate arguments like this? > > $ ipa vaultcontainer-add services server.example.com HTTP > > If that's the case we'd lose the ability to specify a relative vault ID. Yes, that's the case. But I don't think relative IDs should be a problem, we can do this: $ ipa vaultcontainer-add a b c # absolute $ ipa vaultcontainer-add . c # relative or this: $ ipa vaultcontainer-add '' a b c # absolute $ ipa vaultcontainer-add c # relative or this: $ ipa vaultcontainer-add a b c # absolute $ ipa vaultcontainer-add c --relative # relative or this: $ ipa vaultcontainer-add a b c --absolute # absolute $ ipa vaultcontainer-add c # relative > ... > >> 11) No clever optimizations like this please: >> >> + # vault DN cannot be the container base DN >> + if len(dn) == len(api.Object.vaultcontainer.base_dn): >> + raise ValueError('Invalid vault DN: %s' % dn) >> >> Compare the DNs by value instead. 
> > Actually the DN values have already been compared in the code right > above it: > > # make sure the DN is a vault DN > if not dn.endswith(self.api.Object.vaultcontainer.base_dn): > raise ValueError('Invalid vault DN: %s' % dn) > > This code confirms that the incoming vault DN is within the vault > subtree. After that, the DN length comparison above is just to make sure > the incoming vault DN is not the root of the vault subtree itself. It > doesn't need to compare the values again. I see. You can combine both of the checks into one: if not dn.endswith(self.api.Object.vaultcontainer.base_dn, 1): raise ValueError(...) >... > >> 14) Use File instead of Str for input files: >> >> + Str('in?', >> + cli_name='in', >> + doc=_('File containing data to archive'), >> + ), > > The File type doesn't work with binary files because it tries to decode > the content. OK. I know File is broken and plan to fix it in the future. Just add a comment saying that it should be a File, but it's broken, OK? > ... > >> 16) You do way too much stuff in vault_add.forward(). Only code that >> must be done on the client needs to be there, i.e. handling of the >> "data", "text" and "in" options. >> >> The vault_archive call must be in vault_add.execute(), otherwise a) we >> will be making 2 RPC calls from the client and b) it won't be called at >> all when api.env.in_server is True. > > This is done by design. The vault_add.forward() generates the salt and > the keys. The vault_archive.forward() will encrypt the data. These > operations have to be done on the client side to secure the transport of > the data from the client through the server and finally to KRA. This > mechanism prevents the server from looking at the unencrypted data. OK, but that does not justify that it's broken in server-side API. It can and should be done so that it works the same way on both client and server. I think the best solution would be to split the command into two commands, server-side vault_archive_raw to archive already encrypted data, and client-side vault_archive to encrypt data and archive them with vault_archive_raw in its .execute(). Same thing for vault_retrieve. BTW, I also think it would be better if there were 2 separate sets of commands for binary and textual data (vault_{archive,retrieve}_{data,text}) rather than trying to handle everything in vault_{archive,retrieve}. > > The add & archive combination was added for convenience, not for > optimization. This way you would be able to archive data into a new > vault using a single command. Without this, you'd have to execute two > separate commands: add & archive, which will result in 2 RPC calls anyway. I think I would prefer if it was separate, as that would be consistent with other plugins (e.g. for objects with members, we don't allow adding members directly in -add, you have to use -add-member after -add). > >> 17) Why are vaultcontainer objects automatically created in vault_add? >> >> If you have to automatically create them, you also have to automatically >> delete them when the command fails. But that's a hassle, so I would just >> not create them automatically. > > The vaultcontainer is created automatically to provide a private > container (i.e. /users//) for the each user if they need it. > Without this, the admin will have to create the container manually first > before a user can create a vault, which would be an unreasonable > requirement. 
If the vault_add fails, it's ok to leave the private > container intact because it can be used again if the user tries to > create a vault again later and it will not affect other users. If the > user is deleted, the private container will be deleted too. > > The code was fixed to create the container only if they are adding a > vault/vault container into the user's private container. If they are > adding into other container, the container must already exist. This sounds like a job fit for the managed entries plugin. Have you tried using it for this? > >> 18) Why are vaultcontainer objects automatically created in vault_find? >> >> This is just plain wrong and has to be removed, now. > > The code was supposed to create the user's private container like in > #17, but the behavior has been changed. If the container being searched > is the user's private container, it will ignore the container not found > error and return zero results as if the private container already > exists. For other containers the container must already exist. For this > to work I had to add a handle_not_found() into LDAPSearch so the plugins > can customize the proper search response for the missing private container. No ad-hoc refactoring please. If you want to refactor anything, it should be first designed properly and put in a separate patch. Anyway, what should actually happen here is that if parent object is not found, its object plugin's handle_not_found is called, i.e. something like this: parent = self.obj.parent_object if parent: self.api.Object[parent].handle_not_found(*args[:-1]) else: raise errors.NotFound( reason=self.obj.container_not_found_msg % { 'container': self.obj.container_dn, } ) > ... > >> 21) vault_archive is not a retrieve operation, it should be based on >> LDAPUpdate instead of LDAPRetrieve. Or Command actually, since it does >> not do anything with LDAP. The same applies to vault_retrieve. > > The vault_archive does not actually modify the LDAP entry because it > stores the data in KRA. It is actually an LDAPRetrieve operation because > it needs to get the vault info before it can perform the archival > operation. Same thing with vault_retrieve. It is not a LDAPRetrieve operation, because it has different semantics. Please use Command as base class and either use ldap2 for direct LDAP or call vault_show instead of hacking around LDAPRetrieve. > >> 22) vault_archive will break with binary data that is not UTF-8 encoded >> text. >> >> This is where it occurs: >> >> + vault_data[u'data'] = unicode(data) >> >> Generally, don't use unicode() on str values and str() on unicode values >> directly, always use .decode() and .encode(). > > It needs to be a Unicode because json.dumps() doesn't work with binary > data. Fixed by adding base-64 encoding. If something str needs to be unicode, you should use .decode() to explicitly specify the encoding, instead of relying on unicode() to pick the correct one. Anyway, I think a better solution than base64 would be to use the "raw_unicode_escape" encoding: if data: data = data.decode('raw_unicode_escape') elif text: data = text elif input_file: with open(input_file, 'rb') as f: data = f.read() data = data.decode('raw_unicode_escape') else: data = u'' ... vault_data[u'data'] = data > ... 
> >> 26) Instead of the delete_entry refactoring in baseldap and >> vaultcontainer_add, you can put this in vaultcontainer_add's >> pre_callback: >> >> try: >> ldap.get_entries(dn, scope=ldap.SCOPE_ONELEVEL, attrs_list=[]) >> except errors.NotFound: >> pass >> else: >> if not options.get('force', False): >> raise errors.NotAllowedOnNonLeaf() > > I suppose you meant vaultcontainer_del. Fixed, but this will generate an > additional search for each delete. > > I'm leaving the changes baseldap because it may be useful later and it > doesn't change the behavior of the current code. Again, no ad-hoc refactoring please. > ... > >> 28) The vault and vaultcontainer plugins seem to be pretty similar, I >> think it would make sense to put common stuff in a base class and >> inherit vault and vaultcontainer from that. > > I plan to refactor the common code later. Right now the focus is to get > the functionality working correctly first. Please do it now, "later" usually means "never". It shouldn't be too hard and I can give you a hand with it if you want. -- Jan Cholasta From mbabinsk at redhat.com Mon Mar 23 11:48:02 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 23 Mar 2015 12:48:02 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <1426611638.2981.106.camel@willson.usersys.redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <1426611638.2981.106.camel@willson.usersys.redhat.com> Message-ID: <550FFD72.1090301@redhat.com> On 03/17/2015 06:00 PM, Simo Sorce wrote: > On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote: >> On 03/16/2015 12:15 PM, Martin Kosek wrote: >>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: >>>> Attaching the next iteration of patches. >>>> >>>> I have tried my best to reword the ipa-client-install man page bit about the >>>> new option. Any suggestions to further improve it are welcome. >>>> >>>> I have also slightly modified the 'kinit_keytab' function so that in Kerberos >>>> errors are reported for each attempt and the text of the last error is retained >>>> when finally raising exception. >>> >>> The approach looks very good. I think that my only concern with this patch is >>> this part: >>> >>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >>> ... >>> + except krbV.Krb5Error as e: >>> + last_exc = str(e) >>> + root_logger.debug("Attempt %d/%d: failed: %s" >>> + % (attempt, attempts, last_exc)) >>> + time.sleep(1) >>> + >>> + root_logger.debug("Maximum number of attempts (%d) reached" >>> + % attempts) >>> + raise StandardError("Error initializing principal %s: %s" >>> + % (principal, last_exc)) >>> >>> The problem here is that this function will raise the super-generic >>> StandardError instead of the proper with all the context and information about >>> the error that the caller can then process. >>> >>> I think that >>> >>> except krbV.Krb5Error as e: >>> if attempt == max_attempts: >>> log something >>> raise >>> >>> would be better. >>> >> >> Yes that seems reasonable. I'm just thinking whether we should re-raise >> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes >> more sense to me as we would not have to additionally import Krb5Error >> everywhere and it would also make the resulting errors more consistent. 
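For illustration, a retry loop restricted to specific errors could look roughly like the following; which krbV error codes belong in the retryable set is exactly the open question below, so it is left as a parameter here, and apart from the init_creds_keytab() call taken from the patch everything is a made-up sketch:

import time
import krbV

def kinit_keytab(ccache, ktab, princ, retryable, attempts=5):
    for attempt in range(1, attempts + 1):
        try:
            ccache.init_creds_keytab(keytab=ktab, principal=princ)
            return
        except krbV.Krb5Error as e:
            # assuming the numeric error code is the first element of args,
            # as python-krbV usually raises Krb5Error(code, message)
            if e.args[0] not in retryable or attempt == attempts:
                raise
            time.sleep(1)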
>> >> I am thinking about someting like this: >> >> except krbV.Krb5Error as e: >> if attempt == attempts: >> # log that we have reaches maximum number of attempts >> raise KerberosError(minor=str(e)) >> >> What do you think? > > Are you retrying on any error ? > Please do *not* do that, if you retry many times on an error that > indicates the password is wrong you may end up locking an administrative > account. If you want to retry you should do it only for very specific > timeout errors. > > Simo. > > I have taken a look at the logs attached to the original BZ (https://bugzilla.redhat.com/show_bug.cgi?id=1161722). In ipaclient-install.log the kinit error is: "Cannot contact any KDC for realm 'ITW.USPTO.GOV' while getting initial credentials" which can be translated to krbV.KRB5_KDC_UNREACH error. However, krb5kdc.log (http://pastebin.test.redhat.com/271394) reports errors which are seemingly unrelated to the root cause (kinit timing out on getting host TGT). Thus I'm not quite sure which errors should we chceck against in this case, anyone care to advise? These are potential candidates: KRB5KDC_ERR_SVC_UNAVAILABLE, "A service is not available that is required to process the request" KRB5KRB_ERR_RESPONSE_TOO_BIG, "Response too big for UDP, retry with TCP" KRB5_REALM_UNKNOWN, "Cannot find KDC for requested realm" KRB5_KDC_UNREACH, "Cannot contact any KDC for requested realm" -- Martin^3 Babinsky From mbabinsk at redhat.com Mon Mar 23 11:55:23 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 23 Mar 2015 12:55:23 +0100 Subject: [Freeipa-devel] [PATCH 0022] migrate-ds: proper treatment of unsuccessful migrations In-Reply-To: <550FD447.9050707@redhat.com> References: <55094B54.7070807@redhat.com> <550FD447.9050707@redhat.com> Message-ID: <550FFF2B.8010605@redhat.com> On 03/23/2015 09:52 AM, Martin Babinsky wrote: > On 03/18/2015 10:54 AM, Martin Babinsky wrote: >> This is a proper fix to both >> https://fedorahosted.org/freeipa/ticket/4846 and >> https://fedorahosted.org/freeipa/ticket/4952. >> >> To do this I had to throw out some unused parameters from >> _update_default_group function (particularly the pesky pkey causing bug >> #4846 to pop out). >> >> I did not test the patch because my VMs are behaving strangely today and >> I couldn't connect to them. So please test this patch thoroughly. >> >> >> > > Silly me did not realize that textui.print_entry1 actually deletes the > dict values as it prints them so the error is always raised when > applying patch. > > Attaching updated version that should work as expected. > > > Attaching updated patch. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0022-3-migrate-ds-print-out-failed-attempts-when-no-users-g.patch Type: text/x-patch Size: 3260 bytes Desc: not available URL: From pvoborni at redhat.com Mon Mar 23 12:11:08 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Mon, 23 Mar 2015 13:11:08 +0100 Subject: [Freeipa-devel] [PATCH 0022] migrate-ds: proper treatment of unsuccessful migrations In-Reply-To: <550FFF2B.8010605@redhat.com> References: <55094B54.7070807@redhat.com> <550FD447.9050707@redhat.com> <550FFF2B.8010605@redhat.com> Message-ID: <551002DC.4040708@redhat.com> On 03/23/2015 12:55 PM, Martin Babinsky wrote: > On 03/23/2015 09:52 AM, Martin Babinsky wrote: >> On 03/18/2015 10:54 AM, Martin Babinsky wrote: >>> This is a proper fix to both >>> https://fedorahosted.org/freeipa/ticket/4846 and >>> https://fedorahosted.org/freeipa/ticket/4952. 
>>> >>> To do this I had to throw out some unused parameters from >>> _update_default_group function (particularly the pesky pkey causing bug >>> #4846 to pop out). >>> >>> I did not test the patch because my VMs are behaving strangely today and >>> I couldn't connect to them. So please test this patch thoroughly. >>> >>> >>> >> >> Silly me did not realize that textui.print_entry1 actually deletes the >> dict values as it prints them so the error is always raised when >> applying patch. >> >> Attaching updated version that should work as expected. >> >> >> > > Attaching updated patch. > ACK Pushed to: ipa-4-1: 3284cbf77347f054f07b4b810d86b4db221fec0e master: 5a5e1a2494e4aa2cae57d8188de73f5035362638 -- Petr Vobornik From simo at redhat.com Mon Mar 23 13:08:18 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 23 Mar 2015 09:08:18 -0400 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <550FFD72.1090301@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <1426611638.2981.106.camel@willson.usersys.redhat.com> <550FFD72.1090301@redhat.com> Message-ID: <1427116098.8302.2.camel@willson.usersys.redhat.com> On Mon, 2015-03-23 at 12:48 +0100, Martin Babinsky wrote: > On 03/17/2015 06:00 PM, Simo Sorce wrote: > > On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote: > >> On 03/16/2015 12:15 PM, Martin Kosek wrote: > >>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: > >>>> Attaching the next iteration of patches. > >>>> > >>>> I have tried my best to reword the ipa-client-install man page bit about the > >>>> new option. Any suggestions to further improve it are welcome. > >>>> > >>>> I have also slightly modified the 'kinit_keytab' function so that in Kerberos > >>>> errors are reported for each attempt and the text of the last error is retained > >>>> when finally raising exception. > >>> > >>> The approach looks very good. I think that my only concern with this patch is > >>> this part: > >>> > >>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) > >>> ... > >>> + except krbV.Krb5Error as e: > >>> + last_exc = str(e) > >>> + root_logger.debug("Attempt %d/%d: failed: %s" > >>> + % (attempt, attempts, last_exc)) > >>> + time.sleep(1) > >>> + > >>> + root_logger.debug("Maximum number of attempts (%d) reached" > >>> + % attempts) > >>> + raise StandardError("Error initializing principal %s: %s" > >>> + % (principal, last_exc)) > >>> > >>> The problem here is that this function will raise the super-generic > >>> StandardError instead of the proper with all the context and information about > >>> the error that the caller can then process. > >>> > >>> I think that > >>> > >>> except krbV.Krb5Error as e: > >>> if attempt == max_attempts: > >>> log something > >>> raise > >>> > >>> would be better. > >>> > >> > >> Yes that seems reasonable. I'm just thinking whether we should re-raise > >> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes > >> more sense to me as we would not have to additionally import Krb5Error > >> everywhere and it would also make the resulting errors more consistent. > >> > >> I am thinking about someting like this: > >> > >> except krbV.Krb5Error as e: > >> if attempt == attempts: > >> # log that we have reaches maximum number of attempts > >> raise KerberosError(minor=str(e)) > >> > >> What do you think? 
> > > > Are you retrying on any error ? > > Please do *not* do that, if you retry many times on an error that > > indicates the password is wrong you may end up locking an administrative > > account. If you want to retry you should do it only for very specific > > timeout errors. > > > > Simo. > > > > > I have taken a look at the logs attached to the original BZ > (https://bugzilla.redhat.com/show_bug.cgi?id=1161722). > > In ipaclient-install.log the kinit error is: > > "Cannot contact any KDC for realm 'ITW.USPTO.GOV' while getting initial > credentials" > > which can be translated to krbV.KRB5_KDC_UNREACH error. However, > krb5kdc.log (http://pastebin.test.redhat.com/271394) reports errors > which are seemingly unrelated to the root cause (kinit timing out on > getting host TGT). > > Thus I'm not quite sure which errors should we chceck against in this > case, anyone care to advise? These are potential candidates: > > KRB5KDC_ERR_SVC_UNAVAILABLE, "A service is not available that is > required to process the request" > KRB5KRB_ERR_RESPONSE_TOO_BIG, "Response too big for UDP, retry with TCP" > KRB5_REALM_UNKNOWN, "Cannot find KDC for requested realm" > KRB5_KDC_UNREACH, "Cannot contact any KDC for requested realm" > The only ones that you should retry on, at first glance are KRB5_KDC_UNREACH, KRB5KDC_ERR_SVC_UNAVAILABLE. You should never see KRB5KRB_ERR_RESPONSE_TOO_BIG in the script as it should be handled automatically by the library, and if you get KRB5_REALM_UNKNOWN I do not think that retrying will make any difference. Simo. -- Simo Sorce * Red Hat, Inc * New York From pspacek at redhat.com Mon Mar 23 13:22:44 2015 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 23 Mar 2015 14:22:44 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <1427116098.8302.2.camel@willson.usersys.redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <1426611638.2981.106.camel@willson.usersys.redhat.com> <550FFD72.1090301@redhat.com> <1427116098.8302.2.camel@willson.usersys.redhat.com> Message-ID: <551013A4.5000708@redhat.com> On 23.3.2015 14:08, Simo Sorce wrote: > On Mon, 2015-03-23 at 12:48 +0100, Martin Babinsky wrote: >> On 03/17/2015 06:00 PM, Simo Sorce wrote: >>> On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote: >>>> On 03/16/2015 12:15 PM, Martin Kosek wrote: >>>>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: >>>>>> Attaching the next iteration of patches. >>>>>> >>>>>> I have tried my best to reword the ipa-client-install man page bit about the >>>>>> new option. Any suggestions to further improve it are welcome. >>>>>> >>>>>> I have also slightly modified the 'kinit_keytab' function so that in Kerberos >>>>>> errors are reported for each attempt and the text of the last error is retained >>>>>> when finally raising exception. >>>>> >>>>> The approach looks very good. I think that my only concern with this patch is >>>>> this part: >>>>> >>>>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >>>>> ... 
>>>>> + except krbV.Krb5Error as e: >>>>> + last_exc = str(e) >>>>> + root_logger.debug("Attempt %d/%d: failed: %s" >>>>> + % (attempt, attempts, last_exc)) >>>>> + time.sleep(1) >>>>> + >>>>> + root_logger.debug("Maximum number of attempts (%d) reached" >>>>> + % attempts) >>>>> + raise StandardError("Error initializing principal %s: %s" >>>>> + % (principal, last_exc)) >>>>> >>>>> The problem here is that this function will raise the super-generic >>>>> StandardError instead of the proper with all the context and information about >>>>> the error that the caller can then process. >>>>> >>>>> I think that >>>>> >>>>> except krbV.Krb5Error as e: >>>>> if attempt == max_attempts: >>>>> log something >>>>> raise >>>>> >>>>> would be better. >>>>> >>>> >>>> Yes that seems reasonable. I'm just thinking whether we should re-raise >>>> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes >>>> more sense to me as we would not have to additionally import Krb5Error >>>> everywhere and it would also make the resulting errors more consistent. >>>> >>>> I am thinking about someting like this: >>>> >>>> except krbV.Krb5Error as e: >>>> if attempt == attempts: >>>> # log that we have reaches maximum number of attempts >>>> raise KerberosError(minor=str(e)) >>>> >>>> What do you think? >>> >>> Are you retrying on any error ? >>> Please do *not* do that, if you retry many times on an error that >>> indicates the password is wrong you may end up locking an administrative >>> account. If you want to retry you should do it only for very specific >>> timeout errors. >>> >>> Simo. >>> >>> >> I have taken a look at the logs attached to the original BZ >> (https://bugzilla.redhat.com/show_bug.cgi?id=1161722). >> >> In ipaclient-install.log the kinit error is: >> >> "Cannot contact any KDC for realm 'ITW.USPTO.GOV' while getting initial >> credentials" >> >> which can be translated to krbV.KRB5_KDC_UNREACH error. However, >> krb5kdc.log (http://pastebin.test.redhat.com/271394) reports errors >> which are seemingly unrelated to the root cause (kinit timing out on >> getting host TGT). >> >> Thus I'm not quite sure which errors should we chceck against in this >> case, anyone care to advise? These are potential candidates: >> >> KRB5KDC_ERR_SVC_UNAVAILABLE, "A service is not available that is >> required to process the request" >> KRB5KRB_ERR_RESPONSE_TOO_BIG, "Response too big for UDP, retry with TCP" >> KRB5_REALM_UNKNOWN, "Cannot find KDC for requested realm" >> KRB5_KDC_UNREACH, "Cannot contact any KDC for requested realm" >> > > The only ones that you should retry on, at first glance are > KRB5_KDC_UNREACH, KRB5KDC_ERR_SVC_UNAVAILABLE. > > You should never see KRB5KRB_ERR_RESPONSE_TOO_BIG in the script as it > should be handled automatically by the library, and if you get > KRB5_REALM_UNKNOWN I do not think that retrying will make any > difference. I might be wrong but I was under the impression that this feature was also for workarounding replication delay - service is not available / key is not present / something like that. (This could happen if host/principal was added to one server but then the client connected to another server or so.) 
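For illustration only, a minimal sketch of the selective retry being discussed here -- this is not the submitted patch. It assumes python-krbV exposes KRB5KDC_ERR_SVC_UNAVAILABLE at module level the same way it exposes KRB5_KDC_UNREACH (only the latter is referenced earlier in this thread), and that the error code is the first element of Krb5Error.args:

# Sketch, not the patch under review; see assumptions above.
import time
import krbV

RETRY_ERRORS = (krbV.KRB5_KDC_UNREACH, krbV.KRB5KDC_ERR_SVC_UNAVAILABLE)

def kinit_keytab_with_retry(principal, keytab, ccache_file, attempts=5):
    ctx = krbV.default_context()
    ktab = krbV.Keytab(name=keytab, context=ctx)
    princ = krbV.Principal(name=principal, context=ctx)
    ccache = krbV.CCache(name=ccache_file, context=ctx,
                         primary_principal=princ)
    for attempt in range(1, attempts + 1):
        try:
            ccache.init(princ)
            ccache.init_creds_keytab(keytab=ktab, principal=princ)
            return
        except krbV.Krb5Error as e:
            # e.args[0] carries the Kerberos error code
            if e.args[0] not in RETRY_ERRORS or attempt == attempts:
                raise   # wrong key, unknown realm, etc.: do not retry
            time.sleep(1)

Anything outside RETRY_ERRORS is re-raised immediately, so repeated failures cannot lock an administrative account.
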
-- Petr^2 Spacek From simo at redhat.com Mon Mar 23 14:13:02 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 23 Mar 2015 10:13:02 -0400 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <551013A4.5000708@redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <1426611638.2981.106.camel@willson.usersys.redhat.com> <550FFD72.1090301@redhat.com> <1427116098.8302.2.camel@willson.usersys.redhat.com> <551013A4.5000708@redhat.com> Message-ID: <1427119982.8302.3.camel@willson.usersys.redhat.com> On Mon, 2015-03-23 at 14:22 +0100, Petr Spacek wrote: > On 23.3.2015 14:08, Simo Sorce wrote: > > On Mon, 2015-03-23 at 12:48 +0100, Martin Babinsky wrote: > >> On 03/17/2015 06:00 PM, Simo Sorce wrote: > >>> On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote: > >>>> On 03/16/2015 12:15 PM, Martin Kosek wrote: > >>>>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: > >>>>>> Attaching the next iteration of patches. > >>>>>> > >>>>>> I have tried my best to reword the ipa-client-install man page bit about the > >>>>>> new option. Any suggestions to further improve it are welcome. > >>>>>> > >>>>>> I have also slightly modified the 'kinit_keytab' function so that in Kerberos > >>>>>> errors are reported for each attempt and the text of the last error is retained > >>>>>> when finally raising exception. > >>>>> > >>>>> The approach looks very good. I think that my only concern with this patch is > >>>>> this part: > >>>>> > >>>>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) > >>>>> ... > >>>>> + except krbV.Krb5Error as e: > >>>>> + last_exc = str(e) > >>>>> + root_logger.debug("Attempt %d/%d: failed: %s" > >>>>> + % (attempt, attempts, last_exc)) > >>>>> + time.sleep(1) > >>>>> + > >>>>> + root_logger.debug("Maximum number of attempts (%d) reached" > >>>>> + % attempts) > >>>>> + raise StandardError("Error initializing principal %s: %s" > >>>>> + % (principal, last_exc)) > >>>>> > >>>>> The problem here is that this function will raise the super-generic > >>>>> StandardError instead of the proper with all the context and information about > >>>>> the error that the caller can then process. > >>>>> > >>>>> I think that > >>>>> > >>>>> except krbV.Krb5Error as e: > >>>>> if attempt == max_attempts: > >>>>> log something > >>>>> raise > >>>>> > >>>>> would be better. > >>>>> > >>>> > >>>> Yes that seems reasonable. I'm just thinking whether we should re-raise > >>>> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes > >>>> more sense to me as we would not have to additionally import Krb5Error > >>>> everywhere and it would also make the resulting errors more consistent. > >>>> > >>>> I am thinking about someting like this: > >>>> > >>>> except krbV.Krb5Error as e: > >>>> if attempt == attempts: > >>>> # log that we have reaches maximum number of attempts > >>>> raise KerberosError(minor=str(e)) > >>>> > >>>> What do you think? > >>> > >>> Are you retrying on any error ? > >>> Please do *not* do that, if you retry many times on an error that > >>> indicates the password is wrong you may end up locking an administrative > >>> account. If you want to retry you should do it only for very specific > >>> timeout errors. > >>> > >>> Simo. 
> >>> > >>> > >> I have taken a look at the logs attached to the original BZ > >> (https://bugzilla.redhat.com/show_bug.cgi?id=1161722). > >> > >> In ipaclient-install.log the kinit error is: > >> > >> "Cannot contact any KDC for realm 'ITW.USPTO.GOV' while getting initial > >> credentials" > >> > >> which can be translated to krbV.KRB5_KDC_UNREACH error. However, > >> krb5kdc.log (http://pastebin.test.redhat.com/271394) reports errors > >> which are seemingly unrelated to the root cause (kinit timing out on > >> getting host TGT). > >> > >> Thus I'm not quite sure which errors should we chceck against in this > >> case, anyone care to advise? These are potential candidates: > >> > >> KRB5KDC_ERR_SVC_UNAVAILABLE, "A service is not available that is > >> required to process the request" > >> KRB5KRB_ERR_RESPONSE_TOO_BIG, "Response too big for UDP, retry with TCP" > >> KRB5_REALM_UNKNOWN, "Cannot find KDC for requested realm" > >> KRB5_KDC_UNREACH, "Cannot contact any KDC for requested realm" > >> > > > > The only ones that you should retry on, at first glance are > > KRB5_KDC_UNREACH, KRB5KDC_ERR_SVC_UNAVAILABLE. > > > > You should never see KRB5KRB_ERR_RESPONSE_TOO_BIG in the script as it > > should be handled automatically by the library, and if you get > > KRB5_REALM_UNKNOWN I do not think that retrying will make any > > difference. > > I might be wrong but I was under the impression that this feature was also for > workarounding replication delay - service is not available / key is not > present / something like that. > > (This could happen if host/principal was added to one server but then the > client connected to another server or so.) If we have that problem we should instead use a temporary krb5.conf file that lists explicitly only the server we are joining. Simo. -- Simo Sorce * Red Hat, Inc * New York From mbasti at redhat.com Mon Mar 23 14:36:07 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 23 Mar 2015 15:36:07 +0100 Subject: [Freeipa-devel] [PATCH 0212] Server Upgrade: Fix comments Message-ID: <551024D7.1080906@redhat.com> Attached patch fixes comments which I forgot to edit in 'make upgrade deterministic' patchset -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0212-Server-Upgrade-Fix-comments.patch Type: text/x-patch Size: 2119 bytes Desc: not available URL: From mbasti at redhat.com Mon Mar 23 14:47:29 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 23 Mar 2015 15:47:29 +0100 Subject: [Freeipa-devel] [PATCHES 0213 - 0221] Server Upgrade: LDAPI, Update plugins Message-ID: <55102781.5060809@redhat.com> Hello, The patches: * allows to specify order of update plugins in update files. * requires to use LDAPI by ipa-ldap-updater patches attached -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0213-Server-Upgrade-use-only-LDAPI-connection.patch Type: text/x-patch Size: 5558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0214-Server-Upgrade-remove-unused-code-in-upgrade.patch Type: text/x-patch Size: 2422 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0215-Server-Upgrade-Apply-plugin-updates-immediately.patch Type: text/x-patch Size: 27234 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0216-Server-Upgrade-specify-order-of-plugins-in-update-fi.patch Type: text/x-patch Size: 48369 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0217-Server-Upgrade-plugins-should-use-ldapupdater-API-in.patch Type: text/x-patch Size: 14855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0218-Server-Upgrade-Handle-connection-better-in-updates_f.patch Type: text/x-patch Size: 1072 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0219-Server-Upgrade-use-ldap2-connection-in-fix_replica_a.patch Type: text/x-patch Size: 1334 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0220-Server-Upgrade-restart-DS-using-ipaplatfom-service.patch Type: text/x-patch Size: 3730 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0221-Server-Upgrade-only-root-can-run-updates.patch Type: text/x-patch Size: 2155 bytes Desc: not available URL: From mbasti at redhat.com Mon Mar 23 14:54:33 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 23 Mar 2015 15:54:33 +0100 Subject: [Freeipa-devel] [PATCH 0210] DNSSEC: CI test Message-ID: <55102929.9030702@redhat.com> Hello, a patch with DNSSEC CI tests attached. * Two types of installation tested * Tests check if zones are signed on both replica and master * The root zone test also checks chain of trust Can somebody very familiar with pytest do review? I'm not sure If I used pytest friendly constructions. PS: test may failure occasionally due a bug in DNSSEC code, but CI test itself should be OK Useful information: http://www.freeipa.org/page/Howto/DNSSEC -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0210-DNSSEC-CI-tests.patch Type: text/x-patch Size: 14004 bytes Desc: not available URL: From slaz at seznam.cz Mon Mar 23 19:17:31 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmRhIEzDoXpuacSNa2E=?=) Date: Mon, 23 Mar 2015 20:17:31 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <550FD89B.8050506@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> Message-ID: <551066CB.6040904@seznam.cz> On 3/23/2015 10:10 AM, Jan Cholasta wrote: > Hi, > > Dne 20.3.2015 v 13:30 Stanislav L?zni?ka napsal(a): >> ... >> >> As for the local time - timezone in the tuple (time, timezone) would >> only say "Local Time", which can't be found in Olson's and it means the >> time record from the tuple should be compared to the client's time >> settings (/etc/localtime ?). > > Let me just do a braindump here: > > I would think timezone, or rather location (timezone + holidays) > should be stored with user objects (but not in group objects, as users > can be members of multiple groups, which could cause conflicts). When > a user goes on a bussiness trip etc., an administrator/manager would > need to change their location accordingly to reflect that. > > For hosts, timezone information is already available on the host > (/etc/localtime). I'm not sure if there is a 1-1 mapping between > timezone and holidays, but if there is not, we probably need to store > location with host objects as well. 
If we do that, maybe we can make > SSSD update local timezone on the host using the information stored > with the host object to allow roaming (think user laptops). > I'm not sure about storing the holiday information with any objects. Holidays are a good example for exceptions in time rules but I'm not sure that this information should be really stored with each user/host. There might be cases where you want the holidays to apply and then sometimes you may not want them to apply. Storing time exceptions with the specific HBAC rules would solve that, I think. > It might be a good idea to store optional owner information with > hosts, so that when the owner moves to a different location, the host > is assumed to be at that location as well (again think user laptops). > > Given the above, HBAC rules could contain (time, anchor), where anchor > is "UTC", "user local time" or "host local time". Truth is, it was not really clear to me from the last week's discussion whose "Local Time" to use - do we use host's or do we use user's? It would make sense to me to use the user's local time. But then you would need to really store at least the timezone information with each user object. And that information should probably change with user moving between different timezones. That's quite a pickle I am in right here. > > What would then be left to decide is whether to use the iCalendar time > format as internal, or use some other own created format. Such a format > should not only be suitable for reoccurring events, but should also be > able to handle exceptions in those events, such as holidays. It should > also be easy to import iCalendar events to this format to support events > from other sources. > ... > Honza From jcholast at redhat.com Tue Mar 24 06:16:41 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Mar 2015 07:16:41 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551066CB.6040904@seznam.cz> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> Message-ID: <55110149.40303@redhat.com> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): > On 3/23/2015 10:10 AM, Jan Cholasta wrote: >> Hi, >> >> Dne 20.3.2015 v 13:30 Stanislav L?zni?ka napsal(a): >>> ... >>> >>> As for the local time - timezone in the tuple (time, timezone) would >>> only say "Local Time", which can't be found in Olson's and it means the >>> time record from the tuple should be compared to the client's time >>> settings (/etc/localtime ?). >> >> Let me just do a braindump here: >> >> I would think timezone, or rather location (timezone + holidays) >> should be stored with user objects (but not in group objects, as users >> can be members of multiple groups, which could cause conflicts). When >> a user goes on a bussiness trip etc., an administrator/manager would >> need to change their location accordingly to reflect that. >> >> For hosts, timezone information is already available on the host >> (/etc/localtime). I'm not sure if there is a 1-1 mapping between >> timezone and holidays, but if there is not, we probably need to store >> location with host objects as well. If we do that, maybe we can make >> SSSD update local timezone on the host using the information stored >> with the host object to allow roaming (think user laptops). >> > I'm not sure about storing the holiday information with any objects. > Holidays are a good example for exceptions in time rules but I'm not > sure that this information should be really stored with each user/host. 
> There might be cases where you want the holidays to apply and then > sometimes you may not want them to apply. Storing time exceptions with > the specific HBAC rules would solve that, I think. I'm not saying we should store them directly with any object, but rather store a reference to an object defining them, so that you don't have to define them over and over again when you need to use them in multiple places. >> It might be a good idea to store optional owner information with >> hosts, so that when the owner moves to a different location, the host >> is assumed to be at that location as well (again think user laptops). >> >> Given the above, HBAC rules could contain (time, anchor), where anchor >> is "UTC", "user local time" or "host local time". > Truth is, it was not really clear to me from the last week's discussion > whose "Local Time" to use - do we use host's or do we use user's? It > would make sense to me to use the user's local time. But then you would > need to really store at least the timezone information with each user > object. And that information should probably change with user moving > between different timezones. That's quite a pickle I am in right here. IMO whether to use user or host local time depends on organization local policy, hence my suggestion to support both. Yes, timezone would need to be stored with the user, and someone or something (not the user, so they can't cheat) would need to change it when the user moves. >> >> What would then be left to decide is whether to use the iCalendar time >> format as internal, or use some other own created format. Such a format >> should not only be suitable for reoccurring events, but should also be >> able to handle exceptions in those events, such as holidays. It should >> also be easy to import iCalendar events to this format to support events >> from other sources. >> ... >> > > Honza -- Jan Cholasta From abokovoy at redhat.com Tue Mar 24 06:34:56 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 24 Mar 2015 08:34:56 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55110149.40303@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> Message-ID: <20150324063456.GS3878@redhat.com> On Tue, 24 Mar 2015, Jan Cholasta wrote: >Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>On 3/23/2015 10:10 AM, Jan Cholasta wrote: >>>Hi, >>> >>>Dne 20.3.2015 v 13:30 Stanislav L?zni?ka napsal(a): >>>>... >>>> >>>>As for the local time - timezone in the tuple (time, timezone) would >>>>only say "Local Time", which can't be found in Olson's and it means the >>>>time record from the tuple should be compared to the client's time >>>>settings (/etc/localtime ?). >>> >>>Let me just do a braindump here: >>> >>>I would think timezone, or rather location (timezone + holidays) >>>should be stored with user objects (but not in group objects, as users >>>can be members of multiple groups, which could cause conflicts). When >>>a user goes on a bussiness trip etc., an administrator/manager would >>>need to change their location accordingly to reflect that. >>> >>>For hosts, timezone information is already available on the host >>>(/etc/localtime). I'm not sure if there is a 1-1 mapping between >>>timezone and holidays, but if there is not, we probably need to store >>>location with host objects as well. 
If we do that, maybe we can make >>>SSSD update local timezone on the host using the information stored >>>with the host object to allow roaming (think user laptops). >>> >>I'm not sure about storing the holiday information with any objects. >>Holidays are a good example for exceptions in time rules but I'm not >>sure that this information should be really stored with each user/host. >>There might be cases where you want the holidays to apply and then >>sometimes you may not want them to apply. Storing time exceptions with >>the specific HBAC rules would solve that, I think. > >I'm not saying we should store them directly with any object, but >rather store a reference to an object defining them, so that you don't >have to define them over and over again when you need to use them in >multiple places. Yes. Note also that timezone information and holiday information are independent of each other. There are multiple holiday calendars and they change every year. Maintaining holiday calendars can be seen as a separate task and it is not really related to a specific timezone. A group of timezones may share the same holiday calendar. In general, holiday calendars aren't standardized and there are no opensource project which provide up to date information about state holidays for all states. > >>>It might be a good idea to store optional owner information with >>>hosts, so that when the owner moves to a different location, the host >>>is assumed to be at that location as well (again think user laptops). >>> >>>Given the above, HBAC rules could contain (time, anchor), where anchor >>>is "UTC", "user local time" or "host local time". >>Truth is, it was not really clear to me from the last week's discussion >>whose "Local Time" to use - do we use host's or do we use user's? It >>would make sense to me to use the user's local time. But then you would >>need to really store at least the timezone information with each user >>object. And that information should probably change with user moving >>between different timezones. That's quite a pickle I am in right here. > >IMO whether to use user or host local time depends on organization >local policy, hence my suggestion to support both. > >Yes, timezone would need to be stored with the user, and someone or >something (not the user, so they can't cheat) would need to change it >when the user moves. > >>> >>>What would then be left to decide is whether to use the iCalendar time >>>format as internal, or use some other own created format. Such a format >>>should not only be suitable for reoccurring events, but should also be >>>able to handle exceptions in those events, such as holidays. It should >>>also be easy to import iCalendar events to this format to support events >>>from other sources. >>>... >>> >> >>Honza > > >-- >Jan Cholasta > >-- >Manage your subscription for the Freeipa-devel mailing list: >https://www.redhat.com/mailman/listinfo/freeipa-devel >Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code -- / Alexander Bokovoy From mkosek at redhat.com Tue Mar 24 07:07:53 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 24 Mar 2015 08:07:53 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55110149.40303@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> Message-ID: <55110D49.6050705@redhat.com> On 03/24/2015 07:16 AM, Jan Cholasta wrote: > Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): ... 
>>> Given the above, HBAC rules could contain (time, anchor), where anchor >>> is "UTC", "user local time" or "host local time". >> Truth is, it was not really clear to me from the last week's discussion >> whose "Local Time" to use - do we use host's or do we use user's? It >> would make sense to me to use the user's local time. But then you would >> need to really store at least the timezone information with each user >> object. And that information should probably change with user moving >> between different timezones. That's quite a pickle I am in right here. > > IMO whether to use user or host local time depends on organization local > policy, hence my suggestion to support both. I am bit confused, I would like to make sure we are on the same page with regards to Local Time. When the Local Time rule is created, anchor will be set to "Local Time". Then SSSD would simply use host's local time, in whichever time zone the HBAC host is. So this is the default host enforcement. For the user, you want to let SSSD check authenticated user's entry, to see if there is a timezone information? This would of course depend on the information being available. For AD users, you would need to set it in ID Views or similar. From jhrozek at redhat.com Tue Mar 24 07:20:07 2015 From: jhrozek at redhat.com (Jakub Hrozek) Date: Tue, 24 Mar 2015 08:20:07 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55110D49.6050705@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> Message-ID: <20150324072007.GC2989@hendrix.redhat.com> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: > On 03/24/2015 07:16 AM, Jan Cholasta wrote: > > Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): > ... > >>> Given the above, HBAC rules could contain (time, anchor), where anchor > >>> is "UTC", "user local time" or "host local time". > >> Truth is, it was not really clear to me from the last week's discussion > >> whose "Local Time" to use - do we use host's or do we use user's? It > >> would make sense to me to use the user's local time. But then you would > >> need to really store at least the timezone information with each user > >> object. And that information should probably change with user moving > >> between different timezones. That's quite a pickle I am in right here. > > > > IMO whether to use user or host local time depends on organization local > > policy, hence my suggestion to support both. > > I am bit confused, I would like to make sure we are on the same page with > regards to Local Time. When the Local Time rule is created, anchor will be set > to "Local Time". Then SSSD would simply use host's local time, in whichever > time zone the HBAC host is. Yes, that was my understanding also. > > So this is the default host enforcement. For the user, you want to let SSSD > check authenticated user's entry, to see if there is a timezone information? > This would of course depend on the information being available. For AD users, > you would need to set it in ID Views or similar. Yes, also in a previous e-mail, there was a suggestion to change timezones by admin when the user changes timezones -- I didn't like that part, it seems really error prone and tedious. *If* there was this choice, it should not be the default, rather the default should also be host local time IMO. 
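To make the (time, anchor) idea above concrete, here is a small illustration of how the three anchors could be evaluated on the enforcing host. This is not FreeIPA or SSSD code and all names are made up; it only shows that "UTC" and "host local time" need nothing beyond the host clock, while "user local time" needs a timezone taken from the user entry:

# Illustration only; "UTC" / "user" / "host" are hypothetical anchor values.
from datetime import datetime, time

def rule_matches(anchor, allowed_weekdays, start, end, user_tz=None):
    if anchor == "UTC":
        now = datetime.utcnow()
    elif anchor == "user" and user_tz is not None:
        now = datetime.now(user_tz)   # tzinfo object taken from the user entry
    else:
        now = datetime.now()          # host local time (/etc/localtime)
    return now.weekday() in allowed_weekdays and start <= now.time() < end

# "access allowed Mon-Fri 08:00-18:00, evaluated in host local time"
print(rule_matches("host", range(0, 5), time(8, 0), time(18, 0)))
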
From pspacek at redhat.com Tue Mar 24 07:29:26 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 24 Mar 2015 08:29:26 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150324072007.GC2989@hendrix.redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> Message-ID: <55111256.3030707@redhat.com> On 24.3.2015 08:20, Jakub Hrozek wrote: > On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >> ... >>>>> Given the above, HBAC rules could contain (time, anchor), where anchor >>>>> is "UTC", "user local time" or "host local time". >>>> Truth is, it was not really clear to me from the last week's discussion >>>> whose "Local Time" to use - do we use host's or do we use user's? It >>>> would make sense to me to use the user's local time. But then you would >>>> need to really store at least the timezone information with each user >>>> object. And that information should probably change with user moving >>>> between different timezones. That's quite a pickle I am in right here. >>> >>> IMO whether to use user or host local time depends on organization local >>> policy, hence my suggestion to support both. >> >> I am bit confused, I would like to make sure we are on the same page with >> regards to Local Time. When the Local Time rule is created, anchor will be set >> to "Local Time". Then SSSD would simply use host's local time, in whichever >> time zone the HBAC host is. > > Yes, that was my understanding also. > >> >> So this is the default host enforcement. For the user, you want to let SSSD >> check authenticated user's entry, to see if there is a timezone information? >> This would of course depend on the information being available. For AD users, >> you would need to set it in ID Views or similar. > > Yes, also in a previous e-mail, there was a suggestion to change > timezones by admin when the user changes timezones -- I didn't like that > part, it seems really error prone and tedious. *If* there was this > choice, it should not be the default, rather the default should also be > host local time IMO. Nitpick: It would be nice to clearly state in docs what 'timezone' are you going to use. Users can specify their own timezone via TZ environment variable and this can be very different from timezone defined by /etc/localtime. -- Petr^2 Spacek From mkosek at redhat.com Tue Mar 24 07:40:27 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 24 Mar 2015 08:40:27 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150324072007.GC2989@hendrix.redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> Message-ID: <551114EB.9040508@redhat.com> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: > On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >> ... >>>>> Given the above, HBAC rules could contain (time, anchor), where anchor >>>>> is "UTC", "user local time" or "host local time". >>>> Truth is, it was not really clear to me from the last week's discussion >>>> whose "Local Time" to use - do we use host's or do we use user's? 
It >>>> would make sense to me to use the user's local time. But then you would >>>> need to really store at least the timezone information with each user >>>> object. And that information should probably change with user moving >>>> between different timezones. That's quite a pickle I am in right here. >>> >>> IMO whether to use user or host local time depends on organization local >>> policy, hence my suggestion to support both. >> >> I am bit confused, I would like to make sure we are on the same page with >> regards to Local Time. When the Local Time rule is created, anchor will be set >> to "Local Time". Then SSSD would simply use host's local time, in whichever >> time zone the HBAC host is. > > Yes, that was my understanding also. > >> >> So this is the default host enforcement. For the user, you want to let SSSD >> check authenticated user's entry, to see if there is a timezone information? >> This would of course depend on the information being available. For AD users, >> you would need to set it in ID Views or similar. > > Yes, also in a previous e-mail, there was a suggestion to change > timezones by admin when the user changes timezones -- I didn't like that > part, it seems really error prone and tedious. *If* there was this > choice, it should not be the default, rather the default should also be > host local time IMO. Host local time zone was the original case I expected. Enforcing *user* local time zone is where this discussion started. Honze proposed making this an option - leaving us to 3 different time modes: * UTC - stored as (time + olson time zone), enforcement is clear * Host Local Time - stored as (time + Host Local Time), enforcement by /etc/localtime * User Local Time - stored as (time + User Local Time), enforcement by ??? So the rule may be: * Employee Foo can access web service Bar only in his work hours IMO, it is realistic for an administrator to set the time zone setting in the employee entry. Of course, it gets tricky when the user starts moving around the globe... From pspacek at redhat.com Tue Mar 24 07:44:20 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 24 Mar 2015 08:44:20 +0100 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) Message-ID: <551115D4.9040308@redhat.com> Hello, Do you always wonder why all texts say FreeIPA but logo has 'freeIPA' on it? :-) While preparing FreeIPA poster for OpenHouse event in Brno, I could not resist anymore and 'fixed' the logo to use capital F - old and new variants are attached. Can we agree on one one spelling and fix the rest? (Obviously it would be easier to fix logos instead of fixing all texts :-)) -- Petr^2 Spacek -------------- next part -------------- A non-text attachment was scrubbed... Name: freeIPA.png Type: image/png Size: 21155 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: FreeIPA.png Type: image/png Size: 20878 bytes Desc: not available URL: From jcholast at redhat.com Tue Mar 24 07:53:49 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Mar 2015 08:53:49 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551114EB.9040508@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> Message-ID: <5511180D.8050103@redhat.com> Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): > On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>> ... >>>>>> Given the above, HBAC rules could contain (time, anchor), where anchor >>>>>> is "UTC", "user local time" or "host local time". >>>>> Truth is, it was not really clear to me from the last week's discussion >>>>> whose "Local Time" to use - do we use host's or do we use user's? It >>>>> would make sense to me to use the user's local time. But then you would >>>>> need to really store at least the timezone information with each user >>>>> object. And that information should probably change with user moving >>>>> between different timezones. That's quite a pickle I am in right here. >>>> >>>> IMO whether to use user or host local time depends on organization local >>>> policy, hence my suggestion to support both. >>> >>> I am bit confused, I would like to make sure we are on the same page with >>> regards to Local Time. When the Local Time rule is created, anchor will be set >>> to "Local Time". Then SSSD would simply use host's local time, in whichever >>> time zone the HBAC host is. >> >> Yes, that was my understanding also. >> >>> >>> So this is the default host enforcement. For the user, you want to let SSSD >>> check authenticated user's entry, to see if there is a timezone information? >>> This would of course depend on the information being available. For AD users, >>> you would need to set it in ID Views or similar. >> >> Yes, also in a previous e-mail, there was a suggestion to change >> timezones by admin when the user changes timezones -- I didn't like that >> part, it seems really error prone and tedious. *If* there was this >> choice, it should not be the default, rather the default should also be >> host local time IMO. I don't think you can expect host-local time to be good enough for everyone. > > Host local time zone was the original case I expected. Enforcing *user* local > time zone is where this discussion started. Honze proposed making this an > option - leaving us to 3 different time modes: > > * UTC - stored as (time + olson time zone), enforcement is clear > * Host Local Time - stored as (time + Host Local Time), enforcement by > /etc/localtime > * User Local Time - stored as (time + User Local Time), enforcement by ??? > > So the rule may be: > * Employee Foo can access web service Bar only in his work hours Correct. > > IMO, it is realistic for an administrator to set the time zone setting in the > employee entry. Of course, it gets tricky when the user starts moving around > the globe... > It doesn't have to be the administrator, it can be automated by a 3rd party service: 1. Employee schedules bussiness trip in time management system 2. Manager approves the bussiness trip in the time management system 3. 
The time management system takes care of changing the employee's user object timezone when the bussiness trip starts and ends. -- Jan Cholasta From jpazdziora at redhat.com Tue Mar 24 08:29:14 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Tue, 24 Mar 2015 09:29:14 +0100 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) In-Reply-To: <551115D4.9040308@redhat.com> References: <551115D4.9040308@redhat.com> Message-ID: <20150324082914.GR17619@redhat.com> On Tue, Mar 24, 2015 at 08:44:20AM +0100, Petr Spacek wrote: > > Do you always wonder why all texts say FreeIPA but logo has 'freeIPA' on it? :-) Perhaps because it is sponsored by Red Hat, to have the logo aligned with Red Hat's logo capitalization? > While preparing FreeIPA poster for OpenHouse event in Brno, I could not resist > anymore and 'fixed' the logo to use capital F - old and new variants are attached. > > Can we agree on one one spelling and fix the rest? (Obviously it would be > easier to fix logos instead of fixing all texts :-)) I obviously do not know the history but I'd love to have the logo consistent with the name. Looking at the PNGs, the original logo has the identity | policy | audit line aligned with the right edge of the logo -- you should shift that to stay aligned. Of course, the IPA might no longer mean identity, policy, *and audit*, so maybe that second line could be dropped altogether? Also, it's FreeIPA with capital IPA -- shouldn't the letters on the box be capitalized as well? In any case, since M?ir?n authored the logo, she should be consulted about the changes. -- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From mbasti at redhat.com Tue Mar 24 08:54:59 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 09:54:59 +0100 Subject: [Freeipa-devel] [PATCH 0212] Server Upgrade: Fix comments In-Reply-To: <551024D7.1080906@redhat.com> References: <551024D7.1080906@redhat.com> Message-ID: <55112663.1060101@redhat.com> On 23/03/15 15:36, Martin Basti wrote: > Attached patch fixes comments which I forgot to edit in 'make upgrade > deterministic' patchset > > > I missed some dictionaries which should be lists. Updated patch attached. -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0212.2-Server-Upgrade-Fix-comments.patch Type: text/x-patch Size: 2388 bytes Desc: not available URL: From mbasti at redhat.com Tue Mar 24 08:56:15 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 09:56:15 +0100 Subject: [Freeipa-devel] [PATCHES 0213 - 0221] Server Upgrade: LDAPI, Update plugins In-Reply-To: <55102781.5060809@redhat.com> References: <55102781.5060809@redhat.com> Message-ID: <551126AF.3040207@redhat.com> On 23/03/15 15:47, Martin Basti wrote: > Hello, > > The patches: > * allows to specify order of update plugins in update files. > * requires to use LDAPI by ipa-ldap-updater > > patches attached > > > Rebased patches attached. -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0213.2-Server-Upgrade-Fix-comments.patch Type: text/x-patch Size: 2392 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0214.2-Server-Upgrade-use-only-LDAPI-connection.patch Type: text/x-patch Size: 5558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0215.2-Server-Upgrade-remove-unused-code-in-upgrade.patch Type: text/x-patch Size: 2422 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0216.2-Server-Upgrade-Apply-plugin-updates-immediately.patch Type: text/x-patch Size: 66201 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0217.2-Server-Upgrade-plugins-should-use-ldapupdater-API-in.patch Type: text/x-patch Size: 14855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0218.2-Server-Upgrade-Handle-connection-better-in-updates_f.patch Type: text/x-patch Size: 1072 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0219.2-Server-Upgrade-use-ldap2-connection-in-fix_replica_a.patch Type: text/x-patch Size: 1334 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0220.2-Server-Upgrade-restart-DS-using-ipaplatfom-service.patch Type: text/x-patch Size: 3730 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0221.2-Server-Upgrade-only-root-can-run-updates.patch Type: text/x-patch Size: 2155 bytes Desc: not available URL: From pspacek at redhat.com Tue Mar 24 11:01:31 2015 From: pspacek at redhat.com (Petr Spacek) Date: Tue, 24 Mar 2015 12:01:31 +0100 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) In-Reply-To: <20150324082914.GR17619@redhat.com> References: <551115D4.9040308@redhat.com> <20150324082914.GR17619@redhat.com> Message-ID: <5511440B.8010002@redhat.com> On 24.3.2015 09:29, Jan Pazdziora wrote: > On Tue, Mar 24, 2015 at 08:44:20AM +0100, Petr Spacek wrote: >> >> Do you always wonder why all texts say FreeIPA but logo has 'freeIPA' on it? :-) > > Perhaps because it is sponsored by Red Hat, to have the logo aligned > with Red Hat's logo capitalization? > >> While preparing FreeIPA poster for OpenHouse event in Brno, I could not resist >> anymore and 'fixed' the logo to use capital F - old and new variants are attached. >> >> Can we agree on one one spelling and fix the rest? (Obviously it would be >> easier to fix logos instead of fixing all texts :-)) > > I obviously do not know the history but I'd love to have the logo > consistent with the name. > > Looking at the PNGs, the original logo has the > > identity | policy | audit > > line aligned with the right edge of the logo -- you should shift that > to stay aligned. I changed the single letter and nothing else, this is what OSAS have in Git repo. > Of course, the IPA might no longer mean identity, policy, *and audit*, > so maybe that second line could be dropped altogether? > > Also, it's FreeIPA with capital IPA -- shouldn't the letters on the > box be capitalized as well? > > In any case, since M?ir?n authored the logo, she should be consulted > about the changes. Good point, I'm CCing her now. 
-- Petr^2 Spacek From mbasti at redhat.com Tue Mar 24 11:04:18 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 12:04:18 +0100 Subject: [Freeipa-devel] [PATCH 0212] Server Upgrade: Fix comments In-Reply-To: <55112663.1060101@redhat.com> References: <551024D7.1080906@redhat.com> <55112663.1060101@redhat.com> Message-ID: <551144B2.2060706@redhat.com> On 24/03/15 09:54, Martin Basti wrote: > On 23/03/15 15:36, Martin Basti wrote: >> Attached patch fixes comments which I forgot to edit in 'make upgrade >> deterministic' patchset >> >> >> > I missed some dictionaries which should be lists. > > Updated patch attached. > > -- > Martin Basti > > Updated patch attached -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0212.3-Server-Upgrade-Fix-comments.patch Type: text/x-patch Size: 2519 bytes Desc: not available URL: From mbasti at redhat.com Tue Mar 24 11:08:43 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 12:08:43 +0100 Subject: [Freeipa-devel] [PATCHES 0213 - 0221] Server Upgrade: LDAPI, Update plugins In-Reply-To: <551126AF.3040207@redhat.com> References: <55102781.5060809@redhat.com> <551126AF.3040207@redhat.com> Message-ID: <551145BB.3090909@redhat.com> On 24/03/15 09:56, Martin Basti wrote: > On 23/03/15 15:47, Martin Basti wrote: >> Hello, >> >> The patches: >> * allows to specify order of update plugins in update files. >> * requires to use LDAPI by ipa-ldap-updater >> >> patches attached >> >> >> > Rebased patches attached. > > -- > Martin Basti > > I accidentally merged two patches into one in previos rebase. So properly rebased patches attached. -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0213.3-Server-Upgrade-use-only-LDAPI-connection.patch Type: text/x-patch Size: 5558 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0214.3-Server-Upgrade-remove-unused-code-in-upgrade.patch Type: text/x-patch Size: 2422 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0215.3-Server-Upgrade-Apply-plugin-updates-immediately.patch Type: text/x-patch Size: 27197 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0216.3-Server-Upgrade-specify-order-of-plugins-in-update-fi.patch Type: text/x-patch Size: 48340 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0217.3-Server-Upgrade-plugins-should-use-ldapupdater-API-in.patch Type: text/x-patch Size: 14855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0218.3-Server-Upgrade-Handle-connection-better-in-updates_f.patch Type: text/x-patch Size: 1072 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0219.3-Server-Upgrade-use-ldap2-connection-in-fix_replica_a.patch Type: text/x-patch Size: 1334 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0220.3-Server-Upgrade-restart-DS-using-ipaplatfom-service.patch Type: text/x-patch Size: 3730 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbasti-0221.3-Server-Upgrade-only-root-can-run-updates.patch Type: text/x-patch Size: 2155 bytes Desc: not available URL: From duffy at redhat.com Tue Mar 24 11:35:49 2015 From: duffy at redhat.com (=?windows-1252?Q?M=E1ir=EDn_Duffy?=) Date: Tue, 24 Mar 2015 07:35:49 -0400 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) In-Reply-To: <5511440B.8010002@redhat.com> References: <551115D4.9040308@redhat.com> <20150324082914.GR17619@redhat.com> <5511440B.8010002@redhat.com> Message-ID: <55114C15.3090001@redhat.com> Hey, On 03/24/2015 07:01 AM, Petr Spacek wrote: >> In any case, since M?ir?n authored the logo, she should be consulted >> about the changes. Are you guys changing it to ip instead of ipa? ~m From mkosek at redhat.com Tue Mar 24 11:39:37 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 24 Mar 2015 12:39:37 +0100 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) In-Reply-To: <55114C15.3090001@redhat.com> References: <551115D4.9040308@redhat.com> <20150324082914.GR17619@redhat.com> <5511440B.8010002@redhat.com> <55114C15.3090001@redhat.com> Message-ID: <55114CF9.6080402@redhat.com> On 03/24/2015 12:35 PM, M?ir?n Duffy wrote: > Hey, > > On 03/24/2015 07:01 AM, Petr Spacek wrote: >>> In any case, since M?ir?n authored the logo, she should be consulted >>> about the changes. > > Are you guys changing it to ip instead of ipa? > > ~m > I do not think we are getting into any real changes, it would not even be a good idea to mess with the name or the logo at this point. I think people were mostly interested in the reasoning why the first "f" is lower and if people would have any strong preference over changing it... TLDR; no actions needed :-) From mbasti at redhat.com Tue Mar 24 11:39:58 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 12:39:58 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <5507F62E.9080004@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> Message-ID: <55114D0E.8000705@redhat.com> On 17/03/15 10:38, Milan Kubik wrote: > Hi, > > On 03/16/2015 05:23 PM, Martin Basti wrote: >> On 16/03/15 15:32, Milan Kubik wrote: >>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>> Hi, >>>>> >>>>> this is a patch with port of [1] to pytest. >>>>> >>>>> [1]: >>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>> >>>>> Cheers, >>>>> Milan >>>>> >>>>> >>>>> >>>> Added few more asserts in methods where the test could fail and >>>> cause other errors. >>>> >>>> >>> New version of the patch after brief discussion with Martin Basti. >>> Removed unnecessary variable assignments and separated a new test case. >>> >>> >> Hello, >> >> thank you for the patch. 
>> I have a few nitpicks: >> 1) >> You can remove this and use just hexlify(s) >> +def str_to_hex(s): >> + return ''.join("{:02x}".format(ord(c)) for c in s) > done >> >> 2) >> + def test_find_secret_key(self, p11): >> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >> label=u"???-aest") >> >> In tests before you tested the exact number of expected IDs returned >> by find_keys method, why not here? > Lack of attention. > Fixed the assert in `test_search_for_master_key` which does the same > thing. Merged `test_find_secret_key` with `test_search_for_master_key` > where it belongs. >> >> Martin^2 > > Milan > > Thank you for patches, just two nitpicks: 1) Can you use the ipaplatform.paths constant? This is platform specific. LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is automatically mapped into LIBSOFTHSM2_SO_64 instead of: + +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" + 2) Can you please check if keys were really deleted? + def test_delete_key(self, p11): -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From duffy at redhat.com Tue Mar 24 11:47:15 2015 From: duffy at redhat.com (=?windows-1252?Q?M=E1ir=EDn_Duffy?=) Date: Tue, 24 Mar 2015 07:47:15 -0400 Subject: [Freeipa-devel] FreeIPA or freeIPA? (logo/spelling change) In-Reply-To: <55114CF9.6080402@redhat.com> References: <551115D4.9040308@redhat.com> <20150324082914.GR17619@redhat.com> <5511440B.8010002@redhat.com> <55114C15.3090001@redhat.com> <55114CF9.6080402@redhat.com> Message-ID: <55114EC3.8080505@redhat.com> Hey Martin & Petr, On 03/24/2015 07:39 AM, Martin Kosek wrote: > I do not think we are getting into any real changes, it would not even be a > good idea to mess with the name or the logo at this point. I think people were > mostly interested in the reasoning why the first "f" is lower and if people > would have any strong preference over changing it... > > TLDR; no actions needed :-) Ah okay! I think I did the logo in 2007.... I don't think the lowercase f was so deliberate a decision; I think a lot of logos we were doing at the time were lowercase so I just followed suit (e.g., the Fedora logo we use today with a lowercase f was 2006 or 2007 I think. The first Fedora logo had an uppercase F!) I think the change from lowercase to uppercase F is minimal enough that it shouldn't be an issue. Let me know if you decide to make that (or any other) changes and I'm happy to help. IIRC I built that logo to be pretty easily editable. 
(also this is going to be held in the mod queue since I'm not a list member :) ) ~m From jcholast at redhat.com Tue Mar 24 12:45:50 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 24 Mar 2015 13:45:50 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <550ABC18.8090009@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> <5507D13E.7040107@redhat.com> <5509C674.90104@redhat.com> <550A6E97.9010103@redhat.com> <550ABC18.8090009@redhat.com> Message-ID: <55115C7E.1090306@redhat.com> Dne 19.3.2015 v 13:07 thierry bordaz napsal(a): > On 03/19/2015 07:37 AM, Jan Cholasta wrote: >> Dne 18.3.2015 v 19:39 thierry bordaz napsal(a): >>> On 03/17/2015 08:01 AM, Jan Cholasta wrote: >>>> Dne 16.3.2015 v 12:06 David Kupka napsal(a): >>>>> On 03/06/2015 07:30 PM, thierry bordaz wrote: >>>>>> On 02/19/2015 04:19 PM, Martin Basti wrote: >>>>>>> On 19/02/15 13:01, thierry bordaz wrote: >>>>>>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>> >>>>>>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>>>>>> >>>>>>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>>>>>> stageuser-add. >>>>>>>>>>>>>> Stage >>>>>>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>> thierry >>>>>>>>>>>>> >>>>>>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; >>>>>>>>>>>>> instead >>>>>>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>>>>>>>> >>>>>>>>>>>>> The stageuser help (docstring) is copied from the user >>>>>>>>>>>>> plugin, and >>>>>>>>>>>>> discusses things like account lockout and disabling users. It >>>>>>>>>>>>> should >>>>>>>>>>>>> rather explain what stageuser itself does. (And I don't very >>>>>>>>>>>>> much >>>>>>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>>>>>> Also decide if the docs should call it "staged user" or "stage >>>>>>>>>>>>> user" >>>>>>>>>>>>> or "stageuser". >>>>>>>>>>>>> >>>>>>>>>>>>> A lot of the code is copied and pasted over from the users >>>>>>>>>>>>> plugin. >>>>>>>>>>>>> Don't do that. Either import things (e.g. >>>>>>>>>>>>> validate_nsaccountlock) >>>>>>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>>>>>> module. >>>>>>>>>>>>> >>>>>>>>>>>>> For the `user` object, since so much is the same, it might be >>>>>>>>>>>>> best to >>>>>>>>>>>>> create a common base class for user and stageuser; and >>>>>>>>>>>>> similarly >>>>>>>>>>>>> for >>>>>>>>>>>>> the Command plugins. >>>>>>>>>>>>> >>>>>>>>>>>>> The default permissions need different names, and you don't >>>>>>>>>>>>> need >>>>>>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>>>>>> script. 
>>>>>>>>>>>>> >>>>>>>>>>>> Hello, >>>>>>>>>>>> >>>>>>>>>>>> This modified patch is mainly moving common base class >>>>>>>>>>>> into a >>>>>>>>>>>> new >>>>>>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>>>>>> accounts. >>>>>>>>>>>> It also creates a better description of what are stage >>>>>>>>>>>> user, >>>>>>>>>>>> how >>>>>>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>>>>>> active/stage >>>>>>>>>>>> user managed permissions. >>>>>>>>>>>> >>>>>>>>>>>> thanks >>>>>>>>>>>> thierry >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Thanks David for the reviews. Here the last patches >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>>>>>> lines so >>>>>>>>>> I'm attaching fixed version (and unchanged patch >>>>>>>>>> freeipa-tbordaz-0003-3 >>>>>>>>>> to keep them together). >>>>>>>>>> >>>>>>>>>> The ULC feature is still WIP but these patches look good to me >>>>>>>>>> and >>>>>>>>>> don't >>>>>>>>>> break anything as far as I tested. >>>>>>>>>> We should push them now to avoid further rebases. Thierry can >>>>>>>>>> then >>>>>>>>>> prepare other patches delivering the rest of ULC functionality. >>>>>>>>> >>>>>>>>> Few comments from just reading the patches: >>>>>>>>> >>>>>>>>> 1) I would name the base class "baseuser", "account" does not >>>>>>>>> necessarily mean user account. >>>>>>>>> >>>>>>>>> 2) This is very wrong: >>>>>>>>> >>>>>>>>> -class user_add(LDAPCreate): >>>>>>>>> +class user_add(user, LDAPCreate): >>>>>>>>> >>>>>>>>> You are creating a plugin which is both an object and an command. >>>>>>>>> >>>>>>>>> 3) This is purely subjective, but I don't like the name >>>>>>>>> "deleteuser", as it has a verb in it. We usually don't do that and >>>>>>>>> IMHO we shouldn't do that. >>>>>>>>> >>>>>>>>> Honza >>>>>>>>> >>>>>>>> >>>>>>>> Thank you for the review. I am attaching the updates patches >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Freeipa-devel mailing list >>>>>>>> Freeipa-devel at redhat.com >>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>> Hello, >>>>>>> I'm getting errors during make rpms: >>>>>>> >>>>>>> if [ "" != "yes" ]; then \ >>>>>>> ./makeapi --validate; \ >>>>>>> ./makeaci --validate; \ >>>>>>> fi >>>>>>> >>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:641 command "baseuser_add" >>>>>>> doc is not internationalized >>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:653 command "baseuser_find" >>>>>>> doc is not internationalized >>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:647 command "baseuser_mod" >>>>>>> doc is not internationalized >>>>>>> 0 commands without doc, 3 commands whose doc is not i18n >>>>>>> Command baseuser_add in ipalib, not in API >>>>>>> Command baseuser_find in ipalib, not in API >>>>>>> Command baseuser_mod in ipalib, not in API >>>>>>> >>>>>>> There are one or more new commands defined. 
>>>>>>> Update API.txt and increment the minor version in VERSION. >>>>>>> >>>>>>> There are one or more documentation problems. >>>>>>> You must fix these before preceeding >>>>>>> >>>>>>> Issues probably caused by this: >>>>>>> 1) >>>>>>> You should not use the register decorator, if this class is just for >>>>>>> inheritance >>>>>>> @register() >>>>>>> class baseuser_add(LDAPCreate): >>>>>>> >>>>>>> @register() >>>>>>> class baseuser_mod(LDAPUpdate): >>>>>>> >>>>>>> @register() >>>>>>> class baseuser_find(LDAPSearch): >>>>>>> >>>>>>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>>>>>> >>>>>>> 2) >>>>>>> there might be an issue with >>>>>>> @register() >>>>>>> class baseuser(LDAPObject): >>>>>>> >>>>>>> the register decorator should not be there, I was warned by >>>>>>> Petr^3 to >>>>>>> not use permission in parent class. The same permission should be >>>>>>> specified only in one place (for example user class), (otherwise >>>>>>> they >>>>>>> will be generated twice??) I don't know more details about it. >>>>>>> >>>>>>> -- >>>>>>> Martin Basti >>>>>> >>>>>> Hello Martin, Jan, >>>>>> >>>>>> Thanks for your review. >>>>>> I changed the patch so that it does not register baseuser_*. Also >>>>>> increase the minor version because of new command. >>>>>> Finally I moved the managed_permission definition out of the parent >>>>>> baseuser class. >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> >>>>> >>>>> Martin, could you please verify that the issues you encountered are >>>>> fixed? >>>>> >>>>> Thanks! >>>>> >>>> >>>> You bumped wrong version variable: >>>> >>>> -IPA_VERSION_MINOR=1 >>>> +IPA_VERSION_MINOR=2 >>>> >>>> It should have been IPA_API_VERSION_MINOR (at the bottom of the file), >>>> including the last change comment below it. >>>> >>>> >>>> IMO baseuser should include superclasses for all the usual commands >>>> (add, mod, del, show, find) and stageuser/deleteuser commands should >>>> inherit from them. >>>> >>>> >>>> You don't need to override class properties like active_container_dn >>>> and takes_params on baseuser subclasses when they have the same value >>>> as in baseuser. >>>> >>>> >>>> Honza >>>> >>> Hello Honza, >>> >>> Thanks for the review. I did the modifications you recommended >>> within that attached patches >>> >>> * Change version >> >> Please also update the comment below (e.g. "# Last change: tbordaz - >> Add stageuser_add command") >> >>> * create the baseuser_* plugins commands and use them in the >>> user/stageuser plugin commands >>> * Do not redefine the class properties in the subclasses. >> >> There are still some in baseuser command classes: >> >> +class baseuser_add(LDAPCreate): >> + """ >> + Prototype command plugin to be implemented by real plugin >> + """ >> + active_container_dn = api.env.container_user >> + has_output_params = LDAPCreate.has_output_params >> >> You don't need to set active_container_dn here, you only need to set >> it in baseuser. Then in stageuser_add and other subclasses you use >> "self.obj.active_container_dn" instead of "self.active_container_dn". >> >> You also don't need to override has_output_params if you are not >> changing its value - you are inheriting from LDAPCreate, so >> baseuser_add.has_output_params implicitly has the same value as >> LDAPCreate.has_output_params. >> >>> >>> Thanks >>> thierry >>> >> > > Hello Honza, > > Thanks for your patience .. :-) > I understand my mistake. Just a question, in a plugin command > (user_add), is 'self.obj' referring to the plugin object (like 'user') ? Yes, that's correct. 
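To make the pattern concrete, here is a tiny standalone sketch (plain
Python, not the real ipalib classes; the attribute names just mirror
your patch). The point is that a command reads shared settings from its
object plugin through self.obj, so the command subclasses do not have
to redefine them:

    class baseuser(object):
        active_container_dn = 'cn=users,cn=accounts'

    class stageuser(baseuser):
        active_container_dn = 'cn=staged users,cn=accounts,cn=provisioning'

    class baseuser_add(object):
        obj = None  # set by hand in this sketch; the framework wires it
                    # up for real plugins

        def execute(self, uid):
            # the container DN comes from the object plugin, so
            # stageuser_add itself needs no override
            return 'uid=%s,%s' % (uid, self.obj.active_container_dn)

    class stageuser_add(baseuser_add):
        pass

    cmd = stageuser_add()
    cmd.obj = stageuser()
    # prints: uid=tuser,cn=staged users,cn=accounts,cn=provisioning
    print(cmd.execute('tuser'))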
> > updated patches (with the appropriate naming and patch versioning). > > thanks > theirry > One more thing: Instead of: class stageuser(baseuser): ... # take_params does not support 'nsaccountlock' option stageuser_takes_params_list = [] for elt in baseuser.takes_params: if isinstance(elt, Bool) and elt.param_spec == 'nsaccountlock?': pass else: stageuser_takes_params_list.append(elt) takes_params = tuple(stageuser_takes_params_list) I would remove nsaccountlock from baseuser.takes_params and add it in user.takes_params: class user(baseuser): ... takes_params = baseuser.takes_params + ( Bool('nsaccountlock?', label=_('Account disabled'), flags=['no_option'], ), ) -- Jan Cholasta From dkupka at redhat.com Tue Mar 24 13:23:01 2015 From: dkupka at redhat.com (David Kupka) Date: Tue, 24 Mar 2015 14:23:01 +0100 Subject: [Freeipa-devel] [PATCH 0212] Server Upgrade: Fix comments In-Reply-To: <551144B2.2060706@redhat.com> References: <551024D7.1080906@redhat.com> <55112663.1060101@redhat.com> <551144B2.2060706@redhat.com> Message-ID: <55116535.6000900@redhat.com> On 03/24/2015 12:04 PM, Martin Basti wrote: > On 24/03/15 09:54, Martin Basti wrote: >> On 23/03/15 15:36, Martin Basti wrote: >>> Attached patch fixes comments which I forgot to edit in 'make upgrade >>> deterministic' patchset >>> >>> >>> >> I missed some dictionaries which should be lists. >> >> Updated patch attached. >> >> -- >> Martin Basti >> >> > Updated patch attached > Thanks for the patch, LGTM, ACK. -- David Kupka From mkubik at redhat.com Tue Mar 24 13:41:32 2015 From: mkubik at redhat.com (Milan Kubik) Date: Tue, 24 Mar 2015 14:41:32 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <55114D0E.8000705@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> <55114D0E.8000705@redhat.com> Message-ID: <5511698C.3060305@redhat.com> Hello, thanks for the review. On 03/24/2015 12:39 PM, Martin Basti wrote: > On 17/03/15 10:38, Milan Kubik wrote: >> Hi, >> >> On 03/16/2015 05:23 PM, Martin Basti wrote: >>> On 16/03/15 15:32, Milan Kubik wrote: >>>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>>> Hi, >>>>>> >>>>>> this is a patch with port of [1] to pytest. >>>>>> >>>>>> [1]: >>>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>>> >>>>>> Cheers, >>>>>> Milan >>>>>> >>>>>> >>>>>> >>>>> Added few more asserts in methods where the test could fail and >>>>> cause other errors. >>>>> >>>>> >>>> New version of the patch after brief discussion with Martin Basti. >>>> Removed unnecessary variable assignments and separated a new test case. >>>> >>>> >>> Hello, >>> >>> thank you for the patch. >>> I have a few nitpicks: >>> 1) >>> You can remove this and use just hexlify(s) >>> +def str_to_hex(s): >>> + return ''.join("{:02x}".format(ord(c)) for c in s) >> done >>> >>> 2) >>> + def test_find_secret_key(self, p11): >>> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >>> label=u"???-aest") >>> >>> In tests before you tested the exact number of expected IDs returned >>> by find_keys method, why not here? >> Lack of attention. >> Fixed the assert in `test_search_for_master_key` which does the same >> thing. Merged `test_find_secret_key` with >> `test_search_for_master_key` where it belongs. 
>>> >>> Martin^2 >> >> Milan >> >> > Thank you for patches, just two nitpicks: > > 1) > Can you use the ipaplatform.paths constant? This is platform specific. > LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" > LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" > > Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is > automatically mapped into LIBSOFTHSM2_SO_64 > > instead of: > + > +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" > + > Done. > 2) > Can you please check if keys were really deleted? > + def test_delete_key(self, p11): Done. > -- > Martin Basti I also moved `test_search_for_master_key` right after `test_generate_master_key` and changed the assert message to a more specific one. Cheers, Milan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkubik-0001-5-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 12123 bytes Desc: not available URL: From amarecek at redhat.com Tue Mar 24 14:06:54 2015 From: amarecek at redhat.com (=?utf-8?Q?Ale=C5=A1_Mare=C4=8Dek?=) Date: Tue, 24 Mar 2015 10:06:54 -0400 (EDT) Subject: [Freeipa-devel] [PATCH] 0001 ipatests: SOA record Maintenance tests In-Reply-To: <680074168.2796459.1427205923132.JavaMail.zimbra@redhat.com> Message-ID: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> Greetings! This is my very first patch, ticket#4746. Have a nice day! - alich - -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-alich-0001-ipatests-added-tests-for-SOA-record-Maintenance.patch Type: text/x-patch Size: 32366 bytes Desc: not available URL: From dkupka at redhat.com Tue Mar 24 15:26:12 2015 From: dkupka at redhat.com (David Kupka) Date: Tue, 24 Mar 2015 16:26:12 +0100 Subject: [Freeipa-devel] New installer PoC In-Reply-To: <550FC8E9.4020502@redhat.com> References: <550FC8E9.4020502@redhat.com> Message-ID: <55118214.7020804@redhat.com> On 03/23/2015 09:03 AM, Jan Cholasta wrote: > Hi, > > the attached patch contains a new PoC installer for httpd. > > Design goals: > > 1) Make code related to any particular configuration change co-located, > be it install/uninstall/upgrade. > > 2) Get rid of code duplicates. > > 3) Use the same code path for install and upgrade. > > 4) Provide metadata for parameters from which option parsers etc. can be > generated. > > 5) Make installers plugable. This is not really apparent from the patch, > since it only implements installer for a single component, but I plan to > make the whole thing extensible by plugins. > > Honza > > > Hi! I haven't tested the patch yet but it looks good at first glance. I definitely like the generator-style (un)installation steps. -- David Kupka From mbasti at redhat.com Tue Mar 24 15:39:21 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 16:39:21 +0100 Subject: [Freeipa-devel] [PATCH] 0001 ipatests: SOA record Maintenance tests In-Reply-To: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> References: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> Message-ID: <55118529.8010708@redhat.com> On 24/03/15 15:06, Ale? Mare?ek wrote: > Greetings! > This is my very first patch, ticket#4746. > > Have a nice day! > - alich - > > Thank you for the patch. 
Just nitpicks: 1) + cleanup_commands = [ + ('dnszone_del', [zone6], {'continue': True}), + ('dnszone_del', [zone6b], {'continue': True}), + ] would be better do it in this way, continue option will to try remove all zones: + cleanup_commands = [ + ('dnszone_del', [zone6, zone6b], {'continue': True}), + ] 2) I'm fine with zone6b, but was there any reason to create zone6b, instead of reusing zone 1 or 2 or 3? 3) Please fix whitespace errors. $ git am freeipa-alich-0001-ipatests-added-tests-for-SOA-record-Maintenance.patch Applying: ipatests - added tests for SOA record Maintenance /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:482: trailing whitespace. /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:758: new blank line at EOF. + warning: 2 lines add whitespace errors. 4) I know the dns plugin tests are so far from PEP8, but try to keep PEP8 in new code Otherwise test works as expected. Martin^2 -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbasti at redhat.com Tue Mar 24 15:40:19 2015 From: mbasti at redhat.com (Martin Basti) Date: Tue, 24 Mar 2015 16:40:19 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <5511698C.3060305@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> <55114D0E.8000705@redhat.com> <5511698C.3060305@redhat.com> Message-ID: <55118563.2010503@redhat.com> On 24/03/15 14:41, Milan Kubik wrote: > Hello, > > thanks for the review. > > On 03/24/2015 12:39 PM, Martin Basti wrote: >> On 17/03/15 10:38, Milan Kubik wrote: >>> Hi, >>> >>> On 03/16/2015 05:23 PM, Martin Basti wrote: >>>> On 16/03/15 15:32, Milan Kubik wrote: >>>>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>>>> Hi, >>>>>>> >>>>>>> this is a patch with port of [1] to pytest. >>>>>>> >>>>>>> [1]: >>>>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>>>> >>>>>>> >>>>>>> Cheers, >>>>>>> Milan >>>>>>> >>>>>>> >>>>>>> >>>>>> Added few more asserts in methods where the test could fail and >>>>>> cause other errors. >>>>>> >>>>>> >>>>> New version of the patch after brief discussion with Martin Basti. >>>>> Removed unnecessary variable assignments and separated a new test >>>>> case. >>>>> >>>>> >>>> Hello, >>>> >>>> thank you for the patch. >>>> I have a few nitpicks: >>>> 1) >>>> You can remove this and use just hexlify(s) >>>> +def str_to_hex(s): >>>> + return ''.join("{:02x}".format(ord(c)) for c in s) >>> done >>>> >>>> 2) >>>> + def test_find_secret_key(self, p11): >>>> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >>>> label=u"???-aest") >>>> >>>> In tests before you tested the exact number of expected IDs >>>> returned by find_keys method, why not here? >>> Lack of attention. >>> Fixed the assert in `test_search_for_master_key` which does the same >>> thing. Merged `test_find_secret_key` with >>> `test_search_for_master_key` where it belongs. >>>> >>>> Martin^2 >>> >>> Milan >>> >>> >> Thank you for patches, just two nitpicks: >> >> 1) >> Can you use the ipaplatform.paths constant? This is platform specific. 
>> LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" >> LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" >> >> Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is >> automatically mapped into LIBSOFTHSM2_SO_64 >> >> instead of: >> + >> +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" >> + >> > Done. >> 2) >> Can you please check if keys were really deleted? >> + def test_delete_key(self, p11): > Done. >> -- >> Martin Basti > > I also moved `test_search_for_master_key` right after > `test_generate_master_key` and changed the assert message to a more > specific one. > > Cheers, > Milan Please fix this: 1) $ git am freeipa-mkubik-0001-5-ipatests-port-of-p11helper-test-from-github.patch Applying: ipatests: port of p11helper test from github /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:228: new blank line at EOF. + warning: 1 line adds whitespace errors. 2) Please respect PEP8 if it is possible 3) I'm still not sure with this: assert len(master_key) == 0, "The master key should be deleted." following example is more pythonic assert not master_key, "The master key...." 4) Related to 3), should we test return value, if correct type was returned? assert isinstance(master_key, list) and not master_key, "....." I do not insist on this. Otherwise it works as expected. -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From abokovoy at redhat.com Tue Mar 24 16:47:12 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Tue, 24 Mar 2015 18:47:12 +0200 Subject: [Freeipa-devel] [PATCH] 0001 ipatests: SOA record Maintenance tests In-Reply-To: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> References: <680074168.2796459.1427205923132.JavaMail.zimbra@redhat.com> <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> Message-ID: <20150324164712.GV3878@redhat.com> On Tue, 24 Mar 2015, Ale? Mare?ek wrote: >Greetings! >This is my very first patch, ticket#4746. > >Have a nice day! > - alich - >From 0f7eb27d1470e785e3799bc78c367d8118917f99 Mon Sep 17 00:00:00 2001 >From: root >Date: Tue, 24 Mar 2015 14:40:49 +0100 >Subject: [PATCH] ipatests - added tests for SOA record Maintenance Please set up your git configuration to use proper email and user name in commits. See http://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup for details. -- / Alexander Bokovoy From slaz at seznam.cz Tue Mar 24 17:08:20 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmlzbGF2IEzDoXpuacSNa2E=?=) Date: Tue, 24 Mar 2015 18:08:20 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5511180D.8050103@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> Message-ID: <55119A04.4060301@seznam.cz> On 03/24/2015 08:53 AM, Jan Cholasta wrote: > Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): >> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>> ... >>>>>>> Given the above, HBAC rules could contain (time, anchor), where >>>>>>> anchor >>>>>>> is "UTC", "user local time" or "host local time". 
>>>>>> Truth is, it was not really clear to me from the last week's >>>>>> discussion >>>>>> whose "Local Time" to use - do we use host's or do we use >>>>>> user's? It >>>>>> would make sense to me to use the user's local time. But then you >>>>>> would >>>>>> need to really store at least the timezone information with each >>>>>> user >>>>>> object. And that information should probably change with user moving >>>>>> between different timezones. That's quite a pickle I am in right >>>>>> here. >>>>> >>>>> IMO whether to use user or host local time depends on organization >>>>> local >>>>> policy, hence my suggestion to support both. >>>> >>>> I am bit confused, I would like to make sure we are on the same >>>> page with >>>> regards to Local Time. When the Local Time rule is created, anchor >>>> will be set >>>> to "Local Time". Then SSSD would simply use host's local time, in >>>> whichever >>>> time zone the HBAC host is. >>> >>> Yes, that was my understanding also. >>> >>>> >>>> So this is the default host enforcement. For the user, you want to >>>> let SSSD >>>> check authenticated user's entry, to see if there is a timezone >>>> information? >>>> This would of course depend on the information being available. For >>>> AD users, >>>> you would need to set it in ID Views or similar. >>> >>> Yes, also in a previous e-mail, there was a suggestion to change >>> timezones by admin when the user changes timezones -- I didn't like >>> that >>> part, it seems really error prone and tedious. *If* there was this >>> choice, it should not be the default, rather the default should also be >>> host local time IMO. > > I don't think you can expect host-local time to be good enough for > everyone. > >> >> Host local time zone was the original case I expected. Enforcing >> *user* local >> time zone is where this discussion started. Honze proposed making >> this an >> option - leaving us to 3 different time modes: >> >> * UTC - stored as (time + olson time zone), enforcement is clear >> * Host Local Time - stored as (time + Host Local Time), enforcement by >> /etc/localtime >> * User Local Time - stored as (time + User Local Time), enforcement >> by ??? >> >> So the rule may be: >> * Employee Foo can access web service Bar only in his work hours > > Correct. > >> >> IMO, it is realistic for an administrator to set the time zone >> setting in the >> employee entry. Of course, it gets tricky when the user starts moving >> around >> the globe... >> > > It doesn't have to be the administrator, it can be automated by a 3rd > party service: > > 1. Employee schedules bussiness trip in time management system > 2. Manager approves the bussiness trip in the time management system > 3. The time management system takes care of changing the employee's > user object timezone when the bussiness trip starts and ends. > I have to say I am not very sure about this anymore. Although it would be a great feature to have users' local time policies, it brings a lot of traps on the way. The responsibility of keeping the LDAP database consistent with reality is laid to a 3rd party application. If that breaks or does not work well, this responsibility might be sometimes transferred to admin. But if there's a lot of people traveling at a time... I'm having mixed feelings about this. 
From simo at redhat.com Tue Mar 24 18:20:21 2015 From: simo at redhat.com (Simo Sorce) Date: Tue, 24 Mar 2015 14:20:21 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551114EB.9040508@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> Message-ID: <1427221221.8302.25.camel@willson.usersys.redhat.com> On Tue, 2015-03-24 at 08:40 +0100, Martin Kosek wrote: > On 03/24/2015 08:20 AM, Jakub Hrozek wrote: > > On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: > >> On 03/24/2015 07:16 AM, Jan Cholasta wrote: > >>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): > >> ... > >>>>> Given the above, HBAC rules could contain (time, anchor), where anchor > >>>>> is "UTC", "user local time" or "host local time". > >>>> Truth is, it was not really clear to me from the last week's discussion > >>>> whose "Local Time" to use - do we use host's or do we use user's? It > >>>> would make sense to me to use the user's local time. But then you would > >>>> need to really store at least the timezone information with each user > >>>> object. And that information should probably change with user moving > >>>> between different timezones. That's quite a pickle I am in right here. > >>> > >>> IMO whether to use user or host local time depends on organization local > >>> policy, hence my suggestion to support both. > >> > >> I am bit confused, I would like to make sure we are on the same page with > >> regards to Local Time. When the Local Time rule is created, anchor will be set > >> to "Local Time". Then SSSD would simply use host's local time, in whichever > >> time zone the HBAC host is. > > > > Yes, that was my understanding also. > > > >> > >> So this is the default host enforcement. For the user, you want to let SSSD > >> check authenticated user's entry, to see if there is a timezone information? > >> This would of course depend on the information being available. For AD users, > >> you would need to set it in ID Views or similar. > > > > Yes, also in a previous e-mail, there was a suggestion to change > > timezones by admin when the user changes timezones -- I didn't like that > > part, it seems really error prone and tedious. *If* there was this > > choice, it should not be the default, rather the default should also be > > host local time IMO. > > Host local time zone was the original case I expected. Enforcing *user* local > time zone is where this discussion started. Honze proposed making this an > option - leaving us to 3 different time modes: > > * UTC - stored as (time + olson time zone), enforcement is clear > * Host Local Time - stored as (time + Host Local Time), enforcement by > /etc/localtime > * User Local Time - stored as (time + User Local Time), enforcement by ??? > > So the rule may be: > * Employee Foo can access web service Bar only in his work hours > > IMO, it is realistic for an administrator to set the time zone setting in the > employee entry. Of course, it gets tricky when the user starts moving around > the globe... > Host Based Access Control is about controlling access based on the *HOST*. I do not see any space for user time zones honestly. If and when someone will vehemently ask for 'per-user' time zones we can talk about it. Simo. 
-- Simo Sorce * Red Hat, Inc * New York From jcholast at redhat.com Wed Mar 25 07:21:35 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 25 Mar 2015 08:21:35 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55119A04.4060301@seznam.cz> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> Message-ID: <551261FF.4020806@redhat.com> Dne 24.3.2015 v 18:08 Stanislav L?zni?ka napsal(a): > On 03/24/2015 08:53 AM, Jan Cholasta wrote: >> Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): >>> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>>> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>>> ... >>>>>>>> Given the above, HBAC rules could contain (time, anchor), where >>>>>>>> anchor >>>>>>>> is "UTC", "user local time" or "host local time". >>>>>>> Truth is, it was not really clear to me from the last week's >>>>>>> discussion >>>>>>> whose "Local Time" to use - do we use host's or do we use >>>>>>> user's? It >>>>>>> would make sense to me to use the user's local time. But then you >>>>>>> would >>>>>>> need to really store at least the timezone information with each >>>>>>> user >>>>>>> object. And that information should probably change with user moving >>>>>>> between different timezones. That's quite a pickle I am in right >>>>>>> here. >>>>>> >>>>>> IMO whether to use user or host local time depends on organization >>>>>> local >>>>>> policy, hence my suggestion to support both. >>>>> >>>>> I am bit confused, I would like to make sure we are on the same >>>>> page with >>>>> regards to Local Time. When the Local Time rule is created, anchor >>>>> will be set >>>>> to "Local Time". Then SSSD would simply use host's local time, in >>>>> whichever >>>>> time zone the HBAC host is. >>>> >>>> Yes, that was my understanding also. >>>> >>>>> >>>>> So this is the default host enforcement. For the user, you want to >>>>> let SSSD >>>>> check authenticated user's entry, to see if there is a timezone >>>>> information? >>>>> This would of course depend on the information being available. For >>>>> AD users, >>>>> you would need to set it in ID Views or similar. >>>> >>>> Yes, also in a previous e-mail, there was a suggestion to change >>>> timezones by admin when the user changes timezones -- I didn't like >>>> that >>>> part, it seems really error prone and tedious. *If* there was this >>>> choice, it should not be the default, rather the default should also be >>>> host local time IMO. >> >> I don't think you can expect host-local time to be good enough for >> everyone. >> >>> >>> Host local time zone was the original case I expected. Enforcing >>> *user* local >>> time zone is where this discussion started. Honze proposed making >>> this an >>> option - leaving us to 3 different time modes: >>> >>> * UTC - stored as (time + olson time zone), enforcement is clear >>> * Host Local Time - stored as (time + Host Local Time), enforcement by >>> /etc/localtime >>> * User Local Time - stored as (time + User Local Time), enforcement >>> by ??? >>> >>> So the rule may be: >>> * Employee Foo can access web service Bar only in his work hours >> >> Correct. 
>> >>> >>> IMO, it is realistic for an administrator to set the time zone >>> setting in the >>> employee entry. Of course, it gets tricky when the user starts moving >>> around >>> the globe... >>> >> >> It doesn't have to be the administrator, it can be automated by a 3rd >> party service: >> >> 1. Employee schedules bussiness trip in time management system >> 2. Manager approves the bussiness trip in the time management system >> 3. The time management system takes care of changing the employee's >> user object timezone when the bussiness trip starts and ends. >> > I have to say I am not very sure about this anymore. Although it would > be a great feature to have users' local time policies, it brings a lot > of traps on the way. The responsibility of keeping the LDAP database > consistent with reality is laid to a 3rd party application. If that > breaks or does not work well, this responsibility might be sometimes > transferred to admin. But if there's a lot of people traveling at a > time... I'm having mixed feelings about this. Timezones or not, there is always the responsibility of keeping LDAP in sync with reality. I'm just saying there are possibilities besides doing it manually. Anyway, I think both user-local and host-local time can also be implemented based on group membership, without storing anything with user and host object, with HBAC rules like this: Rule name: allow_tz_europe_prague_users Member groups: tz_europe_prague Host category: all Service category: all Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) Time zone: Europe/Prague Description: Allow users in Europe/Prague access during bussiness hours Enabled: TRUE Rule name: allow_tz_europe_prague_hosts User category: all Member hostgroups: tz_europe_prague Service category: all Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) Time zone: Europe/Prague Description: Allow access to hosts in Europe/Prague during bussiness hours Enabled: TRUE I guess things might get complicated if you want to do limit access based on both user and host local time, but I'm not sure if anyone would actually want to do that. 
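To make the semantics of the "Access time" plus "Time zone" pair
concrete, this is roughly what the evaluating side would have to do
(just an illustration, not a proposed implementation; pytz is only used
here to resolve the time zone name, and the rule fields above are not
meant as a final schema):

    from datetime import datetime, time
    import pytz

    def access_time_matches(tz_name, start, end, weekdays):
        # evaluate "now" in the time zone named by the rule, not in UTC
        now = datetime.now(pytz.timezone(tz_name))
        return now.strftime('%a') in weekdays and start <= now.time() <= end

    # allow_tz_europe_prague_users: 0800-1600, Mon-Fri, Europe/Prague
    if access_time_matches('Europe/Prague', time(8, 0), time(16, 0),
                           ('Mon', 'Tue', 'Wed', 'Thu', 'Fri')):
        print('access time constraint satisfied')

The same evaluation works for the host rule, only the group membership
that selects the rule differs.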
-- Jan Cholasta From tbordaz at redhat.com Wed Mar 25 09:04:51 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Wed, 25 Mar 2015 10:04:51 +0100 Subject: [Freeipa-devel] [PATCH] 0003-3 User life cycle: new stageuser plugin with add verb In-Reply-To: <55115C7E.1090306@redhat.com> References: <53E4D6AE.6050505@redhat.com> <54045399.3030404@redhat.com> <54196346.5070500@redhat.com> <54D0A7EB.1010700@redhat.com> <54D22BE2.9050407@redhat.com> <54D24567.4010103@redhat.com> <54E5D092.6030708@redhat.com> <54E5FF07.1080809@redhat.com> <54F9F243.5090003@redhat.com> <5506B918.6000708@redhat.com> <5507D13E.7040107@redhat.com> <5509C674.90104@redhat.com> <550A6E97.9010103@redhat.com> <550ABC18.8090009@redhat.com> <55115C7E.1090306@redhat.com> Message-ID: <55127A33.4090801@redhat.com> On 03/24/2015 01:45 PM, Jan Cholasta wrote: > Dne 19.3.2015 v 13:07 thierry bordaz napsal(a): >> On 03/19/2015 07:37 AM, Jan Cholasta wrote: >>> Dne 18.3.2015 v 19:39 thierry bordaz napsal(a): >>>> On 03/17/2015 08:01 AM, Jan Cholasta wrote: >>>>> Dne 16.3.2015 v 12:06 David Kupka napsal(a): >>>>>> On 03/06/2015 07:30 PM, thierry bordaz wrote: >>>>>>> On 02/19/2015 04:19 PM, Martin Basti wrote: >>>>>>>> On 19/02/15 13:01, thierry bordaz wrote: >>>>>>>>> On 02/04/2015 05:14 PM, Jan Cholasta wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> Dne 4.2.2015 v 15:25 David Kupka napsal(a): >>>>>>>>>>> On 02/03/2015 11:50 AM, thierry bordaz wrote: >>>>>>>>>>>> On 09/17/2014 12:32 PM, thierry bordaz wrote: >>>>>>>>>>>>> On 09/01/2014 01:08 PM, Petr Viktorin wrote: >>>>>>>>>>>>>> On 08/08/2014 03:54 PM, thierry bordaz wrote: >>>>>>>>>>>>>>> Hi, >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The attached patch is related to 'User Life Cycle' >>>>>>>>>>>>>>> (https://fedorahosted.org/freeipa/ticket/3813) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> It creates a stageuser plugin with a first function >>>>>>>>>>>>>>> stageuser-add. >>>>>>>>>>>>>>> Stage >>>>>>>>>>>>>>> user entries are provisioned under 'cn=staged >>>>>>>>>>>>>>> users,cn=accounts,cn=provisioning,SUFFIX'. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Thanks >>>>>>>>>>>>>>> thierry >>>>>>>>>>>>>> >>>>>>>>>>>>>> Avoid `from ipalib.plugins.baseldap import *` in new code; >>>>>>>>>>>>>> instead >>>>>>>>>>>>>> import the module itself and use e.g. `baseldap.LDAPObject`. >>>>>>>>>>>>>> >>>>>>>>>>>>>> The stageuser help (docstring) is copied from the user >>>>>>>>>>>>>> plugin, and >>>>>>>>>>>>>> discusses things like account lockout and disabling >>>>>>>>>>>>>> users. It >>>>>>>>>>>>>> should >>>>>>>>>>>>>> rather explain what stageuser itself does. (And I don't very >>>>>>>>>>>>>> much >>>>>>>>>>>>>> like the Note about the interface being badly designed...) >>>>>>>>>>>>>> Also decide if the docs should call it "staged user" or >>>>>>>>>>>>>> "stage >>>>>>>>>>>>>> user" >>>>>>>>>>>>>> or "stageuser". >>>>>>>>>>>>>> >>>>>>>>>>>>>> A lot of the code is copied and pasted over from the users >>>>>>>>>>>>>> plugin. >>>>>>>>>>>>>> Don't do that. Either import things (e.g. >>>>>>>>>>>>>> validate_nsaccountlock) >>>>>>>>>>>>>> from the users plugin, or move the reused code into a shared >>>>>>>>>>>>>> module. >>>>>>>>>>>>>> >>>>>>>>>>>>>> For the `user` object, since so much is the same, it >>>>>>>>>>>>>> might be >>>>>>>>>>>>>> best to >>>>>>>>>>>>>> create a common base class for user and stageuser; and >>>>>>>>>>>>>> similarly >>>>>>>>>>>>>> for >>>>>>>>>>>>>> the Command plugins. 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> The default permissions need different names, and you don't >>>>>>>>>>>>>> need >>>>>>>>>>>>>> another copy of the 'non_object' ones. Also, run the makeaci >>>>>>>>>>>>>> script. >>>>>>>>>>>>>> >>>>>>>>>>>>> Hello, >>>>>>>>>>>>> >>>>>>>>>>>>> This modified patch is mainly moving common base class >>>>>>>>>>>>> into a >>>>>>>>>>>>> new >>>>>>>>>>>>> plugin: accounts.py. user/stageuser plugin inherits from >>>>>>>>>>>>> accounts. >>>>>>>>>>>>> It also creates a better description of what are stage >>>>>>>>>>>>> user, >>>>>>>>>>>>> how >>>>>>>>>>>>> to add a new stage user, updates ACI.txt and separate >>>>>>>>>>>>> active/stage >>>>>>>>>>>>> user managed permissions. >>>>>>>>>>>>> >>>>>>>>>>>>> thanks >>>>>>>>>>>>> thierry >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Thanks David for the reviews. Here the last patches >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> Freeipa-devel mailing list >>>>>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The freeipa-tbordaz-0002 patch had trailing whitespaces on few >>>>>>>>>>> lines so >>>>>>>>>>> I'm attaching fixed version (and unchanged patch >>>>>>>>>>> freeipa-tbordaz-0003-3 >>>>>>>>>>> to keep them together). >>>>>>>>>>> >>>>>>>>>>> The ULC feature is still WIP but these patches look good to me >>>>>>>>>>> and >>>>>>>>>>> don't >>>>>>>>>>> break anything as far as I tested. >>>>>>>>>>> We should push them now to avoid further rebases. Thierry can >>>>>>>>>>> then >>>>>>>>>>> prepare other patches delivering the rest of ULC functionality. >>>>>>>>>> >>>>>>>>>> Few comments from just reading the patches: >>>>>>>>>> >>>>>>>>>> 1) I would name the base class "baseuser", "account" does not >>>>>>>>>> necessarily mean user account. >>>>>>>>>> >>>>>>>>>> 2) This is very wrong: >>>>>>>>>> >>>>>>>>>> -class user_add(LDAPCreate): >>>>>>>>>> +class user_add(user, LDAPCreate): >>>>>>>>>> >>>>>>>>>> You are creating a plugin which is both an object and an >>>>>>>>>> command. >>>>>>>>>> >>>>>>>>>> 3) This is purely subjective, but I don't like the name >>>>>>>>>> "deleteuser", as it has a verb in it. We usually don't do >>>>>>>>>> that and >>>>>>>>>> IMHO we shouldn't do that. >>>>>>>>>> >>>>>>>>>> Honza >>>>>>>>>> >>>>>>>>> >>>>>>>>> Thank you for the review. 
I am attaching the updates patches >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Freeipa-devel mailing list >>>>>>>>> Freeipa-devel at redhat.com >>>>>>>>> https://www.redhat.com/mailman/listinfo/freeipa-devel >>>>>>>> Hello, >>>>>>>> I'm getting errors during make rpms: >>>>>>>> >>>>>>>> if [ "" != "yes" ]; then \ >>>>>>>> ./makeapi --validate; \ >>>>>>>> ./makeaci --validate; \ >>>>>>>> fi >>>>>>>> >>>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:641 command >>>>>>>> "baseuser_add" >>>>>>>> doc is not internationalized >>>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:653 command >>>>>>>> "baseuser_find" >>>>>>>> doc is not internationalized >>>>>>>> /root/freeipa/ipalib/plugins/baseuser.py:647 command >>>>>>>> "baseuser_mod" >>>>>>>> doc is not internationalized >>>>>>>> 0 commands without doc, 3 commands whose doc is not i18n >>>>>>>> Command baseuser_add in ipalib, not in API >>>>>>>> Command baseuser_find in ipalib, not in API >>>>>>>> Command baseuser_mod in ipalib, not in API >>>>>>>> >>>>>>>> There are one or more new commands defined. >>>>>>>> Update API.txt and increment the minor version in VERSION. >>>>>>>> >>>>>>>> There are one or more documentation problems. >>>>>>>> You must fix these before preceeding >>>>>>>> >>>>>>>> Issues probably caused by this: >>>>>>>> 1) >>>>>>>> You should not use the register decorator, if this class is >>>>>>>> just for >>>>>>>> inheritance >>>>>>>> @register() >>>>>>>> class baseuser_add(LDAPCreate): >>>>>>>> >>>>>>>> @register() >>>>>>>> class baseuser_mod(LDAPUpdate): >>>>>>>> >>>>>>>> @register() >>>>>>>> class baseuser_find(LDAPSearch): >>>>>>>> >>>>>>>> see dns.py plugin and "DNSZoneBase" and "dnszone" classes >>>>>>>> >>>>>>>> 2) >>>>>>>> there might be an issue with >>>>>>>> @register() >>>>>>>> class baseuser(LDAPObject): >>>>>>>> >>>>>>>> the register decorator should not be there, I was warned by >>>>>>>> Petr^3 to >>>>>>>> not use permission in parent class. The same permission should be >>>>>>>> specified only in one place (for example user class), (otherwise >>>>>>>> they >>>>>>>> will be generated twice??) I don't know more details about it. >>>>>>>> >>>>>>>> -- >>>>>>>> Martin Basti >>>>>>> >>>>>>> Hello Martin, Jan, >>>>>>> >>>>>>> Thanks for your review. >>>>>>> I changed the patch so that it does not register baseuser_*. Also >>>>>>> increase the minor version because of new command. >>>>>>> Finally I moved the managed_permission definition out of the parent >>>>>>> baseuser class. >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> Martin, could you please verify that the issues you encountered are >>>>>> fixed? >>>>>> >>>>>> Thanks! >>>>>> >>>>> >>>>> You bumped wrong version variable: >>>>> >>>>> -IPA_VERSION_MINOR=1 >>>>> +IPA_VERSION_MINOR=2 >>>>> >>>>> It should have been IPA_API_VERSION_MINOR (at the bottom of the >>>>> file), >>>>> including the last change comment below it. >>>>> >>>>> >>>>> IMO baseuser should include superclasses for all the usual commands >>>>> (add, mod, del, show, find) and stageuser/deleteuser commands should >>>>> inherit from them. >>>>> >>>>> >>>>> You don't need to override class properties like active_container_dn >>>>> and takes_params on baseuser subclasses when they have the same value >>>>> as in baseuser. >>>>> >>>>> >>>>> Honza >>>>> >>>> Hello Honza, >>>> >>>> Thanks for the review. 
I did the modifications you recommended >>>> within that attached patches >>>> >>>> * Change version >>> >>> Please also update the comment below (e.g. "# Last change: tbordaz - >>> Add stageuser_add command") >>> >>>> * create the baseuser_* plugins commands and use them in the >>>> user/stageuser plugin commands >>>> * Do not redefine the class properties in the subclasses. >>> >>> There are still some in baseuser command classes: >>> >>> +class baseuser_add(LDAPCreate): >>> + """ >>> + Prototype command plugin to be implemented by real plugin >>> + """ >>> + active_container_dn = api.env.container_user >>> + has_output_params = LDAPCreate.has_output_params >>> >>> You don't need to set active_container_dn here, you only need to set >>> it in baseuser. Then in stageuser_add and other subclasses you use >>> "self.obj.active_container_dn" instead of "self.active_container_dn". >>> >>> You also don't need to override has_output_params if you are not >>> changing its value - you are inheriting from LDAPCreate, so >>> baseuser_add.has_output_params implicitly has the same value as >>> LDAPCreate.has_output_params. >>> >>>> >>>> Thanks >>>> thierry >>>> >>> >> >> Hello Honza, >> >> Thanks for your patience .. :-) >> I understand my mistake. Just a question, in a plugin command >> (user_add), is 'self.obj' referring to the plugin object (like >> 'user') ? > > Yes, that's correct. > >> >> updated patches (with the appropriate naming and patch versioning). >> >> thanks >> theirry >> > > One more thing: > > Instead of: > > class stageuser(baseuser): > ... > # take_params does not support 'nsaccountlock' option > stageuser_takes_params_list = [] > for elt in baseuser.takes_params: > if isinstance(elt, Bool) and elt.param_spec == 'nsaccountlock?': > pass > else: > stageuser_takes_params_list.append(elt) > takes_params = tuple(stageuser_takes_params_list) > > I would remove nsaccountlock from baseuser.takes_params and add it in > user.takes_params: > > class user(baseuser): > ... > takes_params = baseuser.takes_params + ( > Bool('nsaccountlock?', > label=_('Account disabled'), > flags=['no_option'], > ), > ) > > Right, making this option specific to active user makes sense. Thanks thierry -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-tbordaz-0002-User-Life-Cycle-Exclude-subtree-for-ipaUniqueID-gene.patch Type: text/x-patch Size: 2588 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-tbordaz-0003-7-User-life-cycle-stageuser-add-verb.patch Type: text/x-patch Size: 60177 bytes Desc: not available URL: From mbabinsk at redhat.com Wed Mar 25 10:52:26 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 25 Mar 2015 11:52:26 +0100 Subject: [Freeipa-devel] [PATCHES 0015-0017] consolidation of various Kerberos auth methods in FreeIPA code In-Reply-To: <1427119982.8302.3.camel@willson.usersys.redhat.com> References: <54F997F7.2070400@redhat.com> <54FD8CAF.7030609@redhat.com> <55002A13.8010706@redhat.com> <55031230.70604@redhat.com> <5506BB6F.70406@redhat.com> <5506CCCB.3020003@redhat.com> <1426611638.2981.106.camel@willson.usersys.redhat.com> <550FFD72.1090301@redhat.com> <1427116098.8302.2.camel@willson.usersys.redhat.com> <551013A4.5000708@redhat.com> <1427119982.8302.3.camel@willson.usersys.redhat.com> Message-ID: <5512936A.2010007@redhat.com> On 03/23/2015 03:13 PM, Simo Sorce wrote: > On Mon, 2015-03-23 at 14:22 +0100, Petr Spacek wrote: >> On 23.3.2015 14:08, Simo Sorce wrote: >>> On Mon, 2015-03-23 at 12:48 +0100, Martin Babinsky wrote: >>>> On 03/17/2015 06:00 PM, Simo Sorce wrote: >>>>> On Mon, 2015-03-16 at 13:30 +0100, Martin Babinsky wrote: >>>>>> On 03/16/2015 12:15 PM, Martin Kosek wrote: >>>>>>> On 03/13/2015 05:37 PM, Martin Babinsky wrote: >>>>>>>> Attaching the next iteration of patches. >>>>>>>> >>>>>>>> I have tried my best to reword the ipa-client-install man page bit about the >>>>>>>> new option. Any suggestions to further improve it are welcome. >>>>>>>> >>>>>>>> I have also slightly modified the 'kinit_keytab' function so that in Kerberos >>>>>>>> errors are reported for each attempt and the text of the last error is retained >>>>>>>> when finally raising exception. >>>>>>> >>>>>>> The approach looks very good. I think that my only concern with this patch is >>>>>>> this part: >>>>>>> >>>>>>> + ccache.init_creds_keytab(keytab=ktab, principal=princ) >>>>>>> ... >>>>>>> + except krbV.Krb5Error as e: >>>>>>> + last_exc = str(e) >>>>>>> + root_logger.debug("Attempt %d/%d: failed: %s" >>>>>>> + % (attempt, attempts, last_exc)) >>>>>>> + time.sleep(1) >>>>>>> + >>>>>>> + root_logger.debug("Maximum number of attempts (%d) reached" >>>>>>> + % attempts) >>>>>>> + raise StandardError("Error initializing principal %s: %s" >>>>>>> + % (principal, last_exc)) >>>>>>> >>>>>>> The problem here is that this function will raise the super-generic >>>>>>> StandardError instead of the proper with all the context and information about >>>>>>> the error that the caller can then process. >>>>>>> >>>>>>> I think that >>>>>>> >>>>>>> except krbV.Krb5Error as e: >>>>>>> if attempt == max_attempts: >>>>>>> log something >>>>>>> raise >>>>>>> >>>>>>> would be better. >>>>>>> >>>>>> >>>>>> Yes that seems reasonable. I'm just thinking whether we should re-raise >>>>>> Krb5Error or raise ipalib.errors.KerberosError? the latter options makes >>>>>> more sense to me as we would not have to additionally import Krb5Error >>>>>> everywhere and it would also make the resulting errors more consistent. >>>>>> >>>>>> I am thinking about someting like this: >>>>>> >>>>>> except krbV.Krb5Error as e: >>>>>> if attempt == attempts: >>>>>> # log that we have reaches maximum number of attempts >>>>>> raise KerberosError(minor=str(e)) >>>>>> >>>>>> What do you think? >>>>> >>>>> Are you retrying on any error ? >>>>> Please do *not* do that, if you retry many times on an error that >>>>> indicates the password is wrong you may end up locking an administrative >>>>> account. 
If you want to retry you should do it only for very specific >>>>> timeout errors. >>>>> >>>>> Simo. >>>>> >>>>> >>>> I have taken a look at the logs attached to the original BZ >>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1161722). >>>> >>>> In ipaclient-install.log the kinit error is: >>>> >>>> "Cannot contact any KDC for realm 'ITW.USPTO.GOV' while getting initial >>>> credentials" >>>> >>>> which can be translated to krbV.KRB5_KDC_UNREACH error. However, >>>> krb5kdc.log (http://pastebin.test.redhat.com/271394) reports errors >>>> which are seemingly unrelated to the root cause (kinit timing out on >>>> getting host TGT). >>>> >>>> Thus I'm not quite sure which errors should we chceck against in this >>>> case, anyone care to advise? These are potential candidates: >>>> >>>> KRB5KDC_ERR_SVC_UNAVAILABLE, "A service is not available that is >>>> required to process the request" >>>> KRB5KRB_ERR_RESPONSE_TOO_BIG, "Response too big for UDP, retry with TCP" >>>> KRB5_REALM_UNKNOWN, "Cannot find KDC for requested realm" >>>> KRB5_KDC_UNREACH, "Cannot contact any KDC for requested realm" >>>> >>> >>> The only ones that you should retry on, at first glance are >>> KRB5_KDC_UNREACH, KRB5KDC_ERR_SVC_UNAVAILABLE. >>> >>> You should never see KRB5KRB_ERR_RESPONSE_TOO_BIG in the script as it >>> should be handled automatically by the library, and if you get >>> KRB5_REALM_UNKNOWN I do not think that retrying will make any >>> difference. >> >> I might be wrong but I was under the impression that this feature was also for >> workarounding replication delay - service is not available / key is not >> present / something like that. >> >> (This could happen if host/principal was added to one server but then the >> client connected to another server or so.) > > If we have that problem we should instead use a temporary krb5.conf file > that lists explicitly only the server we are joining. > > Simo. > This is already done since ipa-3-0: by default only one server/KDC is used during client install so there are actually no problems with replication delay, only with KDC timeouts. Anyway I'm sending updated patches. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0015-6-ipautil-new-functions-kinit_keytab-and-kinit_passwor.patch Type: text/x-patch Size: 4471 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0016-5-ipa-client-install-try-to-get-host-TGT-several-times.patch Type: text/x-patch Size: 8794 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0017-4-Adopted-kinit_keytab-and-kinit_password-for-kerberos.patch Type: text/x-patch Size: 11866 bytes Desc: not available URL: From mbabinsk at redhat.com Wed Mar 25 11:09:09 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 25 Mar 2015 12:09:09 +0100 Subject: [Freeipa-devel] [PATCH 0020] show the exception message raised by dogtag._parse_ca_status during install Message-ID: <55129755.2090402@redhat.com> This should be patch 20 I think. I must make some cleanup in my patch numbers. https://fedorahosted.org/freeipa/ticket/4885 -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbabinsk-0020-1-show-the-exception-message-thrown-by-dogtag._parse_c.patch Type: text/x-patch Size: 1105 bytes Desc: not available URL: From slaz at seznam.cz Wed Mar 25 11:09:33 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmlzbGF2IEzDoXpuacSNa2E=?=) Date: Wed, 25 Mar 2015 12:09:33 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551261FF.4020806@redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> Message-ID: <5512976D.6070700@seznam.cz> On 03/25/2015 08:21 AM, Jan Cholasta wrote: > Dne 24.3.2015 v 18:08 Stanislav L?zni?ka napsal(a): >> On 03/24/2015 08:53 AM, Jan Cholasta wrote: >>> Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): >>>> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>>>> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>>>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>>>> ... >>>>>>>>> Given the above, HBAC rules could contain (time, anchor), where >>>>>>>>> anchor >>>>>>>>> is "UTC", "user local time" or "host local time". >>>>>>>> Truth is, it was not really clear to me from the last week's >>>>>>>> discussion >>>>>>>> whose "Local Time" to use - do we use host's or do we use >>>>>>>> user's? It >>>>>>>> would make sense to me to use the user's local time. But then you >>>>>>>> would >>>>>>>> need to really store at least the timezone information with each >>>>>>>> user >>>>>>>> object. And that information should probably change with user >>>>>>>> moving >>>>>>>> between different timezones. That's quite a pickle I am in right >>>>>>>> here. >>>>>>> >>>>>>> IMO whether to use user or host local time depends on organization >>>>>>> local >>>>>>> policy, hence my suggestion to support both. >>>>>> >>>>>> I am bit confused, I would like to make sure we are on the same >>>>>> page with >>>>>> regards to Local Time. When the Local Time rule is created, anchor >>>>>> will be set >>>>>> to "Local Time". Then SSSD would simply use host's local time, in >>>>>> whichever >>>>>> time zone the HBAC host is. >>>>> >>>>> Yes, that was my understanding also. >>>>> >>>>>> >>>>>> So this is the default host enforcement. For the user, you want to >>>>>> let SSSD >>>>>> check authenticated user's entry, to see if there is a timezone >>>>>> information? >>>>>> This would of course depend on the information being available. For >>>>>> AD users, >>>>>> you would need to set it in ID Views or similar. >>>>> >>>>> Yes, also in a previous e-mail, there was a suggestion to change >>>>> timezones by admin when the user changes timezones -- I didn't like >>>>> that >>>>> part, it seems really error prone and tedious. *If* there was this >>>>> choice, it should not be the default, rather the default should >>>>> also be >>>>> host local time IMO. >>> >>> I don't think you can expect host-local time to be good enough for >>> everyone. >>> >>>> >>>> Host local time zone was the original case I expected. Enforcing >>>> *user* local >>>> time zone is where this discussion started. 
Honze proposed making >>>> this an >>>> option - leaving us to 3 different time modes: >>>> >>>> * UTC - stored as (time + olson time zone), enforcement is clear >>>> * Host Local Time - stored as (time + Host Local Time), >>>> enforcement by >>>> /etc/localtime >>>> * User Local Time - stored as (time + User Local Time), enforcement >>>> by ??? >>>> >>>> So the rule may be: >>>> * Employee Foo can access web service Bar only in his work hours >>> >>> Correct. >>> >>>> >>>> IMO, it is realistic for an administrator to set the time zone >>>> setting in the >>>> employee entry. Of course, it gets tricky when the user starts moving >>>> around >>>> the globe... >>>> >>> >>> It doesn't have to be the administrator, it can be automated by a 3rd >>> party service: >>> >>> 1. Employee schedules bussiness trip in time management system >>> 2. Manager approves the bussiness trip in the time management system >>> 3. The time management system takes care of changing the employee's >>> user object timezone when the bussiness trip starts and ends. >>> >> I have to say I am not very sure about this anymore. Although it would >> be a great feature to have users' local time policies, it brings a lot >> of traps on the way. The responsibility of keeping the LDAP database >> consistent with reality is laid to a 3rd party application. If that >> breaks or does not work well, this responsibility might be sometimes >> transferred to admin. But if there's a lot of people traveling at a >> time... I'm having mixed feelings about this. > > Timezones or not, there is always the responsibility of keeping LDAP > in sync with reality. I'm just saying there are possibilities besides > doing it manually. > > Anyway, I think both user-local and host-local time can also be > implemented based on group membership, without storing anything with > user and host object, with HBAC rules like this: > > Rule name: allow_tz_europe_prague_users > Member groups: tz_europe_prague > Host category: all > Service category: all > Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) > Time zone: Europe/Prague > Description: Allow users in Europe/Prague access during bussiness hours > Enabled: TRUE > > Rule name: allow_tz_europe_prague_hosts > User category: all > Member hostgroups: tz_europe_prague > Service category: all > Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) > Time zone: Europe/Prague > Description: Allow access to hosts in Europe/Prague during bussiness > hours > Enabled: TRUE > > I guess things might get complicated if you want to do limit access > based on both user and host local time, but I'm not sure if anyone > would actually want to do that. > This is great and it's how I would expect it would be performed. Although, I think it would still be good having host-local time rules enforced by the hosts' /etc/localtime. I'm not sure but I think that would still need to have the host's timezone stored for HBAC Test sake, is that right? That would be leading back to this solution as easiest, I guess. 
From mkosek at redhat.com Wed Mar 25 11:32:04 2015 From: mkosek at redhat.com (Martin Kosek) Date: Wed, 25 Mar 2015 12:32:04 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5512976D.6070700@seznam.cz> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> Message-ID: <55129CB4.4070701@redhat.com> On 03/25/2015 12:09 PM, Stanislav L?zni?ka wrote: > On 03/25/2015 08:21 AM, Jan Cholasta wrote: >> Dne 24.3.2015 v 18:08 Stanislav L?zni?ka napsal(a): >>> On 03/24/2015 08:53 AM, Jan Cholasta wrote: >>>> Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): >>>>> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>>>>> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>>>>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>>>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>>>>> ... >>>>>>>>>> Given the above, HBAC rules could contain (time, anchor), where >>>>>>>>>> anchor >>>>>>>>>> is "UTC", "user local time" or "host local time". >>>>>>>>> Truth is, it was not really clear to me from the last week's >>>>>>>>> discussion >>>>>>>>> whose "Local Time" to use - do we use host's or do we use >>>>>>>>> user's? It >>>>>>>>> would make sense to me to use the user's local time. But then you >>>>>>>>> would >>>>>>>>> need to really store at least the timezone information with each >>>>>>>>> user >>>>>>>>> object. And that information should probably change with user moving >>>>>>>>> between different timezones. That's quite a pickle I am in right >>>>>>>>> here. >>>>>>>> >>>>>>>> IMO whether to use user or host local time depends on organization >>>>>>>> local >>>>>>>> policy, hence my suggestion to support both. >>>>>>> >>>>>>> I am bit confused, I would like to make sure we are on the same >>>>>>> page with >>>>>>> regards to Local Time. When the Local Time rule is created, anchor >>>>>>> will be set >>>>>>> to "Local Time". Then SSSD would simply use host's local time, in >>>>>>> whichever >>>>>>> time zone the HBAC host is. >>>>>> >>>>>> Yes, that was my understanding also. >>>>>> >>>>>>> >>>>>>> So this is the default host enforcement. For the user, you want to >>>>>>> let SSSD >>>>>>> check authenticated user's entry, to see if there is a timezone >>>>>>> information? >>>>>>> This would of course depend on the information being available. For >>>>>>> AD users, >>>>>>> you would need to set it in ID Views or similar. >>>>>> >>>>>> Yes, also in a previous e-mail, there was a suggestion to change >>>>>> timezones by admin when the user changes timezones -- I didn't like >>>>>> that >>>>>> part, it seems really error prone and tedious. *If* there was this >>>>>> choice, it should not be the default, rather the default should also be >>>>>> host local time IMO. >>>> >>>> I don't think you can expect host-local time to be good enough for >>>> everyone. >>>> >>>>> >>>>> Host local time zone was the original case I expected. Enforcing >>>>> *user* local >>>>> time zone is where this discussion started. 
Honze proposed making >>>>> this an >>>>> option - leaving us to 3 different time modes: >>>>> >>>>> * UTC - stored as (time + olson time zone), enforcement is clear >>>>> * Host Local Time - stored as (time + Host Local Time), enforcement by >>>>> /etc/localtime >>>>> * User Local Time - stored as (time + User Local Time), enforcement >>>>> by ??? >>>>> >>>>> So the rule may be: >>>>> * Employee Foo can access web service Bar only in his work hours >>>> >>>> Correct. >>>> >>>>> >>>>> IMO, it is realistic for an administrator to set the time zone >>>>> setting in the >>>>> employee entry. Of course, it gets tricky when the user starts moving >>>>> around >>>>> the globe... >>>>> >>>> >>>> It doesn't have to be the administrator, it can be automated by a 3rd >>>> party service: >>>> >>>> 1. Employee schedules bussiness trip in time management system >>>> 2. Manager approves the bussiness trip in the time management system >>>> 3. The time management system takes care of changing the employee's >>>> user object timezone when the bussiness trip starts and ends. >>>> >>> I have to say I am not very sure about this anymore. Although it would >>> be a great feature to have users' local time policies, it brings a lot >>> of traps on the way. The responsibility of keeping the LDAP database >>> consistent with reality is laid to a 3rd party application. If that >>> breaks or does not work well, this responsibility might be sometimes >>> transferred to admin. But if there's a lot of people traveling at a >>> time... I'm having mixed feelings about this. >> >> Timezones or not, there is always the responsibility of keeping LDAP in sync >> with reality. I'm just saying there are possibilities besides doing it manually. >> >> Anyway, I think both user-local and host-local time can also be implemented >> based on group membership, without storing anything with user and host >> object, with HBAC rules like this: >> >> Rule name: allow_tz_europe_prague_users >> Member groups: tz_europe_prague >> Host category: all >> Service category: all >> Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) >> Time zone: Europe/Prague >> Description: Allow users in Europe/Prague access during bussiness hours >> Enabled: TRUE >> >> Rule name: allow_tz_europe_prague_hosts >> User category: all >> Member hostgroups: tz_europe_prague >> Service category: all >> Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) >> Time zone: Europe/Prague >> Description: Allow access to hosts in Europe/Prague during bussiness hours >> Enabled: TRUE >> >> I guess things might get complicated if you want to do limit access based on >> both user and host local time, but I'm not sure if anyone would actually want >> to do that. >> > This is great and it's how I would expect it would be performed. Although, I > think it would still be good having host-local time rules enforced by the > hosts' /etc/localtime. I'm not sure but I think that would still need to have > the host's timezone stored for HBAC Test sake, is that right? That would be > leading back to this solution as easiest, I guess. I guess so. BTW, also check the latest Simo's note, user-based local time rules may be something that does not fully fit in the HBAC scheme and may be postponed or canceled. 
From abokovoy at redhat.com Wed Mar 25 11:34:54 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Wed, 25 Mar 2015 13:34:54 +0200 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5512976D.6070700@seznam.cz> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> Message-ID: <20150325113454.GY3878@redhat.com> On Wed, 25 Mar 2015, Stanislav L?zni?ka wrote: >On 03/25/2015 08:21 AM, Jan Cholasta wrote: >>Dne 24.3.2015 v 18:08 Stanislav L?zni?ka napsal(a): >>>On 03/24/2015 08:53 AM, Jan Cholasta wrote: >>>>Dne 24.3.2015 v 08:40 Martin Kosek napsal(a): >>>>>On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>>>>>On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>>>>>On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>>>>>Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>>>>>... >>>>>>>>>>Given the above, HBAC rules could contain (time, anchor), where >>>>>>>>>>anchor >>>>>>>>>>is "UTC", "user local time" or "host local time". >>>>>>>>>Truth is, it was not really clear to me from the last week's >>>>>>>>>discussion >>>>>>>>>whose "Local Time" to use - do we use host's or do we use >>>>>>>>>user's? It >>>>>>>>>would make sense to me to use the user's local time. But then you >>>>>>>>>would >>>>>>>>>need to really store at least the timezone information with each >>>>>>>>>user >>>>>>>>>object. And that information should probably change >>>>>>>>>with user moving >>>>>>>>>between different timezones. That's quite a pickle I am in right >>>>>>>>>here. >>>>>>>> >>>>>>>>IMO whether to use user or host local time depends on organization >>>>>>>>local >>>>>>>>policy, hence my suggestion to support both. >>>>>>> >>>>>>>I am bit confused, I would like to make sure we are on the same >>>>>>>page with >>>>>>>regards to Local Time. When the Local Time rule is created, anchor >>>>>>>will be set >>>>>>>to "Local Time". Then SSSD would simply use host's local time, in >>>>>>>whichever >>>>>>>time zone the HBAC host is. >>>>>> >>>>>>Yes, that was my understanding also. >>>>>> >>>>>>> >>>>>>>So this is the default host enforcement. For the user, you want to >>>>>>>let SSSD >>>>>>>check authenticated user's entry, to see if there is a timezone >>>>>>>information? >>>>>>>This would of course depend on the information being available. For >>>>>>>AD users, >>>>>>>you would need to set it in ID Views or similar. >>>>>> >>>>>>Yes, also in a previous e-mail, there was a suggestion to change >>>>>>timezones by admin when the user changes timezones -- I didn't like >>>>>>that >>>>>>part, it seems really error prone and tedious. *If* there was this >>>>>>choice, it should not be the default, rather the default >>>>>>should also be >>>>>>host local time IMO. >>>> >>>>I don't think you can expect host-local time to be good enough for >>>>everyone. >>>> >>>>> >>>>>Host local time zone was the original case I expected. Enforcing >>>>>*user* local >>>>>time zone is where this discussion started. 
Honze proposed making >>>>>this an >>>>>option - leaving us to 3 different time modes: >>>>> >>>>>* UTC - stored as (time + olson time zone), enforcement is clear >>>>>* Host Local Time - stored as (time + Host Local Time), >>>>>enforcement by >>>>>/etc/localtime >>>>>* User Local Time - stored as (time + User Local Time), enforcement >>>>>by ??? >>>>> >>>>>So the rule may be: >>>>>* Employee Foo can access web service Bar only in his work hours >>>> >>>>Correct. >>>> >>>>> >>>>>IMO, it is realistic for an administrator to set the time zone >>>>>setting in the >>>>>employee entry. Of course, it gets tricky when the user starts moving >>>>>around >>>>>the globe... >>>>> >>>> >>>>It doesn't have to be the administrator, it can be automated by a 3rd >>>>party service: >>>> >>>> 1. Employee schedules bussiness trip in time management system >>>> 2. Manager approves the bussiness trip in the time management system >>>> 3. The time management system takes care of changing the employee's >>>>user object timezone when the bussiness trip starts and ends. >>>> >>>I have to say I am not very sure about this anymore. Although it would >>>be a great feature to have users' local time policies, it brings a lot >>>of traps on the way. The responsibility of keeping the LDAP database >>>consistent with reality is laid to a 3rd party application. If that >>>breaks or does not work well, this responsibility might be sometimes >>>transferred to admin. But if there's a lot of people traveling at a >>>time... I'm having mixed feelings about this. >> >>Timezones or not, there is always the responsibility of keeping LDAP >>in sync with reality. I'm just saying there are possibilities >>besides doing it manually. >> >>Anyway, I think both user-local and host-local time can also be >>implemented based on group membership, without storing anything with >>user and host object, with HBAC rules like this: >> >> Rule name: allow_tz_europe_prague_users >> Member groups: tz_europe_prague >> Host category: all >> Service category: all >> Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) >> Time zone: Europe/Prague >> Description: Allow users in Europe/Prague access during bussiness hours >> Enabled: TRUE >> >> Rule name: allow_tz_europe_prague_hosts >> User category: all >> Member hostgroups: tz_europe_prague >> Service category: all >> Access time: (timeofday=0800-1600 dayofweek=Mon-Fri) >> Time zone: Europe/Prague >> Description: Allow access to hosts in Europe/Prague during >>bussiness hours >> Enabled: TRUE >> >>I guess things might get complicated if you want to do limit access >>based on both user and host local time, but I'm not sure if anyone >>would actually want to do that. >> >This is great and it's how I would expect it would be performed. >Although, I think it would still be good having host-local time rules >enforced by the hosts' /etc/localtime. I'm not sure but I think that >would still need to have the host's timezone stored for HBAC Test >sake, is that right? That would be leading back to this solution as >easiest, I guess. When using hbactest command you just need to supply implied time zone as an option to the command itself. After all, you are simulating rule execution so it does not matter where the value comes from. 
-- / Alexander Bokovoy From mbasti at redhat.com Wed Mar 25 14:15:26 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 25 Mar 2015 15:15:26 +0100 Subject: [Freeipa-devel] [PATCH 0021] show the exception message thrown by dogtag._parse_ca_status during install In-Reply-To: <55070F9D.9090002@redhat.com> References: <55070F9D.9090002@redhat.com> Message-ID: <5512C2FE.6080104@redhat.com> On 16/03/15 18:15, Martin Babinsky wrote: > https://fedorahosted.org/freeipa/ticket/4885 > > > ACK -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbabinsk at redhat.com Wed Mar 25 14:26:16 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 25 Mar 2015 15:26:16 +0100 Subject: [Freeipa-devel] [PATCH 0023] enable debugging of spawned ntpd command during client install Message-ID: <5512C588.3010002@redhat.com> The attached patch related to https://fedorahosted.org/freeipa/ticket/4931 It is certainly not a final solution, more of an initial "hack" of sorts just to gather some suggestions, since I am not even sure if this is the right thing to do. The reporter from bugzilla suggests to enable debugging of ALL commands called through ipautil.run(), but I think that fixing all cca 157 found usages of run() is too much work with a quite small benefit. Anyway I would welcome some opinions about this: should the external commands really inherit the debug settings of ipa-* utilities, and if so, is the method showed in this patch the right way to do it? -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0023-1-enable-debbuging-of-spawned-ntpd-command-during-clie.patch Type: text/x-patch Size: 4503 bytes Desc: not available URL: From jcholast at redhat.com Wed Mar 25 15:18:33 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Wed, 25 Mar 2015 16:18:33 +0100 Subject: [Freeipa-devel] [PATCH 0023] enable debugging of spawned ntpd command during client install In-Reply-To: <5512C588.3010002@redhat.com> References: <5512C588.3010002@redhat.com> Message-ID: <5512D1C9.6090308@redhat.com> Hi, Dne 25.3.2015 v 15:26 Martin Babinsky napsal(a): > The attached patch related to https://fedorahosted.org/freeipa/ticket/4931 Please make sure stays fixed. > > It is certainly not a final solution, more of an initial "hack" of sorts > just to gather some suggestions, since I am not even sure if this is the > right thing to do. > > The reporter from bugzilla suggests to enable debugging of ALL commands > called through ipautil.run(), but I think that fixing all cca 157 found > usages of run() is too much work with a quite small benefit. > > Anyway I would welcome some opinions about this: should the external > commands really inherit the debug settings of ipa-* utilities, and if > so, is the method showed in this patch the right way to do it? I am not a fan of this method, ipautil.run does not know anything about the command it runs and I think it should stay that way. I would prefer to have an ipautil.run wrapper with debug flag using appropriate debugging option for each command where we need to conditionally enable debugging. Or just add the debugging option unconditionally to every command where it could be useful. 
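For illustration only, such a wrapper could look roughly like the sketch below; run_with_debug and the flag table are made-up names rather than existing ipautil API, and the only flag taken from the ticket is ntpd's -d.

from ipapython import ipautil

# Per-command debugging options; purely illustrative. Each call site that
# wants conditional debugging would register its command's own option here.
DEBUG_FLAGS = {
    '/usr/sbin/ntpd': ['-d'],
}

def run_with_debug(args, debug=False, **kwargs):
    # Append the command's own debugging option only when asked to, so that
    # ipautil.run() itself stays ignorant of the command it executes.
    if debug:
        args = list(args) + DEBUG_FLAGS.get(args[0], [])
    return ipautil.run(args, **kwargs)

# Hypothetical call site in the client installer:
#   run_with_debug(['/usr/sbin/ntpd', '-q', '-g', '-c', tmp_conf], debug=options.debug)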
Honza -- Jan Cholasta From mbasti at redhat.com Wed Mar 25 15:48:37 2015 From: mbasti at redhat.com (Martin Basti) Date: Wed, 25 Mar 2015 16:48:37 +0100 Subject: [Freeipa-devel] [PATCH 0021] fix improper handling of boolean option during KRA install In-Reply-To: <55085B78.1040201@redhat.com> References: <55085B78.1040201@redhat.com> Message-ID: <5512D8D5.8040509@redhat.com> On 17/03/15 17:51, Martin Babinsky wrote: > https://fedorahosted.org/freeipa/ticket/4530 > > > ACK -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbabinsk at redhat.com Wed Mar 25 16:07:36 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Wed, 25 Mar 2015 17:07:36 +0100 Subject: [Freeipa-devel] [PATCH 0024] do not log BINDs to non-existent users as errors Message-ID: <5512DD48.4010000@redhat.com> https://fedorahosted.org/freeipa/ticket/4889 -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0024-do-not-log-BINDs-to-non-existent-users-as-errors.patch Type: text/x-patch Size: 1338 bytes Desc: not available URL: From slaz at seznam.cz Wed Mar 25 17:25:20 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmlzbGF2IEzDoXpuacSNa2E=?=) Date: Wed, 25 Mar 2015 18:25:20 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <20150325113454.GY3878@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> Message-ID: <5512EF80.8070709@seznam.cz> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: > When using hbactest command you just need to supply implied time zone > as an option to the command itself. After all, you are simulating rule > execution so it does not matter where the value comes from. Oh, good, I haven't thought of that. That certainly eases things up. Let me make a summary then, a short one this time, of what's been discussed . It seems the best way to store time policies is indeed the format (time, anchor) where anchor is either Olson database timezone or "Local Time" for host local time. We are omitting users' local time because, after all, we are talking HBAC Rules here (great point by Simo). If the admins really needed that, there's a workaround Jan mentioned that should work just fine. That leaves us with 2 kinds of policies - UTC and Local Time (which is enforced by hosts' /etc/localtime). Now with the (time, anchor) format for time policies, the LDAP schema wouldn't have to change and we could just use the AccessTime attribute of the HBAC Rule object that's already there. That seems like a good solution to me. I hope we can agree on the above although any notes are, of course, welcome. Now we would need to choose the right format for the time part of (time, anchor) I guess. There's been a discussion some 2 weeks ago about the need for event recurrence support in the format, the need for exceptions support and the need for iCalendar import possibility. So far, there are three possible languages to choose from - use the actual iCalendar or just a part of it, use a reworked version of the old language used in FreeIPA and SSSD, or use the language I proposed earlier in this thread. I would be very keen on hearing your ideas and opinions on this one. Thanks! 
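Purely to make the (time, anchor) idea concrete - the time-specification language itself is exactly the open question above - an accessTime value could carry the anchor after the time part and be split like in the sketch below; the separator, the regex and the 'local' keyword for host local time are only my assumptions, not a settled format.

import re

# Assumed serialization of the (time, anchor) tuple inside accessTime.
ACCESS_TIME_RE = re.compile(r'^\((?P<timespec>.+),\s*(?P<anchor>[^,()]+)\)$')

def parse_access_time(value):
    # '(timeofday=0800-1600 dayofweek=Mon-Fri, Europe/Prague)'
    #   -> ('timeofday=0800-1600 dayofweek=Mon-Fri', 'Europe/Prague')
    # The anchor is an Olson zone name, 'UTC', or 'local' for host local time.
    m = ACCESS_TIME_RE.match(value.strip())
    if not m:
        raise ValueError('malformed accessTime value: %r' % value)
    return m.group('timespec').strip(), m.group('anchor').strip()

print(parse_access_time('(timeofday=0800-1600 dayofweek=Mon-Fri, Europe/Prague)'))
print(parse_access_time('(timeofday=0800-1600 dayofweek=Mon-Fri, local)'))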
Standa From ftweedal at redhat.com Thu Mar 26 03:26:30 2015 From: ftweedal at redhat.com (Fraser Tweedale) Date: Thu, 26 Mar 2015 13:26:30 +1000 Subject: [Freeipa-devel] [PATCH 0020] show the exception message raised by dogtag._parse_ca_status during install In-Reply-To: <55129755.2090402@redhat.com> References: <55129755.2090402@redhat.com> Message-ID: <20150326032630.GH3285@dhcp-40-8.bne.redhat.com> On Wed, Mar 25, 2015 at 12:09:09PM +0100, Martin Babinsky wrote: > This should be patch 20 I think. I must make some cleanup in my patch > numbers. > > https://fedorahosted.org/freeipa/ticket/4885 > > -- > Martin^3 Babinsky ACK > From 7e0f8b4d65f6c3f8c7d14f154aa5ef80bb064c4c Mon Sep 17 00:00:00 2001 > From: Martin Babinsky > Date: Mon, 16 Mar 2015 12:36:25 +0100 > Subject: [PATCH] show the exception message thrown by dogtag._parse_ca_status > during install > > https://fedorahosted.org/freeipa/ticket/4885 > --- > ipaplatform/redhat/services.py | 4 ++-- > 1 file changed, 2 insertions(+), 2 deletions(-) > > diff --git a/ipaplatform/redhat/services.py b/ipaplatform/redhat/services.py > index 8759cab76c7d72a3abbf935e7f15f7a32a0b6987..c9994e409a8a005012c0467c016608b8f689eef1 100644 > --- a/ipaplatform/redhat/services.py > +++ b/ipaplatform/redhat/services.py > @@ -212,8 +212,8 @@ class RedHatCAService(RedHatService): > > status = dogtag._parse_ca_status(stdout) > # end of workaround > - except Exception: > - status = 'check interrupted' > + except Exception as e: > + status = 'check interrupted due to error: %s' % e > root_logger.debug('The CA status is: %s' % status) > if status == 'running': > break > -- > 2.1.0 > > -- > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code From mbasti at redhat.com Thu Mar 26 07:56:44 2015 From: mbasti at redhat.com (Martin Basti) Date: Thu, 26 Mar 2015 08:56:44 +0100 Subject: [Freeipa-devel] [PATCH 0021] show the exception message thrown by dogtag._parse_ca_status during install In-Reply-To: <5512C2FE.6080104@redhat.com> References: <55070F9D.9090002@redhat.com> <5512C2FE.6080104@redhat.com> Message-ID: <5513BBBC.6050306@redhat.com> On 25/03/15 15:15, Martin Basti wrote: > On 16/03/15 18:15, Martin Babinsky wrote: >> https://fedorahosted.org/freeipa/ticket/4885 >> >> >> > ACK > > -- > Martin Basti This is the same patch as mbabinsk-0020, please ignore this thread as duplicated. -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcholast at redhat.com Thu Mar 26 09:43:56 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 26 Mar 2015 10:43:56 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1427221221.8302.25.camel@willson.usersys.redhat.com> References: <550C1303.7090402@seznam.cz> <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <1427221221.8302.25.camel@willson.usersys.redhat.com> Message-ID: <5513D4DC.80401@redhat.com> Dne 24.3.2015 v 19:20 Simo Sorce napsal(a): > On Tue, 2015-03-24 at 08:40 +0100, Martin Kosek wrote: >> On 03/24/2015 08:20 AM, Jakub Hrozek wrote: >>> On Tue, Mar 24, 2015 at 08:07:53AM +0100, Martin Kosek wrote: >>>> On 03/24/2015 07:16 AM, Jan Cholasta wrote: >>>>> Dne 23.3.2015 v 20:17 Standa L?zni?ka napsal(a): >>>> ... 
>>>>>>> Given the above, HBAC rules could contain (time, anchor), where anchor >>>>>>> is "UTC", "user local time" or "host local time". >>>>>> Truth is, it was not really clear to me from the last week's discussion >>>>>> whose "Local Time" to use - do we use host's or do we use user's? It >>>>>> would make sense to me to use the user's local time. But then you would >>>>>> need to really store at least the timezone information with each user >>>>>> object. And that information should probably change with user moving >>>>>> between different timezones. That's quite a pickle I am in right here. >>>>> >>>>> IMO whether to use user or host local time depends on organization local >>>>> policy, hence my suggestion to support both. >>>> >>>> I am bit confused, I would like to make sure we are on the same page with >>>> regards to Local Time. When the Local Time rule is created, anchor will be set >>>> to "Local Time". Then SSSD would simply use host's local time, in whichever >>>> time zone the HBAC host is. >>> >>> Yes, that was my understanding also. >>> >>>> >>>> So this is the default host enforcement. For the user, you want to let SSSD >>>> check authenticated user's entry, to see if there is a timezone information? >>>> This would of course depend on the information being available. For AD users, >>>> you would need to set it in ID Views or similar. >>> >>> Yes, also in a previous e-mail, there was a suggestion to change >>> timezones by admin when the user changes timezones -- I didn't like that >>> part, it seems really error prone and tedious. *If* there was this >>> choice, it should not be the default, rather the default should also be >>> host local time IMO. >> >> Host local time zone was the original case I expected. Enforcing *user* local >> time zone is where this discussion started. Honze proposed making this an >> option - leaving us to 3 different time modes: >> >> * UTC - stored as (time + olson time zone), enforcement is clear >> * Host Local Time - stored as (time + Host Local Time), enforcement by >> /etc/localtime >> * User Local Time - stored as (time + User Local Time), enforcement by ??? >> >> So the rule may be: >> * Employee Foo can access web service Bar only in his work hours >> >> IMO, it is realistic for an administrator to set the time zone setting in the >> employee entry. Of course, it gets tricky when the user starts moving around >> the globe... >> > > Host Based Access Control is about controlling access based on the > *HOST*. Except you can control access based on user identity or group membership with HBAC. > > I do not see any space for user time zones honestly. Well, I don't see what's so interesting about host time. Users have bussiness hours, hosts don't. Users can move between time zones by themselves, hosts can't. > > If and when someone will vehemently ask for 'per-user' time zones we can > talk about it. > > Simo. 
> -- Jan Cholasta From jcholast at redhat.com Thu Mar 26 10:13:21 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 26 Mar 2015 11:13:21 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5512EF80.8070709@seznam.cz> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> Message-ID: <5513DBC1.5060405@redhat.com> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): > On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >> When using hbactest command you just need to supply implied time zone >> as an option to the command itself. After all, you are simulating rule >> execution so it does not matter where the value comes from. > Oh, good, I haven't thought of that. That certainly eases things up. > > Let me make a summary then, a short one this time, of what's been > discussed . > > It seems the best way to store time policies is indeed the format (time, > anchor) where anchor is either Olson database timezone or "Local Time" > for host local time. We are omitting users' local time because, after > all, we are talking HBAC Rules here (great point by Simo). If the admins > really needed that, there's a workaround Jan mentioned that should work > just fine. What I originally meant as anchor was a value specifying the time offset (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone specified in the HBAC rule, "host" - access time uses host's time zone), rather than the time zone itself or "Local Time". > > That leaves us with 2 kinds of policies - UTC and Local Time (which is > enforced by hosts' /etc/localtime). Now with the (time, anchor) format > for time policies, the LDAP schema wouldn't have to change and we could > just use the AccessTime attribute of the HBAC Rule object that's already > there. That seems like a good solution to me. > > I hope we can agree on the above although any notes are, of course, > welcome. Now we would need to choose the right format for the time part > of (time, anchor) I guess. There's been a discussion some 2 weeks ago > about the need for event recurrence support in the format, the need for > exceptions support and the need for iCalendar import possibility. So > far, there are three possible languages to choose from - use the actual > iCalendar or just a part of it, use a reworked version of the old > language used in FreeIPA and SSSD, or use the language I proposed > earlier in this thread. > > I would be very keen on hearing your ideas and opinions on this one. > > Thanks! 
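Coming back to the anchor values above, a tiny sketch of how they could be resolved when a rule is evaluated; resolve_anchor and the None-for-/etc/localtime convention are made up here for illustration only.

from zoneinfo import ZoneInfo

def resolve_anchor(anchor, rule_tz=None):
    # anchor: 'utc', 'rule' or 'host' as proposed above.
    # rule_tz: Olson name stored in the HBAC rule, used when anchor == 'rule'.
    if anchor == 'utc':
        return ZoneInfo('UTC')
    if anchor == 'rule':
        return ZoneInfo(rule_tz)
    if anchor == 'host':
        return None  # None means "evaluate in the host's /etc/localtime zone"
    raise ValueError('unknown access time anchor: %r' % anchor)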
> Standa -- Jan Cholasta From slaz at seznam.cz Thu Mar 26 12:08:43 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmRhIEzDoXpuacSNa2E=?=) Date: Thu, 26 Mar 2015 13:08:43 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5513DBC1.5060405@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> Message-ID: <5513F6CB.4060004@seznam.cz> On 3/26/2015 11:13 AM, Jan Cholasta wrote: > Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>> When using hbactest command you just need to supply implied time zone >>> as an option to the command itself. After all, you are simulating rule >>> execution so it does not matter where the value comes from. >> Oh, good, I haven't thought of that. That certainly eases things up. >> >> Let me make a summary then, a short one this time, of what's been >> discussed . >> >> It seems the best way to store time policies is indeed the format (time, >> anchor) where anchor is either Olson database timezone or "Local Time" >> for host local time. We are omitting users' local time because, after >> all, we are talking HBAC Rules here (great point by Simo). If the admins >> really needed that, there's a workaround Jan mentioned that should work >> just fine. > > What I originally meant as anchor was a value specifying the time > offset (e.g. "utc" - access time uses UTC, "rule" - access time uses > time zone specified in the HBAC rule, "host" - access time uses host's > time zone), rather than the time zone itself or "Local Time". > You're right, that's probably more descriptive than just "Local Time". Still, I think that instead of "rule" a timezone might just as well appear on the anchor part. I think "UTC" is also part of Olson's so it should be at the same spot as the timezone. >> >> That leaves us with 2 kinds of policies - UTC and Local Time (which is >> enforced by hosts' /etc/localtime). Now with the (time, anchor) format >> for time policies, the LDAP schema wouldn't have to change and we could >> just use the AccessTime attribute of the HBAC Rule object that's already >> there. That seems like a good solution to me. >> >> I hope we can agree on the above although any notes are, of course, >> welcome. Now we would need to choose the right format for the time part >> of (time, anchor) I guess. There's been a discussion some 2 weeks ago >> about the need for event recurrence support in the format, the need for >> exceptions support and the need for iCalendar import possibility. So >> far, there are three possible languages to choose from - use the actual >> iCalendar or just a part of it, use a reworked version of the old >> language used in FreeIPA and SSSD, or use the language I proposed >> earlier in this thread. >> >> I would be very keen on hearing your ideas and opinions on this one. >> >> Thanks! 
>> Standa > > From mkosek at redhat.com Thu Mar 26 12:14:57 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 13:14:57 +0100 Subject: [Freeipa-devel] [PATCH 0023] enable debugging of spawned ntpd command during client install In-Reply-To: <5512D1C9.6090308@redhat.com> References: <5512C588.3010002@redhat.com> <5512D1C9.6090308@redhat.com> Message-ID: <5513F841.4050306@redhat.com> On 03/25/2015 04:18 PM, Jan Cholasta wrote: > Hi, > > Dne 25.3.2015 v 15:26 Martin Babinsky napsal(a): >> The attached patch related to https://fedorahosted.org/freeipa/ticket/4931 > > Please make sure stays fixed. > >> >> It is certainly not a final solution, more of an initial "hack" of sorts >> just to gather some suggestions, since I am not even sure if this is the >> right thing to do. >> >> The reporter from bugzilla suggests to enable debugging of ALL commands >> called through ipautil.run(), but I think that fixing all cca 157 found >> usages of run() is too much work with a quite small benefit. >> >> Anyway I would welcome some opinions about this: should the external >> commands really inherit the debug settings of ipa-* utilities, and if >> so, is the method showed in this patch the right way to do it? > > I am not a fan of this method, ipautil.run does not know anything about the > command it runs and I think it should stay that way. > > I would prefer to have an ipautil.run wrapper with debug flag using appropriate > debugging option for each command where we need to conditionally enable > debugging. Or just add the debugging option unconditionally to every command > where it could be useful. +1, I do not like this change to ipautil.run either. It should be sole responsibility of the caller to specify the right combinations of options, including debug option, where applicable. From mkosek at redhat.com Thu Mar 26 12:24:53 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 13:24:53 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5513F6CB.4060004@seznam.cz> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> Message-ID: <5513FA95.1090709@redhat.com> On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: > On 3/26/2015 11:13 AM, Jan Cholasta wrote: >> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>> When using hbactest command you just need to supply implied time zone >>>> as an option to the command itself. After all, you are simulating rule >>>> execution so it does not matter where the value comes from. >>> Oh, good, I haven't thought of that. That certainly eases things up. >>> >>> Let me make a summary then, a short one this time, of what's been >>> discussed . >>> >>> It seems the best way to store time policies is indeed the format (time, >>> anchor) where anchor is either Olson database timezone or "Local Time" >>> for host local time. We are omitting users' local time because, after >>> all, we are talking HBAC Rules here (great point by Simo). If the admins >>> really needed that, there's a workaround Jan mentioned that should work >>> just fine. 
>> >> What I originally meant as anchor was a value specifying the time offset >> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >> specified in the HBAC rule, "host" - access time uses host's time zone), >> rather than the time zone itself or "Local Time". >> > You're right, that's probably more descriptive than just "Local Time". Still, I > think that instead of "rule" a timezone might just as well appear on the anchor > part. I think "UTC" is also part of Olson's so it should be at the same spot as > the timezone. I am not little confused about all the places where we want to add the time zone. I thought that it was originally meant for hosts objects, so that we can HBAC rule is created, UI/CLI can already suggest the right time zone for the HBAC rule. But it should have been only informative value serving mostly UX, not something that SSSD would decide on. HBAC rule itself is always the authoritative source. We should also avoid having time zone in 2 places in the HBAC rule itself - if this is what you are steering at. I thought the authoritative time zone would be only in the HBAC time definition only, i.e. only in the anchor specifically. Can we show specific examples of these tuples, to make sure we are in agreement? My take was: (Mon-Fri 08:00-17:00, UTC+1) (Mon-Fri 08:00-17:00, local) UTC+1 may not be ideal as it would not work for daylight saving, a better way would indeed be the Olson time zone ID, i.e: (Mon-Fri 08:00-17:00, Europe/Prague) (Mon-Fri 08:00-17:00, local) From mbabinsk at redhat.com Thu Mar 26 12:52:34 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Thu, 26 Mar 2015 13:52:34 +0100 Subject: [Freeipa-devel] [PATCH 0023] enable debugging of spawned ntpd command during client install In-Reply-To: <5513F841.4050306@redhat.com> References: <5512C588.3010002@redhat.com> <5512D1C9.6090308@redhat.com> <5513F841.4050306@redhat.com> Message-ID: <55140112.9060002@redhat.com> On 03/26/2015 01:14 PM, Martin Kosek wrote: > On 03/25/2015 04:18 PM, Jan Cholasta wrote: >> Hi, >> >> Dne 25.3.2015 v 15:26 Martin Babinsky napsal(a): >>> The attached patch related to https://fedorahosted.org/freeipa/ticket/4931 >> >> Please make sure stays fixed. This should be ok as we do not use ntpdate for timesync anymore (I have tested client-install with this patch in VM with quite large clock skews between client and server and it did sync just fine also with -d flag). >> >>> >>> It is certainly not a final solution, more of an initial "hack" of sorts >>> just to gather some suggestions, since I am not even sure if this is the >>> right thing to do. >>> >>> The reporter from bugzilla suggests to enable debugging of ALL commands >>> called through ipautil.run(), but I think that fixing all cca 157 found >>> usages of run() is too much work with a quite small benefit. >>> >>> Anyway I would welcome some opinions about this: should the external >>> commands really inherit the debug settings of ipa-* utilities, and if >>> so, is the method showed in this patch the right way to do it? >> >> I am not a fan of this method, ipautil.run does not know anything about the >> command it runs and I think it should stay that way. >> >> I would prefer to have an ipautil.run wrapper with debug flag using appropriate >> debugging option for each command where we need to conditionally enable >> debugging. Or just add the debugging option unconditionally to every command >> where it could be useful. > > +1, I do not like this change to ipautil.run either. 
It should be sole > responsibility of the caller to specify the right combinations of options, > including debug option, where applicable. > How should we attack this issue then? If we really would do "all comands inherit the debug option", then it would be a good idea to avoid things like if options.debug: args.append('-d') littered all over the install/upgrade/etc. code. Jan's idea of a wrapper around run() then sounds like a good move. But I suppose we would then need to store somewhere (ipaplatform?) an additional data saying which command uses which option for debugging. (Or should we just slap on an optional '-d' passed to ntpd to quickly resolve the main issue in the ticket and worry about all of this later? ;)) -- Martin^3 Babinsky From abokovoy at redhat.com Thu Mar 26 13:20:12 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Thu, 26 Mar 2015 15:20:12 +0200 Subject: [Freeipa-devel] [PATCH] FreeIPA 4.1.4 release and fixes for CVE-2015-1827 and CVE-2015-0283 Message-ID: <20150326132012.GB3878@redhat.com> Hi, I've released slapi-nis 0.54.2 this morning as a fix for CVE-2015-0283, packages are built for Fedora and RHEL7.1. However, to complete the cycle, we need to release FreeIPA 4.1.4 to fix CVE-2015-1827. Both CVEs are for processing of group membership when dealing with users from trusted AD domains. Fix in FreeIPA is in extdom plugin which is in use by sssd 1.12.x, while slapi-nis fix is for legacy clients. We need to commit attached patches to FreeIPA and make a release of FreeIPA 4.1.4 today. Then I can do Fedora builds and a combined update push for slapi-nis+freeipa packages in Fedora. Patch 1 is actual CVE-2015-1827 fix. Patch 2 is to remove wrong values from Makefile.am files that actually prevent regenerating Makefiles in daemons/ subdirectory, causing non-working RHEL build. We fixed 4.1.0 base with this patch in RHEL and we just need to bring upstream in sync with downstream on this. Patch 3 raises requirement of slapi-nis to the fixed version. 
-- / Alexander Bokovoy -------------- next part -------------- From 175a63357354ae3b4c04fa9cbef0cbe6084f0bee Mon Sep 17 00:00:00 2001 From: Sumit Bose Date: Wed, 25 Feb 2015 10:28:22 +0100 Subject: [PATCH 1/3] extdom: fix wrong realloc size --- daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c index 47bcb17..686128e 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_common.c @@ -386,7 +386,7 @@ static int get_user_grouplist(const char *name, gid_t gid, ret = getgrouplist(name, gid, groups, &ngroups); if (ret == -1) { - new_groups = realloc(groups, ngroups); + new_groups = realloc(groups, ngroups * sizeof(gid_t)); if (new_groups == NULL) { free(groups); return LDAP_OPERATIONS_ERROR; -- 2.1.0 -------------- next part -------------- From 3811fee25fff1074e39cf541a5fa0c411255e9f4 Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Wed, 18 Mar 2015 17:09:06 +0000 Subject: [PATCH 2/3] fix Makefile.am for daemons --- daemons/Makefile.am | 2 +- daemons/ipa-slapi-plugins/ipa-cldap/Makefile.am | 1 - daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am | 1 - daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am | 1 - 4 files changed, 1 insertion(+), 4 deletions(-) diff --git a/daemons/Makefile.am b/daemons/Makefile.am index 956f399..f919429 100644 --- a/daemons/Makefile.am +++ b/daemons/Makefile.am @@ -1,6 +1,6 @@ # This file will be processed with automake-1.7 to create Makefile.in # -AUTOMAKE_OPTIONS = 1.7 +AUTOMAKE_OPTIONS = 1.7 subdir-objects NULL = diff --git a/daemons/ipa-slapi-plugins/ipa-cldap/Makefile.am b/daemons/ipa-slapi-plugins/ipa-cldap/Makefile.am index 8e35cdb..fba5b08 100644 --- a/daemons/ipa-slapi-plugins/ipa-cldap/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-cldap/Makefile.am @@ -6,7 +6,6 @@ AM_CPPFLAGS = \ -I. 
\ -I$(srcdir) \ -I$(PLUGIN_COMMON_DIR) \ - -I$(COMMON_BER_DIR) \ -DPREFIX=\""$(prefix)"\" \ -DBINDIR=\""$(bindir)"\" \ -DLIBDIR=\""$(libdir)"\" \ diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am index a167981..8ee26a7 100644 --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/Makefile.am @@ -7,7 +7,6 @@ AM_CPPFLAGS = \ -I$(srcdir) \ -I$(PLUGIN_COMMON_DIR) \ -I$(KRB5_UTIL_DIR) \ - -I$(COMMON_BER_DIR) \ -DPREFIX=\""$(prefix)"\" \ -DBINDIR=\""$(bindir)"\" \ -DLIBDIR=\""$(libdir)"\" \ diff --git a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am index 1ab6c67..078ff9c 100644 --- a/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am +++ b/daemons/ipa-slapi-plugins/ipa-pwd-extop/Makefile.am @@ -14,7 +14,6 @@ AM_CPPFLAGS = \ -I$(PLUGIN_COMMON_DIR) \ -I$(KRB5_UTIL_DIR) \ -I$(ASN1_UTIL_DIR) \ - -I$(COMMON_BER_DIR) \ -DPREFIX=\""$(prefix)"\" \ -DBINDIR=\""$(bindir)"\" \ -DLIBDIR=\""$(libdir)"\" \ -- 2.1.0 -------------- next part -------------- From ab679d2d95ec8105f8c32159f4ef4b22a2e9feac Mon Sep 17 00:00:00 2001 From: Alexander Bokovoy Date: Thu, 26 Mar 2015 14:59:03 +0200 Subject: [PATCH 3/3] slapi-nis: require 0.54.2 for CVE-2015-0283 fixes --- freeipa.spec.in | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/freeipa.spec.in b/freeipa.spec.in index cb104f4..1a444dc 100644 --- a/freeipa.spec.in +++ b/freeipa.spec.in @@ -129,7 +129,7 @@ Requires(pre): systemd-units Requires(post): systemd-units Requires: selinux-policy >= %{selinux_policy_version} Requires(post): selinux-policy-base -Requires: slapi-nis >= 0.54.1-1 +Requires: slapi-nis >= 0.54.2-1 %if (0%{?fedora} <= 20 || 0%{?rhel}) # pki-ca 10.1.2-4 contains patches required by FreeIPA 4.1 # The goal is to lower the requirement of pki-ca in Fedora 20 -- 2.1.0 From slaz at seznam.cz Thu Mar 26 13:40:42 2015 From: slaz at seznam.cz (=?UTF-8?B?U3RhbmRhIEzDoXpuacSNa2E=?=) Date: Thu, 26 Mar 2015 14:40:42 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5513FA95.1090709@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> Message-ID: <55140C5A.2010909@seznam.cz> On 3/26/2015 1:24 PM, Martin Kosek wrote: > On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: >> On 3/26/2015 11:13 AM, Jan Cholasta wrote: >>> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>>> When using hbactest command you just need to supply implied time zone >>>>> as an option to the command itself. After all, you are simulating >>>>> rule >>>>> execution so it does not matter where the value comes from. >>>> Oh, good, I haven't thought of that. That certainly eases things up. >>>> >>>> Let me make a summary then, a short one this time, of what's been >>>> discussed . >>>> >>>> It seems the best way to store time policies is indeed the format >>>> (time, >>>> anchor) where anchor is either Olson database timezone or "Local Time" >>>> for host local time. 
We are omitting users' local time because, after >>>> all, we are talking HBAC Rules here (great point by Simo). If the >>>> admins >>>> really needed that, there's a workaround Jan mentioned that should >>>> work >>>> just fine. >>> What I originally meant as anchor was a value specifying the time >>> offset >>> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >>> specified in the HBAC rule, "host" - access time uses host's time >>> zone), >>> rather than the time zone itself or "Local Time". >>> >> You're right, that's probably more descriptive than just "Local >> Time". Still, I >> think that instead of "rule" a timezone might just as well appear on >> the anchor >> part. I think "UTC" is also part of Olson's so it should be at the >> same spot as >> the timezone. > I am not little confused about all the places where we want to add the > time > zone. I thought that it was originally meant for hosts objects, so > that we can > HBAC rule is created, UI/CLI can already suggest the right time zone > for the > HBAC rule. But it should have been only informative value serving > mostly UX, > not something that SSSD would decide on. > > HBAC rule itself is always the authoritative source. We should also avoid > having time zone in 2 places in the HBAC rule itself - if this is what > you are > steering at. I thought the authoritative time zone would be only in > the HBAC > time definition only, i.e. only in the anchor specifically. I think the timezone still may be with the host object but only as the UI helper as you suggest. Although I would maybe rather not see it with the object at all and have the admin just set the right timezone for the HBAC rule themselves. After all, if there's a collision of host helper timezones, I think admin would have to do that anyway. I agree that there should only be one timezone record for each HBAC and I wouldn't suggest differently. There was a confusion when Jan suggested to use "rule" as anchor in the (time, anchor) tuple to get the rule's timezone which, he suggested, should be stored elsewhere but in the tuple. I think there's no harm having the timezone/"host" keyword stored with this tuple and therefore nowhere else. > Can we show specific examples of these tuples, to make sure we are in > agreement? My take was: > > (Mon-Fri 08:00-17:00, UTC+1) > (Mon-Fri 08:00-17:00, local) > > UTC+1 may not be ideal as it would not work for daylight saving, a > better way > would indeed be the Olson time zone ID, i.e: > > (Mon-Fri 08:00-17:00, Europe/Prague) > (Mon-Fri 08:00-17:00, local) Definitely the second format ((Mon-Fri 08:00-17:00, Europe/Prague)), we want to use Olson's database names. If I understand it right, "UTC" is in Olson's and stands for UTC+0 offset. If not, we can have "UTC" keyword in the anchor part of the tuple mentioned above to signalize just that (UTC+0). From pvoborni at redhat.com Thu Mar 26 13:47:47 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 26 Mar 2015 14:47:47 +0100 Subject: [Freeipa-devel] [PATCH 0020] show the exception message raised by dogtag._parse_ca_status during install In-Reply-To: <20150326032630.GH3285@dhcp-40-8.bne.redhat.com> References: <55129755.2090402@redhat.com> <20150326032630.GH3285@dhcp-40-8.bne.redhat.com> Message-ID: <55140E03.5020608@redhat.com> On 03/26/2015 04:26 AM, Fraser Tweedale wrote: > On Wed, Mar 25, 2015 at 12:09:09PM +0100, Martin Babinsky wrote: >> This should be patch 20 I think. I must make some cleanup in my patch >> numbers. 
>> >> https://fedorahosted.org/freeipa/ticket/4885 >> >> -- >> Martin^3 Babinsky > > ACK > Pushed to: master: e8d4f6dba1743389962e9d51871a88dc384840ec ipa-4-1: d7863f3e1ee8cbd5acda26ce1170913ca936ce7e -- Petr Vobornik From mkosek at redhat.com Thu Mar 26 13:55:25 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 14:55:25 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55140C5A.2010909@seznam.cz> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> Message-ID: <55140FCD.5020902@redhat.com> On 03/26/2015 02:40 PM, Standa L?zni?ka wrote: > On 3/26/2015 1:24 PM, Martin Kosek wrote: >> On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: >>> On 3/26/2015 11:13 AM, Jan Cholasta wrote: >>>> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>>>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>>>> When using hbactest command you just need to supply implied time zone >>>>>> as an option to the command itself. After all, you are simulating rule >>>>>> execution so it does not matter where the value comes from. >>>>> Oh, good, I haven't thought of that. That certainly eases things up. >>>>> >>>>> Let me make a summary then, a short one this time, of what's been >>>>> discussed . >>>>> >>>>> It seems the best way to store time policies is indeed the format (time, >>>>> anchor) where anchor is either Olson database timezone or "Local Time" >>>>> for host local time. We are omitting users' local time because, after >>>>> all, we are talking HBAC Rules here (great point by Simo). If the admins >>>>> really needed that, there's a workaround Jan mentioned that should work >>>>> just fine. >>>> What I originally meant as anchor was a value specifying the time offset >>>> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >>>> specified in the HBAC rule, "host" - access time uses host's time zone), >>>> rather than the time zone itself or "Local Time". >>>> >>> You're right, that's probably more descriptive than just "Local Time". Still, I >>> think that instead of "rule" a timezone might just as well appear on the anchor >>> part. I think "UTC" is also part of Olson's so it should be at the same spot as >>> the timezone. >> I am not little confused about all the places where we want to add the time >> zone. I thought that it was originally meant for hosts objects, so that we can >> HBAC rule is created, UI/CLI can already suggest the right time zone for the >> HBAC rule. But it should have been only informative value serving mostly UX, >> not something that SSSD would decide on. >> >> HBAC rule itself is always the authoritative source. We should also avoid >> having time zone in 2 places in the HBAC rule itself - if this is what you are >> steering at. I thought the authoritative time zone would be only in the HBAC >> time definition only, i.e. only in the anchor specifically. > I think the timezone still may be with the host object but only as the UI > helper as you suggest. 
Although I would maybe rather not see it with the object > at all and have the admin just set the right timezone for the HBAC rule > themselves. After all, if there's a collision of host helper timezones, I think > admin would have to do that anyway. Right. But UI could then offer: Warning, time zone is ambiguous. Please select the right time zone: HostA time zone: Europe/Prague [ ] HostB time zone: Europe/London [ ] > I agree that there should only be one timezone record for each HBAC and I > wouldn't suggest differently. There was a confusion when Jan suggested to use > "rule" as anchor in the (time, anchor) tuple to get the rule's timezone which, > he suggested, should be stored elsewhere but in the tuple. I think there's no > harm having the timezone/"host" keyword stored with this tuple and therefore > nowhere else. >> Can we show specific examples of these tuples, to make sure we are in >> agreement? My take was: >> >> (Mon-Fri 08:00-17:00, UTC+1) >> (Mon-Fri 08:00-17:00, local) >> >> UTC+1 may not be ideal as it would not work for daylight saving, a better way >> would indeed be the Olson time zone ID, i.e: >> >> (Mon-Fri 08:00-17:00, Europe/Prague) >> (Mon-Fri 08:00-17:00, local) > Definitely the second format ((Mon-Fri 08:00-17:00, Europe/Prague)), we want to > use Olson's database names. If I understand it right, "UTC" is in Olson's and > stands for UTC+0 offset. If not, we can have "UTC" keyword in the anchor part > of the tuple mentioned above to signalize just that (UTC+0). From pvoborni at redhat.com Thu Mar 26 14:05:32 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 26 Mar 2015 15:05:32 +0100 Subject: [Freeipa-devel] [PATCH] FreeIPA 4.1.4 release and fixes for CVE-2015-1827 and CVE-2015-0283 In-Reply-To: <20150326132012.GB3878@redhat.com> References: <20150326132012.GB3878@redhat.com> Message-ID: <5514122C.70402@redhat.com> On 03/26/2015 02:20 PM, Alexander Bokovoy wrote: > Hi, > > I've released slapi-nis 0.54.2 this morning as a fix for CVE-2015-0283, > packages are built for Fedora and RHEL7.1. However, to complete the > cycle, we need to release FreeIPA 4.1.4 to fix CVE-2015-1827. > > Both CVEs are for processing of group membership when dealing with users > from trusted AD domains. Fix in FreeIPA is in extdom plugin which is in > use by sssd 1.12.x, while slapi-nis fix is for legacy clients. > > We need to commit attached patches to FreeIPA and make a release of > FreeIPA 4.1.4 today. Then I can do Fedora builds and a combined update > push for slapi-nis+freeipa packages in Fedora. > > Patch 1 is actual CVE-2015-1827 fix. > > Patch 2 is to remove wrong values from Makefile.am files that actually > prevent regenerating Makefiles in daemons/ subdirectory, causing > non-working RHEL build. We fixed 4.1.0 base with this patch in RHEL and > we just need to bring upstream in sync with downstream on this. > > Patch 3 raises requirement of slapi-nis to the fixed version. > These patches has been already tested while the CVE was embargoed. 
pushed to ipa-4-1: * 447c5c7b0d76482dbb4273ea968a87cee2f4cddd fix Makefile.am for daemons * fd8e796873f34c942b8ab28d486b5edfe1c27abd extdom: fix wrong realloc size master: * 704c79d91d58f87b80afe6e9331e8060116b5ec0 fix Makefile.am for daemons * c1114ef82516002de08e004a930b5ba4a1791b25 extdom: fix wrong realloc size ipa-4-1: * 93302a8c28731625a0e38e647be50a9598bb49e7 slapi-nis: require 0.54.2 for CVE-2015-0283 fixes master: * 1b781b777f534b12a178202afa0982afd2d9c1dd slapi-nis: require 0.54.2 for CVE-2015-0283 fixes I'm going to do the FreeIPA 4.1.4 release now. -- Petr Vobornik From slaz at seznam.cz Thu Mar 26 15:20:15 2015 From: slaz at seznam.cz (=?utf-8?Q?Standa_L=C3=A1zni=C4=8Dka?=) Date: Thu, 26 Mar 2015 16:20:15 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55140FCD.5020902@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> Message-ID: <25EEA035-4F05-474C-A414-32A8A6F6B5DF@seznam.cz> > On 26. 3. 2015, at 14:55, Martin Kosek wrote: > >> On 03/26/2015 02:40 PM, Standa L?zni?ka wrote: >>> On 3/26/2015 1:24 PM, Martin Kosek wrote: >>>> On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: >>>>> On 3/26/2015 11:13 AM, Jan Cholasta wrote: >>>>> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>>>>>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>>>>> When using hbactest command you just need to supply implied time zone >>>>>>> as an option to the command itself. After all, you are simulating rule >>>>>>> execution so it does not matter where the value comes from. >>>>>> Oh, good, I haven't thought of that. That certainly eases things up. >>>>>> >>>>>> Let me make a summary then, a short one this time, of what's been >>>>>> discussed . >>>>>> >>>>>> It seems the best way to store time policies is indeed the format (time, >>>>>> anchor) where anchor is either Olson database timezone or "Local Time" >>>>>> for host local time. We are omitting users' local time because, after >>>>>> all, we are talking HBAC Rules here (great point by Simo). If the admins >>>>>> really needed that, there's a workaround Jan mentioned that should work >>>>>> just fine. >>>>> What I originally meant as anchor was a value specifying the time offset >>>>> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >>>>> specified in the HBAC rule, "host" - access time uses host's time zone), >>>>> rather than the time zone itself or "Local Time". >>>> You're right, that's probably more descriptive than just "Local Time". Still, I >>>> think that instead of "rule" a timezone might just as well appear on the anchor >>>> part. I think "UTC" is also part of Olson's so it should be at the same spot as >>>> the timezone. >>> I am not little confused about all the places where we want to add the time >>> zone. I thought that it was originally meant for hosts objects, so that we can >>> HBAC rule is created, UI/CLI can already suggest the right time zone for the >>> HBAC rule. But it should have been only informative value serving mostly UX, >>> not something that SSSD would decide on. 
>>> >>> HBAC rule itself is always the authoritative source. We should also avoid >>> having time zone in 2 places in the HBAC rule itself - if this is what you are >>> steering at. I thought the authoritative time zone would be only in the HBAC >>> time definition only, i.e. only in the anchor specifically. >> I think the timezone still may be with the host object but only as the UI >> helper as you suggest. Although I would maybe rather not see it with the object >> at all and have the admin just set the right timezone for the HBAC rule >> themselves. After all, if there's a collision of host helper timezones, I think >> admin would have to do that anyway. > > Right. But UI could then offer: > > Warning, time zone is ambiguous. Please select the right time zone: > HostA time zone: Europe/Prague [ ] > HostB time zone: Europe/London [ ] I see. An option to choose a different time zone than the hosts' might also come handy here but I see where you're going. > >> I agree that there should only be one timezone record for each HBAC and I >> wouldn't suggest differently. There was a confusion when Jan suggested to use >> "rule" as anchor in the (time, anchor) tuple to get the rule's timezone which, >> he suggested, should be stored elsewhere but in the tuple. I think there's no >> harm having the timezone/"host" keyword stored with this tuple and therefore >> nowhere else. >>> Can we show specific examples of these tuples, to make sure we are in >>> agreement? My take was: >>> >>> (Mon-Fri 08:00-17:00, UTC+1) >>> (Mon-Fri 08:00-17:00, local) >>> >>> UTC+1 may not be ideal as it would not work for daylight saving, a better way >>> would indeed be the Olson time zone ID, i.e: >>> >>> (Mon-Fri 08:00-17:00, Europe/Prague) >>> (Mon-Fri 08:00-17:00, local) >> Definitely the second format ((Mon-Fri 08:00-17:00, Europe/Prague)), we want to >> use Olson's database names. If I understand it right, "UTC" is in Olson's and >> stands for UTC+0 offset. If not, we can have "UTC" keyword in the anchor part >> of the tuple mentioned above to signalize just that (UTC+0). > From jcholast at redhat.com Thu Mar 26 15:26:16 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 26 Mar 2015 16:26:16 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55140FCD.5020902@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> Message-ID: <55142518.7020708@redhat.com> Dne 26.3.2015 v 14:55 Martin Kosek napsal(a): > On 03/26/2015 02:40 PM, Standa L?zni?ka wrote: >> On 3/26/2015 1:24 PM, Martin Kosek wrote: >>> On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: >>>> On 3/26/2015 11:13 AM, Jan Cholasta wrote: >>>>> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>>>>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>>>>> When using hbactest command you just need to supply implied time zone >>>>>>> as an option to the command itself. After all, you are simulating rule >>>>>>> execution so it does not matter where the value comes from. >>>>>> Oh, good, I haven't thought of that. That certainly eases things up. 
>>>>>> >>>>>> Let me make a summary then, a short one this time, of what's been >>>>>> discussed . >>>>>> >>>>>> It seems the best way to store time policies is indeed the format (time, >>>>>> anchor) where anchor is either Olson database timezone or "Local Time" >>>>>> for host local time. We are omitting users' local time because, after >>>>>> all, we are talking HBAC Rules here (great point by Simo). If the admins >>>>>> really needed that, there's a workaround Jan mentioned that should work >>>>>> just fine. >>>>> What I originally meant as anchor was a value specifying the time offset >>>>> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >>>>> specified in the HBAC rule, "host" - access time uses host's time zone), >>>>> rather than the time zone itself or "Local Time". >>>>> >>>> You're right, that's probably more descriptive than just "Local Time". Still, I >>>> think that instead of "rule" a timezone might just as well appear on the anchor >>>> part. I think "UTC" is also part of Olson's so it should be at the same spot as >>>> the timezone. Ah, right. OK then. >>> I am not little confused about all the places where we want to add the time >>> zone. I thought that it was originally meant for hosts objects, so that we can >>> HBAC rule is created, UI/CLI can already suggest the right time zone for the >>> HBAC rule. But it should have been only informative value serving mostly UX, >>> not something that SSSD would decide on. >>> >>> HBAC rule itself is always the authoritative source. We should also avoid >>> having time zone in 2 places in the HBAC rule itself - if this is what you are >>> steering at. I thought the authoritative time zone would be only in the HBAC >>> time definition only, i.e. only in the anchor specifically. >> I think the timezone still may be with the host object but only as the UI >> helper as you suggest. Although I would maybe rather not see it with the object >> at all and have the admin just set the right timezone for the HBAC rule >> themselves. After all, if there's a collision of host helper timezones, I think >> admin would have to do that anyway. I don't see any point in storing time zone in the host object, if it's not used for anything meaningful and has to be manually synchronized with the host's actual configured time zone. > > Right. But UI could then offer: > > Warning, time zone is ambiguous. Please select the right time zone: > HostA time zone: Europe/Prague [ ] > HostB time zone: Europe/London [ ] No, thanks. The whole point of "Local Time" is being able to use whatever time zone is configured on each host instead of having to specify one time zone for all of them, which is exactly what the above does. > >> I agree that there should only be one timezone record for each HBAC and I >> wouldn't suggest differently. There was a confusion when Jan suggested to use >> "rule" as anchor in the (time, anchor) tuple to get the rule's timezone which, >> he suggested, should be stored elsewhere but in the tuple. I think there's no >> harm having the timezone/"host" keyword stored with this tuple and therefore >> nowhere else. >>> Can we show specific examples of these tuples, to make sure we are in >>> agreement? 
My take was: >>> >>> (Mon-Fri 08:00-17:00, UTC+1) >>> (Mon-Fri 08:00-17:00, local) >>> >>> UTC+1 may not be ideal as it would not work for daylight saving, a better way >>> would indeed be the Olson time zone ID, i.e: >>> >>> (Mon-Fri 08:00-17:00, Europe/Prague) >>> (Mon-Fri 08:00-17:00, local) >> Definitely the second format ((Mon-Fri 08:00-17:00, Europe/Prague)), we want to >> use Olson's database names. If I understand it right, "UTC" is in Olson's and >> stands for UTC+0 offset. If not, we can have "UTC" keyword in the anchor part >> of the tuple mentioned above to signalize just that (UTC+0). > Can we please stop using the word "anchor" for time zone, rather than source of time zone information as I originally suggested? -- Jan Cholasta From ssorce at redhat.com Thu Mar 26 15:30:51 2015 From: ssorce at redhat.com (Simo Sorce) Date: Thu, 26 Mar 2015 11:30:51 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55142518.7020708@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> Message-ID: <1427383851.8302.85.camel@willson.usersys.redhat.com> On Thu, 2015-03-26 at 16:26 +0100, Jan Cholasta wrote: > >> I think the timezone still may be with the host object but only as > the UI > >> helper as you suggest. Although I would maybe rather not see it > with the object > >> at all and have the admin just set the right timezone for the HBAC > rule > >> themselves. After all, if there's a collision of host helper > timezones, I think > >> admin would have to do that anyway. > > I don't see any point in storing time zone in the host object, if > it's > not used for anything meaningful and has to be manually synchronized > with the host's actual configured time zone. +1 The host *knows* it's local time zone, let's not set us up for sync issues. > > > > Right. But UI could then offer: > > > > Warning, time zone is ambiguous. Please select the right time zone: > > HostA time zone: Europe/Prague [ ] > > HostB time zone: Europe/London [ ] > > No, thanks. The whole point of "Local Time" is being able to use > whatever time zone is configured on each host instead of having to > specify one time zone for all of them, which is exactly what the above > does. +1 "Local Time" is a special name the stray out of the Olson database, you can see it as the wildcard '*' or 'ALL' in other rules and it means that the host will use its local time zone with the specified times of day and days of the week Simo. From mbasti at redhat.com Thu Mar 26 15:33:56 2015 From: mbasti at redhat.com (Martin Basti) Date: Thu, 26 Mar 2015 16:33:56 +0100 Subject: [Freeipa-devel] [PATCH 0222] DNSSEC: do not log into files Message-ID: <551426E4.2070906@redhat.com> We want to log DNSSEC daemons only into console (journald). This patch also fixes unexpected log file in /var/lib/softhsm/.ipa/log/default.log Patch attached. -- Martin Basti -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mbasti-0222-DNSSEC-Do-not-log-into-files.patch Type: text/x-patch Size: 2174 bytes Desc: not available URL: From mkosek at redhat.com Thu Mar 26 15:35:14 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 16:35:14 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55142518.7020708@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> Message-ID: <55142732.7010604@redhat.com> On 03/26/2015 04:26 PM, Jan Cholasta wrote: > Dne 26.3.2015 v 14:55 Martin Kosek napsal(a): >> On 03/26/2015 02:40 PM, Standa L?zni?ka wrote: >>> On 3/26/2015 1:24 PM, Martin Kosek wrote: >>>> On 03/26/2015 01:08 PM, Standa L?zni?ka wrote: >>>>> On 3/26/2015 11:13 AM, Jan Cholasta wrote: >>>>>> Dne 25.3.2015 v 18:25 Stanislav L?zni?ka napsal(a): >>>>>>> On 03/25/2015 12:34 PM, Alexander Bokovoy wrote: >>>>>>>> When using hbactest command you just need to supply implied time zone >>>>>>>> as an option to the command itself. After all, you are simulating rule >>>>>>>> execution so it does not matter where the value comes from. >>>>>>> Oh, good, I haven't thought of that. That certainly eases things up. >>>>>>> >>>>>>> Let me make a summary then, a short one this time, of what's been >>>>>>> discussed . >>>>>>> >>>>>>> It seems the best way to store time policies is indeed the format (time, >>>>>>> anchor) where anchor is either Olson database timezone or "Local Time" >>>>>>> for host local time. We are omitting users' local time because, after >>>>>>> all, we are talking HBAC Rules here (great point by Simo). If the admins >>>>>>> really needed that, there's a workaround Jan mentioned that should work >>>>>>> just fine. >>>>>> What I originally meant as anchor was a value specifying the time offset >>>>>> (e.g. "utc" - access time uses UTC, "rule" - access time uses time zone >>>>>> specified in the HBAC rule, "host" - access time uses host's time zone), >>>>>> rather than the time zone itself or "Local Time". >>>>>> >>>>> You're right, that's probably more descriptive than just "Local Time". >>>>> Still, I >>>>> think that instead of "rule" a timezone might just as well appear on the >>>>> anchor >>>>> part. I think "UTC" is also part of Olson's so it should be at the same >>>>> spot as >>>>> the timezone. > > Ah, right. OK then. > >>>> I am not little confused about all the places where we want to add the time >>>> zone. I thought that it was originally meant for hosts objects, so that we can >>>> HBAC rule is created, UI/CLI can already suggest the right time zone for the >>>> HBAC rule. But it should have been only informative value serving mostly UX, >>>> not something that SSSD would decide on. >>>> >>>> HBAC rule itself is always the authoritative source. We should also avoid >>>> having time zone in 2 places in the HBAC rule itself - if this is what you are >>>> steering at. I thought the authoritative time zone would be only in the HBAC >>>> time definition only, i.e. only in the anchor specifically. 
>>> I think the timezone still may be with the host object but only as the UI >>> helper as you suggest. Although I would maybe rather not see it with the object >>> at all and have the admin just set the right timezone for the HBAC rule >>> themselves. After all, if there's a collision of host helper timezones, I think >>> admin would have to do that anyway. > > I don't see any point in storing time zone in the host object, if it's not used > for anything meaningful and has to be manually synchronized with the host's > actual configured time zone. It would be mostly used for aiding the HBAC rule creation process, i.e. for the UX. It would be optional. If you do not fill it, you would have to always select the right time zone in when setting the UTC HBAC time, If you fill the zone, UI could already select the right time zone for you. >> Right. But UI could then offer: >> >> Warning, time zone is ambiguous. Please select the right time zone: >> HostA time zone: Europe/Prague [ ] >> HostB time zone: Europe/London [ ] > > No, thanks. The whole point of "Local Time" is being able to use whatever time > zone is configured on each host instead of having to specify one time zone for > all of them, which is exactly what the above does. Host's Local Time and UTC time are 2 different approaches how to set the time for the HBAC rule. With Local Time type, you would of course not have to deal with time zones. I thought this was already cleared out. >>> I agree that there should only be one timezone record for each HBAC and I >>> wouldn't suggest differently. There was a confusion when Jan suggested to use >>> "rule" as anchor in the (time, anchor) tuple to get the rule's timezone which, >>> he suggested, should be stored elsewhere but in the tuple. I think there's no >>> harm having the timezone/"host" keyword stored with this tuple and therefore >>> nowhere else. >>>> Can we show specific examples of these tuples, to make sure we are in >>>> agreement? My take was: >>>> >>>> (Mon-Fri 08:00-17:00, UTC+1) >>>> (Mon-Fri 08:00-17:00, local) >>>> >>>> UTC+1 may not be ideal as it would not work for daylight saving, a better way >>>> would indeed be the Olson time zone ID, i.e: >>>> >>>> (Mon-Fri 08:00-17:00, Europe/Prague) >>>> (Mon-Fri 08:00-17:00, local) >>> Definitely the second format ((Mon-Fri 08:00-17:00, Europe/Prague)), we want to >>> use Olson's database names. If I understand it right, "UTC" is in Olson's and >>> stands for UTC+0 offset. If not, we can have "UTC" keyword in the anchor part >>> of the tuple mentioned above to signalize just that (UTC+0). >> > > Can we please stop using the word "anchor" for time zone, rather than source of > time zone information as I originally suggested? 
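To make the tuple format quoted above concrete, here is a purely illustrative sketch of how a client-side evaluator (SSSD, written in C, would be the consumer) could interpret (Mon-Fri 08:00-17:00, Europe/Prague) versus (Mon-Fri 08:00-17:00, local). The keyword names, the storage format and the TZ handling are assumptions made for the example, not the agreed design:

/* Illustrative only: evaluate a (Mon-Fri 08:00-17:00, <anchor>) tuple.
 * <anchor> is either an Olson name such as "Europe/Prague" or the
 * keyword "local", meaning the host's own configured time zone. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static bool rule_matches_now(const char *anchor)
{
    time_t now = time(NULL);
    struct tm tm;

    if (strcmp(anchor, "local") != 0) {
        /* Evaluate in the rule's own zone. setenv()/tzset() is
         * process-global; a real implementation would save and
         * restore TZ, or use a TZ-aware library instead. */
        setenv("TZ", anchor, 1);
        tzset();
    }
    localtime_r(&now, &tm);

    bool weekday = (tm.tm_wday >= 1 && tm.tm_wday <= 5);   /* Mon..Fri */
    bool in_hours = (tm.tm_hour >= 8 && tm.tm_hour < 17);  /* [08:00,17:00) */
    return weekday && in_hours;
}

int main(void)
{
    /* Check "local" first so the TZ override above does not affect it. */
    printf("(Mon-Fri 08:00-17:00, local):         %s\n",
           rule_matches_now("local") ? "allow" : "deny");
    printf("(Mon-Fri 08:00-17:00, Europe/Prague): %s\n",
           rule_matches_now("Europe/Prague") ? "allow" : "deny");
    return 0;
}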
> From mkosek at redhat.com Thu Mar 26 15:39:10 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 16:39:10 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1427383851.8302.85.camel@willson.usersys.redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <1427383851.8302.85.camel@willson.usersys.redhat.com> Message-ID: <5514281E.1070409@redhat.com> On 03/26/2015 04:30 PM, Simo Sorce wrote: > On Thu, 2015-03-26 at 16:26 +0100, Jan Cholasta wrote: >>>> I think the timezone still may be with the host object but only as >> the UI >>>> helper as you suggest. Although I would maybe rather not see it >> with the object >>>> at all and have the admin just set the right timezone for the HBAC >> rule >>>> themselves. After all, if there's a collision of host helper >> timezones, I think >>>> admin would have to do that anyway. >> >> I don't see any point in storing time zone in the host object, if >> it's >> not used for anything meaningful and has to be manually synchronized >> with the host's actual configured time zone. > > +1 > The host *knows* it's local time zone, let's not set us up for sync > issues. > >>> >>> Right. But UI could then offer: >>> >>> Warning, time zone is ambiguous. Please select the right time zone: >>> HostA time zone: Europe/Prague [ ] >>> HostB time zone: Europe/London [ ] >> >> No, thanks. The whole point of "Local Time" is being able to use >> whatever time zone is configured on each host instead of having to >> specify one time zone for all of them, which is exactly what the above >> does. > > +1 > "Local Time" is a special name the stray out of the Olson database, you > can see it as the wildcard '*' or 'ALL' in other rules and it means that > the host will use its local time zone with the specified times of day > and days of the week See http://www.redhat.com/archives/freeipa-devel/2015-March/msg00447.html. I agree with you both if we are talking about Local Time rules. I was mostly talking about UTC rules where the time zone is required to set the right UTC time. 
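The daylight-saving point raised earlier (why a fixed UTC+1 offset is not equivalent to Europe/Prague) is easy to demonstrate; the snippet below is only an illustration of the pitfall, not proposed FreeIPA code:

/* Why "UTC+1" is not a substitute for "Europe/Prague": the same local
 * 08:00 maps to different UTC instants in winter (CET) and summer (CEST). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void show_utc_for_local_8am(int month, int day)
{
    struct tm local = {0};
    local.tm_year = 2015 - 1900;
    local.tm_mon = month - 1;
    local.tm_mday = day;
    local.tm_hour = 8;
    local.tm_isdst = -1;              /* let mktime() decide DST */

    time_t t = mktime(&local);        /* interpreted in the TZ set in main() */
    struct tm utc;
    gmtime_r(&t, &utc);
    printf("2015-%02d-%02d 08:00 local = %02d:%02d UTC\n",
           month, day, utc.tm_hour, utc.tm_min);
}

int main(void)
{
    setenv("TZ", "Europe/Prague", 1);
    tzset();
    show_utc_for_local_8am(1, 15);    /* winter: 07:00 UTC (CET, UTC+1)  */
    show_utc_for_local_8am(7, 15);    /* summer: 06:00 UTC (CEST, UTC+2) */
    return 0;
}

So a rule stored as (Mon-Fri 08:00-17:00, UTC+1) silently shifts by an hour for Prague hosts every summer, while (Mon-Fri 08:00-17:00, Europe/Prague) keeps matching office hours.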
From ssorce at redhat.com Thu Mar 26 15:39:32 2015 From: ssorce at redhat.com (Simo Sorce) Date: Thu, 26 Mar 2015 11:39:32 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55142732.7010604@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> Message-ID: <1427384372.8302.89.camel@willson.usersys.redhat.com> On Thu, 2015-03-26 at 16:35 +0100, Martin Kosek wrote: > On 03/26/2015 04:26 PM, Jan Cholasta wrote: [...] > > I don't see any point in storing time zone in the host object, if it's not used > > for anything meaningful and has to be manually synchronized with the host's > > actual configured time zone. > > It would be mostly used for aiding the HBAC rule creation process, i.e. for the > UX. It would be optional. If you do not fill it, you would have to always > select the right time zone in when setting the UTC HBAC time, > > If you fill the zone, UI could already select the right time zone for you. It will only help to do mistakes, how does the host object get to know what is the host's timezone ? And in any case you generally create HBAC rules using groups of hosts, what is the UI gonna do ? Crawl all the hosts in a group and then ? Average add the most common time zone ? Drop it please :) > Host's Local Time and UTC time are 2 different approaches how to set the time > for the HBAC rule. With Local Time type, you would of course not have to deal > with time zones. I thought this was already cleared out. Sorry you confuse me, in which case do you need UTC ? In case you want to set an absolute time that doesn't change with DST ? Simo. From ssorce at redhat.com Thu Mar 26 15:42:09 2015 From: ssorce at redhat.com (Simo Sorce) Date: Thu, 26 Mar 2015 11:42:09 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <5514281E.1070409@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <1427383851.8302.85.camel@willson.usersys.redhat.com> <5514281E.1070409@redhat.com> Message-ID: <1427384529.8302.91.camel@willson.usersys.redhat.com> On Thu, 2015-03-26 at 16:39 +0100, Martin Kosek wrote: > On 03/26/2015 04:30 PM, Simo Sorce wrote: > > On Thu, 2015-03-26 at 16:26 +0100, Jan Cholasta wrote: > >>>> I think the timezone still may be with the host object but only as > >> the UI > >>>> helper as you suggest. Although I would maybe rather not see it > >> with the object > >>>> at all and have the admin just set the right timezone for the HBAC > >> rule > >>>> themselves. 
After all, if there's a collision of host helper > >> timezones, I think > >>>> admin would have to do that anyway. > >> > >> I don't see any point in storing time zone in the host object, if > >> it's > >> not used for anything meaningful and has to be manually synchronized > >> with the host's actual configured time zone. > > > > +1 > > The host *knows* it's local time zone, let's not set us up for sync > > issues. > > > >>> > >>> Right. But UI could then offer: > >>> > >>> Warning, time zone is ambiguous. Please select the right time zone: > >>> HostA time zone: Europe/Prague [ ] > >>> HostB time zone: Europe/London [ ] > >> > >> No, thanks. The whole point of "Local Time" is being able to use > >> whatever time zone is configured on each host instead of having to > >> specify one time zone for all of them, which is exactly what the above > >> does. > > > > +1 > > "Local Time" is a special name the stray out of the Olson database, you > > can see it as the wildcard '*' or 'ALL' in other rules and it means that > > the host will use its local time zone with the specified times of day > > and days of the week > > See http://www.redhat.com/archives/freeipa-devel/2015-March/msg00447.html. > > I agree with you both if we are talking about Local Time rules. I was mostly > talking about UTC rules where the time zone is required to set the right UTC time. Sorry, but if I understand what you are suggesting then I do not agree. Either you use UTC based timezones *or* you use an Olson time zone. You do *not* try to convert something like Europe/Prague to UTC as you would change the meaning of the rule. Simo. From mkosek at redhat.com Thu Mar 26 15:47:06 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 16:47:06 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1427384372.8302.89.camel@willson.usersys.redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> <1427384372.8302.89.camel@willson.usersys.redhat.com> Message-ID: <551429FA.1020906@redhat.com> On 03/26/2015 04:39 PM, Simo Sorce wrote: > On Thu, 2015-03-26 at 16:35 +0100, Martin Kosek wrote: >> On 03/26/2015 04:26 PM, Jan Cholasta wrote: > > [...] >>> I don't see any point in storing time zone in the host object, if it's not used >>> for anything meaningful and has to be manually synchronized with the host's >>> actual configured time zone. >> >> It would be mostly used for aiding the HBAC rule creation process, i.e. for the >> UX. It would be optional. If you do not fill it, you would have to always >> select the right time zone in when setting the UTC HBAC time, >> >> If you fill the zone, UI could already select the right time zone for you. > > > It will only help to do mistakes, how does the host object get to know > what is the host's timezone ? And in any case you generally create HBAC > rules using groups of hosts, what is the UI gonna do ? Crawl all the > hosts in a group and then ? Average add the most common time zone ? 
Search hosts, gather all time zones and list them as choices or simply warn that there are more time zones and Local Time based rule is preferred? :-) > Drop it please :) If there is no one interested in it, we can drop it. Such UX improvement can also be added later, if there is a need. > >> Host's Local Time and UTC time are 2 different approaches how to set the time >> for the HBAC rule. With Local Time type, you would of course not have to deal >> with time zones. I thought this was already cleared out. > > Sorry you confuse me, in which case do you need UTC ? > In case you want to set an absolute time that doesn't change with DST ? I am confused as well. Wasn't it you who expressed the need to have 2 different approaches for HBAC time rules - Local Time and fixed UTC time? Reference: http://www.redhat.com/archives/freeipa-devel/2015-March/msg00158.html From mkosek at redhat.com Thu Mar 26 15:49:27 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 16:49:27 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1427384529.8302.91.camel@willson.usersys.redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <1427383851.8302.85.camel@willson.usersys.redhat.com> <5514281E.1070409@redhat.com> <1427384529.8302.91.camel@willson.usersys.redhat.com> Message-ID: <55142A87.8080303@redhat.com> On 03/26/2015 04:42 PM, Simo Sorce wrote: > On Thu, 2015-03-26 at 16:39 +0100, Martin Kosek wrote: >> On 03/26/2015 04:30 PM, Simo Sorce wrote: >>> On Thu, 2015-03-26 at 16:26 +0100, Jan Cholasta wrote: >>>>>> I think the timezone still may be with the host object but only as >>>> the UI >>>>>> helper as you suggest. Although I would maybe rather not see it >>>> with the object >>>>>> at all and have the admin just set the right timezone for the HBAC >>>> rule >>>>>> themselves. After all, if there's a collision of host helper >>>> timezones, I think >>>>>> admin would have to do that anyway. >>>> >>>> I don't see any point in storing time zone in the host object, if >>>> it's >>>> not used for anything meaningful and has to be manually synchronized >>>> with the host's actual configured time zone. >>> >>> +1 >>> The host *knows* it's local time zone, let's not set us up for sync >>> issues. >>> >>>>> >>>>> Right. But UI could then offer: >>>>> >>>>> Warning, time zone is ambiguous. Please select the right time zone: >>>>> HostA time zone: Europe/Prague [ ] >>>>> HostB time zone: Europe/London [ ] >>>> >>>> No, thanks. The whole point of "Local Time" is being able to use >>>> whatever time zone is configured on each host instead of having to >>>> specify one time zone for all of them, which is exactly what the above >>>> does. 
>>> >>> +1 >>> "Local Time" is a special name the stray out of the Olson database, you >>> can see it as the wildcard '*' or 'ALL' in other rules and it means that >>> the host will use its local time zone with the specified times of day >>> and days of the week >> >> See http://www.redhat.com/archives/freeipa-devel/2015-March/msg00447.html. >> >> I agree with you both if we are talking about Local Time rules. I was mostly >> talking about UTC rules where the time zone is required to set the right UTC time. > > Sorry, but if I understand what you are suggesting then I do not agree. > Either you use UTC based timezones *or* you use an Olson time zone. You > do *not* try to convert something like Europe/Prague to UTC as you would > change the meaning of the rule. Ah, I think where the confusion is coming from. When I said UTC, I rather meant time + Olson TZ, i.e. time rule that is the same across globe, unlike the Local Time. Sorry. I think this guy (http://www.redhat.com/archives/freeipa-devel/2015-March/msg00158.html) injected the "UTC" as an alias for this method :-) From jcholast at redhat.com Thu Mar 26 15:57:35 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Thu, 26 Mar 2015 16:57:35 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551429FA.1020906@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> <1427384372.8302.89.camel@willson.usersys.redhat.com> <551429FA.1020906@redhat.com> Message-ID: <55142C6F.7080600@redhat.com> Dne 26.3.2015 v 16:47 Martin Kosek napsal(a): > On 03/26/2015 04:39 PM, Simo Sorce wrote: >> On Thu, 2015-03-26 at 16:35 +0100, Martin Kosek wrote: >>> On 03/26/2015 04:26 PM, Jan Cholasta wrote: >> >> [...] >>>> I don't see any point in storing time zone in the host object, if it's not used >>>> for anything meaningful and has to be manually synchronized with the host's >>>> actual configured time zone. >>> >>> It would be mostly used for aiding the HBAC rule creation process, i.e. for the >>> UX. It would be optional. If you do not fill it, you would have to always >>> select the right time zone in when setting the UTC HBAC time, >>> >>> If you fill the zone, UI could already select the right time zone for you. >> >> >> It will only help to do mistakes, how does the host object get to know >> what is the host's timezone ? And in any case you generally create HBAC >> rules using groups of hosts, what is the UI gonna do ? Crawl all the >> hosts in a group and then ? Average add the most common time zone ? > > Search hosts, gather all time zones and list them as choices or simply warn > that there are more time zones and Local Time based rule is preferred? :-) > >> Drop it please :) > > If there is no one interested in it, we can drop it. Such UX improvement can > also be added later, if there is a need. 
If we want to improve the UX by babysitting the administrator based on random guesses, we might as well add Clippy to IPA: __ / \ _____________ | | / \ @ @ | It looks | || || | like you | || || <--| are setting | |\_/| | time zone | \___/ \_____________/ ;-) -- Jan Cholasta From mkosek at redhat.com Thu Mar 26 16:03:33 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 17:03:33 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <55142C6F.7080600@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> <1427384372.8302.89.camel@willson.usersys.redhat.com> <551429FA.1020906@redhat.com> <55142C6F.7080600@redhat.com> Message-ID: <55142DD5.20201@redhat.com> On 03/26/2015 04:57 PM, Jan Cholasta wrote: > Dne 26.3.2015 v 16:47 Martin Kosek napsal(a): >> On 03/26/2015 04:39 PM, Simo Sorce wrote: >>> On Thu, 2015-03-26 at 16:35 +0100, Martin Kosek wrote: >>>> On 03/26/2015 04:26 PM, Jan Cholasta wrote: >>> >>> [...] >>>>> I don't see any point in storing time zone in the host object, if it's not >>>>> used >>>>> for anything meaningful and has to be manually synchronized with the host's >>>>> actual configured time zone. >>>> >>>> It would be mostly used for aiding the HBAC rule creation process, i.e. for >>>> the >>>> UX. It would be optional. If you do not fill it, you would have to always >>>> select the right time zone in when setting the UTC HBAC time, >>>> >>>> If you fill the zone, UI could already select the right time zone for you. >>> >>> >>> It will only help to do mistakes, how does the host object get to know >>> what is the host's timezone ? And in any case you generally create HBAC >>> rules using groups of hosts, what is the UI gonna do ? Crawl all the >>> hosts in a group and then ? Average add the most common time zone ? >> >> Search hosts, gather all time zones and list them as choices or simply warn >> that there are more time zones and Local Time based rule is preferred? :-) >> >>> Drop it please :) >> >> If there is no one interested in it, we can drop it. Such UX improvement can >> also be added later, if there is a need. > > If we want to improve the UX by babysitting the administrator based on random > guesses, we might as well add Clippy to IPA: > > __ > / \ _____________ > | | / \ > @ @ | It looks | > || || | like you | > || || <--| are setting | > |\_/| | time zone | > \___/ \_____________/ > > > ;-) > :-D I see your point. Just note that what seems as neeedless babysitting from your (or other) POV may be a very useful UX for the real world user. But in this case we can wait until we hear from that real world user that he struggles with the potential time based rules UI. 
From ssorce at redhat.com Thu Mar 26 16:06:03 2015 From: ssorce at redhat.com (Simo Sorce) Date: Thu, 26 Mar 2015 12:06:03 -0400 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <551429FA.1020906@redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> <1427384372.8302.89.camel@willson.usersys.redhat.com> <551429FA.1020906@redhat.com> Message-ID: <1427385963.8302.95.camel@willson.usersys.redhat.com> On Thu, 2015-03-26 at 16:47 +0100, Martin Kosek wrote: > On 03/26/2015 04:39 PM, Simo Sorce wrote: > > On Thu, 2015-03-26 at 16:35 +0100, Martin Kosek wrote: > >> On 03/26/2015 04:26 PM, Jan Cholasta wrote: > > > > [...] > >>> I don't see any point in storing time zone in the host object, if it's not used > >>> for anything meaningful and has to be manually synchronized with the host's > >>> actual configured time zone. > >> > >> It would be mostly used for aiding the HBAC rule creation process, i.e. for the > >> UX. It would be optional. If you do not fill it, you would have to always > >> select the right time zone in when setting the UTC HBAC time, > >> > >> If you fill the zone, UI could already select the right time zone for you. > > > > > > It will only help to do mistakes, how does the host object get to know > > what is the host's timezone ? And in any case you generally create HBAC > > rules using groups of hosts, what is the UI gonna do ? Crawl all the > > hosts in a group and then ? Average add the most common time zone ? > > Search hosts, gather all time zones and list them as choices or simply warn > that there are more time zones and Local Time based rule is preferred? :-) > > > Drop it please :) > > If there is no one interested in it, we can drop it. Such UX improvement can > also be added later, if there is a need. > > > > >> Host's Local Time and UTC time are 2 different approaches how to set the time > >> for the HBAC rule. With Local Time type, you would of course not have to deal > >> with time zones. I thought this was already cleared out. > > > > Sorry you confuse me, in which case do you need UTC ? > > In case you want to set an absolute time that doesn't change with DST ? > > I am confused as well. Wasn't it you who expressed the need to have 2 different > approaches for HBAC time rules - Local Time and fixed UTC time? Not really, Olson is correct. > Reference: > http://www.redhat.com/archives/freeipa-devel/2015-March/msg00158.html I see how the language I sued may be confusing. But I was pointing out only that you can't just do one or the other you have to support all these cases, I wasn't advocating using UTC as the "timezoned" option. If I should choose I would support all three flavors: - the special "Local Time" string - the Olson database (Europe/Rome) - absolute UTC offsets (UTC+4) However I would not publicize the latter much in the UI, as it is rarely what the admin really should do. Simo. 
From mkosek at redhat.com Thu Mar 26 16:10:40 2015 From: mkosek at redhat.com (Martin Kosek) Date: Thu, 26 Mar 2015 17:10:40 +0100 Subject: [Freeipa-devel] Time-based account policies In-Reply-To: <1427385963.8302.95.camel@willson.usersys.redhat.com> References: <550FD89B.8050506@redhat.com> <551066CB.6040904@seznam.cz> <55110149.40303@redhat.com> <55110D49.6050705@redhat.com> <20150324072007.GC2989@hendrix.redhat.com> <551114EB.9040508@redhat.com> <5511180D.8050103@redhat.com> <55119A04.4060301@seznam.cz> <551261FF.4020806@redhat.com> <5512976D.6070700@seznam.cz> <20150325113454.GY3878@redhat.com> <5512EF80.8070709@seznam.cz> <5513DBC1.5060405@redhat.com> <5513F6CB.4060004@seznam.cz> <5513FA95.1090709@redhat.com> <55140C5A.2010909@seznam.cz> <55140FCD.5020902@redhat.com> <55142518.7020708@redhat.com> <55142732.7010604@redhat.com> <1427384372.8302.89.camel@willson.usersys.redhat.com> <551429FA.1020906@redhat.com> <1427385963.8302.95.camel@willson.usersys.redhat.com> Message-ID: <55142F80.2060803@redhat.com> On 03/26/2015 05:06 PM, Simo Sorce wrote: > On Thu, 2015-03-26 at 16:47 +0100, Martin Kosek wrote: ... >> Reference: >> http://www.redhat.com/archives/freeipa-devel/2015-March/msg00158.html > > I see how the language I sued may be confusing. But I was pointing out > only that you can't just do one or the other you have to support all > these cases, I wasn't advocating using UTC as the "timezoned" option. > > If I should choose I would support all three flavors: > - the special "Local Time" string > - the Olson database (Europe/Rome) > - absolute UTC offsets (UTC+4) Yup, makes sense. > However I would not publicize the latter much in the UI, as it is rarely > what the admin really should do. From pvoborni at redhat.com Thu Mar 26 17:14:34 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 26 Mar 2015 18:14:34 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 Message-ID: <55143E7A.7050907@redhat.com> The FreeIPA team would like to announce FreeIPA v4.1.4 security release! It can be downloaded from http://www.freeipa.org/page/Downloads. The builds will be available for Fedora 21. Builds for Fedora 20 are available in the official COPR repository . == Highlights in 4.1.4 == === Security fixes === * 'CVE-2015-1827' It was discovered that the IPA extdom Directory Server plug-in did not correctly perform memory reallocation when handling user account information. A request for a list of groups for a user that belongs to a large number of groups would cause a Directory Server to crash. * 'CVE-2015-0283' Additionally, FreeIPA 4.1.4 requires use of slapi-nis 0.54.2 which includes number of fixes for the CVE-2015-0283: It was discovered that the slapi-nis Directory Server plug-in did not correctly perform memory reallocation when handling user account information. A request for information about a group with many members, or a request for a user that belongs to a large number of groups, would cause a Directory Server to enter an infinite loop and consume an excessive amount of CPU time. These issues were discovered by Sumit Bose of Red Hat. 
=== Enhancements ===

* Various documentation improvements by Gabe Alford

=== Bug fixes ===

* Various fixes to DNSSEC support and overall DNS deployment scripts
* Improvements in handling CA certificates from previous deployments when installing FreeIPA clients
* Licensing of FreeIPA is clarified with regards to OpenSSL integration
* More robust configuration of slapi-nis plugin to prevent potential dead-locks with other operations requiring lower-level database access.

== Upgrading ==

Upgrade instructions are available on upgrade page .

== Feedback ==

Please provide comments, bugs and other feedback via the freeipa-users mailing list or #freeipa channel on Freenode.

== Detailed Changelog since 4.1.3 ==

=== Alexander Bokovoy (2) ===
* fix Makefile.am for daemons
* slapi-nis: require 0.54.2 for CVE-2015-0283 fixes

=== David Kupka (2) ===
* Use IPA CA certificate when available and ignore NO_TLS_LDAP when not.
* Restore default.conf and use it to build API.

=== Gabe Alford (3) ===
* ipa-replica-prepare should document ipv6 options
* ipatests: Add tests for valid and invalid ipa-advise
* ipa-replica-prepare can only be created on the first master

=== Jan Cholasta (4) ===
* certstore: Make certificate retrieval more robust
* client-install: Do not crash on invalid CA certificate in LDAP
* client: Fix ca_is_enabled calls
* upload_cacrt: Fix empty cACertificate in cn=CAcert

=== Martin Babinsky (3) ===
* ipa-dns-install: use STARTTLS to connect to DS
* migrate-ds: print out failed attempts when no users/groups are migrated
* show the exception message thrown by dogtag._parse_ca_status during install

=== Martin Bašti (7) ===
* DNSSEC add support for CKM_RSA_PKCS_OAEP mechanism
* Fix memory leaks in ipap11helper
* Remove unused method from ipap11pkcs helper module
* DNS fix: do not traceback if unsupported records are in LDAP
* DNS fix: do not show part options for unsupported records
* DNS: remove NSEC3PARAM from records
* Fix dead code in ipap11helper module

=== Martin Kosek (1) ===
* Remove references to GPL v2.0 license

=== Nathan Kinder (1) ===
* Timeout when performing time sync during client install

=== Petr Voborník (2) ===
* ipatests: add missing ssh object classes to idoverrideuser
* Become IPA 4.1.4

=== Petr Špaček (3) ===
* p11helper: standardize indentation and other visual aspects of the code
* p11helper: use sizeof() instead of magic constants
* p11helper: clarify error message

=== Simo Sorce (2) ===
* Add a clear OpenSSL exception.
* Stop including the DES algorithm from openssl.

=== Sumit Bose (7) ===
* ipa-range-check: do not treat missing objects as error
* Add configure check for cwrap libraries
* extdom: handle ERANGE return code for getXXYYY_r() calls
* extdom: make nss buffer configurable
* extdom: return LDAP_NO_SUCH_OBJECT to the client
* extdom: fix memory leak
* extdom: fix wrong realloc size

=== Tomáš
Babej (3) === * ipatests: Add coverage for adding and removing sshpubkeys in ID overrides * ipalib: Make sure correct attribute name is referenced for fax * idviews: Use case-insensitive detection of Default Trust View === Thierry Bordaz (1) === * Limit deadlocks between DS plugin DNA and slapi-nis -- Petr Vobornik From pvoborni at redhat.com Thu Mar 26 17:32:15 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Thu, 26 Mar 2015 18:32:15 +0100 Subject: [Freeipa-devel] [PATCHES 0001-0002] ipa-client-install NTP fixes In-Reply-To: <5506EF9A.3050806@redhat.com> References: <54EE84B1.4040206@redhat.com> <54EEDF8A.2090204@redhat.com> <54F0CEF7.8090609@redhat.com> <54F0D191.9060908@redhat.com> <54F0D860.6020501@redhat.com> <54F0DCC3.5080405@redhat.com> <54F0DF13.2030808@redhat.com> <54F22B7E.30707@redhat.com> <54F22E09.6030707@redhat.com> <54F22F8A.6030505@redhat.com> <54F24908.2080308@redhat.com> <54F751CE.1000202@redhat.com> <54F75556.6020803@redhat.com> <54F755EA.1000100@redhat.com> <54F75C29.4010105@redhat.com> <5501FA67.3020504@redhat.com> <5502AA40.2090108@redhat.com> <5506D2E2.8020109@redhat.com> <5506EF9A.3050806@redhat.com> Message-ID: <5514429F.6050900@redhat.com> On 03/16/2015 03:58 PM, Martin Kosek wrote: > On 03/16/2015 01:56 PM, Martin Babinsky wrote: >> >> I have tested the patches on F21 client and they work as expected. >> > > I take that as an ACK. Before pushing the change, I just changed one print > format from "%s" to "%d" given a number was printed. > > Pushed to: > master: a58b77ca9cd3620201306258dd6bd05ea1c73c73 > ipa-4-1: 80aeb445e2034776f08668bf04dfd711af477b25 > > Petr1, it would be nice to get this one built on F21+, to unblock Ipsilon project. > I've noticed that patch 0001 was not pushed. Pushed to: master: * f0c1daf7a2a8c88f6d84d81d66c7e39f571e0894 Skip time sync during client install when using --no-ntp ipa-4-1: * b5969c1d1ae6eb1e392e0420fcbf094ae7b34102 Skip time sync during client install when using --no-ntp Note: it's part of Fedora builds as a separate patch (that's why I noticed) -- Petr Vobornik From lslebodn at redhat.com Thu Mar 26 18:40:16 2015 From: lslebodn at redhat.com (Lukas Slebodnik) Date: Thu, 26 Mar 2015 19:40:16 +0100 Subject: [Freeipa-devel] [PATCH] extop: For printf formatting warning In-Reply-To: <20150318113357.GL4854@hendrix.redhat.com> References: <20150318102514.GK4854@hendrix.redhat.com> <20150318103915.GI9952@p.redhat.com> <20150318113357.GL4854@hendrix.redhat.com> Message-ID: <20150326184016.GC30590@mail.corp.redhat.com> On (18/03/15 12:33), Jakub Hrozek wrote: >On Wed, Mar 18, 2015 at 11:39:15AM +0100, Sumit Bose wrote: >> On Wed, Mar 18, 2015 at 11:25:14AM +0100, Jakub Hrozek wrote: >> > I could swear I sent the patch last time when I was reviewing Sumit's >> > patches but apparently not. >> > >> > It's better to use %zu instead of %d for size_t formatting with recent >> > compilers. 
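For reference, the usual portable spellings of that suggestion are shown below; whether a given length modifier actually works inside the plugin's LOG macro depends on the formatting routine behind it, which is what the rest of this thread is about:

/* Standard C99 options for printing a size_t; which of these the
 * plugin's LOG macro accepts depends on the underlying formatter
 * (see the follow-ups below about the NSPR-based implementation). */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    size_t max_nss_buf_size = 1048576;  /* example value only */

    printf("%zu\n", max_nss_buf_size);                  /* C99 "z" length modifier */
    printf("%ju\n", (uintmax_t) max_nss_buf_size);      /* cast plus "j" modifier */
    printf("%lu\n", (unsigned long) max_nss_buf_size);  /* widely supported fallback */
    return 0;
}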
>> >> > >From a088e8c8a9bd29b4c22f1579f2c3705652bf2730 Mon Sep 17 00:00:00 2001 >> > From: Jakub Hrozek >> > Date: Wed, 18 Mar 2015 11:20:38 +0100 >> > Subject: [PATCH] extop: For printf formatting warning >> > >> > --- >> > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 2 +- >> > 1 file changed, 1 insertion(+), 1 deletion(-) >> > >> > diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> > index 708d0e4a2fc9da4f87a24a49c945587049f7280f..bc25e7643cdebe0eadc0cee4dcba3a392fdc33be 100644 >> > --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> > +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c >> > @@ -200,7 +200,7 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) >> > if (ctx->max_nss_buf_size == 0) { >> > ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; >> > } >> > - LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); >> > + LOG("Maximal nss buffer size set to [%zu]!\n", ctx->max_nss_buf_size); >> >> I tried this some time ago and found the here not the glibc printf >> version is used but I guess some NSPR implementation which does not >> support the z specifier. So I would assum that this is not working as >> expected. Have you tried to trigger the error message or called LOG >> unconditionally with '%zu' ? > >No, I only tried compiling the code. I haven't expected non-standard >printf to be used. sorry. > >Then what about casting max_nss_buf_size to something large that the NSPR >implementation can handle (unsigned long?) > You can use th modifier "j" and cast to uintmax_t or intmax_t man 3 printf says: j A following integer conversion corresponds to an intmax_t or uintmax_t argument, or a following n conversion corresponds to a pointer to an intmax_t argument. LS From jpazdziora at redhat.com Fri Mar 27 07:47:06 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 08:47:06 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <55143E7A.7050907@redhat.com> References: <55143E7A.7050907@redhat.com> Message-ID: <20150327074706.GJ4696@redhat.com> On Thu, Mar 26, 2015 at 06:14:34PM +0100, Petr Vobornik wrote: > The FreeIPA team would like to announce FreeIPA v4.1.4 security release! > > It can be downloaded from http://www.freeipa.org/page/Downloads. The builds > will be available for Fedora 21. Builds for Fedora 20 are available in the > official COPR repository > . The description at that page still only mentions 4.1.3, it should likely be updated. Does it make sense to start making upstream copr repos for Fedora 21 and possibly Fedora 22 as well? The https://admin.fedoraproject.org/updates/freeipa-4.1.4-1.fc21 is in testing and it will be a while before it gets to Fedora proper, copr repo would give us a stable (no fiddling with updates-testing enablement) yum source. 
-- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From abokovoy at redhat.com Fri Mar 27 07:50:56 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 27 Mar 2015 09:50:56 +0200 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <20150327074706.GJ4696@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> Message-ID: <20150327075056.GE3878@redhat.com> On Fri, 27 Mar 2015, Jan Pazdziora wrote: >On Thu, Mar 26, 2015 at 06:14:34PM +0100, Petr Vobornik wrote: >> The FreeIPA team would like to announce FreeIPA v4.1.4 security release! >> >> It can be downloaded from http://www.freeipa.org/page/Downloads. The builds >> will be available for Fedora 21. Builds for Fedora 20 are available in the >> official COPR repository >> . > >The description at that page still only mentions 4.1.3, it should >likely be updated. > >Does it make sense to start making upstream copr repos for Fedora 21 >and possibly Fedora 22 as well? The > > https://admin.fedoraproject.org/updates/freeipa-4.1.4-1.fc21 > >is in testing and it will be a while before it gets to Fedora proper, >copr repo would give us a stable (no fiddling with updates-testing >enablement) yum source. No, it is not making sense to duplicate Fedora repositories. If you want to get packages heading to stable repository faster, do the testing and apply karma. We can get to stable as soon as karma is reached. -- / Alexander Bokovoy From jpazdziora at redhat.com Fri Mar 27 07:58:57 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 08:58:57 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <20150327075056.GE3878@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> Message-ID: <20150327075857.GL4696@redhat.com> On Fri, Mar 27, 2015 at 09:50:56AM +0200, Alexander Bokovoy wrote: > >is in testing and it will be a while before it gets to Fedora proper, > >copr repo would give us a stable (no fiddling with updates-testing > >enablement) yum source. > > No, it is not making sense to duplicate Fedora repositories. If you want > to get packages heading to stable repository faster, do the testing and > apply karma. We can get to stable as soon as karma is reached. But the goal here is not to get to stable faster, quite the contrary. The goal is to be able to take your time with testing in all sorts of scenarios before the bits hit stable Fedora. -- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From jpazdziora at redhat.com Fri Mar 27 08:15:29 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 09:15:29 +0100 Subject: [Freeipa-devel] FreeIPA 4.1.4 upstream repo for RHEL 7 is broken Message-ID: <20150327081529.GA21450@redhat.com> On Thu, Mar 26, 2015 at 06:14:34PM +0100, Petr Vobornik wrote: > The FreeIPA team would like to announce FreeIPA v4.1.4 security release! > > It can be downloaded from http://www.freeipa.org/page/Downloads. The builds > will be available for Fedora 21. Builds for Fedora 20 are available in the > official COPR repository > . I've noticed that the RHEL/EPEL 7 upstream repo was updated as well. However, that repo is currently broken when used on RHEL 7.1: # curl -o /etc/yum.repos.d/mkosek-freeipa-epel-7.repo https://copr.fedoraproject.org/coprs/mkosek/freeipa/repo/epel-7/mkosek-freeipa-epel-7.repo [...] # yum install -y freeipa-server [...] 
--> Finished Dependency Resolution
Error: Package: freeipa-server-4.1.4-1.el7.centos.x86_64 (mkosek-freeipa)
           Requires: slapi-nis >= 0.54.2-1
           Available: slapi-nis-0.52-4.el7.x86_64 (rhel-7-server-rpms)
               slapi-nis = 0.52-4.el7
           Available: slapi-nis-0.54-2.el7.x86_64 (rhel-7-server-rpms)
               slapi-nis = 0.54-2.el7
           Available: slapi-nis-0.54-3.el7_1.x86_64 (rhel-7-server-rpms)
               slapi-nis = 0.54-3.el7_1
           Available: slapi-nis-0.54.1-1.el7.centos.x86_64 (mkosek-freeipa)
               slapi-nis = 0.54.1-1.el7.centos
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
-- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From abokovoy at redhat.com Fri Mar 27 08:48:20 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 27 Mar 2015 10:48:20 +0200 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <20150327075857.GL4696@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> <20150327075857.GL4696@redhat.com> Message-ID: <20150327084820.GG3878@redhat.com> On Fri, 27 Mar 2015, Jan Pazdziora wrote: >On Fri, Mar 27, 2015 at 09:50:56AM +0200, Alexander Bokovoy wrote: >> >is in testing and it will be a while before it gets to Fedora proper, >> >copr repo would give us a stable (no fiddling with updates-testing >> >enablement) yum source. >> >> No, it is not making sense to duplicate Fedora repositories. If you want >> to get packages heading to stable repository faster, do the testing and >> apply karma. We can get to stable as soon as karma is reached. >But the goal here is not to get to stable faster, quite the contrary. >The goal is to be able to take your time with testing in all sorts of >scenarios before the bits hit stable Fedora. If you want *that*, use your own COPR. I don't have time to maintain a dozen variants of the repositories for the same distribution releases just for the sake of a false feeling of stability. You need to test during development, we have upstream QE testing effort aiming at this as well. For Fedora there is updates-testing repository that fits the goal of testing before applying to the actual deployment if you have any (I do have my infrastructure deployed on Fedora). -- / Alexander Bokovoy From jpazdziora at redhat.com Fri Mar 27 09:22:50 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 10:22:50 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <20150327084820.GG3878@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> <20150327075857.GL4696@redhat.com> <20150327084820.GG3878@redhat.com> Message-ID: <20150327092250.GM4696@redhat.com> On Fri, Mar 27, 2015 at 10:48:20AM +0200, Alexander Bokovoy wrote: > > For Fedora there is updates-testing repository that fits the goal of > testing before applying to the actual deployment if you have any (I do The problem is, the 4.1.4 bits are not even in updates-testing yet: http://dl.fedoraproject.org/pub/fedora/linux/updates/testing/21/x86_64/f/ So we did not really provide the release on Fedora 21 to the community.
-- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From pvoborni at redhat.com Fri Mar 27 09:31:54 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 27 Mar 2015 10:31:54 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <20150327092250.GM4696@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> <20150327075857.GL4696@redhat.com> <20150327084820.GG3878@redhat.com> <20150327092250.GM4696@redhat.com> Message-ID: <5515238A.6070300@redhat.com> On 03/27/2015 10:22 AM, Jan Pazdziora wrote: > On Fri, Mar 27, 2015 at 10:48:20AM +0200, Alexander Bokovoy wrote: >> >> For Fedora there is updates-testing repository that fits the goal >> of testing before applying to the actual deployment if you have any >> (I do > > The problem is, the 4.1.4 bits are not even in updates-testing yet: > > http://dl.fedoraproject.org/pub/fedora/linux/updates/testing/21/x86_64/f/ > > So we did not really provide the release on Fedora 21 to the > community. > There is always a question whether an upstream release announcement should wait for downstream release. I could see arguments for both answers. But we clearly stated: > It can be downloaded from http://www.freeipa.org/page/Downloads. The > builds will be available for Fedora 21. Builds for Fedora 20 are > available in the official COPR repository > . 1) "It can be downloaded from http://www.freeipa.org/page/Downloads" True, the tarball is there 2) "The builds will be available for Fedora 21." True (they are not yet, but WILL BE) 3) "Builds for Fedora 20 are available in the official COPR repository" True, although there was the issues with missing slapi-nis build which should be resolved now -- Petr Vobornik From pvoborni at redhat.com Fri Mar 27 09:45:26 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Fri, 27 Mar 2015 10:45:26 +0100 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <5515238A.6070300@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> <20150327075857.GL4696@redhat.com> <20150327084820.GG3878@redhat.com> <20150327092250.GM4696@redhat.com> <5515238A.6070300@redhat.com> Message-ID: <551526B6.5070304@redhat.com> On 03/27/2015 10:31 AM, Petr Vobornik wrote: > On 03/27/2015 10:22 AM, Jan Pazdziora wrote: >> On Fri, Mar 27, 2015 at 10:48:20AM +0200, Alexander Bokovoy wrote: >>> >>> For Fedora there is updates-testing repository that fits the goal >>> of testing before applying to the actual deployment if you have any >>> (I do >> >> The problem is, the 4.1.4 bits are not even in updates-testing yet: >> >> http://dl.fedoraproject.org/pub/fedora/linux/updates/testing/21/x86_64/f/ >> >> So we did not really provide the release on Fedora 21 to the >> community. >> > > There is always a question whether an upstream release announcement > should wait for downstream release. I could see arguments for both answers. > > But we clearly stated: >> It can be downloaded from http://www.freeipa.org/page/Downloads. The >> builds will be available for Fedora 21. Builds for Fedora 20 are >> available in the official COPR repository >> . > > 1) "It can be downloaded from http://www.freeipa.org/page/Downloads" > True, the tarball is there > 2) "The builds will be available for Fedora 21." > True (they are not yet, but WILL BE) And if one doesn't want to wait for Fedora process, he can always download them from koji. 
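For reference, the koji route is a one-liner. This is only a sketch (the NVR is the one from the Bodhi update mentioned earlier in the thread; the --arch filters are optional):

    # pull the Fedora 21 build directly from koji instead of waiting
    # for the bodhi push, then install the downloaded rpms locally
    koji download-build --arch=x86_64 --arch=noarch freeipa-4.1.4-1.fc21
    yum localinstall ./freeipa-*.rpm
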
The announcement is usually send when the builds are ready. Sometimes I wait for the push, but in general it doesn't change anything. We can't speed up the process and doing new repos just for impatient people IMO isn't worth the time. > 3) "Builds for Fedora 20 are available in the official COPR repository" > True, although there was the issues with missing slapi-nis build > which should be resolved now -- Petr Vobornik From abokovoy at redhat.com Fri Mar 27 09:45:25 2015 From: abokovoy at redhat.com (Alexander Bokovoy) Date: Fri, 27 Mar 2015 11:45:25 +0200 Subject: [Freeipa-devel] Announcing FreeIPA 4.1.4 In-Reply-To: <5515238A.6070300@redhat.com> References: <55143E7A.7050907@redhat.com> <20150327074706.GJ4696@redhat.com> <20150327075056.GE3878@redhat.com> <20150327075857.GL4696@redhat.com> <20150327084820.GG3878@redhat.com> <20150327092250.GM4696@redhat.com> <5515238A.6070300@redhat.com> Message-ID: <20150327094525.GH3878@redhat.com> On Fri, 27 Mar 2015, Petr Vobornik wrote: >On 03/27/2015 10:22 AM, Jan Pazdziora wrote: >>On Fri, Mar 27, 2015 at 10:48:20AM +0200, Alexander Bokovoy wrote: >>> >>>For Fedora there is updates-testing repository that fits the goal >>>of testing before applying to the actual deployment if you have any >>>(I do >> >>The problem is, the 4.1.4 bits are not even in updates-testing yet: >> >>http://dl.fedoraproject.org/pub/fedora/linux/updates/testing/21/x86_64/f/ >> >> So we did not really provide the release on Fedora 21 to the >>community. >> > >There is always a question whether an upstream release announcement >should wait for downstream release. I could see arguments for both >answers. > >But we clearly stated: >>It can be downloaded from http://www.freeipa.org/page/Downloads. The >>builds will be available for Fedora 21. Builds for Fedora 20 are >>available in the official COPR repository >>. > >1) "It can be downloaded from http://www.freeipa.org/page/Downloads" > True, the tarball is there >2) "The builds will be available for Fedora 21." > True (they are not yet, but WILL BE) >3) "Builds for Fedora 20 are available in the official COPR repository" > True, although there was the issues with missing slapi-nis build >which should be resolved now Yep. And Fedora infrastructure was under heavy hammer last week or so which caused overall delay in processing updates for all repositories. This is not something that happens every time so I think Jan's complaints are far from being fair. We considered yesterday with Petr to wait until packages will be pushed but decided to give a go due to CVE. Packages are built and available, an upgrade on Fedora can be done with the help of koji tool if need to fix your deployment is high. If we would have kept announcements until Fedora have solved their issues, we would do unfair service to our users. Using COPR as a solution here is wrong, though. I know, I had one user who took my OTP COPR test repo bits and used them in production for about a year even long after packages were pushed out to Fedora proper. 
-- / Alexander Bokovoy From sbose at redhat.com Fri Mar 27 10:31:02 2015 From: sbose at redhat.com (Sumit Bose) Date: Fri, 27 Mar 2015 11:31:02 +0100 Subject: [Freeipa-devel] [PATCH] extop: For printf formatting warning In-Reply-To: <20150326184016.GC30590@mail.corp.redhat.com> References: <20150318102514.GK4854@hendrix.redhat.com> <20150318103915.GI9952@p.redhat.com> <20150318113357.GL4854@hendrix.redhat.com> <20150326184016.GC30590@mail.corp.redhat.com> Message-ID: <20150327103102.GE7785@p.redhat.com> On Thu, Mar 26, 2015 at 07:40:16PM +0100, Lukas Slebodnik wrote: > On (18/03/15 12:33), Jakub Hrozek wrote: > >On Wed, Mar 18, 2015 at 11:39:15AM +0100, Sumit Bose wrote: > >> On Wed, Mar 18, 2015 at 11:25:14AM +0100, Jakub Hrozek wrote: > >> > I could swear I sent the patch last time when I was reviewing Sumit's > >> > patches but apparently not. > >> > > >> > It's better to use %zu instead of %d for size_t formatting with recent > >> > compilers. > >> > >> > >From a088e8c8a9bd29b4c22f1579f2c3705652bf2730 Mon Sep 17 00:00:00 2001 > >> > From: Jakub Hrozek > >> > Date: Wed, 18 Mar 2015 11:20:38 +0100 > >> > Subject: [PATCH] extop: For printf formatting warning > >> > > >> > --- > >> > daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c | 2 +- > >> > 1 file changed, 1 insertion(+), 1 deletion(-) > >> > > >> > diff --git a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >> > index 708d0e4a2fc9da4f87a24a49c945587049f7280f..bc25e7643cdebe0eadc0cee4dcba3a392fdc33be 100644 > >> > --- a/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >> > +++ b/daemons/ipa-slapi-plugins/ipa-extdom-extop/ipa_extdom_extop.c > >> > @@ -200,7 +200,7 @@ static int ipa_extdom_init_ctx(Slapi_PBlock *pb, struct ipa_extdom_ctx **_ctx) > >> > if (ctx->max_nss_buf_size == 0) { > >> > ctx->max_nss_buf_size = DEFAULT_MAX_NSS_BUFFER; > >> > } > >> > - LOG("Maximal nss buffer size set to [%d]!\n", ctx->max_nss_buf_size); > >> > + LOG("Maximal nss buffer size set to [%zu]!\n", ctx->max_nss_buf_size); > >> > >> I tried this some time ago and found the here not the glibc printf > >> version is used but I guess some NSPR implementation which does not > >> support the z specifier. So I would assum that this is not working as > >> expected. Have you tried to trigger the error message or called LOG > >> unconditionally with '%zu' ? > > > >No, I only tried compiling the code. I haven't expected non-standard > >printf to be used. sorry. > > > >Then what about casting max_nss_buf_size to something large that the NSPR > >implementation can handle (unsigned long?) > > > You can use th modifier "j" and cast to uintmax_t or intmax_t > > man 3 printf says: > j A following integer conversion corresponds to an intmax_t or > uintmax_t argument, or a following n conversion corresponds to a > pointer to an intmax_t argument. looks like NSPR only knows about 'h', 'l' and 'll', see http://www-archive.mozilla.org/projects/nspr/reference/html/prprf.html#23299 for details. 
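So if the message really goes through NSPR-style formatting, the explicit cast Jakub mentioned is probably the safest variant. Just as a sketch, reusing the LOG call from the quoted patch (not a tested change):

    /* NSPR-style formatting has no 'z'/'j' length modifiers, but 'l'
     * is supported, so cast the size_t value and keep the conversion
     * specifier in sync with the cast. */
    LOG("Maximal nss buffer size set to [%lu]!\n",
        (unsigned long) ctx->max_nss_buf_size);
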
bye, Sumit > > LS > > -- > Manage your subscription for the Freeipa-devel mailing list: > https://www.redhat.com/mailman/listinfo/freeipa-devel > Contribute to FreeIPA: http://www.freeipa.org/page/Contribute/Code From tbordaz at redhat.com Fri Mar 27 10:32:40 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Fri, 27 Mar 2015 11:32:40 +0100 Subject: [Freeipa-devel] User life cycle: changes in user plugin commands In-Reply-To: <551520D4.9090604@redhat.com> References: <5513E33A.4040301@redhat.com> <5513F548.7060103@redhat.com> <5514036E.5020900@redhat.com> <55142407.8000805@redhat.com> <551450E1.6080508@redhat.com> <551463C5.1030802@redhat.com> <55146AE3.9030800@redhat.com> <55147AE9.1040402@redhat.com> <551520D4.9090604@redhat.com> Message-ID: <551531C8.80907@redhat.com> On 03/27/2015 10:20 AM, Martin Kosek wrote: > On 03/26/2015 10:32 PM, Dmitri Pal wrote: >> On 03/26/2015 04:24 PM, thierry bordaz wrote: >>> On 03/26/2015 08:53 PM, Dmitri Pal wrote: >>>> On 03/26/2015 02:33 PM, thierry bordaz wrote: >>>>> On 03/26/2015 04:21 PM, Martin Kosek wrote: >>>>>> First, I think *this* thread should better be on freeipa-devel since it is >>>>>> only >>>>>> upstream feature specific, no planning inside. >>>>>> >>>>>> On 03/26/2015 02:02 PM, thierry bordaz wrote: >>>>>>> On 03/26/2015 01:02 PM, David Kupka wrote: >>>>>>>> Hi Thierry! >>>>>>>> >>>>>>>> On 03/26/2015 11:45 AM, thierry bordaz wrote: >>>>>>>>> Hello, >>>>>>>>> >>>>>>>>> In user life cycle, when a stage entry is activated it is moved from >>>>>>>>> a stage container to an active container. >>>>>>>>> Then when an active entry is deleted it is moved to a delete >>>>>>>>> container. >>>>>>>>> >>>>>>>>> The move stage->active is done by creating a new entry (ADD active, >>>>>>>>> DEL stage). >>>>>>>>> The move active->delete can be done with a MODRDN of the entry or >>>>>>>>> also ADD delete_entry + DEL active_entry. >>>>>>>>> I was wondering what is the best approach: MODRDN vs ADD-DEL. >>>>>>>> Why did we choose ADD-DEL over MODRDN in stage->active procedure? Could we >>>>>>>> use the same reasoning to repeat the choice? >>>>>>> ADD-DEL was preferred (for activate) mainly because there are provisioning >>>>>>> systems. So the stage entry can contain invalid values or missing some >>>>>>> attributes/values. We need to rebuild a very clean entry, picking some >>>>>>> values >>>>>>> from the stage entry, if the values are valid. >>>>>> The original proposal was MODRDN to also allow us control the operation with >>>>>> the MODRDN ACIs you added to DS. You cannot really control DEL and ADD >>>>>> operation together, so you would have to allow the person who activates the >>>>>> entries to delete any staged user and add a new active user. >>>>> I agree, MODRDN was the original proposal but finally ADD-DEL was choosen >>>>> because entries added by provisioning system should be validated (in >>>>> particular structural objectclass >>>>> https://www.redhat.com/archives/freeipa-devel/2014-May/msg00471.html). >>>>> >>>>> That means that the helpdesk person that has rights to ADD on active >>>>> container need DEL rights on staging. >>>>> In fact MODRDN ACI brings an additional control, where the helpdesk person >>>>> does not need that rights. >>>>>> This is original section I added for this reasoning: >>>>>> http://www.freeipa.org/index.php?title=V4/User_Life-Cycle_Management#MODRDN_vs._ADD-DEL >>>>>> >>>>>> >>>>>> Why cannot the activate command be multistep? I.e. 
move the entry to active >>>>>> users, generate missing fields and enable the entry? It could also trigger >>>>>> automember for that user, we have commands for it. >>>>> I think it is also feasible. >>>>> Just a remark if the stage entry has a userpassword/krbPrincipalKey, at the >>>>> time it is modrdn to active container we can authenticate with it. >>>>> This even if all the initialization steps are not completed. > In this case, I see the benefits for ADD-DEL. It would have to be done anyway, > if structural objectclasses are added to the entry, right? > > Please just make sure to include the end result and reasoning to the cleaned > wiki design. Sure. I am doing this right now. > >>>>>>> For user-del, the active entry is valid, we just want to clear some >>>>>>> attributes. >>>>>>> Actually digging into the archive it was already discussed >>>>>>> https://www.redhat.com/archives/freeipa-devel/2014-June/msg00080.html >>>>>>> leading >>>>>>> to MODRDN ! >>>>>> I would really prefer custom LDAP plugin that would do the processing of >>>>>> delete >>>>>> entry (re-add it or convert DEL to MODRDN, if possible). The reason is that >>>>>> with this approach, people/software would be still able to use standard >>>>>> ldapdelete operation to delete users instead of figuring out they need to >>>>>> MODRDN it to some location to keep the company policies. >>>>>> >>>>>> The ticket 3911 says: [RFE] Allow managing users add/modify/*delete* via LDAP >>>>>> client. With MODRDN DEL approach, you are breaking the delete part. >>>>> That is right, using DS plugin it hides some complexity at the application >>>>> level. >>>>> If we introduce user-del options '--preserve|--permanent', we would need to >>>>> give this option to the DEL (control ?). >>>> Hm. I do not see it that way. >>>> I see that command has two options do a MODRDN (--preserve) or DEL >>>> (--permanent) depending on flags. This is the decision made in framework >>>> before it hits DS. So I am not sure anything should be given to the control. >>>> It would be invoked only in case of MOD. In case of DEL everything would >>>> work as now. No? >>> (adding freeipa-devel) >>> >>> If this is the application or CLI that decides what to do, I agree that >>> --preserve will issue a MODRDN and --permanent a DEL. >>> >>> My understanding of Martin point, is that the application/CLI should not >>> decide but just issue a DEL. >>> This would be the job of a DS plugin to convert the DEL into a MODRDN (or >>> ADD-DEL) to move the entry from active to delete container. >>> >>> In that case, DS plugin needs to decide if the DEL intend to preserve the >>> entry (move to delete container) or permanently delete the entry (true ldap >>> delete). >>> I was wondering how to give to DS plugin the way to decide: configuration >>> parameter, DEL control.. >> I see but this seem more complicated than what I propose and I do not see a >> reason for the complexity. > My main concern was that there may be software doing direct LDAP additions and > modifications for the users. This is the reason why this ticket was opened. The > software may ADD staging users, helpdesk would activate them and the software > would DELete them at the end of the life cycle. > > In my proposal, to enable deleted users container, one just needs to switch a > button and activate the DS plugin which would make DEL into a MODRDN. > > With your proposal, the application would need to be learned to do MODRDN > instead of LDAP DEL - is that really feasible? 
Applications that use to DEL active entries at the end of the life cycle would act as 'user-del --permanent'. If they need to be ULC aware they will need to learn to do MODRDN to do a 'user-del --preserve'. Now if the default mode is '--preserve' a DEL on active container should be rejected or be translated. > > Simo, do you have any preference here since your name was called out based on > your note to do modrdn in > https://www.redhat.com/archives/freeipa-devel/2014-June/msg00083.html > ? > > Thanks, > Martin From jpazdziora at redhat.com Fri Mar 27 11:13:17 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 12:13:17 +0100 Subject: [Freeipa-devel] FreeIPA 4.1.4 upstream repo for RHEL 7 is broken In-Reply-To: <20150327081529.GA21450@redhat.com> References: <20150327081529.GA21450@redhat.com> Message-ID: <20150327111317.GO4696@redhat.com> On Fri, Mar 27, 2015 at 09:15:29AM +0100, Jan Pazdziora wrote: > On Thu, Mar 26, 2015 at 06:14:34PM +0100, Petr Vobornik wrote: > > The FreeIPA team would like to announce FreeIPA v4.1.4 security release! > > > > It can be downloaded from http://www.freeipa.org/page/Downloads. The builds > > will be available for Fedora 21. Builds for Fedora 20 are available in the > > official COPR repository > > . > > I've noticed that the RHEL/EPEL 7 upstream repo was updated as well. > > However, that repo is currently broken when used on RHEL 7.1: > > # curl -o /etc/yum.repos.d/mkosek-freeipa-epel-7.repo https://copr.fedoraproject.org/coprs/mkosek/freeipa/repo/epel-7/mkosek-freeipa-epel-7.repo > [...] > # yum install -y freeipa-server > [...] > --> Finished Dependency Resolution > Error: Package: freeipa-server-4.1.4-1.el7.centos.x86_64 (mkosek-freeipa) > Requires: slapi-nis >= 0.54.2-1 > Available: slapi-nis-0.52-4.el7.x86_64 (rhel-7-server-rpms) > slapi-nis = 0.52-4.el7 > Available: slapi-nis-0.54-2.el7.x86_64 (rhel-7-server-rpms) > slapi-nis = 0.54-2.el7 > Available: slapi-nis-0.54-3.el7_1.x86_64 (rhel-7-server-rpms) > slapi-nis = 0.54-3.el7_1 > Available: slapi-nis-0.54.1-1.el7.centos.x86_64 (mkosek-freeipa) > slapi-nis = 0.54.1-1.el7.centos > You could try using --skip-broken to work around the problem > You could try running: rpm -Va --nofiles --nodigest I confirm that with latest fixes to the repo, things work both on Fedora 20 and CentOS 7. I've triggered rebuild of respective *-upstream images of https://registry.hub.docker.com/u/adelton/freeipa-server/ and they passed as well. Thank you, -- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From mkubik at redhat.com Fri Mar 27 12:52:13 2015 From: mkubik at redhat.com (Milan Kubik) Date: Fri, 27 Mar 2015 13:52:13 +0100 Subject: [Freeipa-devel] [PATCH] ipatests: port of p11helper test from github In-Reply-To: <55118563.2010503@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> <55114D0E.8000705@redhat.com> <5511698C.3060305@redhat.com> <55118563.2010503@redhat.com> Message-ID: <5515527D.1010805@redhat.com> Hi, On 03/24/2015 04:40 PM, Martin Basti wrote: > On 24/03/15 14:41, Milan Kubik wrote: >> Hello, >> >> thanks for the review. 
>> >> On 03/24/2015 12:39 PM, Martin Basti wrote: >>> On 17/03/15 10:38, Milan Kubik wrote: >>>> Hi, >>>> >>>> On 03/16/2015 05:23 PM, Martin Basti wrote: >>>>> On 16/03/15 15:32, Milan Kubik wrote: >>>>>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>>>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>>>>> Hi, >>>>>>>> >>>>>>>> this is a patch with port of [1] to pytest. >>>>>>>> >>>>>>>> [1]: >>>>>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>>>>> >>>>>>>> >>>>>>>> Cheers, >>>>>>>> Milan >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> Added few more asserts in methods where the test could fail and >>>>>>> cause other errors. >>>>>>> >>>>>>> >>>>>> New version of the patch after brief discussion with Martin >>>>>> Basti. Removed unnecessary variable assignments and separated a >>>>>> new test case. >>>>>> >>>>>> >>>>> Hello, >>>>> >>>>> thank you for the patch. >>>>> I have a few nitpicks: >>>>> 1) >>>>> You can remove this and use just hexlify(s) >>>>> +def str_to_hex(s): >>>>> + return ''.join("{:02x}".format(ord(c)) for c in s) >>>> done >>>>> >>>>> 2) >>>>> + def test_find_secret_key(self, p11): >>>>> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >>>>> label=u"???-aest") >>>>> >>>>> In tests before you tested the exact number of expected IDs >>>>> returned by find_keys method, why not here? >>>> Lack of attention. >>>> Fixed the assert in `test_search_for_master_key` which does the >>>> same thing. Merged `test_find_secret_key` with >>>> `test_search_for_master_key` where it belongs. >>>>> >>>>> Martin^2 >>>> >>>> Milan >>>> >>>> >>> Thank you for patches, just two nitpicks: >>> >>> 1) >>> Can you use the ipaplatform.paths constant? This is platform specific. >>> LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" >>> LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" >>> >>> Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is >>> automatically mapped into LIBSOFTHSM2_SO_64 >>> >>> instead of: >>> + >>> +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" >>> + >>> >> Done. >>> 2) >>> Can you please check if keys were really deleted? >>> + def test_delete_key(self, p11): >> Done. >>> -- >>> Martin Basti >> >> I also moved `test_search_for_master_key` right after >> `test_generate_master_key` and changed the assert message to a more >> specific one. >> >> Cheers, >> Milan > Please fix this: > > 1) > $ git am > freeipa-mkubik-0001-5-ipatests-port-of-p11helper-test-from-github.patch > Applying: ipatests: port of p11helper test from github > /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:228: new blank > line at EOF. > + > warning: 1 line adds whitespace errors. > fixed (TIL: vim doesn't show the last empty line) > 2) Please respect PEP8 if it is possible Mostly done, there are few instances of long variable names off by few characters. > > 3) > I'm still not sure with this: > assert len(master_key) == 0, "The master key should be deleted." > > following example is more pythonic > assert not master_key, "The master key...." > Changed to the latter variant. > 4) > Related to 3), should we test return value, if correct type was returned? > assert isinstance(master_key, list) and not master_key, "....." > I do not insist on this. > > Otherwise it works as expected. > -- > Martin Basti Milan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-mkubik-0001-6-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 14369 bytes Desc: not available URL: From dkupka at redhat.com Fri Mar 27 13:58:18 2015 From: dkupka at redhat.com (David Kupka) Date: Fri, 27 Mar 2015 14:58:18 +0100 Subject: [Freeipa-devel] [PATCH 0042] Make lint work on Fedora 22. Message-ID: <551561FA.6080702@redhat.com> pylint changed slightly so we must react otherwise we'll be unable to build freeipa rpms on Fedora 22. This patch should go to master for sure but I don't know if we want it in 4.1. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0042-Make-lint-work-on-Fedora-22.patch Type: text/x-patch Size: 3213 bytes Desc: not available URL: From dkupka at redhat.com Fri Mar 27 14:04:35 2015 From: dkupka at redhat.com (David Kupka) Date: Fri, 27 Mar 2015 15:04:35 +0100 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. Message-ID: <55156373.2020809@redhat.com> https://fedorahosted.org/freeipa/ticket/4190 To test this on F22 my patch 42 is needed. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0043-Use-mod_auth_gssapi-instead-of-mod_auth_kerb.patch Type: text/x-patch Size: 6824 bytes Desc: not available URL: From simo at redhat.com Fri Mar 27 14:10:05 2015 From: simo at redhat.com (Simo Sorce) Date: Fri, 27 Mar 2015 10:10:05 -0400 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <55156373.2020809@redhat.com> References: <55156373.2020809@redhat.com> Message-ID: <1427465405.8302.116.camel@willson.usersys.redhat.com> On Fri, 2015-03-27 at 15:04 +0100, David Kupka wrote: > https://fedorahosted.org/freeipa/ticket/4190 > > To test this on F22 my patch 42 is needed. Please require mod_auth_gssapi >= 1.1.0-2 Any lower version will fail to work. Otherwise patch looks good to me. Simo. -- Simo Sorce * Red Hat, Inc * New York From dkupka at redhat.com Fri Mar 27 14:13:49 2015 From: dkupka at redhat.com (David Kupka) Date: Fri, 27 Mar 2015 15:13:49 +0100 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <1427465405.8302.116.camel@willson.usersys.redhat.com> References: <55156373.2020809@redhat.com> <1427465405.8302.116.camel@willson.usersys.redhat.com> Message-ID: <5515659D.3010703@redhat.com> On 03/27/2015 03:10 PM, Simo Sorce wrote: > On Fri, 2015-03-27 at 15:04 +0100, David Kupka wrote: >> https://fedorahosted.org/freeipa/ticket/4190 >> >> To test this on F22 my patch 42 is needed. > > Please require mod_auth_gssapi >= 1.1.0-2 > Any lower version will fail to work. > > Otherwise patch looks good to me. > > Simo. > Realized few second after I sent it but you're really quick :-) Thanks for review and help. Updated patch attached. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0043.2-Use-mod_auth_gssapi-instead-of-mod_auth_kerb.patch Type: text/x-patch Size: 7708 bytes Desc: not available URL: From rcritten at redhat.com Fri Mar 27 14:14:21 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 27 Mar 2015 10:14:21 -0400 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. 
In-Reply-To: <55156373.2020809@redhat.com> References: <55156373.2020809@redhat.com> Message-ID: <551565BD.7080909@redhat.com> David Kupka wrote: > https://fedorahosted.org/freeipa/ticket/4190 > > To test this on F22 my patch 42 is needed. > > NACK. You need to bump the VERSION in ipa.conf for this file to be replaced on upgrades. This also provides an opportunity to drop the cgi-bin configuration. This is a legacy from IPA v1.0 where people had TONS, loads and heaps of problems getting Kerberos working so we provided a CGI to spit out the environment to help with troubleshooting. rob From jpazdziora at redhat.com Fri Mar 27 14:23:36 2015 From: jpazdziora at redhat.com (Jan Pazdziora) Date: Fri, 27 Mar 2015 15:23:36 +0100 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <55156373.2020809@redhat.com> References: <55156373.2020809@redhat.com> Message-ID: <20150327142336.GA2233@redhat.com> On Fri, Mar 27, 2015 at 03:04:35PM +0100, David Kupka wrote: > https://fedorahosted.org/freeipa/ticket/4190 > > --- a/freeipa.spec.in > +++ b/freeipa.spec.in > @@ -118,7 +118,7 @@ Requires: cyrus-sasl-gssapi%{?_isa} > Requires: ntp > Requires: httpd >= 2.4.6-6 > Requires: mod_wsgi > -Requires: mod_auth_kerb >= 5.4-16 > +Requires: mod_auth_gssapi Do we assume we will no longer do an upstream 4.2 release on Fedora 20? Otherwise this should be covered by some %ifs to use mod_auth_kerb on Fedora 20. -- Jan Pazdziora Principal Software Engineer, Identity Management Engineering, Red Hat From dkupka at redhat.com Fri Mar 27 14:26:18 2015 From: dkupka at redhat.com (David Kupka) Date: Fri, 27 Mar 2015 15:26:18 +0100 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <551565BD.7080909@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> Message-ID: <5515688A.8020207@redhat.com> On 03/27/2015 03:14 PM, Rob Crittenden wrote: > David Kupka wrote: >> https://fedorahosted.org/freeipa/ticket/4190 >> >> To test this on F22 my patch 42 is needed. >> >> > > NACK. > > You need to bump the VERSION in ipa.conf for this file to be replaced on > upgrades. Thanks for the catch, Rob. I've forget about this. > > This also provides an opportunity to drop the cgi-bin configuration. > This is a legacy from IPA v1.0 where people had TONS, loads and heaps of > problems getting Kerberos working so we provided a CGI to spit out the > environment to help with troubleshooting. If we can safely remove it, we should do it. I did a quick test and it looks like we everything works without it. > > rob > Updated patch attached. -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0043.3-Use-mod_auth_gssapi-instead-of-mod_auth_kerb.patch Type: text/x-patch Size: 7752 bytes Desc: not available URL: From rcritten at redhat.com Fri Mar 27 14:28:00 2015 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 27 Mar 2015 10:28:00 -0400 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. 
In-Reply-To: <20150327142336.GA2233@redhat.com> References: <55156373.2020809@redhat.com> <20150327142336.GA2233@redhat.com> Message-ID: <551568F0.3060602@redhat.com> Jan Pazdziora wrote: > On Fri, Mar 27, 2015 at 03:04:35PM +0100, David Kupka wrote: >> https://fedorahosted.org/freeipa/ticket/4190 >> >> --- a/freeipa.spec.in >> +++ b/freeipa.spec.in >> @@ -118,7 +118,7 @@ Requires: cyrus-sasl-gssapi%{?_isa} >> Requires: ntp >> Requires: httpd >= 2.4.6-6 >> Requires: mod_wsgi >> -Requires: mod_auth_kerb >= 5.4-16 >> +Requires: mod_auth_gssapi > > Do we assume we will no longer do an upstream 4.2 release on > Fedora 20? Otherwise this should be covered by some %ifs to use > mod_auth_kerb on Fedora 20. Fedora 20 only supports 3.3.x so yeah, not needed. There may be _builds_ of 4.x in F20 but they are not supported. rob From amarecek at redhat.com Fri Mar 27 15:34:26 2015 From: amarecek at redhat.com (=?utf-8?Q?Ale=C5=A1_Mare=C4=8Dek?=) Date: Fri, 27 Mar 2015 11:34:26 -0400 (EDT) Subject: [Freeipa-devel] [PATCH] 0001-2 ipatests: SOA record Maintenance tests In-Reply-To: <55118529.8010708@redhat.com> References: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> <55118529.8010708@redhat.com> Message-ID: <829878591.5551196.1427470466314.JavaMail.zimbra@redhat.com> Greetings! Martin, thanks for your review and comments! I changed the name of the patch and setup my git variables properly. I also re-tested it and got all passed. I'm sending a new patch that is attached. ----- Original Message ----- > From: "Martin Basti" > To: "Ale? Mare?ek" , freeipa-devel at redhat.com > Sent: Tuesday, March 24, 2015 4:39:21 PM > Subject: Re: [Freeipa-devel] [PATCH] 0001 ipatests: SOA record Maintenance tests > > On 24/03/15 15:06, Ale? Mare?ek wrote: > > Greetings! > > This is my very first patch, ticket#4746. > > > > Have a nice day! > > - alich - > > > > > Thank you for the patch. Just nitpicks: > > 1) > + cleanup_commands = [ > + ('dnszone_del', [zone6], {'continue': True}), > + ('dnszone_del', [zone6b], {'continue': True}), > + ] > > would be better do it in this way, continue option will to try remove > all zones: > + cleanup_commands = [ > + ('dnszone_del', [zone6, zone6b], {'continue': True}), > + ] > Done. > 2) > I'm fine with zone6b, but was there any reason to create zone6b, instead > of reusing zone 1 or 2 or 3? Because of some updates needs, I didn't want to break anything existing thus I created new. > > 3) > Please fix whitespace errors. > $ git am > freeipa-alich-0001-ipatests-added-tests-for-SOA-record-Maintenance.patch > Applying: ipatests - added tests for SOA record Maintenance > /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:482: trailing > whitespace. > > /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:758: new blank > line at EOF. > + > warning: 2 lines add whitespace errors. > Done. $ git am freeipa-alich-0001-2-Ipatests-DNS-SOA-Record-Maintenance.patch Applying: Ipatests DNS SOA Record Maintenance $ > 4) > I know the dns plugin tests are so far from PEP8, but try to keep PEP8 > in new code Done, only 1 line persisted that I didn't want to break: zone6_unresolvable_ns_relative_dnsname = DNSName(zone6_unresolvable_ns_relative) > > Otherwise test works as expected. > > Martin^2 > > -- > Martin Basti > > Thanks! - alich - -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-alich-0001-2-Ipatests-DNS-SOA-Record-Maintenance.patch Type: text/x-patch Size: 32974 bytes Desc: not available URL: From mbasti at redhat.com Fri Mar 27 15:54:33 2015 From: mbasti at redhat.com (Martin Basti) Date: Fri, 27 Mar 2015 16:54:33 +0100 Subject: [Freeipa-devel] [PATCH] 0001-2 ipatests: SOA record Maintenance tests In-Reply-To: <829878591.5551196.1427470466314.JavaMail.zimbra@redhat.com> References: <68938500.2798491.1427206014857.JavaMail.zimbra@redhat.com> <55118529.8010708@redhat.com> <829878591.5551196.1427470466314.JavaMail.zimbra@redhat.com> Message-ID: <55157D39.6030104@redhat.com> On 27/03/15 16:34, Ale? Mare?ek wrote: > Greetings! > Martin, thanks for your review and comments! > I changed the name of the patch and setup my git variables properly. I also re-tested it and got all passed. I'm sending a new patch that is attached. > > ----- Original Message ----- >> From: "Martin Basti" >> To: "Ale? Mare?ek" , freeipa-devel at redhat.com >> Sent: Tuesday, March 24, 2015 4:39:21 PM >> Subject: Re: [Freeipa-devel] [PATCH] 0001 ipatests: SOA record Maintenance tests >> >> On 24/03/15 15:06, Ale? Mare?ek wrote: >>> Greetings! >>> This is my very first patch, ticket#4746. >>> >>> Have a nice day! >>> - alich - >>> >>> >> Thank you for the patch. Just nitpicks: >> >> 1) >> + cleanup_commands = [ >> + ('dnszone_del', [zone6], {'continue': True}), >> + ('dnszone_del', [zone6b], {'continue': True}), >> + ] >> >> would be better do it in this way, continue option will to try remove >> all zones: >> + cleanup_commands = [ >> + ('dnszone_del', [zone6, zone6b], {'continue': True}), >> + ] >> > Done. > >> 2) >> I'm fine with zone6b, but was there any reason to create zone6b, instead >> of reusing zone 1 or 2 or 3? > Because of some updates needs, I didn't want to break anything existing thus I created new. > >> 3) >> Please fix whitespace errors. >> $ git am >> freeipa-alich-0001-ipatests-added-tests-for-SOA-record-Maintenance.patch >> Applying: ipatests - added tests for SOA record Maintenance >> /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:482: trailing >> whitespace. >> >> /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:758: new blank >> line at EOF. >> + >> warning: 2 lines add whitespace errors. >> > Done. > $ git am freeipa-alich-0001-2-Ipatests-DNS-SOA-Record-Maintenance.patch > Applying: Ipatests DNS SOA Record Maintenance > $ > >> 4) >> I know the dns plugin tests are so far from PEP8, but try to keep PEP8 >> in new code > Done, only 1 line persisted that I didn't want to break: > zone6_unresolvable_ns_relative_dnsname = DNSName(zone6_unresolvable_ns_relative) > >> Otherwise test works as expected. >> >> Martin^2 >> >> -- >> Martin Basti >> >> > Thanks! > - alich - Thank you, ACK. -- Martin Basti From pvoborni at redhat.com Fri Mar 27 23:05:07 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Sat, 28 Mar 2015 00:05:07 +0100 Subject: [Freeipa-devel] [PATCH 0042] Make lint work on Fedora 22. In-Reply-To: <551561FA.6080702@redhat.com> References: <551561FA.6080702@redhat.com> Message-ID: <5515E223.3090808@redhat.com> On 27.3.2015 14:58, David Kupka wrote: > pylint changed slightly so we must react otherwise we'll be unable to > build freeipa rpms on Fedora 22. This patch should go to master for sure > but I don't know if we want it in 4.1. > ACK tested on: - F21: ipa-4-1, master branch - F22: master branch. 
IMHO it could got to ipa-4-1 branch because of FreeIPA 4.1.4 in F22 -- Petr Vobornik From pvoborni at redhat.com Fri Mar 27 23:09:59 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Sat, 28 Mar 2015 00:09:59 +0100 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <5515688A.8020207@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> <5515688A.8020207@redhat.com> Message-ID: <5515E347.7070507@redhat.com> On 27.3.2015 15:26, David Kupka wrote: > On 03/27/2015 03:14 PM, Rob Crittenden wrote: >> David Kupka wrote: >>> https://fedorahosted.org/freeipa/ticket/4190 >>> >>> To test this on F22 my patch 42 is needed. >>> >>> >> >> NACK. >> >> You need to bump the VERSION in ipa.conf for this file to be replaced on >> upgrades. > > Thanks for the catch, Rob. I've forget about this. >> >> This also provides an opportunity to drop the cgi-bin configuration. >> This is a legacy from IPA v1.0 where people had TONS, loads and heaps of >> problems getting Kerberos working so we provided a CGI to spit out the >> environment to help with troubleshooting. > > If we can safely remove it, we should do it. I did a quick test and it > looks like we everything works without it. > >> >> rob >> > > Updated patch attached. > ACK tested on F22 - both CLI and Web UI -- Petr Vobornik From jcholast at redhat.com Mon Mar 30 05:12:13 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 30 Mar 2015 07:12:13 +0200 Subject: [Freeipa-devel] [PATCH 0042] Make lint work on Fedora 22. In-Reply-To: <5515E223.3090808@redhat.com> References: <551561FA.6080702@redhat.com> <5515E223.3090808@redhat.com> Message-ID: <5518DB2D.40309@redhat.com> Dne 28.3.2015 v 00:05 Petr Vobornik napsal(a): > On 27.3.2015 14:58, David Kupka wrote: >> pylint changed slightly so we must react otherwise we'll be unable to >> build freeipa rpms on Fedora 22. This patch should go to master for sure >> but I don't know if we want it in 4.1. >> > > ACK Are all the new disables really just false positives? > > tested on: > - F21: ipa-4-1, master branch > - F22: master branch. > > IMHO it could got to ipa-4-1 branch because of FreeIPA 4.1.4 in F22 -- Jan Cholasta From jcholast at redhat.com Mon Mar 30 05:15:10 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 30 Mar 2015 07:15:10 +0200 Subject: [Freeipa-devel] [PATCH 0043] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <5515E347.7070507@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> <5515688A.8020207@redhat.com> <5515E347.7070507@redhat.com> Message-ID: <5518DBDE.2050000@redhat.com> Dne 28.3.2015 v 00:09 Petr Vobornik napsal(a): > On 27.3.2015 15:26, David Kupka wrote: >> On 03/27/2015 03:14 PM, Rob Crittenden wrote: >>> David Kupka wrote: >>>> https://fedorahosted.org/freeipa/ticket/4190 >>>> >>>> To test this on F22 my patch 42 is needed. >>>> >>>> >>> >>> NACK. >>> >>> You need to bump the VERSION in ipa.conf for this file to be replaced on >>> upgrades. >> >> Thanks for the catch, Rob. I've forget about this. >>> >>> This also provides an opportunity to drop the cgi-bin configuration. >>> This is a legacy from IPA v1.0 where people had TONS, loads and heaps of >>> problems getting Kerberos working so we provided a CGI to spit out the >>> environment to help with troubleshooting. >> >> If we can safely remove it, we should do it. I did a quick test and it >> looks like we everything works without it. 
Put it in a separate patch please, it is not related to mod_auth_gssapi. Is there anything else we can remove? >> >>> >>> rob >>> >> >> Updated patch attached. >> > > ACK > > tested on F22 - both CLI and Web UI -- Jan Cholasta From mbasti at redhat.com Mon Mar 30 08:28:36 2015 From: mbasti at redhat.com (Martin Basti) Date: Mon, 30 Mar 2015 10:28:36 +0200 Subject: [Freeipa-devel] [PATCH 0001] ipatests: port of p11helper test from github In-Reply-To: <5515527D.1010805@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> <55114D0E.8000705@redhat.com> <5511698C.3060305@redhat.com> <55118563.2010503@redhat.com> <5515527D.1010805@redhat.com> Message-ID: <55190934.5090108@redhat.com> On 27/03/15 13:52, Milan Kubik wrote: > Hi, > > On 03/24/2015 04:40 PM, Martin Basti wrote: >> On 24/03/15 14:41, Milan Kubik wrote: >>> Hello, >>> >>> thanks for the review. >>> >>> On 03/24/2015 12:39 PM, Martin Basti wrote: >>>> On 17/03/15 10:38, Milan Kubik wrote: >>>>> Hi, >>>>> >>>>> On 03/16/2015 05:23 PM, Martin Basti wrote: >>>>>> On 16/03/15 15:32, Milan Kubik wrote: >>>>>>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>>>>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> this is a patch with port of [1] to pytest. >>>>>>>>> >>>>>>>>> [1]: >>>>>>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>>>>>> >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> Milan >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>> Added few more asserts in methods where the test could fail and >>>>>>>> cause other errors. >>>>>>>> >>>>>>>> >>>>>>> New version of the patch after brief discussion with Martin >>>>>>> Basti. Removed unnecessary variable assignments and separated a >>>>>>> new test case. >>>>>>> >>>>>>> >>>>>> Hello, >>>>>> >>>>>> thank you for the patch. >>>>>> I have a few nitpicks: >>>>>> 1) >>>>>> You can remove this and use just hexlify(s) >>>>>> +def str_to_hex(s): >>>>>> + return ''.join("{:02x}".format(ord(c)) for c in s) >>>>> done >>>>>> >>>>>> 2) >>>>>> + def test_find_secret_key(self, p11): >>>>>> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >>>>>> label=u"???-aest") >>>>>> >>>>>> In tests before you tested the exact number of expected IDs >>>>>> returned by find_keys method, why not here? >>>>> Lack of attention. >>>>> Fixed the assert in `test_search_for_master_key` which does the >>>>> same thing. Merged `test_find_secret_key` with >>>>> `test_search_for_master_key` where it belongs. >>>>>> >>>>>> Martin^2 >>>>> >>>>> Milan >>>>> >>>>> >>>> Thank you for patches, just two nitpicks: >>>> >>>> 1) >>>> Can you use the ipaplatform.paths constant? This is platform specific. >>>> LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" >>>> LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" >>>> >>>> Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is >>>> automatically mapped into LIBSOFTHSM2_SO_64 >>>> >>>> instead of: >>>> + >>>> +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" >>>> + >>>> >>> Done. >>>> 2) >>>> Can you please check if keys were really deleted? >>>> + def test_delete_key(self, p11): >>> Done. >>>> -- >>>> Martin Basti >>> >>> I also moved `test_search_for_master_key` right after >>> `test_generate_master_key` and changed the assert message to a more >>> specific one. 
>>> >>> Cheers, >>> Milan >> Please fix this: >> >> 1) >> $ git am >> freeipa-mkubik-0001-5-ipatests-port-of-p11helper-test-from-github.patch >> Applying: ipatests: port of p11helper test from github >> /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:228: new >> blank line at EOF. >> + >> warning: 1 line adds whitespace errors. >> > fixed (TIL: vim doesn't show the last empty line) >> 2) Please respect PEP8 if it is possible > Mostly done, there are few instances of long variable names off by few > characters. >> >> 3) >> I'm still not sure with this: >> assert len(master_key) == 0, "The master key should be deleted." >> >> following example is more pythonic >> assert not master_key, "The master key...." >> > Changed to the latter variant. >> 4) >> Related to 3), should we test return value, if correct type was >> returned? >> assert isinstance(master_key, list) and not master_key, "....." >> I do not insist on this. >> >> Otherwise it works as expected. >> -- >> Martin Basti > > Milan Hello, I did few modifications: * new license header * PEP8 fixes * variables instead of magic constants for key labels an IDs Patch attached Do you accept my modifications? Martin^2 -- Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mkubik+mbasti-0001.7-ipatests-port-of-p11helper-test-from-github.patch Type: text/x-patch Size: 14288 bytes Desc: not available URL: From tbordaz at redhat.com Mon Mar 30 09:50:36 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Mon, 30 Mar 2015 11:50:36 +0200 Subject: [Freeipa-devel] User life cycle: Question about ACI "Admin read-only attributes" Message-ID: <55191C6C.9000501@redhat.com> Hello, The aci "Admin read-only attributes" grants, for the complete suffix, read access to 'admin' users for the following attributes. "ipaUniqueId || memberOf || enrolledBy || krbExtraData || krbPrincipalName || krbCanonicalName || krbPasswordExpiration || krbLastPwdChange || krbLastSuccessfulAuth || krbLastFailedAuth" "userPassword" and "krbPrincipalKey" are not "read-only" attributes so I guess it is the reason why they are not part of this list. For User life cycle, I would need admin users to be granted read access on "userPassword" and "krbPrincipalKey". The scope could be limited to Stage container but I was wondering if there is a security reason to not grant read access on the full suffix ? thanks thierry -------------- next part -------------- An HTML attachment was scrubbed... URL: From dkupka at redhat.com Mon Mar 30 10:15:16 2015 From: dkupka at redhat.com (David Kupka) Date: Mon, 30 Mar 2015 12:15:16 +0200 Subject: [Freeipa-devel] [PATCH 0042] Make lint work on Fedora 22. In-Reply-To: <5518DB2D.40309@redhat.com> References: <551561FA.6080702@redhat.com> <5515E223.3090808@redhat.com> <5518DB2D.40309@redhat.com> Message-ID: <55192234.6050208@redhat.com> On 03/30/2015 07:12 AM, Jan Cholasta wrote: > Dne 28.3.2015 v 00:05 Petr Vobornik napsal(a): >> On 27.3.2015 14:58, David Kupka wrote: >>> pylint changed slightly so we must react otherwise we'll be unable to >>> build freeipa rpms on Fedora 22. This patch should go to master for sure >>> but I don't know if we want it in 4.1. >>> >> >> ACK > > Are all the new disables really just false positives? It seems to me as a false positives. 1. 
ipalib/plugins/otptoken.py:552: [E1101(no-member), otptoken_sync.forward] Module 'ssl' has no 'PROTOCOL_TLSv1' member) >>> import ssl >>> ssl.PROTOCOL_TLSv1 3 2. ipaserver/install/ipa_otptoken_import.py:63: [E1101(no-member), convertDate] Instance of 'tuple' has no 'tzinfo' member) ipaserver/install/ipa_otptoken_import.py:64: [E1101(no-member), convertDate] Instance of 'tuple' has no 'timetuple' member) dateutil.parser.parse() returns datetime.datetime object and it has both tzinfo and timetuple methods (https://docs.python.org/2/library/datetime.html#datetime-objects) 3. ipapython/dnssec/ldapkeydb.py:26: [E1127(invalid-slice-index), uri_escape] Slice index is not an int, None, or instance with __index__) This is the line lint is complaining about: out += '%'.join(hexval[i:i+2] for i in range(0, len(hexval), 2)) I don't see a chance for 'i' or 'i+1' to be anything else than integers. > >> >> tested on: >> - F21: ipa-4-1, master branch >> - F22: master branch. >> >> IMHO it could got to ipa-4-1 branch because of FreeIPA 4.1.4 in F22 > -- David Kupka From dkupka at redhat.com Mon Mar 30 10:21:28 2015 From: dkupka at redhat.com (David Kupka) Date: Mon, 30 Mar 2015 12:21:28 +0200 Subject: [Freeipa-devel] [PATCH 0043-0045] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <5518DBDE.2050000@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> <5515688A.8020207@redhat.com> <5515E347.7070507@redhat.com> <5518DBDE.2050000@redhat.com> Message-ID: <551923A8.3070904@redhat.com> On 03/30/2015 07:15 AM, Jan Cholasta wrote: > Dne 28.3.2015 v 00:09 Petr Vobornik napsal(a): >> On 27.3.2015 15:26, David Kupka wrote: >>> On 03/27/2015 03:14 PM, Rob Crittenden wrote: >>>> David Kupka wrote: >>>>> https://fedorahosted.org/freeipa/ticket/4190 >>>>> >>>>> To test this on F22 my patch 42 is needed. >>>>> >>>>> >>>> >>>> NACK. >>>> >>>> You need to bump the VERSION in ipa.conf for this file to be >>>> replaced on >>>> upgrades. >>> >>> Thanks for the catch, Rob. I've forget about this. >>>> >>>> This also provides an opportunity to drop the cgi-bin configuration. >>>> This is a legacy from IPA v1.0 where people had TONS, loads and >>>> heaps of >>>> problems getting Kerberos working so we provided a CGI to spit out the >>>> environment to help with troubleshooting. >>> >>> If we can safely remove it, we should do it. I did a quick test and it >>> looks like we everything works without it. > > Put it in a separate patch please, it is not related to mod_auth_gssapi. Ok, patches attached. > > Is there anything else we can remove? I don't know about anything else. If we find something to drop we can do it then. > >>> >>>> >>>> rob >>>> >>> >>> Updated patch attached. >>> >> >> ACK >> >> tested on F22 - both CLI and Web UI > > -- David Kupka -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0043.4-Remove-unused-part-of-ipa.conf.patch Type: text/x-patch Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-dkupka-0044-Use-mod_auth_gssapi-instead-of-mod_auth_kerb.patch Type: text/x-patch Size: 7079 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: freeipa-dkupka-0045-Bump-ipa.conf-version-to-17.patch Type: text/x-patch Size: 652 bytes Desc: not available URL: From mbabinsk at redhat.com Mon Mar 30 10:38:49 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Mon, 30 Mar 2015 12:38:49 +0200 Subject: [Freeipa-devel] [PATCH 0023] enable debugging of spawned ntpd command during client install In-Reply-To: <5513F841.4050306@redhat.com> References: <5512C588.3010002@redhat.com> <5512D1C9.6090308@redhat.com> <5513F841.4050306@redhat.com> Message-ID: <551927B9.7020100@redhat.com> On 03/26/2015 01:14 PM, Martin Kosek wrote: > On 03/25/2015 04:18 PM, Jan Cholasta wrote: >> Hi, >> >> Dne 25.3.2015 v 15:26 Martin Babinsky napsal(a): >>> The attached patch related to https://fedorahosted.org/freeipa/ticket/4931 >> >> Please make sure stays fixed. >> >>> >>> It is certainly not a final solution, more of an initial "hack" of sorts >>> just to gather some suggestions, since I am not even sure if this is the >>> right thing to do. >>> >>> The reporter from bugzilla suggests to enable debugging of ALL commands >>> called through ipautil.run(), but I think that fixing all cca 157 found >>> usages of run() is too much work with a quite small benefit. >>> >>> Anyway I would welcome some opinions about this: should the external >>> commands really inherit the debug settings of ipa-* utilities, and if >>> so, is the method showed in this patch the right way to do it? >> >> I am not a fan of this method, ipautil.run does not know anything about the >> command it runs and I think it should stay that way. >> >> I would prefer to have an ipautil.run wrapper with debug flag using appropriate >> debugging option for each command where we need to conditionally enable >> debugging. Or just add the debugging option unconditionally to every command >> where it could be useful. > > +1, I do not like this change to ipautil.run either. It should be sole > responsibility of the caller to specify the right combinations of options, > including debug option, where applicable. > Attaching updated patch. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0023-2-enable-debugging-of-ntpd-during-client-installation.patch Type: text/x-patch Size: 2943 bytes Desc: not available URL: From pspacek at redhat.com Mon Mar 30 11:03:05 2015 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 30 Mar 2015 13:03:05 +0200 Subject: [Freeipa-devel] User life cycle: Question about ACI "Admin read-only attributes" In-Reply-To: <55191C6C.9000501@redhat.com> References: <55191C6C.9000501@redhat.com> Message-ID: <55192D69.3000408@redhat.com> On 30.3.2015 11:50, thierry bordaz wrote: > Hello, > > The aci "Admin read-only attributes" grants, for the complete > suffix, read access to 'admin' users for the following attributes. > > "ipaUniqueId || memberOf || enrolledBy || krbExtraData || > krbPrincipalName || krbCanonicalName || krbPasswordExpiration || > krbLastPwdChange || krbLastSuccessfulAuth || krbLastFailedAuth" > > > "userPassword" and "krbPrincipalKey" are not "read-only" attributes > so I guess it is the reason why they are not part of this list. > > For User life cycle, I would need admin users to be granted read > access on "userPassword" and "krbPrincipalKey". > The scope could be limited to Stage container but I was wondering if > there is a security reason to not grant read access on the full suffix ? AFAIK admins were not given read access to keys and passwords on purpose as a security measure. 
It prevents accidental key disclosure when admin does ldapsearch and posts result somewhere (e.g. while debugging something). I did not follow the whole user life-cycle discussion. Why you need read access to it? Is it because you plan to do add/del instead of modrdn? -- Petr^2 Spacek From jcholast at redhat.com Mon Mar 30 11:13:49 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 30 Mar 2015 13:13:49 +0200 Subject: [Freeipa-devel] [PATCH 0043-0045] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <551923A8.3070904@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> <5515688A.8020207@redhat.com> <5515E347.7070507@redhat.com> <5518DBDE.2050000@redhat.com> <551923A8.3070904@redhat.com> Message-ID: <55192FED.5030202@redhat.com> Dne 30.3.2015 v 12:21 David Kupka napsal(a): > On 03/30/2015 07:15 AM, Jan Cholasta wrote: >> Dne 28.3.2015 v 00:09 Petr Vobornik napsal(a): >>> On 27.3.2015 15:26, David Kupka wrote: >>>> On 03/27/2015 03:14 PM, Rob Crittenden wrote: >>>>> David Kupka wrote: >>>>>> https://fedorahosted.org/freeipa/ticket/4190 >>>>>> >>>>>> To test this on F22 my patch 42 is needed. >>>>>> >>>>>> >>>>> >>>>> NACK. >>>>> >>>>> You need to bump the VERSION in ipa.conf for this file to be >>>>> replaced on >>>>> upgrades. >>>> >>>> Thanks for the catch, Rob. I've forget about this. >>>>> >>>>> This also provides an opportunity to drop the cgi-bin configuration. >>>>> This is a legacy from IPA v1.0 where people had TONS, loads and >>>>> heaps of >>>>> problems getting Kerberos working so we provided a CGI to spit out the >>>>> environment to help with troubleshooting. >>>> >>>> If we can safely remove it, we should do it. I did a quick test and it >>>> looks like we everything works without it. >> >> Put it in a separate patch please, it is not related to mod_auth_gssapi. > > Ok, patches attached. > >> >> Is there anything else we can remove? > > I don't know about anything else. If we find something to drop we can do > it then. Thanks, ACK. -- Jan Cholasta From mkubik at redhat.com Mon Mar 30 11:33:30 2015 From: mkubik at redhat.com (Milan Kubik) Date: Mon, 30 Mar 2015 13:33:30 +0200 Subject: [Freeipa-devel] [PATCH 0001] ipatests: port of p11helper test from github In-Reply-To: <55190934.5090108@redhat.com> References: <5502ED38.9020302@redhat.com> <5506B87B.6050600@redhat.com> <5506E983.4020807@redhat.com> <55070370.1010200@redhat.com> <5507F62E.9080004@redhat.com> <55114D0E.8000705@redhat.com> <5511698C.3060305@redhat.com> <55118563.2010503@redhat.com> <5515527D.1010805@redhat.com> <55190934.5090108@redhat.com> Message-ID: <5519348A.70504@redhat.com> Hi, thanks for the review and sparing me few rounds for these modifications. :) ACK for the improvements. Milan On 03/30/2015 10:28 AM, Martin Basti wrote: > On 27/03/15 13:52, Milan Kubik wrote: >> Hi, >> >> On 03/24/2015 04:40 PM, Martin Basti wrote: >>> On 24/03/15 14:41, Milan Kubik wrote: >>>> Hello, >>>> >>>> thanks for the review. >>>> >>>> On 03/24/2015 12:39 PM, Martin Basti wrote: >>>>> On 17/03/15 10:38, Milan Kubik wrote: >>>>>> Hi, >>>>>> >>>>>> On 03/16/2015 05:23 PM, Martin Basti wrote: >>>>>>> On 16/03/15 15:32, Milan Kubik wrote: >>>>>>>> On 03/16/2015 12:03 PM, Milan Kubik wrote: >>>>>>>>> On 03/13/2015 02:59 PM, Milan Kubik wrote: >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> this is a patch with port of [1] to pytest. 
>>>>>>>>>> >>>>>>>>>> [1]: >>>>>>>>>> https://github.com/spacekpe/freeipa-pkcs11/blob/master/python/run.py >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> Cheers, >>>>>>>>>> Milan >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> Added few more asserts in methods where the test could fail >>>>>>>>> and cause other errors. >>>>>>>>> >>>>>>>>> >>>>>>>> New version of the patch after brief discussion with Martin >>>>>>>> Basti. Removed unnecessary variable assignments and separated a >>>>>>>> new test case. >>>>>>>> >>>>>>>> >>>>>>> Hello, >>>>>>> >>>>>>> thank you for the patch. >>>>>>> I have a few nitpicks: >>>>>>> 1) >>>>>>> You can remove this and use just hexlify(s) >>>>>>> +def str_to_hex(s): >>>>>>> + return ''.join("{:02x}".format(ord(c)) for c in s) >>>>>> done >>>>>>> >>>>>>> 2) >>>>>>> + def test_find_secret_key(self, p11): >>>>>>> + assert p11.find_keys(_ipap11helper.KEY_CLASS_SECRET_KEY, >>>>>>> label=u"???-aest") >>>>>>> >>>>>>> In tests before you tested the exact number of expected IDs >>>>>>> returned by find_keys method, why not here? >>>>>> Lack of attention. >>>>>> Fixed the assert in `test_search_for_master_key` which does the >>>>>> same thing. Merged `test_find_secret_key` with >>>>>> `test_search_for_master_key` where it belongs. >>>>>>> >>>>>>> Martin^2 >>>>>> >>>>>> Milan >>>>>> >>>>>> >>>>> Thank you for patches, just two nitpicks: >>>>> >>>>> 1) >>>>> Can you use the ipaplatform.paths constant? This is platform specific. >>>>> LIBSOFTHSM2_SO = "/usr/lib/pkcs11/libsofthsm2.so" >>>>> LIBSOFTHSM2_SO_64 = "/usr/lib64/pkcs11/libsofthsm2.so" >>>>> >>>>> Respectively use just LIBSOFTHSM2_SO, on 64bit systems it is >>>>> automatically mapped into LIBSOFTHSM2_SO_64 >>>>> >>>>> instead of: >>>>> + >>>>> +libsofthsm = "/usr/lib64/pkcs11/libsofthsm2.so" >>>>> + >>>>> >>>> Done. >>>>> 2) >>>>> Can you please check if keys were really deleted? >>>>> + def test_delete_key(self, p11): >>>> Done. >>>>> -- >>>>> Martin Basti >>>> >>>> I also moved `test_search_for_master_key` right after >>>> `test_generate_master_key` and changed the assert message to a more >>>> specific one. >>>> >>>> Cheers, >>>> Milan >>> Please fix this: >>> >>> 1) >>> $ git am >>> freeipa-mkubik-0001-5-ipatests-port-of-p11helper-test-from-github.patch >>> Applying: ipatests: port of p11helper test from github >>> /home/mbasti/work/freeipa-devel/.git/rebase-apply/patch:228: new >>> blank line at EOF. >>> + >>> warning: 1 line adds whitespace errors. >>> >> fixed (TIL: vim doesn't show the last empty line) >>> 2) Please respect PEP8 if it is possible >> Mostly done, there are few instances of long variable names off by >> few characters. >>> >>> 3) >>> I'm still not sure with this: >>> assert len(master_key) == 0, "The master key should be deleted." >>> >>> following example is more pythonic >>> assert not master_key, "The master key...." >>> >> Changed to the latter variant. >>> 4) >>> Related to 3), should we test return value, if correct type was >>> returned? >>> assert isinstance(master_key, list) and not master_key, "....." >>> I do not insist on this. >>> >>> Otherwise it works as expected. >>> -- >>> Martin Basti >> >> Milan > > Hello, > > I did few modifications: > > * new license header > * PEP8 fixes > * variables instead of magic constants for key labels an IDs > > Patch attached > > Do you accept my modifications? > Martin^2 > -- > Martin Basti -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pspacek at redhat.com Mon Mar 30 11:55:07 2015 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 30 Mar 2015 13:55:07 +0200 Subject: [Freeipa-devel] [PATCH 0222] DNSSEC: do not log into files In-Reply-To: <551426E4.2070906@redhat.com> References: <551426E4.2070906@redhat.com> Message-ID: <5519399B.4020208@redhat.com> On 26.3.2015 16:33, Martin Basti wrote: > We want to log DNSSEC daemons only into console (journald). > > This patch also fixes unexpected log file in > /var/lib/softhsm/.ipa/log/default.log > > Patch attached. ACK -- Petr^2 Spacek From tbordaz at redhat.com Mon Mar 30 12:00:43 2015 From: tbordaz at redhat.com (thierry bordaz) Date: Mon, 30 Mar 2015 14:00:43 +0200 Subject: [Freeipa-devel] User life cycle: Question about ACI "Admin read-only attributes" In-Reply-To: <55192D69.3000408@redhat.com> References: <55191C6C.9000501@redhat.com> <55192D69.3000408@redhat.com> Message-ID: <55193AEB.3080608@redhat.com> On 03/30/2015 01:03 PM, Petr Spacek wrote: > On 30.3.2015 11:50, thierry bordaz wrote: >> Hello, >> >> The aci "Admin read-only attributes" grants, for the complete >> suffix, read access to 'admin' users for the following attributes. >> >> "ipaUniqueId || memberOf || enrolledBy || krbExtraData || >> krbPrincipalName || krbCanonicalName || krbPasswordExpiration || >> krbLastPwdChange || krbLastSuccessfulAuth || krbLastFailedAuth" >> >> >> "userPassword" and "krbPrincipalKey" are not "read-only" attributes >> so I guess it is the reason why they are not part of this list. >> >> For User life cycle, I would need admin users to be granted read >> access on "userPassword" and "krbPrincipalKey". >> The scope could be limited to Stage container but I was wondering if >> there is a security reason to not grant read access on the full suffix ? > AFAIK admins were not given read access to keys and passwords on purpose as a > security measure. It prevents accidental key disclosure when admin does > ldapsearch and posts result somewhere (e.g. while debugging something). Yes that sounds a very good reason. > > I did not follow the whole user life-cycle discussion. Why you need read > access to it? Is it because you plan to do add/del instead of modrdn? > There are two use case where I think I need access to those attributes: * A stage entry can have userpassword/krb keys attributes. When activating an entry those values are copied to a the active entry. So the newly active user can authenticate with the credential set while his entry was in stage container. In that case, it would need a read access because the stage entry is copied into a new entry (ADD). * An active entry is deleted (preserve mode), so the entry is modrdn to the delete container. Then to prevent the reuse of old credential, those attibutes are cleared. So here I would need a read/search/write access to those attributes in the delete container. That means that if I limit the ACIs to stage/delete containers, an admin could accidentally disclose stage/delete entries keys. thanks thieryr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pspacek at redhat.com Mon Mar 30 12:10:56 2015 From: pspacek at redhat.com (Petr Spacek) Date: Mon, 30 Mar 2015 14:10:56 +0200 Subject: [Freeipa-devel] [PATCH 0024] do not log BINDs to non-existent users as errors In-Reply-To: <5512DD48.4010000@redhat.com> References: <5512DD48.4010000@redhat.com> Message-ID: <55193D50.7020208@redhat.com> On 25.3.2015 17:07, Martin Babinsky wrote: > https://fedorahosted.org/freeipa/ticket/4889 ACK -- Petr^2 Spacek From jcholast at redhat.com Mon Mar 30 13:06:44 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Mon, 30 Mar 2015 15:06:44 +0200 Subject: [Freeipa-devel] [PATCH 0043-0045] Use mod_auth_gssapi instead of mod_auth_kerb. In-Reply-To: <55192FED.5030202@redhat.com> References: <55156373.2020809@redhat.com> <551565BD.7080909@redhat.com> <5515688A.8020207@redhat.com> <5515E347.7070507@redhat.com> <5518DBDE.2050000@redhat.com> <551923A8.3070904@redhat.com> <55192FED.5030202@redhat.com> Message-ID: <55194A64.3080008@redhat.com> Dne 30.3.2015 v 13:13 Jan Cholasta napsal(a): > Dne 30.3.2015 v 12:21 David Kupka napsal(a): >> On 03/30/2015 07:15 AM, Jan Cholasta wrote: >>> Dne 28.3.2015 v 00:09 Petr Vobornik napsal(a): >>>> On 27.3.2015 15:26, David Kupka wrote: >>>>> On 03/27/2015 03:14 PM, Rob Crittenden wrote: >>>>>> David Kupka wrote: >>>>>>> https://fedorahosted.org/freeipa/ticket/4190 >>>>>>> >>>>>>> To test this on F22 my patch 42 is needed. >>>>>>> >>>>>>> >>>>>> >>>>>> NACK. >>>>>> >>>>>> You need to bump the VERSION in ipa.conf for this file to be >>>>>> replaced on >>>>>> upgrades. >>>>> >>>>> Thanks for the catch, Rob. I've forget about this. >>>>>> >>>>>> This also provides an opportunity to drop the cgi-bin configuration. >>>>>> This is a legacy from IPA v1.0 where people had TONS, loads and >>>>>> heaps of >>>>>> problems getting Kerberos working so we provided a CGI to spit out >>>>>> the >>>>>> environment to help with troubleshooting. >>>>> >>>>> If we can safely remove it, we should do it. I did a quick test and it >>>>> looks like we everything works without it. >>> >>> Put it in a separate patch please, it is not related to mod_auth_gssapi. >> >> Ok, patches attached. >> >>> >>> Is there anything else we can remove? >> >> I don't know about anything else. If we find something to drop we can do >> it then. > > Thanks, ACK. > Pushed to master: b9657975b790252c952499c1cd40dc6a666d452f -- Jan Cholasta From redhatrises at gmail.com Mon Mar 30 13:25:17 2015 From: redhatrises at gmail.com (Gabe Alford) Date: Mon, 30 Mar 2015 07:25:17 -0600 Subject: [Freeipa-devel] [PATCH 0045] Add message for skipping NTP configuration during client install Message-ID: Hello, With the merging of ticket 4842 , I believe that half of ticket 3092 has been done. This patch just adds a message that says that NTP configuration was skipped which I believe should finish 3092 . Thanks, Gabe -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-rga-0045-Add-message-for-skipping-NTP-configuration-during-cl.patch Type: text/x-patch Size: 1148 bytes Desc: not available URL: From simo at redhat.com Mon Mar 30 15:52:07 2015 From: simo at redhat.com (Simo Sorce) Date: Mon, 30 Mar 2015 11:52:07 -0400 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? 
Message-ID: <1427730727.8302.151.camel@willson.usersys.redhat.com> Since we now merged in a change from mod_auth_kerb to mod_auth_gssapi I was wondering if we want to press further and emable by default the use of native mod_auth_gssapi sessions ? The old mod_auth_kerb didn't have this feature so, in order to have decent performace we introduced split paths where some are always incurring the full negotiate penalty and other are and instead rely on a session cookie. mod_auth_gssapi can be configured to use a session cookie directly which avoids the negotiate auth performance hit. Integration would require that the FreeIPA code learns how to delete the cookie when someone hits a logout button, but it would be otherwise transparent. It would be especially useful for 3rd party clients that want to use the json/xmlrpc enpoints, as all they have to do is just support sending back cookies and they do not have to learn how to contact multiple endopints to get credentials and then switch to the session only based ones. Thoughts ? Simo. -- Simo Sorce * Red Hat, Inc * New York From ayoung at redhat.com Mon Mar 30 20:09:19 2015 From: ayoung at redhat.com (Adam Young) Date: Mon, 30 Mar 2015 16:09:19 -0400 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? In-Reply-To: <1427730727.8302.151.camel@willson.usersys.redhat.com> References: <1427730727.8302.151.camel@willson.usersys.redhat.com> Message-ID: <5519AD6F.8080105@redhat.com> On 03/30/2015 11:52 AM, Simo Sorce wrote: > Since we now merged in a change from mod_auth_kerb to mod_auth_gssapi I > was wondering if we want to press further and emable by default the use > of native mod_auth_gssapi sessions ? > > The old mod_auth_kerb didn't have this feature so, in order to have > decent performace we introduced split paths where some are always > incurring the full negotiate penalty and other are and instead rely on a > session cookie. > > mod_auth_gssapi can be configured to use a session cookie directly which > avoids the negotiate auth performance hit. Integration would require > that the FreeIPA code learns how to delete the cookie when someone hits > a logout button, but it would be otherwise transparent. > > It would be especially useful for 3rd party clients that want to use the > json/xmlrpc enpoints, as all they have to do is just support sending > back cookies and they do not have to learn how to contact multiple > endopints to get credentials and then switch to the session only based > ones. > > Thoughts ? > > Simo. > I always wanted this. It would be awesome, very valuable. REcall that when we looked into it we were on Apache 1.3, and seesion support, mod_seesion, was not avaialble. Fairly certain the landscape has changed since then. From tbabej at redhat.com Tue Mar 31 05:23:41 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 31 Mar 2015 07:23:41 +0200 Subject: [Freeipa-devel] OOO 2015-03-31-2015-04-01 Message-ID: <551A2F5D.4000805@redhat.com> Hours already accumulated this month. Tomas From jcholast at redhat.com Tue Mar 31 06:04:23 2015 From: jcholast at redhat.com (Jan Cholasta) Date: Tue, 31 Mar 2015 08:04:23 +0200 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? 
In-Reply-To: <5519AD6F.8080105@redhat.com> References: <1427730727.8302.151.camel@willson.usersys.redhat.com> <5519AD6F.8080105@redhat.com> Message-ID: <551A38E7.4080308@redhat.com> Dne 30.3.2015 v 22:09 Adam Young napsal(a): > On 03/30/2015 11:52 AM, Simo Sorce wrote: >> Since we now merged in a change from mod_auth_kerb to mod_auth_gssapi I >> was wondering if we want to press further and emable by default the use >> of native mod_auth_gssapi sessions ? >> >> The old mod_auth_kerb didn't have this feature so, in order to have >> decent performace we introduced split paths where some are always >> incurring the full negotiate penalty and other are and instead rely on a >> session cookie. >> >> mod_auth_gssapi can be configured to use a session cookie directly which >> avoids the negotiate auth performance hit. Integration would require >> that the FreeIPA code learns how to delete the cookie when someone hits >> a logout button, but it would be otherwise transparent. >> >> It would be especially useful for 3rd party clients that want to use the >> json/xmlrpc enpoints, as all they have to do is just support sending >> back cookies and they do not have to learn how to contact multiple >> endopints to get credentials and then switch to the session only based >> ones. >> >> Thoughts ? >> >> Simo. >> > I always wanted this. It would be awesome, very valuable. Yes please. > > REcall that when we looked into it we were on Apache 1.3, and seesion > support, mod_seesion, was not avaialble. Fairly certain the landscape > has changed since then. > -- Jan Cholasta From mkosek at redhat.com Tue Mar 31 06:11:43 2015 From: mkosek at redhat.com (Martin Kosek) Date: Tue, 31 Mar 2015 08:11:43 +0200 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? In-Reply-To: <551A38E7.4080308@redhat.com> References: <1427730727.8302.151.camel@willson.usersys.redhat.com> <5519AD6F.8080105@redhat.com> <551A38E7.4080308@redhat.com> Message-ID: <551A3A9F.6000000@redhat.com> On 03/31/2015 08:04 AM, Jan Cholasta wrote: > Dne 30.3.2015 v 22:09 Adam Young napsal(a): >> On 03/30/2015 11:52 AM, Simo Sorce wrote: >>> Since we now merged in a change from mod_auth_kerb to mod_auth_gssapi I >>> was wondering if we want to press further and emable by default the use >>> of native mod_auth_gssapi sessions ? >>> >>> The old mod_auth_kerb didn't have this feature so, in order to have >>> decent performace we introduced split paths where some are always >>> incurring the full negotiate penalty and other are and instead rely on a >>> session cookie. >>> >>> mod_auth_gssapi can be configured to use a session cookie directly which >>> avoids the negotiate auth performance hit. Integration would require >>> that the FreeIPA code learns how to delete the cookie when someone hits >>> a logout button, but it would be otherwise transparent. >>> >>> It would be especially useful for 3rd party clients that want to use the >>> json/xmlrpc enpoints, as all they have to do is just support sending >>> back cookies and they do not have to learn how to contact multiple >>> endopints to get credentials and then switch to the session only based >>> ones. >>> >>> Thoughts ? >>> >>> Simo. >>> >> I always wanted this. It would be awesome, very valuable. > > Yes please. We should have a ticket with all the details then... > >> >> REcall that when we looked into it we were on Apache 1.3, and seesion >> support, mod_seesion, was not avaialble. Fairly certain the landscape >> has changed since then. 
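For what such a ticket would cover on the FreeIPA side, the visible piece is mostly the logout path Simo mentions, i.e. expiring the session cookie when the user logs out. A minimal sketch of that, assuming a WSGI-style handler and an illustrative cookie name (neither is the actual mod_auth_gssapi/mod_session configuration):

# Sketch only: expire the browser-side session cookie on logout.
# 'ipa_session' and the '/ipa' path are assumptions for illustration;
# the real cookie name comes from the mod_session/mod_auth_gssapi setup.
SESSION_COOKIE_NAME = 'ipa_session'
EXPIRED = 'Thu, 01 Jan 1970 00:00:00 GMT'

def logout_response(start_response):
    cookie = '%s=; Path=/ipa; Secure; HttpOnly; Expires=%s' % (
        SESSION_COOKIE_NAME, EXPIRED)
    headers = [('Content-Type', 'application/json'),
               ('Set-Cookie', cookie)]
    start_response('200 OK', headers)
    return [b'{"result": null, "error": null, "id": 0}']

Everything else (issuing and validating the cookie) would stay inside mod_auth_gssapi itself, which is why the change should be otherwise transparent.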
>> > From tbabej at redhat.com Tue Mar 31 06:39:57 2015 From: tbabej at redhat.com (Tomas Babej) Date: Tue, 31 Mar 2015 08:39:57 +0200 Subject: [Freeipa-devel] OOO 2015-03-31-2015-04-01 In-Reply-To: <551A2F5D.4000805@redhat.com> References: <551A2F5D.4000805@redhat.com> Message-ID: <551A413D.50608@redhat.com> Sorry about the noise. On 03/31/2015 07:23 AM, Tomas Babej wrote: > Hours already accumulated this month. > > Tomas > From mbabinsk at redhat.com Tue Mar 31 08:42:24 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Tue, 31 Mar 2015 10:42:24 +0200 Subject: [Freeipa-devel] [PATCH 0025] proper client host setup/teardown in forced client reenrollment integration test suite Message-ID: <551A5DF0.6000200@redhat.com> During the investigation of https://fedorahosted.org/freeipa/ticket/4614 I discovered a bug (?) in forced client reenrollment integration test. During test scenario, master and replica are setup correctly at the beginning of the test, but the client is never setup resulting in a couple of tracebacks. After some investigation I realized that the setUp/tearDown methods are actually never called because they are supposed to be inherited from unittest.TestCase. However, IntegrationTest no longer inherits from this class, hence the bug. I have tried to fix this by adding a fixture which runs client fixup/teardown and doing some other small modifications. Tests now work as expected, but I need a review from QE guys or someone well-versed in pytest framework. TL;DR: I think I have fixed a bug in integration test but I need someone to review the fix because I may not know what I'm doing. -- Martin^3 Babinsky -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-mbabinsk-0025-1-proper-client-host-setup-teardown-in-forced-client-r.patch Type: text/x-patch Size: 5516 bytes Desc: not available URL: From pviktori at redhat.com Tue Mar 31 10:06:56 2015 From: pviktori at redhat.com (Petr Viktorin) Date: Tue, 31 Mar 2015 12:06:56 +0200 Subject: [Freeipa-devel] [PATCH 0025] proper client host setup/teardown in forced client reenrollment integration test suite In-Reply-To: <551A5DF0.6000200@redhat.com> References: <551A5DF0.6000200@redhat.com> Message-ID: <551A71C0.1050606@redhat.com> On 03/31/2015 10:42 AM, Martin Babinsky wrote: > During the investigation of https://fedorahosted.org/freeipa/ticket/4614 > I discovered a bug (?) in forced client reenrollment integration test. > > During test scenario, master and replica are setup correctly at the > beginning of the test, but the client is never setup resulting in a > couple of tracebacks. > > After some investigation I realized that the setUp/tearDown methods are > actually never called because they are supposed to be inherited from > unittest.TestCase. However, IntegrationTest no longer inherits from this > class, hence the bug. > > I have tried to fix this by adding a fixture which runs client > fixup/teardown and doing some other small modifications. Tests now work > as expected, but I need a review from QE guys or someone well-versed in > pytest framework. LGTM, from a quick glance. -- Petr Viktorin From pvoborni at redhat.com Tue Mar 31 10:11:26 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 31 Mar 2015 12:11:26 +0200 Subject: [Freeipa-devel] [PATCH] 809 speed up convert_attribute_members Message-ID: <551A72CE.2070403@redhat.com> A workaround to avoid usage of slow LDAPEntry._sync_attr #4946. 
I originally wanted to avoid DN processing as well but we can't do that because of DNs which are encoded - e.g. contains '+' or ','. Therefore patch 811 - faster DN implementation is very useful. Also patch 809 is useful to avoid high load of 389. https://fedorahosted.org/freeipa/ticket/4965 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0809-speed-up-convert_attribute_members.patch Type: text/x-patch Size: 2742 bytes Desc: not available URL: From pvoborni at redhat.com Tue Mar 31 10:11:32 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 31 Mar 2015 12:11:32 +0200 Subject: [Freeipa-devel] [PATCH] 810 speed up indirect member processing Message-ID: <551A72D4.9080002@redhat.com> the old implementation tried to get all entries which are member of group. That means also user. User can't have any members therefore this costly processing was unnecessary. New implementation reduces the search only to entries which can have entries. Also page size was removed to avoid paging by small pages(default size: 100) which is very slow for many members. https://fedorahosted.org/freeipa/ticket/4947 Useful to test with #809 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0810-speed-up-indirect-member-processing.patch Type: text/x-patch Size: 15373 bytes Desc: not available URL: From pvoborni at redhat.com Tue Mar 31 10:11:39 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 31 Mar 2015 12:11:39 +0200 Subject: [Freeipa-devel] [PATCH] 811 performance: faster DN implementation Message-ID: <551A72DB.3020105@redhat.com> The only different thing is a lack of utf-8 encoded str support(as input). I don't know how much important the support is. maybe it could be attached to ticket https://fedorahosted.org/freeipa/ticket/4947 ----- DN code was optimized to be faster if DNs are created from string. This is the major use case, since most DNs come from LDAP. With this patch, DN creation is almost 8-10x faster (with 30K-100K DNs). Second mojor use case - deepcopy in LDAPEntry is about 20x faster - done by custom __deepcopy__ function. The major change is that DN is no longer internally composed of RDNs and AVAs but it rather keeps the data in open ldap format - the same as output of str2dn function. Therefore, for immutable DNs, no other transformations are required on instantiation. The format is: DN: [RDN, RDN,...] RDN: [AVA, AVA,...] AVA: ['utf-8 encoded str - attr', 'utf-8 encode str -value', FLAG] FLAG: int Further indexing of DN object constructs an RDN which is just an encapsulation of the RDN part of open ldap representation. Indexing of RDN constructs AVA in the same fashion. Obtained EditableAVA, EditableRDN from EditableDN shares the respected lists of the open ldap repr. so that the change of value or attr is reflected in parent object. -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... 
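To keep the set of tags from drifting, the agreed marks could also be registered in one place, so that `pytest --markers` documents them and CI jobs can select subsets with e.g. pytest -m "tier0 or (tier1 and acceptance)". A possible sketch for a shared conftest.py follows (the file location and the exact tag list are only an illustration of the idea, not a final proposal):

# ipatests/conftest.py -- illustrative sketch only
TAGS = ('tier0', 'tier1', 'tier2', 'acceptance', 'sssd', 'dogtag')

def pytest_configure(config):
    # Register the agreed marks so they show up in `pytest --markers`
    # and typos in decorators are easier to spot during review.
    for tag in TAGS:
        config.addinivalue_line(
            'markers', '%s: FreeIPA test categorization tag' % tag)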
Name: freeipa-pvoborni-0811-performance-faster-DN-implementation.patch Type: text/x-patch Size: 39766 bytes Desc: not available URL: From mkoci at redhat.com Tue Mar 31 12:47:52 2015 From: mkoci at redhat.com (Martin Koci) Date: Tue, 31 Mar 2015 14:47:52 +0200 Subject: [Freeipa-devel] [QE] Test categorization into tiers and acceptance testing - tagging proposals Message-ID: <1427806072.2800.36.camel@redhat.com> Hi all, I'd like to open discussion on test categorization into tiers and acceptance testing, respectively test tagging which should help us to accomplish following goals: 1) Acceptance test - other FreeIPA partner projects (389/DS/PKI) should be able to have an "Acceptance test" that would run basic *stable* test suite that would check if anything significant broke. It should be fast enough so that the projects can run it in a Jenkins CI after commits. If we also have tags @dogtag or @sssd, the projects could simply run just the tests affecting the projects -> faster execution. 2) FreeIPA test run optimization. Currently, all FreeIPA tests are running when new commit is pushed. This takes lot of resources. It would be nice to at least be able to NOT run Tier 2 tests if Tier1 tests are failing. Or it would be nice to not run some very expensive tests after each commit, but maybe once per day/week. *TIERS* So after discussions with couple of developers and QE's we have created and summarized following proposal for sorting current IPA tests into tiers. Currently used tests reside in freeipa/ipatests. From these the only unit tests (tier 0 candidate) are test_{ipalib,ipapython} with the exception of test_ipalib/test_rpc.py which requires kerberos. The rest of the tests either require ipa/lite-server or are an integration test. The rest of the tests (majority XML RPC, UI tests, ...) then fall under the definition of Tier 1 test, as they require at least running IPA instance and admin TGT. As for the tagging of the test cases, pytest's capabilities can be used [2]. Though pytest.mark currently does not work with declarative tests (it marks all of them), when the test is an ordinary function/method the marking works as expected. The declarative tests could be rewritten in the future to more pytest specific form, e.g. test_xmlrpc/test_host_plugin.py Official guideline for this categorization will be created on the upstream wiki once we agree on that. *ACCEPTANCE TESTING* As for the acceptance testing Similar to `Test categorization into tiers` (1) proposal, there is a need to define a subset of freeipa tests that could be run by other projects or users to find out whether or not their changes (e.g. new build, feature) works with IPA. This run could be composed from tier {0,1} execution followed by a subset of integration tests test cases. The proposed mechanism for this is the same as in [4], using pytest.mark to select the classes/tests to run in this context. What I'd like to ask you here is to share any ideas on the form of the acceptance run as well as to help me identify the areas (and tests) that are considered important and should be a part of this test set. *TAGGING* Tagging the actual tests classes with pytest decorator (http://pytest.org/latest/mark.html). would be better than let developers manually maintain lists of tests for different projects. The benefit for pytest mark kept in the code is that whatever we do with the test class (rename, move, merge), the tag goes with it, not extra list needs to be maintained. 
As for tagging itself, the original idea which Martin Kosek was proposing was to use just the "acceptance" tag for marking the base T2 tests that would be part of FreeIPA acceptance tests. However, it seems there is a value in tagging the tests that exercise also certain sub-component of FreeIPA - SSSD, Dogtag. As long as we do not get too wild with the tags, it should be OK. So we could agreed on followings tags: - tier0, tier1, tier2 - acceptance - sssd - dogtag This would lead to e.g. @pytest.mark.dogtag @pytest.mark.acceptance @pytest.mark.tier2 class TestExternalCA(IntegrationTest): ... or simpler @dogtag @acceptance @tier2 class TestExternalCA(IntegrationTest): Hope it's not too long and that it makes sense. Can I get your thoughts on this, please? Thank you. Regards, /koca *[1] - https://fedorahosted.org/freeipa/ticket/4922 *[2] - http://pytest.org/latest/mark.html From mbabinsk at redhat.com Tue Mar 31 14:01:03 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Tue, 31 Mar 2015 16:01:03 +0200 Subject: [Freeipa-devel] [PATCH 0025] proper client host setup/teardown in forced client reenrollment integration test suite In-Reply-To: <551A71C0.1050606@redhat.com> References: <551A5DF0.6000200@redhat.com> <551A71C0.1050606@redhat.com> Message-ID: <551AA89F.4020307@redhat.com> On 03/31/2015 12:06 PM, Petr Viktorin wrote: > On 03/31/2015 10:42 AM, Martin Babinsky wrote: >> During the investigation of https://fedorahosted.org/freeipa/ticket/4614 >> I discovered a bug (?) in forced client reenrollment integration test. >> >> During test scenario, master and replica are setup correctly at the >> beginning of the test, but the client is never setup resulting in a >> couple of tracebacks. >> >> After some investigation I realized that the setUp/tearDown methods are >> actually never called because they are supposed to be inherited from >> unittest.TestCase. However, IntegrationTest no longer inherits from this >> class, hence the bug. >> >> I have tried to fix this by adding a fixture which runs client >> fixup/teardown and doing some other small modifications. Tests now work >> as expected, but I need a review from QE guys or someone well-versed in >> pytest framework. > > LGTM, from a quick glance. > > Thank Petr, anyone else has some opinion on this? -- Martin^3 Babinsky From pvoborni at redhat.com Tue Mar 31 14:16:24 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 31 Mar 2015 16:16:24 +0200 Subject: [Freeipa-devel] [PATCH] webui: use no_members option in entity select search Message-ID: <551AAC38.3070507@redhat.com> Obtaining member information for entity selects is not needed and it causes unwanted performance hit, especially with larger groups. This patch removes it. https://fedorahosted.org/freeipa/ticket/4948 -- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0812-webui-use-no_members-option-in-entity-select-search.patch Type: text/x-patch Size: 2181 bytes Desc: not available URL: From pvoborni at redhat.com Tue Mar 31 14:19:04 2015 From: pvoborni at redhat.com (Petr Vobornik) Date: Tue, 31 Mar 2015 16:19:04 +0200 Subject: [Freeipa-devel] [PATCH] 786 webui: unable to select single value in CB by enter key Message-ID: <551AACD8.2080106@redhat.com> This little fellow was hiding in a cupboard (patchset 784-786 was abandoned). Fix: If editable combobox has one value, the value is selected and changed by hand, it can't be re-selected by enter key. 
-- Petr Vobornik -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-pvoborni-0786-webui-unable-to-select-single-value-in-CB-by-enter-k.patch Type: text/x-patch Size: 1029 bytes Desc: not available URL: From npmccallum at redhat.com Tue Mar 31 14:23:55 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Tue, 31 Mar 2015 10:23:55 -0400 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? In-Reply-To: <1427730727.8302.151.camel@willson.usersys.redhat.com> References: <1427730727.8302.151.camel@willson.usersys.redhat.com> Message-ID: <1427811835.7498.0.camel@redhat.com> On Mon, 2015-03-30 at 11:52 -0400, Simo Sorce wrote: > Since we now merged in a change from mod_auth_kerb to > mod_auth_gssapi I > was wondering if we want to press further and emable by default the > use > of native mod_auth_gssapi sessions ? > > The old mod_auth_kerb didn't have this feature so, in order to have > decent performace we introduced split paths where some are always > incurring the full negotiate penalty and other are and instead rely > on a > session cookie. > > mod_auth_gssapi can be configured to use a session cookie directly > which > avoids the negotiate auth performance hit. Integration would require > that the FreeIPA code learns how to delete the cookie when someone > hits > a logout button, but it would be otherwise transparent. > > It would be especially useful for 3rd party clients that want to use > the > json/xmlrpc enpoints, as all they have to do is just support sending > back cookies and they do not have to learn how to contact multiple > endopints to get credentials and then switch to the session only > based > ones. > > Thoughts ? +1. It is about time. :) From npmccallum at redhat.com Tue Mar 31 14:25:00 2015 From: npmccallum at redhat.com (Nathaniel McCallum) Date: Tue, 31 Mar 2015 10:25:00 -0400 Subject: [Freeipa-devel] [PATCH 0082] Update python-yubico dependency version Message-ID: <1427811900.7498.1.camel@redhat.com> This change enables support for all current YubiKey hardware. -------------- next part -------------- A non-text attachment was scrubbed... Name: freeipa-npmccallum-0082-Update-python-yubico-dependency-version.patch Type: text/x-patch Size: 1158 bytes Desc: not available URL: From mbabinsk at redhat.com Tue Mar 31 16:07:29 2015 From: mbabinsk at redhat.com (Martin Babinsky) Date: Tue, 31 Mar 2015 18:07:29 +0200 Subject: [Freeipa-devel] [PATCH] 786 webui: unable to select single value in CB by enter key In-Reply-To: <551AACD8.2080106@redhat.com> References: <551AACD8.2080106@redhat.com> Message-ID: <551AC641.3050407@redhat.com> On 03/31/2015 04:19 PM, Petr Vobornik wrote: > This little fellow was hiding in a cupboard (patchset 784-786 was > abandoned). > > Fix: If editable combobox has one value, the value is selected and > changed by hand, it can't be re-selected by enter key. > > Works as expected, ACK. -- Martin^3 Babinsky From amessina at messinet.com Tue Mar 31 16:31:21 2015 From: amessina at messinet.com (Anthony Messina) Date: Tue, 31 Mar 2015 11:31:21 -0500 Subject: [Freeipa-devel] Use sessions for mod_auth_gssapi ? 
In-Reply-To: <1427730727.8302.151.camel@willson.usersys.redhat.com> References: <1427730727.8302.151.camel@willson.usersys.redhat.com> Message-ID: <4583220.05BGupuHAq@linux-ws1.messinet.com> On Monday, March 30, 2015 11:52:07 AM Simo Sorce wrote: > Since we now merged in a change from mod_auth_kerb to mod_auth_gssapi I > was wondering if we want to press further and emable by default the use > of native mod_auth_gssapi sessions ? > > The old mod_auth_kerb didn't have this feature so, in order to have > decent performace we introduced split paths where some are always > incurring the full negotiate penalty and other are and instead rely on a > session cookie. > > mod_auth_gssapi can be configured to use a session cookie directly which > avoids the negotiate auth performance hit. Integration would require > that the FreeIPA code learns how to delete the cookie when someone hits > a logout button, but it would be otherwise transparent. > > It would be especially useful for 3rd party clients that want to use the > json/xmlrpc enpoints, as all they have to do is just support sending > back cookies and they do not have to learn how to contact multiple > endopints to get credentials and then switch to the session only based > ones. > > Thoughts ? > > Simo. This is a good thing, Simo. Yes. -A -- Anthony - https://messinet.com/ - https://messinet.com/~amessina/gallery 8F89 5E72 8DF0 BCF0 10BE 9967 92DC 35DC B001 4A4E -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part. URL: