From ayoung at redhat.com  Mon Oct  3 17:21:45 2011
From: ayoung at redhat.com (Adam Young)
Date: Mon, 03 Oct 2011 13:21:45 -0400
Subject: [Pki-devel] Dogtag:future
Message-ID: <4E89EF29.7020605@redhat.com>

I'm looking into the future design of Dogtag, and I'd like to solicit input from the community on a few broad questions.

1. What aspects of the current design would you like to see changed, and why?

2. What things does Dogtag do well that we should not touch?

3. What mechanisms does Dogtag use that should be extracted into stand-alone libraries and projects?

4. How do you deploy Dogtag? Do you put each system in a separate virtual machine, or run everything on one big server? What pieces do you keep inside and outside the firewall? Where is your pain in system administration?

5. How customized are your Dogtag installs? Do you write custom plugins? Do you customize the themes? Do you have lots of custom profiles?

From ayoung at redhat.com  Tue Oct  4 00:58:26 2011
From: ayoung at redhat.com (Adam Young)
Date: Mon, 03 Oct 2011 20:58:26 -0400
Subject: [Pki-devel] Tomcat, NSS and OpenJDK
Message-ID: <4E8A5A32.3080709@redhat.com>

Tomcat has a class called "Realm", which is basically a way of managing the set of authentication mechanisms. PKI seems to use an older approach that bypasses the Realm config in Tomcat. I started looking at what it would take to close the distance between the two. In doing so, I found something interesting in the OpenJDK code base.

In /usr/lib/jvm/java-1.6.0/jre/lib/security/java.security, there is a section that looks like this:

#
# List of providers and their preference orders (see above):
#
security.provider.1=sun.security.provider.Sun
security.provider.2=sun.security.rsa.SunRsaSign
...
# the NSS security provider was not enabled for this build; it can be enabled
# if NSS (libnss3) is available on the machine. The nss.cfg file may need
# editing to reflect the location of the NSS installation.
#security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg

So it seems that Sun had, at least in the past, supported NSS as a security provider. For members of the Java team not familiar with NSS (I wasn't), it is Network Security Services, the basis for, amongst other things, how Mozilla stores passwords and certificates. PKI makes pretty heavy use of NSS, via the open-source Java bindings in JSS.

This page has more info:

http://download.oracle.com/javase/1.5.0/docs/guide/security/p11guide.html#Intro

It seems the Oracle JDK has had support in the past for NSS as a JAAS module. To close the acronym loop with Tomcat: Tomcat has a JAAS Realm class. What this says to me is that, at one point, Java developers could have configured Tomcat to use NSS as the authentication mechanism for an application.

This class ships in the file:

/usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/ext/sunpkcs11.jar

and the native library is in:

/usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/amd64/libj2pkcs11.so

So it looks like we might have an additional Java implementation of NSS available, one that could potentially provide NSS support for Tomcat and JBoss via JAAS. It looks like all it requires is a change to the configuration file that we ship. I'm not quite sure how we would automate this, short of pulling in libnss3 as part of OpenJDK support. I'm guessing that if we enable it and the NSS library is missing, it errors out in some ugly manner, but I have not tested it.

Is anyone familiar with this code? Would it be acceptable to activate this security module by default and to pull in libnss with Java? Is there some automated way to enable this if NSS is installed?
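[Editor's note: one way to probe the "errors out in some ugly manner" question is to load the SunPKCS11 provider reflectively and treat any failure as "NSS not available". The sketch below is hypothetical: the class and helper names (NssProviderProbe, tryEnableNss) are invented for illustration; only the sun.security.pkcs11.SunPKCS11 class name and the nss.cfg path come from the thread. On a JDK built without NSS support, or with a missing or bad nss.cfg, the call simply returns false instead of crashing.]

```java
import java.security.Provider;
import java.security.Security;

public class NssProviderProbe {

    // Attempt to register the SunPKCS11 provider backed by the given
    // nss.cfg. Returns true on success; returns false if the provider
    // class is absent, the String constructor does not exist (newer
    // JDKs), the native NSS library is missing, or the config is bad.
    static boolean tryEnableNss(String cfgPath) {
        try {
            Class<?> cls = Class.forName("sun.security.pkcs11.SunPKCS11");
            Provider p = (Provider) cls.getConstructor(String.class)
                                       .newInstance(cfgPath);
            Security.addProvider(p);
            return true;
        } catch (ReflectiveOperationException | RuntimeException e) {
            // ClassNotFoundException, NoSuchMethodException, or an
            // InvocationTargetException wrapping a ProviderException:
            // degrade quietly rather than erroring out in an ugly manner.
            return false;
        }
    }

    public static void main(String[] args) {
        // Default path taken from the java.security comment in the thread;
        // adjust for the local JDK install.
        String cfg = args.length > 0 ? args[0]
            : "/usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/security/nss.cfg";
        System.out.println("NSS provider enabled: " + tryEnableNss(cfg));
    }
}
```

Run with the path to an nss.cfg as the argument; when the provider loads, it is appended after the providers already listed in java.security, so existing crypto behavior is unchanged.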
From ahughes at redhat.com Tue Oct 4 20:47:08 2011 From: ahughes at redhat.com (Dr Andrew John Hughes) Date: Tue, 4 Oct 2011 21:47:08 +0100 Subject: [Pki-devel] [fedora-java] Tomcat, NSS and OpenJDK In-Reply-To: <4E8A5A32.3080709@redhat.com> References: <4E8A5A32.3080709@redhat.com> Message-ID: <20111004204708.GJ29874@rivendell.redhat.com> On 20:58 Mon 03 Oct , Adam Young wrote: > Tomcat has a class called "Realm" which is basically a way of managing > the set of authentication mechanisms. PKI seems To use an older > approach which bypasses the Realm config in Tomcat. I started looking > at what it would take to close the distance between the two. In doing > so, I found something interesting in the openjdk code base: > > In /usr/lib/jvm/java-1.6.0/jre/lib/security/java.security, there is a > section that looks like this: > # > # List of providers and their preference orders (see above): > # > security.provider.1=sun.security.provider.Sun > security.provider.2=sun.security.rsa.SunRsaSign > ... > # the NSS security provider was not enabled for this build; it can be > enabled > # if NSS (libnss3) is available on the machine. The nss.cfg file may need > # editing to reflect the location of the NSS installation. > #security.provider.9=sun.security.pkcs11.SunPKCS11 > ${java.home}/lib/security/nss.cfg > This is added by IcedTea, which can enable NSS support during the build. However, from the commenting, it seems the option has not been turned on in the Fedora packages. If you check /usr/lib/jvm/java-1.6.0-openjdk/lib/security/nss.cfg points at your NSS install, then uncommenting that line should allow Java to use NSS for cryptography. > > So it seems that Sun had, at least in the past, supported NSS as a > Sercurity provider. For the member of the Java team not familiar with > NSS (I wasn't) It is the Network Security Services and is the basis for, > amongst other things, how Mozilla stores passwords and certificates. 
> PKI makes pretty heavy use of NSS, via the Opensource Java bindings in JSS. > > This page here has more info: > > http://download.oracle.com/javase/1.5.0/docs/guide/security/p11guide.html#Intro > > It seems like the Oracle JDK has had support in the past for NSS as a > JAAS module. To close the acronym loop with Tomcat, Tomcat has a JAAS > Realm class. What this says to me is that, at one point, Java > developers could have configured Tomcat to use NSS as the authentication > mechanism for an application. AIUI, the JDK implementation just uses NSS to provide cryptography algorithms, not authentication. > > This class ships in the file: > > /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/ext/sunpkcs11.jar > > And The native library is in > > /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/amd64/libj2pkcs11.so > > > So it looks like we might have an additional Java implementation of NSS > available, one that can potentially provide NSS support for Tomcat and > JBoss via JAAS. It looks like all it requires is a change to the > configuration file that we ship. I'm not quite sure how we would go > about doing this in an automated fashion, short of pulling in libnss3 as > part of Open JDK support. I'm guessing that if we enable it and the nss > library is missing it errors our in some ugly manner, but I have not > tested it. The Fedora java-1.6.0-openjdk package would need to be altered to pass --enable-nss to configure and to depend on libnss3 (as you say). I can't remember exactly how it errors out off-hand and I've never had a system without NSS on to see that on! If you have Firefox, you have NSS. > > Is anyone familiar with this code? I am; I added the support in IcedTea and fix some bugs in the NSS code upstream (which took forever; the security-dev OpenJDK people at Oracle are very slow to respond in my experience). > Would it be acceptable to activate > this security module by default and to pull in libnss with Java? 
Is > there some automated way to enable this if NSS is installed? Debian and Ubuntu do enable it, and the option is there as a USE flag in Gentoo. However, an issue did come up regarding it and Firefox: http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=473 which may be a blocker. As I say, the process of enabling it is just a two line change in the java-1.6.0-openjdk spec file. > -- > java-devel mailing list > java-devel at lists.fedoraproject.org > https://admin.fedoraproject.org/mailman/listinfo/java-devel -- Andrew :) Free Java Software Engineer Red Hat, Inc. (http://www.redhat.com) Support Free Java! Contribute to GNU Classpath and IcedTea http://www.gnu.org/software/classpath http://icedtea.classpath.org PGP Key: F5862A37 (https://keys.indymedia.org/) Fingerprint = EA30 D855 D50F 90CD F54D 0698 0713 C3ED F586 2A37 From ayoung at redhat.com Tue Oct 4 21:49:22 2011 From: ayoung at redhat.com (Adam Young) Date: Tue, 04 Oct 2011 17:49:22 -0400 Subject: [Pki-devel] [fedora-java] Tomcat, NSS and OpenJDK In-Reply-To: <20111004204708.GJ29874@rivendell.redhat.com> References: <4E8A5A32.3080709@redhat.com> <20111004204708.GJ29874@rivendell.redhat.com> Message-ID: <4E8B7F62.6020003@redhat.com> On 10/04/2011 04:47 PM, Dr Andrew John Hughes wrote: > On 20:58 Mon 03 Oct , Adam Young wrote: >> Tomcat has a class called "Realm" which is basically a way of managing >> the set of authentication mechanisms. PKI seems To use an older >> approach which bypasses the Realm config in Tomcat. I started looking >> at what it would take to close the distance between the two. In doing >> so, I found something interesting in the openjdk code base: >> >> In /usr/lib/jvm/java-1.6.0/jre/lib/security/java.security, there is a >> section that looks like this: >> # >> # List of providers and their preference orders (see above): >> # >> security.provider.1=sun.security.provider.Sun >> security.provider.2=sun.security.rsa.SunRsaSign >> ... 
>> # the NSS security provider was not enabled for this build; it can be
>> enabled
>> # if NSS (libnss3) is available on the machine. The nss.cfg file may need
>> # editing to reflect the location of the NSS installation.
>> #security.provider.9=sun.security.pkcs11.SunPKCS11
>> ${java.home}/lib/security/nss.cfg
>>
> This is added by IcedTea, which can enable NSS support during the build.
> However, from the commenting, it seems the option has not been turned on in the
> Fedora packages.
>
> If you check /usr/lib/jvm/java-1.6.0-openjdk/lib/security/nss.cfg points at
> your NSS install, then uncommenting that line should allow Java to use NSS
> for cryptography.

I did that on my system and got it to work. I also went through the steps to do it programmatically from Java code:

Class providerClass = Class.forName("sun.security.pkcs11.SunPKCS11");
System.out.print("found class");
Provider provider = (Provider) providerClass.getConstructor(String.class)
    .newInstance("/usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/security/nss.cfg");
Security.addProvider(provider);

I'm sure there is a better approach; this was just the quickest to hack.

>
>> So it seems that Sun had, at least in the past, supported NSS as a
>> Sercurity provider. For the member of the Java team not familiar with
>> NSS (I wasn't) It is the Network Security Services and is the basis for,
>> amongst other things, how Mozilla stores passwords and certificates.
>> PKI makes pretty heavy use of NSS, via the Opensource Java bindings in JSS.
>>
>> This page here has more info:
>>
>> http://download.oracle.com/javase/1.5.0/docs/guide/security/p11guide.html#Intro
>>
>> It seems like the Oracle JDK has had support in the past for NSS as a
>> JAAS module. To close the acronym loop with Tomcat, Tomcat has a JAAS
>> Realm class. What this says to me is that, at one point, Java
>> developers could have configured Tomcat to use NSS as the authentication
>> mechanism for an application.
> AIUI, the JDK implementation just uses NSS to provide cryptography algorithms, > not authentication. Hmmm...seems to me that if you can use it to look at and validate a certificate, you should be able to get the principal from it. But we are probably going to use LDAP as the JAAS piece anyway, as that is where all of the Users and ACIs live, so that might be OK. I'm not certain that it makes sense to get roles out of an NSS database for any but the smallest applications. > >> This class ships in the file: >> >> /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/ext/sunpkcs11.jar >> >> And The native library is in >> >> /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/amd64/libj2pkcs11.so >> >> >> So it looks like we might have an additional Java implementation of NSS >> available, one that can potentially provide NSS support for Tomcat and >> JBoss via JAAS. It looks like all it requires is a change to the >> configuration file that we ship. I'm not quite sure how we would go >> about doing this in an automated fashion, short of pulling in libnss3 as >> part of Open JDK support. I'm guessing that if we enable it and the nss >> library is missing it errors our in some ugly manner, but I have not >> tested it. > The Fedora java-1.6.0-openjdk package would need to be altered to pass > --enable-nss to configure and to depend on libnss3 (as you say). > > I can't remember exactly how it errors out off-hand and I've never had a system > without NSS on to see that on! If you have Firefox, you have NSS. > >> Is anyone familiar with this code? > I am; I added the support in IcedTea and fix some bugs in the NSS > code upstream (which took forever; the security-dev OpenJDK people > at Oracle are very slow to respond in my experience). Thank you. Very nicely done! > >> Would it be acceptable to activate >> this security module by default and to pull in libnss with Java? Is >> there some automated way to enable this if NSS is installed? 
> Debian and Ubuntu do enable it, and the option is there as a USE flag > in Gentoo. However, an issue did come up regarding it and Firefox: > > http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=473 > > which may be a blocker. As I say, the process of enabling it is just a > two line change in the java-1.6.0-openjdk spec file. Yes, we would want to wait until we had a resolution for that issue. I hear that a change is coming for NSS due to enough people hitting this, but I do not know the time frame. > >> -- >> java-devel mailing list >> java-devel at lists.fedoraproject.org >> https://admin.fedoraproject.org/mailman/listinfo/java-devel From jdennis at redhat.com Tue Oct 4 22:20:37 2011 From: jdennis at redhat.com (John Dennis) Date: Tue, 04 Oct 2011 18:20:37 -0400 Subject: [Pki-devel] [fedora-java] Tomcat, NSS and OpenJDK In-Reply-To: <20111004204708.GJ29874@rivendell.redhat.com> References: <4E8A5A32.3080709@redhat.com> <20111004204708.GJ29874@rivendell.redhat.com> Message-ID: <4E8B86B5.4080306@redhat.com> On 10/04/2011 04:47 PM, Dr Andrew John Hughes wrote: > I am; I added the support in IcedTea and fix some bugs in the NSS > code upstream (which took forever; the security-dev OpenJDK people > at Oracle are very slow to respond in my experience). Thank you for doing this work, it's appreciated. -- John Dennis Looking to carve out IT costs? www.redhat.com/carveoutcosts/ From awnuk at redhat.com Tue Oct 4 22:27:17 2011 From: awnuk at redhat.com (Andrew Wnuk) Date: Tue, 04 Oct 2011 15:27:17 -0700 Subject: [Pki-devel] [fedora-java] Tomcat, NSS and OpenJDK In-Reply-To: <4E8B7F62.6020003@redhat.com> References: <4E8A5A32.3080709@redhat.com> <20111004204708.GJ29874@rivendell.redhat.com> <4E8B7F62.6020003@redhat.com> Message-ID: <4E8B8845.5050502@redhat.com> Adding Bob and Elio. 
On 10/04/2011 02:49 PM, Adam Young wrote: > On 10/04/2011 04:47 PM, Dr Andrew John Hughes wrote: >> On 20:58 Mon 03 Oct , Adam Young wrote: >>> Tomcat has a class called "Realm" which is basically a way of managing >>> the set of authentication mechanisms. PKI seems To use an older >>> approach which bypasses the Realm config in Tomcat. I started looking >>> at what it would take to close the distance between the two. In doing >>> so, I found something interesting in the openjdk code base: >>> >>> In /usr/lib/jvm/java-1.6.0/jre/lib/security/java.security, there is a >>> section that looks like this: >>> # >>> # List of providers and their preference orders (see above): >>> # >>> security.provider.1=sun.security.provider.Sun >>> security.provider.2=sun.security.rsa.SunRsaSign >>> ... >>> # the NSS security provider was not enabled for this build; it can be >>> enabled >>> # if NSS (libnss3) is available on the machine. The nss.cfg file may >>> need >>> # editing to reflect the location of the NSS installation. >>> #security.provider.9=sun.security.pkcs11.SunPKCS11 >>> ${java.home}/lib/security/nss.cfg >>> >> This is added by IcedTea, which can enable NSS support during the build. >> However, from the commenting, it seems the option has not been turned >> on in the >> Fedora packages. >> >> If you check /usr/lib/jvm/java-1.6.0-openjdk/lib/security/nss.cfg >> points at >> your NSS install, then uncommenting that line should allow Java to >> use NSS >> for cryptography. > > > I did that on my system and got it to work. I also went through the > steps to do it programmatically from Java code: > > Class providerClass = Class.forName("sun.security.pkcs11.SunPKCS11"); > System.out.print("found class"); > Provider provider = (Provider) > providerClass.getConstructor(String.class).newInstance("/usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/security/nss.cfg"); > Security.addProvider(provider); > > > I'm sure there is a better approach, this was just the quickest to hack. 
> >> >>> So it seems that Sun had, at least in the past, supported NSS as a >>> Sercurity provider. For the member of the Java team not familiar with >>> NSS (I wasn't) It is the Network Security Services and is the basis >>> for, >>> amongst other things, how Mozilla stores passwords and certificates. >>> PKI makes pretty heavy use of NSS, via the Opensource Java bindings >>> in JSS. >>> >>> This page here has more info: >>> >>> http://download.oracle.com/javase/1.5.0/docs/guide/security/p11guide.html#Intro >>> >>> >>> It seems like the Oracle JDK has had support in the past for NSS as a >>> JAAS module. To close the acronym loop with Tomcat, Tomcat has a JAAS >>> Realm class. What this says to me is that, at one point, Java >>> developers could have configured Tomcat to use NSS as the >>> authentication >>> mechanism for an application. >> AIUI, the JDK implementation just uses NSS to provide cryptography >> algorithms, >> not authentication. > > Hmmm...seems to me that if you can use it to look at and validate a > certificate, you should be able to get the principal from it. But we > are probably going to use LDAP as the JAAS piece anyway, as that is > where all of the Users and ACIs live, so that might be OK. I'm not > certain that it makes sense to get roles out of an NSS database for > any but the smallest applications. > >> >>> This class ships in the file: >>> >>> /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/ext/sunpkcs11.jar >>> >>> And The native library is in >>> >>> /usr/lib/jvm/java-1.6.0-openjdk.x86_64/jre/lib/amd64/libj2pkcs11.so >>> >>> >>> So it looks like we might have an additional Java implementation of NSS >>> available, one that can potentially provide NSS support for Tomcat and >>> JBoss via JAAS. It looks like all it requires is a change to the >>> configuration file that we ship. I'm not quite sure how we would go >>> about doing this in an automated fashion, short of pulling in >>> libnss3 as >>> part of Open JDK support. 
I'm guessing that if we enable it and the >>> nss >>> library is missing it errors our in some ugly manner, but I have not >>> tested it. >> The Fedora java-1.6.0-openjdk package would need to be altered to pass >> --enable-nss to configure and to depend on libnss3 (as you say). >> >> I can't remember exactly how it errors out off-hand and I've never >> had a system >> without NSS on to see that on! If you have Firefox, you have NSS. >> >>> Is anyone familiar with this code? >> I am; I added the support in IcedTea and fix some bugs in the NSS >> code upstream (which took forever; the security-dev OpenJDK people >> at Oracle are very slow to respond in my experience). > > > Thank you. Very nicely done! >> >>> Would it be acceptable to activate >>> this security module by default and to pull in libnss with Java? Is >>> there some automated way to enable this if NSS is installed? >> Debian and Ubuntu do enable it, and the option is there as a USE flag >> in Gentoo. However, an issue did come up regarding it and Firefox: >> >> http://icedtea.classpath.org/bugzilla/show_bug.cgi?id=473 >> >> which may be a blocker. As I say, the process of enabling it is just a >> two line change in the java-1.6.0-openjdk spec file. > > > Yes, we would want to wait until we had a resolution for that issue. > I hear that a change is coming for NSS due to enough people hitting > this, but I do not know the time frame. 
>
>
>
>>
>>> --
>>> java-devel mailing list
>>> java-devel at lists.fedoraproject.org
>>> https://admin.fedoraproject.org/mailman/listinfo/java-devel
>
> _______________________________________________
> Pki-devel mailing list
> Pki-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/pki-devel

From ayoung at redhat.com  Wed Oct  5 18:46:39 2011
From: ayoung at redhat.com (Adam Young)
Date: Wed, 05 Oct 2011 14:46:39 -0400
Subject: [Pki-devel] [PATCH] 0007 eclipse project files
Message-ID: <4E8CA60F.9010102@redhat.com>

This contains only the files necessary to get the project to import into Eclipse, plus a very small .gitignore file for people who want to use git svn. This patch should have no effect on the way the code currently runs.

The difference between this patch and the previous submission is that this one does not attempt to deal with the problems caused by the code duplication between the console and common code bases: it does not include the console in the classpath. People who want to use Eclipse to work with the console will have to perform additional modifications. Also, it does not include the directory base/common/test, which has classes in it with undefined abstract methods.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: pki-admiyo-0007-1-eclipse-project-files.patch Type: text/x-patch Size: 4105 bytes Desc: not available URL: From alee at redhat.com Wed Oct 5 19:28:49 2011 From: alee at redhat.com (Ade Lee) Date: Wed, 05 Oct 2011 15:28:49 -0400 Subject: [Pki-devel] [PATCH] 0007 eclipse project files In-Reply-To: <4E8CA60F.9010102@redhat.com> References: <4E8CA60F.9010102@redhat.com> Message-ID: <1317842929.3453.42.camel@localhost.localdomain> ACK subject to provisos discussed in #dogtag-pki about removing entries from .gitignore On Wed, 2011-10-05 at 14:46 -0400, Adam Young wrote: > This contains only the files necessary to get the project to import into > Eclipse, plus a very small .gitignore file for people that want to use > git svn. This patch should have no effects on the way that code > currently runs. > > The difference between this patch and the previous submission is that > this one does not attempt to deal with the problems due to the code > duplication between the console and common code bases: it does not > include the console in the classpath. People that want to use eclipse > to work with the console will have to perform additional modifications. > Also, it does not include the directory base/common/test that has > classes in it with undefined abstract methods. 
> > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel From ayoung at redhat.com Wed Oct 5 20:04:25 2011 From: ayoung at redhat.com (Adam Young) Date: Wed, 05 Oct 2011 16:04:25 -0400 Subject: [Pki-devel] [PATCH] 0007 eclipse project files In-Reply-To: <1317842929.3453.42.camel@localhost.localdomain> References: <4E8CA60F.9010102@redhat.com> <1317842929.3453.42.camel@localhost.localdomain> Message-ID: <4E8CB849.3050003@redhat.com> On 10/05/2011 03:28 PM, Ade Lee wrote: > ACK > > subject to provisos discussed in #dogtag-pki about removing entries > from .gitignore > > On Wed, 2011-10-05 at 14:46 -0400, Adam Young wrote: >> This contains only the files necessary to get the project to import into >> Eclipse, plus a very small .gitignore file for people that want to use >> git svn. This patch should have no effects on the way that code >> currently runs. >> >> The difference between this patch and the previous submission is that >> this one does not attempt to deal with the problems due to the code >> duplication between the console and common code bases: it does not >> include the console in the classpath. People that want to use eclipse >> to work with the console will have to perform additional modifications. >> Also, it does not include the directory base/common/test that has >> classes in it with undefined abstract methods. >> >> >> _______________________________________________ >> Pki-devel mailing list >> Pki-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/pki-devel > Removed and pushed/commited to TRUNK. Attached patch was what was pushed. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: pki-admiyo-0007-3-eclipse-project-files.patch
Type: text/x-patch
Size: 4179 bytes
Desc: not available
URL: 

From mharmsen at redhat.com  Fri Oct  7 00:09:05 2011
From: mharmsen at redhat.com (Matthew Harmsen)
Date: Thu, 06 Oct 2011 17:09:05 -0700
Subject: [Pki-devel] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . .
Message-ID: <4E8E4321.3000500@redhat.com>

Please build the following components on Fedora 15, Fedora 16, and Fedora 17 (rawhide) in Koji . . .

* dogtag-pki-theme-9.0.9-1.fc15.src.rpm (dogtag-pki-theme)
* pki-core-9.0.15-1.fc15.src.rpm (pki-core)
* pki-console-9.0.5-1.fc15.src.rpm (pki-console)
* pki-kra-9.0.8-1.fc15.src.rpm (pki-kra)
* pki-ocsp-9.0.7-1.fc15.src.rpm (pki-ocsp)
* pki-tks-9.0.7-1.fc15.src.rpm (pki-tks)
* pki-ra-9.0.4-1.fc15.src.rpm (pki-ra)
* pki-tps-9.0.7-1.fc15.src.rpm (pki-tps)
* dogtag-pki-9.0.0-7.fc15.src.rpm (dogtag-pki)

All changes have been checked-in, and the official tarballs (for all three platforms) have been published to:

* http://pki.fedoraproject.org/pki/sources/dogtag-pki-theme/dogtag-pki-theme-9.0.9.tar.gz (dogtag-pki-theme)
* http://pki.fedoraproject.org/pki/sources/pki-core/pki-core-9.0.15.tar.gz (pki-core)
* http://pki.fedoraproject.org/pki/sources/pki-console/pki-console-9.0.5.tar.gz (pki-console)
* http://pki.fedoraproject.org/pki/sources/pki-kra/pki-kra-9.0.8.tar.gz (pki-kra)
* http://pki.fedoraproject.org/pki/sources/pki-ocsp/pki-ocsp-9.0.7.tar.gz (pki-ocsp)
* http://pki.fedoraproject.org/pki/sources/pki-tks/pki-tks-9.0.7.tar.gz (pki-tks)
* http://pki.fedoraproject.org/pki/sources/pki-ra/pki-ra-9.0.4.tar.gz (pki-ra)
* http://pki.fedoraproject.org/pki/sources/pki-tps/pki-tps-9.0.7.tar.gz (pki-tps)
* N/A (dogtag-pki)

The official spec files (for all three platforms) are located at:

* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/dogtag-pki-theme.spec (dogtag-pki-theme)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-core.spec (pki-core)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-console.spec (pki-console)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-kra.spec (pki-kra)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-ocsp.spec (pki-ocsp)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-ra.spec (pki-ra)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-tks.spec (pki-tks)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-tps.spec (pki-tps)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/dogtag-pki.spec (dogtag-pki)

To build the specified components, please import the following into the buildroot:

* Fedora 15
  o jss-4.2.6-20.el15 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el15 (pki-core)
  o tomcatjss-6.0.2-1.el15 (pki-core)
* Fedora 16
  o jss-4.2.6-20.el16 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el16 (pki-core)
  o tomcatjss-6.0.2-1.el16 (pki-core)
* Fedora 17
  o jss-4.2.6-20.el17 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el17 (pki-core)
  o tomcatjss-6.0.2-1.el17 (pki-core)

Thanks,
-- Matt

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5150 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From release-engineering at redhat.com  Fri Oct  7 00:09:13 2011
From: release-engineering at redhat.com (mharmsen@redhat.com via RT)
Date: Thu, 6 Oct 2011 20:09:13 -0400
Subject: [Pki-devel] [engineering.redhat.com #125396] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . .
In-Reply-To: <4E8E4321.3000500@redhat.com>
References: <4E8E4321.3000500@redhat.com>
Message-ID: 

Ticket

Please build the following components on Fedora 15, Fedora 16, and Fedora 17 (rawhide) in Koji . . .
* dogtag-pki-theme-9.0.9-1.fc15.src.rpm (dogtag-pki-theme)
* pki-core-9.0.15-1.fc15.src.rpm (pki-core)
* pki-console-9.0.5-1.fc15.src.rpm (pki-console)
* pki-kra-9.0.8-1.fc15.src.rpm (pki-kra)
* pki-ocsp-9.0.7-1.fc15.src.rpm (pki-ocsp)
* pki-tks-9.0.7-1.fc15.src.rpm (pki-tks)
* pki-ra-9.0.4-1.fc15.src.rpm (pki-ra)
* pki-tps-9.0.7-1.fc15.src.rpm (pki-tps)
* dogtag-pki-9.0.0-7.fc15.src.rpm (dogtag-pki)

All changes have been checked-in, and the official tarballs (for all three platforms) have been published to:

* http://pki.fedoraproject.org/pki/sources/dogtag-pki-theme/dogtag-pki-theme-9.0.9.tar.gz (dogtag-pki-theme)
* http://pki.fedoraproject.org/pki/sources/pki-core/pki-core-9.0.15.tar.gz (pki-core)
* http://pki.fedoraproject.org/pki/sources/pki-console/pki-console-9.0.5.tar.gz (pki-console)
* http://pki.fedoraproject.org/pki/sources/pki-kra/pki-kra-9.0.8.tar.gz (pki-kra)
* http://pki.fedoraproject.org/pki/sources/pki-ocsp/pki-ocsp-9.0.7.tar.gz (pki-ocsp)
* http://pki.fedoraproject.org/pki/sources/pki-tks/pki-tks-9.0.7.tar.gz (pki-tks)
* http://pki.fedoraproject.org/pki/sources/pki-ra/pki-ra-9.0.4.tar.gz (pki-ra)
* http://pki.fedoraproject.org/pki/sources/pki-tps/pki-tps-9.0.7.tar.gz (pki-tps)
* N/A (dogtag-pki)

The official spec files (for all three platforms) are located at:

* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/dogtag-pki-theme.spec (dogtag-pki-theme)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-core.spec (pki-core)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-console.spec (pki-console)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-kra.spec (pki-kra)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-ocsp.spec (pki-ocsp)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-ra.spec (pki-ra)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-tks.spec (pki-tks)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/pki-tps.spec (pki-tps)
* https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/dogtag-pki.spec (dogtag-pki)

To build the specified components, please import the following into the buildroot:

* Fedora 15
  o jss-4.2.6-20.el15 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el15 (pki-core)
  o tomcatjss-6.0.2-1.el15 (pki-core)
* Fedora 16
  o jss-4.2.6-20.el16 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el16 (pki-core)
  o tomcatjss-6.0.2-1.el16 (pki-core)
* Fedora 17
  o jss-4.2.6-20.el17 (pki-core, pki-console, pki-kra, pki-ocsp, pki-tks)
  o osutil-2.0.2-1.el17 (pki-core)
  o tomcatjss-6.0.2-1.el17 (pki-core)

Thanks,
-- Matt

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5150 bytes
Desc: not available
URL: 

From release-engineering at redhat.com  Sat Oct  8 15:59:32 2011
From: release-engineering at redhat.com (Kevin Wright via RT)
Date: Sat, 8 Oct 2011 11:59:32 -0400
Subject: [Pki-devel] [engineering.redhat.com #125396] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . .
In-Reply-To: <4E8E4321.3000500@redhat.com>
References: <4E8E4321.3000500@redhat.com>
Message-ID: 

Ticket

Matt,

After getting the override tag for tomcat6 in f16, I was able to complete all of the f16 builds last night. Rawhide continues to fail trying to build the javadocs for pki-core, so all of the builds that depend on pki-core (pki-util subpackage) will also fail.

To sum up:

o Fedora 15 builds complete
o Fedora 16 builds complete
o Fedora 17 builds incomplete due to changes in Java which fail the javadocs builds.

--Kevin

From ayoung at redhat.com  Fri Oct 14 00:12:39 2011
From: ayoung at redhat.com (Adam Young)
Date: Thu, 13 Oct 2011 20:12:39 -0400
Subject: [Pki-devel] Why we need to focus on the API for PKI moving forward.
Message-ID: <4E977E77.7030805@redhat.com> http://tech.slashdot.org/comments.pl?sid=2473136&cid=37695990 I'll copy the whole thing here in line. Read it, and you will get a sense of what motivates the REST design for PKI, and where PKI needs to go. Stevey's Google Platforms Rant I was at Amazon for about six and a half years, and now I've been at Google for that long. One thing that struck me immediately about the two companies -- an impression that has been reinforced almost daily -- is that Amazon does everything wrong, and Google does everything right. Sure, it's a sweeping generalization, but a surprisingly accurate one. It's pretty crazy. There are probably a hundred or even two hundred different ways you can compare the two companies, and Google is superior in all but three of them, if I recall correctly. I actually did a spreadsheet at one point but Legal wouldn't let me show it to anyone, even though recruiting loved it. I mean, just to give you a very brief taste: Amazon's recruiting process is fundamentally flawed by having teams hire for themselves, so their hiring bar is incredibly inconsistent across teams, despite various efforts they've made to level it out. And their operations are a mess; they don't really have SREs and they make engineers pretty much do everything, which leaves almost no time for coding - though again this varies by group, so it's luck of the draw. They don't give a single shit about charity or helping the needy or community contributions or anything like that. Never comes up there, except maybe to laugh about it. Their facilities are dirt-smeared cube farms without a dime spent on decor or common meeting areas. Their pay and benefits suck, although much less so lately due to local competition from Google and Facebook. But they don't have any of our perks or extras -- they just try to match the offer-letter numbers, and that's the end of it. 
Their code base is a disaster, with no engineering standards whatsoever except what individual teams choose to put in place. To be fair, they do have a nice versioned-library system that we really ought to emulate, and a nice publish-subscribe system that we also have no equivalent for. But for the most part they just have a bunch of crappy tools that read and write state machine information into relational databases. We wouldn't take most of it even if it were free. I think the pubsub system and their library-shelf system were two out of the grand total of three things Amazon does better than google. I guess you could make an argument that their bias for launching early and iterating like mad is also something they do well, but you can argue it either way. They prioritize launching early over everything else, including retention and engineering discipline and a bunch of other stuff that turns out to matter in the long run. So even though it's given them some competitive advantages in the marketplace, it's created enough other problems to make it something less than a slam-dunk. But there's one thing they do really really well that pretty much makes up for ALL of their political, philosophical and technical screw-ups. Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not. 
Micro-managing isn't that third thing that Amazon does better than us, by the way. I mean, yeah, they micro-manage really well, but I wouldn't list it as a strength or anything. I'm just trying to set the context here, to help you understand what happened. We're talking about a guy who in all seriousness has said on many public occasions that people should be paying him to work at Amazon. He hands out little yellow stickies with his name on them, reminding people "who runs the company" when they disagree with him. The guy is a regular... well, Steve Jobs, I guess. Except without the fashion or design sense. Bezos is super smart; don't get me wrong. He just makes ordinary control freaks look like stoned hippies. So one day Jeff Bezos issued a mandate. He's doing that all the time, of course, and people scramble like ants being pounded with a rubber mallet whenever it happens. But on one occasion -- back around 2002 I think, plus or minus a year -- he issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses. His Big Mandate went something along these lines: 1) All teams will henceforth expose their data and functionality through service interfaces. 2) Teams must communicate with each other through these interfaces. 3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network. 4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care. 5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions. 6) Anyone who doesn't do this will be fired. 
7) Thank you; have a nice day! Ha, ha! You 150-odd ex-Amazon folks here will of course realize immediately that #7 was a little joke I threw in, because Bezos most definitely does not give a shit about your day. #6, however, was quite real, so people went to work. Bezos assigned a couple of Chief Bulldogs to oversee the effort and ensure forward progress, headed up by Uber-Chief Bear Bulldog Rick Dalzell. Rick is an ex-Army Ranger, West Point Academy graduate, ex-boxer, ex-Chief Torturer slash CIO at Wal*Mart, and is a big genial scary man who used the word "hardened interface" a lot. Rick was a walking, talking hardened interface himself, so needless to say, everyone made LOTS of forward progress and made sure Rick knew about it. Over the next couple of years, Amazon transformed internally into a service-oriented architecture. They learned a tremendous amount while effecting this transformation. There was lots of existing documentation and lore about SOAs, but at Amazon's vast scale it was about as useful as telling Indiana Jones to look both ways before crossing the street. Amazon's dev staff made a lot of discoveries along the way. A teeny tiny sampling of these discoveries included: - pager escalation gets way harder, because a ticket might bounce through 20 service calls before the real owner is identified. If each bounce goes through a team with a 15-minute response time, it can be hours before the right team finally finds out, unless you build a lot of scaffolding and metrics and reporting. - every single one of your peer teams suddenly becomes a potential DOS attacker. Nobody can make any real forward progress until very serious quotas and throttling are put in place in every single service. - monitoring and QA are the same thing. You'd never think so until you try doing a big SOA. 
But when your service says "oh yes, I'm fine", it may well be the case that the only thing still functioning in the server is the little component that knows how to say "I'm fine, roger roger, over and out" in a cheery droid voice. In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA. So they're a continuum. - if you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where. - debugging problems with someone else's code gets a LOT harder, and is basically impossible unless there is a universal standard way to run every service in a debuggable sandbox. That's just a very small sample. There are dozens, maybe hundreds of individual learnings like these that Amazon had to discover organically. There were a lot of wacky ones around externalizing services, but not as many as you might think. Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers. This effort was still underway when I left to join Google in mid-2005, but it was pretty far advanced. From the time Bezos issued his edict through the time I left, Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. 
It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally. At this point they don't even do it out of fear of being fired. I mean, they're still afraid of that; it's pretty much part of daily life there, working for the Dread Pirate Bezos and all. But they do services because they've come to understand that it's the Right Thing. There are without question pros and cons to the SOA approach, and some of the cons are pretty long. But overall it's the right thing because SOA-driven design enables Platforms. That's what Bezos was up to with his edict, of course. He didn't (and doesn't) care even a tiny bit about the well-being of the teams, nor about what technologies they use, nor in fact any detail whatsoever about how they go about their business unless they happen to be screwing up. But Bezos realized long before the vast majority of Amazonians that Amazon needs to be a platform. You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you? Well, the first big thing Bezos realized is that the infrastructure they'd built for selling and shipping books and sundry could be transformed into an excellent repurposable computing platform. So now they have the Amazon Elastic Compute Cloud, and the Amazon Elastic MapReduce, and the Amazon Relational Database Service, and a whole passel' o' other services browsable at aws.amazon.com. These services host the backends for some pretty successful companies, reddit being my personal favorite of the bunch. The other big realization he had was that he can't always build the right thing. I think Larry Tesler might have struck some kind of chord in Bezos when he said his mom couldn't use the goddamn website. It's not even super clear whose mom he was talking about, and doesn't really matter, because nobody's mom can use the goddamn website. 
In fact I myself find the website disturbingly daunting, and I worked there for over half a decade. I've just learned to kinda defocus my eyes and concentrate on the million or so pixels near the center of the page above the fold. I'm not really sure how Bezos came to this realization -- the insight that he can't build one product and have it be right for everyone. But it doesn't matter, because he gets it. There's actually a formal name for this phenomenon. It's called Accessibility, and it's the most important thing in the computing world. The. Most. Important. Thing. If you're sorta thinking, "huh? You mean like, blind and deaf people Accessibility?" then you're not alone, because I've come to understand that there are lots and LOTS of people just like you: people for whom this idea does not have the right Accessibility, so it hasn't been able to get through to you yet. It's not your fault for not understanding, any more than it would be your fault for being blind or deaf or motion-restricted or living with any other disability. When software -- or idea-ware for that matter -- fails to be accessible to anyone for any reason, it is the fault of the software or of the messaging of the idea. It is an Accessibility failure. Like anything else big and important in life, Accessibility has an evil twin who, jilted by the unbalanced affection displayed by their parents in their youth, has grown into an equally powerful Arch-Nemesis (yes, there's more than one nemesis to accessibility) named Security. And boy howdy are the two ever at odds. But I'll argue that Accessibility is actually more important than Security because dialing Accessibility to zero means you have no product at all, whereas dialing Security to zero can still get you a reasonably successful product such as the Playstation Network. So yeah. In case you hadn't noticed, I could actually write a book on this topic. 
A fat one, filled with amusing anecdotes about ants and rubber mallets at companies I've worked at. But I will never get this little rant published, and you'll never get it read, unless I start to wrap up. That one last thing that Google doesn't do well is Platforms. We don't understand platforms. We don't "get" platforms. Some of you do, but you are the minority. This has become painfully clear to me over the past six years. I was kind of hoping that competitive pressure from Microsoft and Amazon and more recently Facebook would make us wake up collectively and start doing universal services. Not in some sort of ad-hoc, half-assed way, but in more or less the same way Amazon did it: all at once, for real, no cheating, and treating it as our top priority from now on. But no. No, it's like our tenth or eleventh priority. Or fifteenth, I don't know. It's pretty low. There are a few teams who treat the idea very seriously, but most teams either don't think about it all, ever, or only a small percentage of them think about it in a very small way. It's a big stretch even to get most teams to offer a stubby service to get programmatic access to their data and computations. Most of them think they're building products. And a stubby service is a pretty pathetic service. Go back and look at that partial list of learnings from Amazon, and tell me which ones Stubby gives you out of the box. As far as I'm concerned, it's none of them. Stubby's great, but it's like parts when you need a car. A product is useless without a platform, or more precisely and accurately, a platform-less product will always be replaced by an equivalent platform-ized product. Google+ is a prime example of our complete failure to understand platforms from the very highest levels of executive leadership (hi Larry, Sergey, Eric, Vic, howdy howdy) down to the very lowest leaf workers (hey yo). We all don't get it. The Golden Rule of platforms is that you Eat Your Own Dogfood. 
The Google+ platform is a pathetic afterthought. We had no API at all at launch, and last I checked, we had one measly API call. One of the team members marched in and told me about it when they launched, and I asked: "So is it the Stalker API?" She got all glum and said "Yeah." I mean, I was joking, but no... the only API call we offer is to get someone's stream. So I guess the joke was on me. Microsoft has known about the Dogfood rule for at least twenty years. It's been part of their culture for a whole generation now. You don't eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking. Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. But that's not why they are successful. Facebook is successful because they built an entire constellation of products by allowing other people to do the work. So Facebook is different for everyone. Some people spend all their time on Mafia Wars. Some spend all their time on Farmville. There are hundreds or maybe thousands of different high-quality time sinks available, so there's something there for everyone. Our Google+ team took a look at the aftermarket and said: "Gosh, it looks like we need some games. Let's go contract someone to, um, write some games for us." Do you begin to see how incredibly wrong that thinking is now? The problem is that we are trying to predict what people want and deliver it for them. You can't do that. Not really. Not reliably. There have been precious few people in the world, over the entire history of computing, who have been able to do it reliably. Steve Jobs was one of them. We don't have a Steve Jobs here. I'm sorry, but we don't. 
Larry Tesler may have convinced Bezos that he was no Steve Jobs, but Bezos realized that he didn't need to be a Steve Jobs in order to provide everyone with the right products: interfaces and workflows that they liked and felt at ease with. He just needed to enable third-party developers to do it, and it would happen automatically. I apologize to those (many) of you for whom all this stuff I'm saying is incredibly obvious, because yeah. It's incredibly frigging obvious. Except we're not doing it. We don't get Platforms, and we don't get Accessibility. The two are basically the same thing, because platforms solve accessibility. A platform is accessibility. So yeah, Microsoft gets it. And you know as well as I do how surprising that is, because they don't "get" much of anything, really. But they understand platforms as a purely accidental outgrowth of having started life in the business of providing platforms. So they have thirty-plus years of learning in this space. And if you go to msdn.com, and spend some time browsing, and you've never seen it before, prepare to be amazed. Because it's staggeringly huge. They have thousands, and thousands, and THOUSANDS of API calls. They have a HUGE platform. Too big in fact, because they can't design for squat, but at least they're doing it. Amazon gets it. Amazon's AWS (aws.amazon.com) is incredible. Just go look at it. Click around. It's embarrassing. We don't have any of that stuff. Apple gets it, obviously. They've made some fundamentally non-open choices, particularly around their mobile platform. But they understand accessibility and they understand the power of third-party development and they eat their dogfood. And you know what? They make pretty good dogfood. Their APIs are a hell of a lot cleaner than Microsoft's, and have been since time immemorial. Facebook gets it. That's what really worries me. That's what got me off my lazy butt to write this thing. I hate blogging. I hate... 
plussing, or whatever it's called when you do a massive rant in Google+ even though it's a terrible venue for it but you do it anyway because in the end you really do want Google to be successful. And I do! I mean, Facebook wants me there, and it'd be pretty easy to just go. But Google is home, so I'm insisting that we have this little family intervention, uncomfortable as it might be. After you've marveled at the platform offerings of Microsoft and Amazon, and Facebook I guess (I didn't look because I didn't want to get too depressed), head over to developers.google.com and browse a little. Pretty big difference, eh? It's like what your fifth-grade nephew might mock up if he were doing an assignment to demonstrate what a big powerful platform company might be building if all they had, resource-wise, was one fifth grader. Please don't get me wrong here -- I know for a fact that the dev-rel team has had to FIGHT to get even this much available externally. They're kicking ass as far as I'm concerned, because they DO get platforms, and they are struggling heroically to try to create one in an environment that is at best platform-apathetic, and at worst often openly hostile to the idea. I'm just frankly describing what developers.google.com looks like to an outsider. It looks childish. Where's the Maps APIs in there for Christ's sake? Some of the things in there are labs projects. And the APIs for everything I clicked were... they were paltry. They were obviously dog food. Not even good organic stuff. Compared to our internal APIs it's all snouts and horse hooves. And also don't get me wrong about Google+. They're far from the only offenders. This is a cultural thing. What we have going on internally is basically a war, with the underdog minority Platformers fighting a more or less losing battle against the Mighty Funded Confident Producters. 
Any teams that have successfully internalized the notion that they should be externally programmable platforms from the ground up are underdogs -- Maps and Docs come to mind, and I know GMail is making overtures in that direction. But it's hard for them to get funding for it because it's not part of our culture. Maestro's funding is a feeble thing compared to the gargantuan Microsoft Office programming platform: it's a fluffy rabbit versus a T-Rex. The Docs team knows they'll never be competitive with Office until they can match its scripting facilities, but they're not getting any resource love. I mean, I assume they're not, given that Apps Script only works in Spreadsheet right now, and it doesn't even have keyboard shortcuts as part of its API. That team looks pretty unloved to me. Ironically enough, Wave was a great platform, may they rest in peace. But making something a platform is not going to make you an instant success. A platform needs a killer app. Facebook -- that is, the stock service they offer with walls and friends and such -- is the killer app for the Facebook Platform. And it is a very serious mistake to conclude that the Facebook App could have been anywhere near as successful without the Facebook Platform. You know how people are always saying Google is arrogant? I'm a Googler, so I get as irritated as you do when people say that. We're not arrogant, by and large. We're, like, 99% Arrogance-Free. I did start this post -- if you'll reach back into distant memory -- by describing Google as "doing everything right". We do mean well, and for the most part when people say we're arrogant it's because we didn't hire them, or they're unhappy with our policies, or something along those lines. They're inferring arrogance because it makes them feel better. But when we take the stance that we know how to design the perfect product for everyone, and believe you me, I hear that a lot, then we're being fools. 
You can attribute it to arrogance, or naivete, or whatever -- it doesn't matter in the end, because it's foolishness. There IS no perfect product for everyone. And so we wind up with a browser that doesn't let you set the default font size. Talk about an affront to Accessibility. I mean, as I get older I'm actually going blind. For real. I've been nearsighted all my life, and once you hit 40 years old you stop being able to see things up close. So font selection becomes this life-or-death thing: it can lock you out of the product completely. But the Chrome team is flat-out arrogant here: they want to build a zero-configuration product, and they're quite brazen about it, and Fuck You if you're blind or deaf or whatever. Hit Ctrl-+ on every single page visit for the rest of your life. It's not just them. It's everyone. The problem is that we're a Product Company through and through. We built a successful product with broad appeal -- our search, that is -- and that wild success has biased us. Amazon was a product company too, so it took an out-of-band force to make Bezos understand the need for a platform. That force was their evaporating margins; he was cornered and had to think of a way out. But all he had was a bunch of engineers and all these computers... if only they could be monetized somehow... you can see how he arrived at AWS, in hindsight. Microsoft started out as a platform, so they've just had lots of practice at it. Facebook, though: they worry me. I'm no expert, but I'm pretty sure they started off as a Product and they rode that success pretty far. So I'm not sure exactly how they made the transition to a platform. It was a relatively long time ago, since they had to be a platform before (now very old) things like Mafia Wars could come along. Maybe they just looked at us and asked: "How can we beat Google? What are they missing?" The problem we face is pretty huge, because it will take a dramatic cultural change in order for us to start catching up. 
We don't do internal service-oriented platforms, and we just as equally don't do external ones. This means that the "not getting it" is endemic across the company: the PMs don't get it, the engineers don't get it, the product teams don't get it, nobody gets it. Even if individuals do, even if YOU do, it doesn't matter one bit unless we're treating it as an all-hands-on-deck emergency. We can't keep launching products and pretending we'll turn them into magical beautiful extensible platforms later. We've tried that and it's not working. The Golden Rule of Platforms, "Eat Your Own Dogfood", can be rephrased as "Start with a Platform, and Then Use it for Everything." You can't just bolt it on later. Certainly not easily at any rate -- ask anyone who worked on platformizing MS Office. Or anyone who worked on platformizing Amazon. If you delay it, it'll be ten times as much work as just doing it correctly up front. You can't cheat. You can't have secret back doors for internal apps to get special priority access, not for ANY reason. You need to solve the hard problems up front. I'm not saying it's too late for us, but the longer we wait, the closer we get to being Too Late. I honestly don't know how to wrap this up. I've said pretty much everything I came here to say today. This post has been six years in the making. I'm sorry if I wasn't gentle enough, or if I misrepresented some product or team or person, or if we're actually doing LOTS of platform stuff and it just so happens that I and everyone I ever talk to has just never heard about it. I'm sorry. But we've gotta start doing this right. From khaltar at sssmn.com Fri Oct 14 07:08:11 2011 From: khaltar at sssmn.com (khaltar t) Date: Fri, 14 Oct 2011 00:08:11 -0700 Subject: [Pki-devel] Seeking for developer Message-ID: Dear Dogtag Certificate System Developers I am Khaltar Togtuun, the CEO of Security Solution Service (3S) company of Mongolia. We plan to implement PKI and CA in Mongolia. 
In this regard we are seeking highly skilled developers of open source PKI. We invite any competent PKI specialist, especially on Dogtag CS, to work with us on our project. Please contact me if you are interested in working in Mongolia for 1-2 months on a PKI project. We will cover air ticket, accommodation + salary. With kind rgds Khaltar T. CEO. PhD, prof phone 976-99153286 phone/fax: 976-70113151 khaltar at sssmn.com khaltar at moncirt.org.mn www.sssmn.com NITP. 203. Ulaanbaatar. Mongolia 14200 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Fri Oct 14 20:47:58 2011 From: ayoung at redhat.com (Adam Young) Date: Fri, 14 Oct 2011 16:47:58 -0400 Subject: [Pki-devel] Why we need to focus on the API for PKI moving forward. In-Reply-To: <4E977E77.7030805@redhat.com> References: <4E977E77.7030805@redhat.com> Message-ID: <4E989FFE.8090804@redhat.com> He posted a follow-up on why he took it down: https://plus.google.com/110981030061712822816/posts OK, let's assume that Dogtag PKI is going to be wildly successful. Cuz, we all know that to be the fact. By wildly successful I mean that there is an installation in many companies. Everyone in that company that needs a certificate, that needs decent PKI infrastructure, has access to one. How are they going to use it? Lots of ways. Too many to count. Some will do what Candlepin is doing and use it to track who can connect to what network. The legal department will use it to sign documents. IT will use it to make sure that only trusted people can get into the datacenter. All the use cases we currently know about and more. For example, Web Single Sign on is a big deal these days, but when you get right down to it, all the implementations resolve to: redirect to this server over here and log on with your userid and password. Userid and password? What is this, 1983? Login:pfalker: Password:Joshua. Would you like to play a nice game of chess? Kerberos is a much better solution. 
The only problem is that it goes through ports other than the universally sanctioned 80/443. So, unless a means to proxy Kerberos via 443 comes around, and then gets implemented in all browsers, Kerberos will be restricted to inside the corporate firewall. If only there were a cryptographically secure way to log into a web application that worked in all browsers and through standard ports... OK, so Web Single Sign on could easily be the killer app for Dogtag. But we don't need to bet the farm on it. We need to make it so that everyone can use Dogtag, and use it easily. A good, web services based API is "necessary but not sufficient." What else do we need? Dogtag does an innovative form of authentication. It uses the client certificate to find out who you are, and then looks up in LDAP to find out the rest of your user information. This mechanism needs to be made into a reusable authentication Realm so it can be used by other applications running in Tomcat and JBoss. Down the road, we will want to port it to HTTPD and to talk JDBC to relational databases as well, but really, if we are successful, someone else out there may just take care of that for us. We now need to make it easy to install. We are looking at pkicreate and pkisilent with an eye to streamlining and simplifying the install process. We need examples of people using Dogtag: eCommerce, Legal, Medical (HIPAA!), Educational sites that use Dogtag as their PKI implementation. We need to get the word out. The long-time Dogtag developers are the people who know PKI better than anyone. Not in the abstract sense, but in the "Done it in the real world, under load, for very important systems" sense. Dogtag is a mature, complete, open source PKI implementation. When you search Google for "Open Source PKI", you should have to scroll to the second page to find a mention of something other than Dogtag or one of its derivatives. 
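To make the Realm idea above concrete: here is a minimal, pure-JDK sketch of the certificate-subject-to-user mapping such a Realm would perform before the LDAP lookup. The class name, method name, and the choice of UID-with-CN-fallback as the lookup key are illustrative assumptions, not Dogtag's actual code.

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CertUserMapper {

    // Pull a login id out of a certificate subject DN: prefer the UID
    // attribute, fall back to CN. A Tomcat Realm built on this would
    // then search LDAP for that id to load the rest of the user entry.
    public static String extractUid(String subjectDn) {
        try {
            String cn = null;
            for (Rdn rdn : new LdapName(subjectDn).getRdns()) {
                if (rdn.getType().equalsIgnoreCase("uid")) {
                    return rdn.getValue().toString();
                }
                if (rdn.getType().equalsIgnoreCase("cn")) {
                    cn = rdn.getValue().toString();
                }
            }
            return cn;
        } catch (InvalidNameException e) {
            return null; // malformed DN: treat as authentication failure
        }
    }

    public static void main(String[] args) {
        System.out.println(extractUid("UID=alice,OU=people,O=Example"));
        System.out.println(extractUid("CN=PKI Administrator,OU=agents,O=Example"));
    }
}
```

In a real Realm subclass this logic would sit behind Tomcat's CLIENT-CERT authenticator, which hands the verified client certificate chain to the Realm; the sketch only shows the mapping step, since that is the piece that is reusable outside Dogtag.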
From ayoung at redhat.com Mon Oct 17 19:45:45 2011 From: ayoung at redhat.com (Adam Young) Date: Mon, 17 Oct 2011 15:45:45 -0400 Subject: [Pki-devel] Slides for PKI, Git and SVN talk Message-ID: <4E9C85E9.809@redhat.com> I'll post this to the wiki. The command to get started is:

git svn clone \
    http://svn.fedorahosted.org/svn/pki \
    --prefix svn \
    -T trunk -b branches -t tags

Perhaps a better prefix is pkisvn. The git help URLs are: http://cheat.errtheblog.com/s/git http://www.jukie.net/~bart/blog/svn-branches-in-git -------------- next part -------------- A non-text attachment was scrubbed... Name: pki-git-snv.odp Type: application/vnd.oasis.opendocument.presentation Size: 474265 bytes Desc: not available URL: From alee at redhat.com Mon Oct 17 20:18:42 2011 From: alee at redhat.com (Ade Lee) Date: Mon, 17 Oct 2011 16:18:42 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag Message-ID: <1318882723.30369.41.camel@localhost.localdomain> Hi all, I tried to put this on the dogtag wiki, but it did not seem to work. Will chat with Matt. In the meantime, here is a copy for you guys to look at and comment on. It has almost everything except the installation servlets and token operations (for which I need to think about the object model). If you look at the mapped servlets, you'll get a sense of what operations are covered in each URL mapping. This is a first cut -- hopefully a good starting point for discussion. So please comment away! Ade -------------- next part -------------- A non-text attachment was scrubbed... 
Name: rest.ods Type: application/vnd.oasis.opendocument.spreadsheet Size: 8647 bytes Desc: not available URL: From jmagne at redhat.com Tue Oct 18 00:10:34 2011 From: jmagne at redhat.com (Jack Magne) Date: Mon, 17 Oct 2011 17:10:34 -0700 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <1318882723.30369.41.camel@localhost.localdomain> References: <1318882723.30369.41.camel@localhost.localdomain> Message-ID: <4E9CC3FA.3010605@redhat.com> On 10/17/2011 01:18 PM, Ade Lee wrote: > Hi all, > > I tried to put this on the dogtag wiki, but it did not seem to work. > Will chat with Matt. > > In the meantime, here is a copy for you guys to look at and comment on. > It has most everything except the installation servlets and token > operations (for which I need to think about the object model). If you > look at the mapped servlets, you'll get a sense of what operations are > covered in each URL mapping. > > This is a first cut -- hopefully a good starting point for discussion. > So please comment away! > > Ade > > > > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel > Thanks Ade. Just a few questions after having a look. 1. I noticed we have the following key related resources: PUT /pki/key "Add a key" POST /pki/key "Modify a key" In my quick readings, it appeared that the POST method was favored for creating brand new resources where PUT was used to modify existing ones? I also noticed that you have two GET versions of "pki/key". Is that kind of duplication encouraged? Or is that really just the same api entity with different input payloads? 2. You suggested I take a look at some of the TKS TokenServlet stuff. I noticed that we have a simple short list of servlets that appear to return very short lived resources. Examples being, session keys , encrypted data , and a block of randomly generated data. 
I would imagine it would be a POST op like something as follows: POST /pki/tks/sessionKey , which would return a link to the key itself? But does it make sense to have a "resource" for something so short lived, or does this concept even belong in such a design? 3. I was just curious about the Java back-end for this design. Will we be using the JAX-RS stuff that provides annotations in the java code in order to hook all of this up? thanks, jack -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Tue Oct 18 00:35:14 2011 From: ayoung at redhat.com (Adam Young) Date: Mon, 17 Oct 2011 20:35:14 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CC3FA.3010605@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> Message-ID: <4E9CC9C2.6020600@redhat.com> On 10/17/2011 08:10 PM, Jack Magne wrote: > On 10/17/2011 01:18 PM, Ade Lee wrote: >> Hi all, >> >> I tried to put this on the dogtag wiki, but it did not seem to work. >> Will chat with Matt. >> >> In the meantime, here is a copy for you guys to look at and comment on. >> It has most everything except the installation servlets and token >> operations (for which I need to think about the object model). If you >> look at the mapped servlets, you'll get a sense of what operations are >> covered in each URL mapping. >> >> This is a first cut -- hopefully a good starting point for discussion. >> So please comment away! >> >> Ade >> >> >> >> >> >> _______________________________________________ >> Pki-devel mailing list >> Pki-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/pki-devel >> > Thanks Ade. Just a few questions after having a look. > > 1. 
I noticed we have the following key related resources: > > > PUT /pki/key "Add a key" > > POST /pki/key "Modify a key" > > In my quick readings, it appeared that the POST method was favored for > creating brand new resources where PUT was used to modify existing ones? I think you have it backwards. PUT is the normal way for creating things. The POST operation is very generic and no specific meaning can be attached to it. In general, use POST when only a subset of a resource needs to be modified and it cannot be accessed as its own resource; or when the equivalent of a method call must be exposed. http://developer.mindtouch.com/REST/REST_for_the_Rest_of_Us says this about POST: " usually either create a new one, or replace the existing one with this copy, where as POST is kinds of a catch all. " We could possibly use PUT for both add and modify if we wanted. I tend to favor making objects immutable, and to replacing whole objects when possible. However, I know that is not always possible, especially when working with a pre-existing API. So I'd say let's try to stick to PUT semantics where possible, but deliberately use POST when we are making finer-grained API calls. > > I also noticed that you have two GET versions of "pki/key". Is that > kind of duplication encouraged? Or is that really just the same api > entity with different input payloads? > > 2. You suggested I take a look at some of the TKS TokenServlet stuff. > I noticed that we have a simple short list of servlets that appear to > return very short lived resources. Examples being, session keys , > encrypted data , and a block of randomly generated data. > > I would imagine it would be a POST op like something as follows: > > POST /pki/tks/sessionKey , which would return a link to the key > itself? But does it make sense to have a "resource" for something so > short lived, or does this concept even belong in such a design? In general, REST works best if the service is stateless. 
Session based information should be minimized if possible. > > 3. I was just curious about the Java back-end for this design. Will we > be using the JAX-RS stuff that provides annotations in the java code > in order to hook all of this up? I am not a fan of annotations. Under other circumstances, I might be prone to say "well, that is the way of the world" and go with JAX-RS, but since we do not yet have a set of Entity objects that would drive the JAX-RS, I am more prone to look at other alternatives. There are good libraries for serializing to JSON or XML that should be sufficient for our needs, and that will keep us from having to make our API conform to JAX-RS. So my inclination is to say no to JAX-RS to start. > > thanks, > jack Ade's document has found its way into the wiki world: http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions I might have made some Wiki errors in translation. If this contradicts Ade's spreadsheet, assume the spreadsheet is Canonical. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Tue Oct 18 00:43:30 2011 From: ayoung at redhat.com (Adam Young) Date: Mon, 17 Oct 2011 20:43:30 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CC9C2.6020600@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> <4E9CC9C2.6020600@redhat.com> Message-ID: <4E9CCBB2.2010203@redhat.com> On 10/17/2011 08:35 PM, Adam Young wrote: > On 10/17/2011 08:10 PM, Jack Magne wrote: >> On 10/17/2011 01:18 PM, Ade Lee wrote: >>> Hi all, >>> >>> I tried to put this on the dogtag wiki, but it did not seem to work. >>> Will chat with Matt. >>> >>> In the meantime, here is a copy for you guys to look at and comment on. >>> It has most everything except the installation servlets and token >>> operations (for which I need to think about the object model). 
If you >>> look at the mapped servlets, you'll get a sense of what operations are >>> covered in each URL mapping. >>> >>> This is a first cut -- hopefully a good starting point for discussion. >>> So please comment away! >>> >>> Ade >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Pki-devel mailing list >>> Pki-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/pki-devel >>> >> Thanks Ade. Just a few questions after having a look. >> >> 1. I noticed we have the following key related resources: >> >> >> PUT /pki/key "Add a key" >> >> POST /pki/key "Modify a key" >> >> In my quick readings, it appeared that the POST method was favored >> for creating brand new resources where PUT was used to modify >> existing ones? > I think you have it backwards. PUT is the normal way for creating > things. The POST operation is very generic and no specific meaning can > be attached to it. In general, use POST when only a subset of a > resource needs to be modified and it cannot be accessed as its own > resource; or when the equivalent of a method call must be exposed. > > http://developer.mindtouch.com/REST/REST_for_the_Rest_of_Us says this > about POST: > > " usually either create a new one, or replace the existing one with > this copy, where as POST is kinds of a catch all. " > > We could possibly use PUT for both add and modify if we wanted. > > > I tend to favor making objects immutable, and to replacing whole > objects when possible. However, I know that is not always possible, > especially when working with a pre-existing API. So I'd say lets try > to stick to PUT semantics where possible, but deliberately use POST > when we are making finer grain API calls. And...now I am going to contradict that. 
I see that the general wisdom is to use POST if you want the server to create the unique ID for you, and PUT if you know it a priori... so we will probably be using POST for things like Cert creation where the Serial number comes from the server. > >> >> I also noticed that you have two GET versions of "pki/key". Is that >> kind of duplication encouraged? Or is that really just the same api >> entity with different input payloads? >> >> 2. You suggested I take a look at some of the TKS TokenServlet stuff. >> I noticed that we have a simple short list of servlets that appear to >> return very short lived resources. Examples being, session keys , >> encrypted data , and a block of randomly generated data. >> >> I would imagine it would be a POST op like something as follows: >> >> POST /pki/tks/sessionKey , which would return a link to the key >> itself? But does it make sense to have a "resource" for something so >> short lived, or does this concept even belong in such a design? > > In general, REST works best if the service is stateless. Session > based information should be minimized if possible. > > >> >> 3. I was just curious about the Java back-end for this design. Will >> we be using the JAX-RS stuff that provides annotations in the java >> code in order to hook all of this up? > > I am not a fan of annotations. Under other circumstances, I might be > prone to say "well, that is the way of the world" and go with JAX-RS, > but since we don not yet have a set of Entity objects that would drive > the JAX-RS, I am more prone to look at other alternatives. THere are > good libraries for serializing to JSON or XML that should be > sufficient for our needs, and that will keep us from having to make > our API conform to JAX-RS. So my inclination is to say no to JAX-RS > to start. 
> >> >> thanks, >> jack > > > Ade's document has founds its way into the wiki world: > > http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions > > > I might have made some Wiki errors in translation. If this > contradicts Ade's spreadsheet, assume the spreadsheet is Canonical. > > > > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmagne at redhat.com Tue Oct 18 02:00:21 2011 From: jmagne at redhat.com (Jack Magne) Date: Mon, 17 Oct 2011 19:00:21 -0700 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CC9C2.6020600@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> <4E9CC9C2.6020600@redhat.com> Message-ID: <4E9CDDB5.5080400@redhat.com> On 10/17/2011 05:35 PM, Adam Young wrote: > On 10/17/2011 08:10 PM, Jack Magne wrote: >> On 10/17/2011 01:18 PM, Ade Lee wrote: >>> Hi all, >>> >>> I tried to put this on the dogtag wiki, but it did not seem to work. >>> Will chat with Matt. >>> >>> In the meantime, here is a copy for you guys to look at and comment on. >>> It has most everything except the installation servlets and token >>> operations (for which I need to think about the object model). If you >>> look at the mapped servlets, you'll get a sense of what operations are >>> covered in each URL mapping. >>> >>> This is a first cut -- hopefully a good starting point for discussion. >>> So please comment away! >>> >>> Ade >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Pki-devel mailing list >>> Pki-devel at redhat.com >>> https://www.redhat.com/mailman/listinfo/pki-devel >>> >> Thanks Ade. Just a few questions after having a look. >> >> 1. 
I noticed we have the following key related resources: >> >> >> PUT /pki/key "Add a key" >> >> POST /pki/key "Modify a key" >> >> In my quick readings, it appeared that the POST method was favored >> for creating brand new resources where PUT was used to modify >> existing ones? > I think you have it backwards. PUT is the normal way for creating > things. The POST operation is very generic and no specific meaning can > be attached to it. In general, use POST when only a subset of a > resource needs to be modified and it cannot be accessed as its own > resource; or when the equivalent of a method call must be exposed. > > http://developer.mindtouch.com/REST/REST_for_the_Rest_of_Us says this > about POST: > > " usually either create a new one, or replace the existing one with > this copy, where as POST is kinds of a catch all. " > > We could possibly use PUT for both add and modify if we wanted. > My bad. I guess I chose to read a conflicting article on this subject: http://www.ibm.com/developerworks/webservices/library/ws-restful/ where it says the following: * To create a resource on the server, use POST. * To retrieve a resource, use GET. * To change the state of a resource or to update it, use PUT. * To remove or delete a resource, use DELETE. Perhaps I was further thrown off by the discussions I've seen about the idempotence property of PUT, whereby if you replay the same request more than once, you get the same result as the first time. Whereas with POST, if you repeat the exercise multiple times, you will get a list of new resources. Perhaps you could speak a little to that subject. I think I see now why he would choose PUT to create a key, because PUT replaces the entire resource, and POST can be used to replace part of a resource. If we were to modify a key or a request for a key, I'm guessing we would not replace the entity entirely. > > I tend to favor making objects immutable, and to replacing whole
However, I know that is not always possible, > especially when working with a pre-existing API. So I'd say lets try > to stick to PUT semantics where possible, but deliberately use POST > when we are making finer grain API calls. > >> >> I also noticed that you have two GET versions of "pki/key". Is that >> kind of duplication encouraged? Or is that really just the same api >> entity with different input payloads? >> >> 2. You suggested I take a look at some of the TKS TokenServlet stuff. >> I noticed that we have a simple short list of servlets that appear to >> return very short lived resources. Examples being, session keys , >> encrypted data , and a block of randomly generated data. >> >> I would imagine it would be a POST op like something as follows: >> >> POST /pki/tks/sessionKey , which would return a link to the key >> itself? But does it make sense to have a "resource" for something so >> short lived, or does this concept even belong in such a design? > > In general, REST works best if the service is stateless. Session > based information should be minimized if possible. Perhaps REST is not the way to go in the token space due to the nature of the beast. > > >> >> 3. I was just curious about the Java back-end for this design. Will >> we be using the JAX-RS stuff that provides annotations in the java >> code in order to hook all of this up? > > I am not a fan of annotations. Under other circumstances, I might be > prone to say "well, that is the way of the world" and go with JAX-RS, > but since we don not yet have a set of Entity objects that would drive > the JAX-RS, I am more prone to look at other alternatives. THere are > good libraries for serializing to JSON or XML that should be > sufficient for our needs, and that will keep us from having to make > our API conform to JAX-RS. So my inclination is to say no to JAX-RS > to start. > >> >> thanks, >> jack > Thanks for the clarifications! 
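[Editor's sketch] The idempotence property discussed in this exchange can be shown concretely. Below is a minimal illustration against a toy in-memory store; the paths are illustrative, not Dogtag's actual API. Replaying a PUT to a client-chosen URL converges to one resource, while replaying a POST to a collection keeps minting new ones.

```python
# Toy resource store: PUT is idempotent, POST is not.
store = {}
next_id = 0

def put(path, body):
    """PUT: the client names the resource; replaying it changes nothing."""
    store[path] = body

def post(collection, body):
    """POST: the server assigns the ID; each replay creates a new resource."""
    global next_id
    next_id += 1
    path = "%s/%d" % (collection, next_id)
    store[path] = body
    return path  # link to the newly created resource

# Replay each request three times:
for _ in range(3):
    put("/pki/key/mykey", {"alg": "RSA"})   # still just one key
    post("/pki/request", {"csr": "..."})    # three distinct requests

# Result: one key plus three requests in the store.
```

This is why the thread converges on PUT when the client knows the target URL, and POST when the server must generate the identifier (e.g. a certificate serial number).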
> > Ade's document has founds its way into the wiki world: > > http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions > > > I might have made some Wiki errors in translation. If this > contradicts Ade's spreadsheet, assume the spreadsheet is Canonical. > > > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Tue Oct 18 02:33:03 2011 From: ayoung at redhat.com (Adam Young) Date: Mon, 17 Oct 2011 22:33:03 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CDDB5.5080400@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> <4E9CC9C2.6020600@redhat.com> <4E9CDDB5.5080400@redhat.com> Message-ID: <4E9CE55F.1090306@redhat.com> On 10/17/2011 10:00 PM, Jack Magne wrote: > On 10/17/2011 05:35 PM, Adam Young wrote: >> On 10/17/2011 08:10 PM, Jack Magne wrote: >>> On 10/17/2011 01:18 PM, Ade Lee wrote: >>>> Hi all, >>>> >>>> I tried to put this on the dogtag wiki, but it did not seem to work. >>>> Will chat with Matt. >>>> >>>> In the meantime, here is a copy for you guys to look at and comment on. >>>> It has most everything except the installation servlets and token >>>> operations (for which I need to think about the object model). If you >>>> look at the mapped servlets, you'll get a sense of what operations are >>>> covered in each URL mapping. >>>> >>>> This is a first cut -- hopefully a good starting point for discussion. >>>> So please comment away! >>>> >>>> Ade >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Pki-devel mailing list >>>> Pki-devel at redhat.com >>>> https://www.redhat.com/mailman/listinfo/pki-devel >>>> >>> Thanks Ade. Just a few questions after having a look. >>> >>> 1. 
I noticed we have the following key related resources: >>> >>> >>> PUT /pki/key "Add a key" >>> >>> POST /pki/key "Modify a key" >>> >>> In my quick readings, it appeared that the POST method was favored >>> for creating brand new resources where PUT was used to modify >>> existing ones? >> I think you have it backwards. PUT is the normal way for creating >> things. The POST operation is very generic and no specific meaning >> can be attached to it. In general, use POST when only a subset of a >> resource needs to be modified and it cannot be accessed as its own >> resource; or when the equivalent of a method call must be exposed. >> >> http://developer.mindtouch.com/REST/REST_for_the_Rest_of_Us says >> this about POST: >> >> " usually either create a new one, or replace the existing one with >> this copy, where as POST is kinds of a catch all. " >> >> We could possibly use PUT for both add and modify if we wanted. >> > My bad. I guess I chose to read a conflicting article on this subject: > > > http://www.ibm.com/developerworks/webservices/library/ws-restful/ > > where it says the following: > > * To create a resource on the server, use POST. > * To retrieve a resource, use GET. > * To change the state of a resource or to update it, use PUT. > * To remove or delete a resource, use DELETE. > > Perhaps I was further thrown off about the discussions I've seen about > the idempotence property of PUT, whereby if you replay the same > request more than once, you get the same result as the first time. > Where in POST, if you repeat the exercise multiple times, you will get > a list of new resources. Perhaps you could speak a little to that subject. > > I think I see now why he would chose PUT to create a key, because PUT > replaces the entire resource, and POST can be used to replace part of > a resource. If we were to modify a key or a request for a key, I'm > guessing we would not replace the entity entirely. 
We are all learning this stuff, and I had a little too narrow a view before this discussion. One other resource I read states that you should use POST for anything that would not be idempotent: thus, PUT to create makes sense if you know the name to PUT it into, but you would use POST for creating a new identifier. So, I think that agrees with what you are saying. So, yeah, I think you would PUT a key, assuming that the PUT also provides the name of the key. You would POST a CSR to get a Certificate (in a simplified view) as the Certificate would get a Serial number, and that would be the unique ID. I guess for creating a user, it would depend on whether you were assigning the numeric UID as to whether it would be a POST or a PUT. I've a lot to learn about the Domain model here, so I'll let myself be guided by you guys as far as what the appropriate behavior of the business objects should be. > > > > >> >> I tend to favor making objects immutable, and to replacing whole >> objects when possible. However, I know that is not always possible, >> especially when working with a pre-existing API. So I'd say lets try >> to stick to PUT semantics where possible, but deliberately use POST >> when we are making finer grain API calls. >> >>> >>> I also noticed that you have two GET versions of "pki/key". Is that >>> kind of duplication encouraged? Or is that really just the same api >>> entity with different input payloads? >>> >>> 2. You suggested I take a look at some of the TKS TokenServlet >>> stuff. I noticed that we have a simple short list of servlets that >>> appear to return very short lived resources. Examples being, session >>> keys , encrypted data , and a block of randomly generated data. >>> >>> I would imagine it would be a POST op like something as follows: >>> >>> POST /pki/tks/sessionKey , which would return a link to the key >>> itself? But does it make sense to have a "resource" for something so >>> short lived, or does this concept even belong in such a design? 
>> >> In general, REST works best if the service is stateless. Session >> based information should be minimized if possible. > Perhaps REST is not the way to go in the token space due to the nature > of the beast. > >> >> >>> >>> 3. I was just curious about the Java back-end for this design. Will >>> we be using the JAX-RS stuff that provides annotations in the java >>> code in order to hook all of this up? >> >> I am not a fan of annotations. Under other circumstances, I might be >> prone to say "well, that is the way of the world" and go with JAX-RS, >> but since we don not yet have a set of Entity objects that would >> drive the JAX-RS, I am more prone to look at other alternatives. >> THere are good libraries for serializing to JSON or XML that should >> be sufficient for our needs, and that will keep us from having to >> make our API conform to JAX-RS. So my inclination is to say no to >> JAX-RS to start. >> >>> >>> thanks, >>> jack >> > > Thanks for the clarifications! >> >> Ade's document has founds its way into the wiki world: >> >> http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions >> >> >> I might have made some Wiki errors in translation. If this >> contradicts Ade's spreadsheet, assume the spreadsheet is Canonical. >> >> >> >> >> _______________________________________________ >> Pki-devel mailing list >> Pki-devel at redhat.com >> https://www.redhat.com/mailman/listinfo/pki-devel >> > > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alee at redhat.com Tue Oct 18 14:25:05 2011 From: alee at redhat.com (Ade Lee) Date: Tue, 18 Oct 2011 10:25:05 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CE55F.1090306@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> <4E9CC9C2.6020600@redhat.com> <4E9CDDB5.5080400@redhat.com> <4E9CE55F.1090306@redhat.com> Message-ID: <1318947906.30369.102.camel@localhost.localdomain> We all appear to be learning a lot about REST here :) I got a copy of "RESTful Web Services" by Sam Ruby et al. and he explains it clearly. Turns out a lot of folks (including me) are confused about the roles of PUT and POST. PUT is used in general to either create a new resource or to replace an existing resource - but only if the client knows the intended URL of the resource. So, as was mentioned, this might be the case for users or for profiles, where /pki/profiles/userCert would be the URL for the userCert profile. POST is confusing because it has two usages. The first usage is the "create a subordinate URL" mode (POST-a). This is used when you are trying to create a new resource but the client does not know the URL of the resource. This is the case, for example, when we try to create a new certificate request. In this case, the client will do a POST to /pki/request - and the server will generate a request and return a link to /pki/request/. The client can even do a POST-a to /pki/request/scep or /pki/request/enrollment or perhaps /pki/request/{profile_id} and have the server return a link to /pki/request/. In this case, the link POST-ed to is not the parent link - but rather a "factory" link. POST has a second usage though, which is the "overloaded" mode (POST-b). This is when you POST to an existing URL so that it can change part of an existing resource (rather than replacing the whole thing, which is a PUT operation). 
This is the kind of operation I had in mind for, say, approving a certificate request or revoking a certificate. So, for example, to approve a certificate request, the client would POST-b to /pki/request/0xfeed and include say "operation=approve" in the POSTED parameters. The server would then process this request and could return the URL for the issued certificate. Or if a certificate is revoked, then the client would POST-b to /pki/certificate/0xbeef and include say "operation=revoke" in the POSTED parameters. The server would then process this request and set the status of the certificate to revoked, and return status to the client. POST-b is called overloaded because the real operation that is taking place is actually in the POST-ed parameters rather than the more limited operation of POST-a. In some sense, this is more RPC-like. It would be more "REST-ful" to replace the entire object using PUT - but this could be unacceptable in this context for a number of reasons. Once certificate requests and certs are in the CA they are stuck there under the control of the server and cannot be replaced by the client. And we need to be cognizant of network traffic. If we did not want to use a POST-b type operation (and many people do in fact use POST-b), there are a couple of ways that we could proceed, both of which involve adding new objects. Option A: We could decompose the certificate or request record into smaller objects. So for example, we could have the following for a cert request - Request (the CSR itself as submitted) and RequestStatus. Then, approving a request would be a PUT to /pki/requeststatus/0xfeed, replacing the object there. It will also have the side effect of generating a cert. There may be some merit to this approach, in that we often want to know the status of a request (which could include the serial number of the certificate issued). And it separates the parts of the Request record that are immutable from those that can be changed by client interaction. 
Option B: We could use some pseudo-resources. Up to now, I have followed a principle of only creating resources that clients would want to link to. But you can create anything as a resource. One thing you could do is create a RequestApprover resource. Then, approving a request would be doing a PUT to /pki/requestapprover passing a reference to the request. Ruby et al. suggest this type of thing as a possible approach. It smacks of being somewhat RPC-like, but it has the advantage of making explicit the operations being performed. Option C: Just do the POST-b. This is getting long -- I'll answer the other questions in a separate email. Ade On Mon, 2011-10-17 at 22:33 -0400, Adam Young wrote: > On 10/17/2011 10:00 PM, Jack Magne wrote: > > On 10/17/2011 05:35 PM, Adam Young wrote: > > > On 10/17/2011 08:10 PM, Jack Magne wrote: > > > > On 10/17/2011 01:18 PM, Ade Lee wrote: > > > > > Hi all, > > > > > > > > > > I tried to put this on the dogtag wiki, but it did not seem to work. > > > > > Will chat with Matt. > > > > > > > > > > In the meantime, here is a copy for you guys to look at and comment on. > > > > > It has most everything except the installation servlets and token > > > > > operations (for which I need to think about the object model). If you > > > > > look at the mapped servlets, you'll get a sense of what operations are > > > > > covered in each URL mapping. > > > > > > > > > > This is a first cut -- hopefully a good starting point for discussion. > > > > > So please comment away! > > > > > > > > > > Ade > > > > > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > Pki-devel mailing list > > > > > Pki-devel at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/pki-devel > > > > > > > > > Thanks Ade. Just a few questions after having a look. > > > > 1. 
I noticed we have the following key related resources: > > > > > > > > > > > > PUT /pki/key "Add a key" > > > > > > > > POST /pki/key "Modify a key" > > > > > > > > In my quick readings, it appeared that the POST method was > > > > favored for creating brand new resources where PUT was used to > > > > modify existing ones? > > > I think you have it backwards. PUT is the normal way for creating > > > things. The POST operation is very generic and no specific meaning > > > can be attached to it. In general, use POST when only a subset of > > > a resource needs to be modified and it cannot be accessed as its > > > own resource; or when the equivalent of a method call must be > > > exposed. > > > > > > http://developer.mindtouch.com/REST/REST_for_the_Rest_of_Us says > > > this about POST: > > > > > > " usually either create a new one, or replace the existing one > > > with this copy, where as POST is kinds of a catch all. " > > > > > > We could possibly use PUT for both add and modify if we wanted. > > > > > My bad. I guess I chose to read a conflicting article on this > > subject: > > > > > > http://www.ibm.com/developerworks/webservices/library/ws-restful/ > > > > where it says the following: > > * To create a resource on the server, use POST. > > * To retrieve a resource, use GET. > > * To change the state of a resource or to update it, use PUT. > > * To remove or delete a resource, use DELETE. > > Perhaps I was further thrown off about the discussions I've seen > > about the idempotence property of PUT, whereby if you replay the > > same request more than once, you get the same result as the first > > time. Where in POST, if you repeat the exercise multiple times, you > > will get a list of new resources. Perhaps you could speak a little > > to that subject. > > > > I think I see now why he would chose PUT to create a key, because > > PUT replaces the entire resource, and POST can be used to replace > > part of a resource. 
If we were to modify a key or a request for a > > key, I'm guessing we would not replace the entity entirely. > > We are all learning this stuff, and I had a little too narrow a view > before this discussion. One other resource I read states that you > should use POST for anything that would not be idempotent: thus, PUT > to create makes sense if you know the name to PUT it into, but you > would use POST for creating a new identifier. So, I think that agrees > with what you are saying. > > So, yeah, I think you would PUT a key, assuming that the PUT > also provides the name of the key. You would POST a CSR to get a > Certificate (in a simplified view) as the Certificate would get a > Serial number, and that would be the unique ID. > > I guess for creating a user, whether it would be a POST or a PUT > would depend on whether you were assigning the numeric UID. > > > I've a lot to learn about the Domain model here, so I'll let myself be > guided by you guys as far as what the appropriate behavior of the > business objects should be. > > > > > > > > > > > > > > > > > I tend to favor making objects immutable, and to replacing whole > > > objects when possible. However, I know that is not always > > > possible, especially when working with a pre-existing API. So I'd > > > say let's try to stick to PUT semantics where possible, but > > > deliberately use POST when we are making finer-grained API calls. > > > > > > > > > > > I also noticed that you have two GET versions of "pki/key". Is > > > > that kind of duplication encouraged? Or is that really just the > > > > same api entity with different input payloads? > > > > > > > > 2. You suggested I take a look at some of the TKS TokenServlet > > > > stuff. I noticed that we have a simple short list of servlets > > > > that appear to return very short lived resources. Examples > > > > being, session keys, encrypted data, and a block of randomly > > > > generated data.
> > > > > > > > I would imagine it would be a POST op like something as follows: > > > > > > > > POST /pki/tks/sessionKey, which would return a link to the > > > > key itself? But does it make sense to have a "resource" for > > > > something so short lived, or does this concept even belong in > > > > such a design? > > > In general, REST works best if the service is stateless. Session > > > based information should be minimized if possible. > > Perhaps REST is not the way to go in the token space due to the > > nature of the beast. > > > > > > > > > > > > > > > > 3. I was just curious about the Java back-end for this design. > > > > Will we be using the JAX-RS stuff that provides annotations in > > > > the java code in order to hook all of this up? > > > I am not a fan of annotations. Under other circumstances, I might > > > be prone to say "well, that is the way of the world" and go with > > > JAX-RS, but since we do not yet have a set of Entity objects that > > > would drive the JAX-RS, I am more prone to look at other > > > alternatives. There are good libraries for serializing to JSON or > > > XML that should be sufficient for our needs, and that will keep > > > us from having to make our API conform to JAX-RS. So my > > > inclination is to say no to JAX-RS to start. > > > > > > > > > > > thanks, > > > > jack > > > > > > > Thanks for the clarifications! > > > > > > Ade's document has found its way into the wiki world: > > > > > > http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions > > > > > > > > > I might have made some Wiki errors in translation. If this > > > contradicts Ade's spreadsheet, assume the spreadsheet is > > > Canonical.
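The idempotence distinction discussed in this thread -- replaying a PUT leaves the store in the same state, while repeating a POST keeps minting new resources -- can be sketched with a toy in-memory store. The paths and helper names below are illustrative only, not the actual Dogtag API:

```python
# Toy in-memory "server" illustrating PUT idempotence vs POST creation.
# All paths and helper names here are hypothetical.
import itertools

store = {}
_ids = itertools.count(1)

def put(path, body):
    # PUT: the client names the resource; replaying the request is
    # idempotent -- the store ends up in the same state each time.
    store[path] = body

def post(collection, body):
    # POST: the server mints a new identifier on every call, so
    # repeating the request creates additional resources.
    path = f"{collection}/{next(_ids)}"
    store[path] = body
    return path

put("/pki/key/mykey", "keydata")
put("/pki/key/mykey", "keydata")   # replay: still exactly one key
post("/pki/request", "csr")
post("/pki/request", "csr")        # replay: a second request appears
```

This matches the rule stated above: PUT to create makes sense when the client knows the name to PUT to; use POST when the server assigns the identifier.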
> > > > > > > > > > > > > > > _______________________________________________ > > > Pki-devel mailing list > > > Pki-devel at redhat.com > > > https://www.redhat.com/mailman/listinfo/pki-devel > > > > > > > > > > > _______________________________________________ > > Pki-devel mailing list > > Pki-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/pki-devel > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel From alee at redhat.com Tue Oct 18 15:34:13 2011 From: alee at redhat.com (Ade Lee) Date: Tue, 18 Oct 2011 11:34:13 -0400 Subject: [Pki-devel] Proposed RESTful interface to dogtag In-Reply-To: <4E9CC3FA.3010605@redhat.com> References: <1318882723.30369.41.camel@localhost.localdomain> <4E9CC3FA.3010605@redhat.com> Message-ID: <1318952054.30369.123.camel@localhost.localdomain> On Mon, 2011-10-17 at 17:10 -0700, Jack Magne wrote: > On 10/17/2011 01:18 PM, Ade Lee wrote: > > Hi all, > > > > I tried to put this on the dogtag wiki, but it did not seem to work. > > Will chat with Matt. > > > > In the meantime, here is a copy for you guys to look at and comment on. > > It has most everything except the installation servlets and token > > operations (for which I need to think about the object model). If you > > look at the mapped servlets, you'll get a sense of what operations are > > covered in each URL mapping. > > > > This is a first cut -- hopefully a good starting point for discussion. > > So please comment away! > > > > Ade > > > > > > > > > > _______________________________________________ > > Pki-devel mailing list > > Pki-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/pki-devel > > > Thanks Ade. Just a few questions after having a look. > > 1. 
I noticed we have the following key related resources: > > > PUT /pki/key "Add a key" > > POST /pki/key "Modify a key" > > In my quick readings, it appeared that the POST method was favored for > creating brand new resources where PUT was used to modify existing > ones? > The above question is answered in a different part of the thread. > I also noticed that you have two GET versions of "pki/key". Is that > kind of duplication encouraged? Or is that really just the same api > entity with different input payloads? > That was a mistake. I was thinking in terms of GET /pki/key and GET /pki/key/X/details, similar to what I had for certificates and requests. > 2. You suggested I take a look at some of the TKS TokenServlet stuff. > I noticed that we have a simple short list of servlets that appear to > return very short lived resources. Examples being, session keys, > encrypted data, and a block of randomly generated data. > > I would imagine it would be a POST op like something as follows: > > POST /pki/tks/sessionKey, which would return a link to the key > itself? But does it make sense to have a "resource" for something so > short lived, or does this concept even belong in such a design? > Just because the resources are short-lived does not mean that they are not resources, or that they should not be in a RESTful design. To quote: A resource is anything that's important enough to be referenced as a thing in itself. If your users might "want to create a hypertext link to it, make or refute assertions about it, retrieve or cache a representation of it, include all or part of it by reference into another representation, annotate it, or perform other operations on it", then you should make it a resource http://www.w3.org/TR/2004/REC-webarch-20041215/#p39 The fact is that these short lived resources are the raison d'être for the TKS. It would be quite strange to have the rest of the system be represented in REST and the TKS operations not represented as such. > 3.
I was just curious about the Java back-end for this design. Will we > be using the JAX-RS stuff that provides annotations in the java code > in order to hook all of this up? > > thanks, > jack From alee at redhat.com Tue Oct 18 21:44:35 2011 From: alee at redhat.com (Ade Lee) Date: Tue, 18 Oct 2011 17:44:35 -0400 Subject: [Pki-devel] latest restful interface doc Message-ID: <1318974275.30369.133.camel@localhost.localdomain> Based on discussions with jmagne and ayoung, here is the latest version. Please feel free to comment! ayoung will post to wiki. Ade -------------- next part -------------- A non-text attachment was scrubbed... Name: rest.ods Type: application/vnd.oasis.opendocument.spreadsheet Size: 9071 bytes Desc: not available URL: From jmagne at redhat.com Wed Oct 19 01:13:36 2011 From: jmagne at redhat.com (Jack Magne) Date: Tue, 18 Oct 2011 18:13:36 -0700 Subject: [Pki-devel] [rhcs-dev-list] latest restful interface doc In-Reply-To: <1318974275.30369.133.camel@localhost.localdomain> References: <1318974275.30369.133.camel@localhost.localdomain> Message-ID: <4E9E2440.5050104@redhat.com> On 10/18/2011 02:44 PM, Ade Lee wrote: > Based on discussions with jmagne and ayoung, here is the latest version. > Please feel free to comment! > > ayoung will post to wiki. > > Ade > > Ade: A few questions: 1. I see that for the ocsp service we have something like this: "GET" "/pki/certificate/ocsp" "Get OCSP response" I see that perhaps we are trying to first get to the certificate in question and then drill down into the ocsp status of said certificate. Question is, is it not true that OCSP responders respond to a common format (so any client can connect) as explained here: A.1.1 Request HTTP based OCSP requests can use either the GET or the POST method to submit their requests. To enable HTTP caching, small requests (that after encoding are less than 255 bytes), MAY be submitted using GET. 
If HTTP caching is not important, or the request is greater than 255 bytes, the request SHOULD be submitted using POST. Where privacy is a requirement, OCSP transactions exchanged using HTTP MAY be protected using either TLS/SSL or some other lower layer protocol. An OCSP request using the GET method is constructed as follows: GET {url}/{url-encoding of base-64 encoding of the DER encoding of the OCSPRequest} where {url} may be derived from the value of AuthorityInfoAccess or other local configuration of the OCSP client. An OCSP request using the POST method is constructed as follows: The Content-Type header has the value "application/ocsp-request" while the body of the message is the binary value of the DER encoding of the OCSPRequest. I wonder if instead of drilling down to a cert and proceeding, it would make more sense to have a top level url ending with perhaps "ocsp", that is then passed the certificate info in the standard way? I don't know if this is in fact restful. 2. I was curious as to what the following does: "PUT" "/pki/certificate" "Add a certificate" "None",,, Is this actually for submitting a certificate request? I realize there is a section below dealing with requests. Is this perhaps something that is referenced by some other piece of the server itself? Like approving a request would trigger this? 3. I notice proceeding below there is a whole section devoted to profiles. I assume the bulk of the requests are agent and adminy type actions in managing profiles. I was just wondering if we have a handle on how to actually USE a profile, that is have the standalone client be able to apply for a certificate using one of these profiles we have? Is it absorbed into the act of creating "requests" further down into the API? One of the payload params would be a profile? Or perhaps a portion of the url itself would be for the profile used? 4. 
Also in some of the online doco they express the requests with possibly some sort of templated param at the end like this: GET pki/certificate/{cert_serial_no} or some such. Is this convention kind of implied by the API or would it be filled in later? Looks like it is shaping up as we continue to discuss. thanks, jack -------------- next part -------------- An HTML attachment was scrubbed... URL: From ayoung at redhat.com Wed Oct 19 01:26:24 2011 From: ayoung at redhat.com (Adam Young) Date: Tue, 18 Oct 2011 21:26:24 -0400 Subject: [Pki-devel] latest restful interface doc In-Reply-To: <1318974275.30369.133.camel@localhost.localdomain> References: <1318974275.30369.133.camel@localhost.localdomain> Message-ID: <4E9E2740.5020804@redhat.com> On 10/18/2011 05:44 PM, Ade Lee wrote: > Based on discussions with jmagne and ayoung, here is the latest version. > Please feel free to comment! > > ayoung will post to wiki. > > Ade > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel Wikified here: http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions#Interfaces I suspect that we will want to break it into a more granular set of pages over time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Wed Oct 19 01:50:11 2011 From: alee at redhat.com (Ade Lee) Date: Tue, 18 Oct 2011 21:50:11 -0400 Subject: [Pki-devel] [rhcs-dev-list] latest restful interface doc In-Reply-To: <4E9E2440.5050104@redhat.com> References: <1318974275.30369.133.camel@localhost.localdomain> <4E9E2440.5050104@redhat.com> Message-ID: <1318989012.30369.150.camel@localhost.localdomain> On Tue, 2011-10-18 at 18:13 -0700, Jack Magne wrote: > On 10/18/2011 02:44 PM, Ade Lee wrote: > > Based on discussions with jmagne and ayoung, here is the latest version. > > Please feel free to comment! > > > > ayoung will post to wiki. 
> > > > Ade > > > > > Ade: > > A few questions: > > 1. I see that for the ocsp service we have something like this: > > "GET" > "/pki/certificate/ocsp" > "Get OCSP response" > > > I see that perhaps we are trying to first get to the certificate in > question and then drill down into the ocsp status of said certificate. > > Question is, is it not true that OCSP responders respond to a common > format (so any client can connect) as explained here: > > A.1.1 Request > > HTTP based OCSP requests can use either the GET or the POST method to > submit their requests. To enable HTTP caching, small requests (that > after encoding are less than 255 bytes), MAY be submitted using GET. > If HTTP caching is not important, or the request is greater than 255 > bytes, the request SHOULD be submitted using POST. Where privacy is > a requirement, OCSP transactions exchanged using HTTP MAY be > protected using either TLS/SSL or some other lower layer protocol. > > An OCSP request using the GET method is constructed as follows: > > GET {url}/{url-encoding of base-64 encoding of the DER encoding of > the OCSPRequest} > > where {url} may be derived from the value of AuthorityInfoAccess or > other local configuration of the OCSP client. > > An OCSP request using the POST method is constructed as follows: The > Content-Type header has the value "application/ocsp-request" while > the body of the message is the binary value of the DER encoding of > the OCSPRequest. > > I wonder if instead of drilling down to a cert and proceeding, it would make more sense to have a top level url ending with perhaps "ocsp", that is then passed the > certificate info in the standard way? I don't know if this is in fact restful. > I was actually thinking of /pki/certificate/ocsp as kind of a top level request -- that is -- I did not specify /pki/certificate/ocsp/$id. It may make it more clear to specify this as /pki/ocsp or some such. 
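The RFC 2560 GET construction quoted above -- {url}/{url-encoding of base-64 encoding of the DER encoding of the OCSPRequest} -- can be sketched as follows. The base URL and the DER bytes are placeholders, not a real responder or a real OCSPRequest:

```python
# Sketch of building an OCSP GET URL per RFC 2560 A.1.1: the DER-encoded
# OCSPRequest is base64-encoded, then URL-encoded onto the base URL.
# The base URL and the DER bytes used below are placeholders.
import base64
import urllib.parse

def ocsp_get_url(base_url, der_ocsp_request):
    b64 = base64.b64encode(der_ocsp_request).decode("ascii")
    # quote() with safe="" also escapes '/', '+' and '=' from the base64 text
    return base_url.rstrip("/") + "/" + urllib.parse.quote(b64, safe="")

url = ocsp_get_url("http://ocsp.example.com", b"\x30\x03\x02\x01\x00")
# -> "http://ocsp.example.com/MAMCAQA%3D"
```

The POST variant quoted in the RFC text is simpler: the raw DER bytes go in the request body with Content-Type application/ocsp-request.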
I did not use /pki/ocsp because that would conflict with the system /pki/ocsp. It's a good point, though, that the standard expects the OCSP request to be submitted using POST. I think we should do that. > 2. I was curious as to what the following does: > > > > > > > "PUT" > "/pki/certificate" > "Add a > certificate" > "None",,, > Is this actually for submitting a certificate request? I realize there is a section below dealing with requests. > Is this perhaps something that is referenced by some other piece of the server itself? Like approving a request would trigger this? > Are you looking at the right version? In the latest, this is POST-a. Actually - I had put this in for completeness but in the interest of clarity, we should remove it. This operation - as well as the ones for DELETE for keys, certs and cert requests - should be removed from the doc. > 3. I notice proceeding below there is a whole section devoted to profiles. I assume the bulk of the requests are agent and adminy type actions in managing profiles. > I was just wondering if we have a handle on how to actually USE a profile, that is have the standalone client be able to apply for a certificate using one of these profiles we have? > > > Is it absorbed into the act of creating "requests" further down into the API? One of the payload params would be a profile? Or perhaps a portion of the url itself would be > for the profile used? > Yes - requests are done using profiles. We may in fact end up doing something like this: POST /pki/request/${profile_id} for example. This would act like a factory URL that would end up generating a request as /pki/request/0xfeed. The payload would need to include the relevant inputs as specified by the profile. The profile URLs that change the profile are agent/admin operations.
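The factory-URL idea just described -- POST profile inputs to a profile-scoped URL, get back a freshly minted request resource -- can be sketched with a toy server. The profile name, the inputs, and the id sequence (started at 0xfeed to echo the example above) are all made up for illustration:

```python
# Toy sketch of the factory-URL pattern: a POST to /pki/request/${profile_id}
# creates a new request resource with a server-assigned hex id, as in the
# /pki/request/0xfeed example above. Profile names, inputs, and the id
# sequence are all illustrative.
import itertools

pki_requests = {}
_next_id = itertools.count(0xfeed)

def post_request(profile_id, inputs):
    rid = next(_next_id)
    path = f"/pki/request/{rid:#x}"
    pki_requests[path] = {"profile": profile_id, "inputs": inputs}
    return path  # would be handed back to the client as the new resource URL

loc = post_request("caUserCert", {"subject": "CN=test user"})
# -> "/pki/request/0xfeed"
```

A client would first GET the profile to learn its required inputs, then POST a body containing those inputs to the factory URL.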
It is possible for a client to GET a list of profiles (/pki/profiles) or the details (including inputs, outputs and constraints) of a specific profile (GET /pki/profile/$profile_id) and craft a request and parse the response accordingly. > > 4. Also in some of the online doco they express the requests with > possibly some sort of templated param at the end like this: GET > pki/certificate/{cert_serial_no} or some such. Is this convention kind > of implied by the API or would it be filled in later? It's implied - but I think you are looking at the wrong version. In this last version, I did explicitly say -- GET /pki/certificate/$id > > Looks like it is shaping up as we continue to discuss. > > thanks, > jack > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel From ayoung at redhat.com Wed Oct 19 20:52:33 2011 From: ayoung at redhat.com (Adam Young) Date: Wed, 19 Oct 2011 16:52:33 -0400 Subject: [Pki-devel] PKI Eclipse Presentation Message-ID: <4E9F3891.5000105@redhat.com> Here is the presentation I gave to the PKI Team at Red Hat about using Eclipse for Development. It has been posted in PDF form to the wiki here: http://pki.fedoraproject.org/wiki/Image:Pki-eclipse.pdf -------------- next part -------------- A non-text attachment was scrubbed... Name: pki-eclipse.odp Type: application/vnd.oasis.opendocument.presentation Size: 469899 bytes Desc: not available URL: From rcritten at redhat.com Fri Oct 21 16:20:40 2011 From: rcritten at redhat.com (Rob Crittenden) Date: Fri, 21 Oct 2011 12:20:40 -0400 Subject: [Pki-devel] What CA constraints? Message-ID: <4EA19BD8.9010207@redhat.com> Shanks was testing signing an IPA CA cert request with an external CA and found an issue, see https://fedorahosted.org/freeipa/ticket/2019 for full details. In short the issue is the CA he did the signing with wasn't really a full CA. It was lacking all sorts of constraints.
I had him try again using a proper CA and it worked fine. We'd like to detect this at install time, I'm just not exactly sure what the minimum requirements are. I also wonder if dogtag should be doing this enforcement or if IPA should (or both, perhaps). Where should we start? rob From cfu at redhat.com Fri Oct 21 16:46:42 2011 From: cfu at redhat.com (Christina) Date: Fri, 21 Oct 2011 09:46:42 -0700 Subject: [Pki-devel] What CA constraints? In-Reply-To: <4EA19BD8.9010207@redhat.com> References: <4EA19BD8.9010207@redhat.com> Message-ID: <4EA1A1F2.40509@redhat.com> On 10/21/2011 09:20 AM, Rob Crittenden wrote: > Shanks was testing signing an IPA CA cert request with an external CA > and found an issue, see https://fedorahosted.org/freeipa/ticket/2019 > for full details. > > In short the issue is the CA he did the signing with wasn't really a > full CA. It was lacking all sorts of constraints. I had him try again > using a proper CA and it worked fine. > > We'd like to detect this at install time, I'm just not exactly sure > what the minimum requirements are. I also wonder if dogtag should be > doing this enforcement or if IPA should (or both, perhaps). > > Where should we start? > > rob > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel The short answer is, at the minimum you need to have the Basic Constraints extension, but then you also need to have others like Authority Key Identifier. The key usage has to be right, etc.; you can look up the X.509 RFC. Dogtag does have a self-test module to test the system certs when they are started. In the CA's case, it should report it if it's not a proper CA. I believe the test is on by default. You can look in CS.cfg for ca.cert.signing.nickname and make sure your new nickname is there ...
you can also see the pairing ca.cert.signing.certusage=SSLCA, which is to tell the server that it is expected to be a CA cert, so that the server will report an error and refuse to start if it fails the test. Christina -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5130 bytes Desc: S/MIME Cryptographic Signature URL: From kchamart at redhat.com Fri Oct 21 17:57:34 2011 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 21 Oct 2011 23:27:34 +0530 Subject: [Pki-devel] What CA constraints? In-Reply-To: <4EA19BD8.9010207@redhat.com> References: <4EA19BD8.9010207@redhat.com> Message-ID: <4EA1B28E.2020203@redhat.com> On 10/21/2011 09:50 PM, Rob Crittenden wrote: > Shanks was testing signing an IPA CA cert request with an external CA and found an issue, > see https://fedorahosted.org/freeipa/ticket/2019 for full details. > > In short the issue is the CA he did the signing with wasn't really a full CA. It was > lacking all sorts of constraints. I had him try again using a proper CA and it worked fine. Yeah, we were trying trial and error using a self-signed CA with certutil w/o any certificate constraints[1]. Side question: Just curious -- if we had tried with some of the constraints (-2, -3, -4) using certutil, might 'ipa-find' have been successful? (though this might not be desired, and one should use a proper CA) -2, -3, -4 as defined in the certutil usage page -- http://www.mozilla.org/projects/security/pki/nss/tools/certutil.html [1] http://kashyapc.wordpress.com/2011/10/12/configuring-certificate-chaining-using-mozilla-nssnetwork-security-services/ > > We'd like to detect this at install time, I'm just not exactly sure what the minimum > requirements are. I also wonder if dogtag should be doing this enforcement or if IPA > should (or both, perhaps). > > Where should we start?
> > rob > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel > -- /kashyap From mharmsen at redhat.com Sat Oct 22 02:20:02 2011 From: mharmsen at redhat.com (Matthew Harmsen) Date: Fri, 21 Oct 2011 19:20:02 -0700 Subject: [Pki-devel] Patch for review - Fix mod_revocator shutdown on 32-bit platforms . . . Message-ID: <4EA22852.2080406@redhat.com> * *Bugzilla Bug #716355* - mod_revocator does not shut down httpd server if expired CRL is fetched * *Bugzilla Bug #716361* - mod_revocator does not bring down httpd server if CRLUpdate fails Please review the attached patch (which should address both Bugzilla Bugs listed above): * https://bugzilla.redhat.com/attachment.cgi?id=529578&action=diff&context=patch&collapsed=&headers=1&format=raw TESTING THIS PATCH ON A 32-bit RHEL 5 SYSTEM: # date Fri Oct 21 15:50:26 PDT 2011 # cd /var/log/httpd # /sbin/service httpd start # tail -f error_log [Fri Oct 21 16:58:40 2011] [notice] core dump file size limit raised to 4294967295 bytes [Fri Oct 21 16:58:40 2011] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Oct 21 16:58:40 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Oct 21 16:58:42 2011] [notice] Digest: generating secret for digest authentication ... [Fri Oct 21 16:58:42 2011] [notice] Digest: done [Fri Oct 21 16:58:42 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. 
[Fri Oct 21 16:58:43 2011] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 # date -s "Fri Sep 21 15:50:26 PDT 2012" Fri Sep 21 15:50:26 PDT 2012 # tail -f error_log [Fri Oct 21 16:58:40 2011] [notice] core dump file size limit raised to 4294967295 bytes [Fri Oct 21 16:58:40 2011] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Oct 21 16:58:40 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Oct 21 16:58:42 2011] [notice] Digest: generating secret for digest authentication ... [Fri Oct 21 16:58:42 2011] [notice] Digest: done [Fri Oct 21 16:58:42 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. 
[Fri Oct 21 16:58:43 2011] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Sep 21 15:50:28 2012] [error] CRL http://meatpie.dsdev.sjc.redhat.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL CN=Certificate Authority,OU=pki-ca,O=DsdevSjcRedhat Domain is outdated. Shutting down server pid 25012 [Fri Sep 21 15:50:29 2012] [notice] caught SIGTERM, shutting down # /sbin/service httpd status httpd dead but subsys locked # /sbin/service httpd restart Stopping httpd: [FAILED] Starting httpd: [ OK ] # tail -f error_log [Fri Oct 21 16:58:40 2011] [notice] core dump file size limit raised to 4294967295 bytes [Fri Oct 21 16:58:40 2011] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Oct 21 16:58:40 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Oct 21 16:58:42 2011] [notice] Digest: generating secret for digest authentication ... [Fri Oct 21 16:58:42 2011] [notice] Digest: done [Fri Oct 21 16:58:42 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. 
[Fri Oct 21 16:58:43 2011] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Sep 21 15:50:28 2012] [error] CRL http://meatpie.dsdev.sjc.redhat.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL CN=Certificate Authority,OU=pki-ca,O=DsdevSjcRedhat Domain is outdated. Shutting down server pid 25012 [Fri Sep 21 15:50:29 2012] [notice] caught SIGTERM, shutting down [Fri Sep 21 15:54:30 2012] [notice] core dump file size limit raised to 4294967295 bytes [Fri Sep 21 15:54:30 2012] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Sep 21 15:54:30 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Sep 21 15:54:31 2012] [notice] Digest: generating secret for digest authentication ... [Fri Sep 21 15:54:31 2012] [notice] Digest: done [Fri Sep 21 15:54:31 2012] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. [Fri Sep 21 15:54:32 2012] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Sep 21 15:54:35 2012] [error] CRL http://meatpie.dsdev.sjc.redhat.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL CN=Certificate Authority,OU=pki-ca,O=DsdevSjcRedhat Domain is outdated. 
Shutting down server pid 25059 [Fri Sep 21 15:54:39 2012] [warn] child process 25065 still did not exit, sending a SIGTERM [Fri Sep 21 15:54:41 2012] [warn] child process 25065 still did not exit, sending a SIGTERM [Fri Sep 21 15:54:43 2012] [warn] child process 25065 still did not exit, sending a SIGTERM [Fri Sep 21 15:54:45 2012] [error] child process 25065 still did not exit, sending a SIGKILL [Fri Sep 21 15:54:46 2012] [notice] caught SIGTERM, shutting down # /sbin/service httpd status httpd dead but subsys locked # date -s "Fri Oct 21 15:50:26 PDT 2011" Fri Oct 21 15:50:26 PDT 2011 # /sbin/service httpd restart Stopping httpd: [FAILED] Starting httpd: [ OK ] # tail -f error_log [Fri Oct 21 16:58:40 2011] [notice] core dump file size limit raised to 4294967295 bytes [Fri Oct 21 16:58:40 2011] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Oct 21 16:58:40 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Oct 21 16:58:42 2011] [notice] Digest: generating secret for digest authentication ... [Fri Oct 21 16:58:42 2011] [notice] Digest: done [Fri Oct 21 16:58:42 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. 
[Fri Oct 21 16:58:43 2011] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Oct 21 16:58:44 2011] [notice] Revocation subsystem initialized 2 [Fri Sep 21 15:50:28 2012] [error] CRL http://meatpie.dsdev.sjc.redhat.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL CN=Certificate Authority,OU=pki-ca,O=DsdevSjcRedhat Domain is outdated. Shutting down server pid 25012 [Fri Sep 21 15:50:29 2012] [notice] caught SIGTERM, shutting down [Fri Sep 21 15:54:30 2012] [notice] core dump file size limit raised to 4294967295 bytes [Fri Sep 21 15:54:30 2012] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t [Fri Sep 21 15:54:30 2012] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Fri Sep 21 15:54:31 2012] [notice] Digest: generating secret for digest authentication ... [Fri Sep 21 15:54:31 2012] [notice] Digest: done [Fri Sep 21 15:54:31 2012] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads. [Fri Sep 21 15:54:32 2012] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations [Fri Sep 21 15:54:35 2012] [error] CRL http://meatpie.dsdev.sjc.redhat.com:9180/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL CN=Certificate Authority,OU=pki-ca,O=DsdevSjcRedhat Domain is outdated. 
Shutting down server pid 25059
[Fri Sep 21 15:54:39 2012] [warn] child process 25065 still did not exit, sending a SIGTERM
[Fri Sep 21 15:54:41 2012] [warn] child process 25065 still did not exit, sending a SIGTERM
[Fri Sep 21 15:54:43 2012] [warn] child process 25065 still did not exit, sending a SIGTERM
[Fri Sep 21 15:54:45 2012] [error] child process 25065 still did not exit, sending a SIGKILL
[Fri Sep 21 15:54:46 2012] [notice] caught SIGTERM, shutting down

[Fri Oct 21 15:51:01 2011] [notice] core dump file size limit raised to 4294967295 bytes
[Fri Oct 21 15:51:01 2011] [notice] SELinux policy enabled; httpd running as context user_u:system_r:httpd_t
[Fri Oct 21 15:51:01 2011] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Oct 21 15:51:03 2011] [notice] Digest: generating secret for digest authentication ...
[Fri Oct 21 15:51:03 2011] [notice] Digest: done
[Fri Oct 21 15:51:03 2011] [notice] mod_python: Creating 4 session mutexes based on 256 max processes and 0 max threads.
[Fri Oct 21 15:51:04 2011] [notice] Apache/2.2.3 (Red Hat) configured -- resuming normal operations
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2
[Fri Oct 21 15:51:06 2011] [notice] Revocation subsystem initialized 2

NOTE: PATCH WAS ALSO TESTED ON A 64-BIT PLATFORM TO DETERMINE THAT NO REGRESSION OCCURRED.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 5150 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From awnuk at redhat.com Sat Oct 22 11:05:49 2011
From: awnuk at redhat.com (Andrew Wnuk)
Date: Sat, 22 Oct 2011 04:05:49 -0700
Subject: [Pki-devel] What CA constraints?
In-Reply-To: <4EA1A1F2.40509@redhat.com>
References: <4EA19BD8.9010207@redhat.com> <4EA1A1F2.40509@redhat.com>
Message-ID: <4EA2A38D.9010409@redhat.com>

On 10/21/2011 9:46 AM, Christina wrote:
> On 10/21/2011 09:20 AM, Rob Crittenden wrote:
>> Shanks was testing signing an IPA CA cert request with an external CA
>> and found an issue; see https://fedorahosted.org/freeipa/ticket/2019
>> for full details.
>>
>> In short, the issue is that the CA he did the signing with wasn't really a
>> full CA. It was lacking all sorts of constraints. I had him try again
>> using a proper CA and it worked fine.
>>
>> We'd like to detect this at install time; I'm just not exactly sure
>> what the minimum requirements are. I also wonder if Dogtag should be
>> doing this enforcement or if IPA should (or both, perhaps).
>>
>> Where should we start?
>>
>> rob
>>
>> _______________________________________________
>> Pki-devel mailing list
>> Pki-devel at redhat.com
>> https://www.redhat.com/mailman/listinfo/pki-devel
>
> The short answer is that, at the minimum, you need the Basic
> Constraints extension, but you also need others like the
> Authority Key Identifier. The key usage has to be right, etc.;
> you can look up the X.509 RFC.
>
> Dogtag does have a self-test module that tests the system certs when they
> are started. In the CA's case, it should report an error if it's not a
> proper CA. I believe the test is on by default. You can look in
> CS.cfg for ca.cert.signing.nickname and make sure your new nickname is
> there ...
> you can also see the pairing
> ca.cert.signing.certusage=SSLCA, which tells the server that the
> cert is expected to be a CA cert, so that the server will report an
> error and refuse to start if it fails the test.
>
> Christina
>
> _______________________________________________
> Pki-devel mailing list
> Pki-devel at redhat.com
> https://www.redhat.com/mailman/listinfo/pki-devel

It is always good to check RFC 5280 for guidelines:
http://www.ietf.org/rfc/rfc5280.txt

Andrew
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ayoung at redhat.com Tue Oct 25 00:45:32 2011
From: ayoung at redhat.com (Adam Young)
Date: Mon, 24 Oct 2011 20:45:32 -0400
Subject: [Pki-devel] Keytab for talking to PKI CA from IPA
Message-ID: <4EA606AC.6040302@redhat.com>

When setting up replication, it should not be necessary to cache any passwords anywhere until the replication agreements are set up, and then all caching should use known secure mechanisms.

The two main repositories we care about are the Directory Server instances managed by IPA and the Certificate Authority. Currently, these are not in the same Dir Srv instance (although they could be) because we expect to replicate them separately. Specifically, we expect to have more IPA instances than CA instances, and we will not have a CA instance without a co-located IPA instance.

During normal operations, the IPA instance should not need to talk to the directory server instance of the CA. All communication between IPA and the CA should happen via HTTPS, secured by certificates issued by the CA.

Once the replication process starts, the file generated by ipa-prepare replicate should not need any passwords. Instead, when the replicated server is installed, the user performing the install should get a ticket as an administrative user. All authentication for the replication should be based on that ticket.

The very first step is to install a new Directory server for IPA.
For this, the replication process can generate a single-use password and use it as the Directory server password for the local instance. Next, the ticket for the administrative user should be used to download a keytab for the Directory Server to use when communicating back to the original IPA directory server. With this keytab in place, the replicated IPA DS should be able to talk to the original IPA DS and establish the replication agreement. At this point, the single-use password should be disabled.

Once the IPA Directory server has been replicated, we can either use the original keytab or download a second keytab to establish a replication agreement between the replicated CA directory server and the original CA directory server. The process would look the same as setting up the keytab for the IPA directory server.

Why don't we do this now?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ayoung at redhat.com Tue Oct 25 01:03:05 2011
From: ayoung at redhat.com (Adam Young)
Date: Mon, 24 Oct 2011 21:03:05 -0400
Subject: [Pki-devel] [PATCH] 0008-stub-unimplemented-abstract-methods
Message-ID: <4EA60AC9.4050303@redhat.com>

-------------- next part --------------
A non-text attachment was scrubbed...
Name: dogtag-admiyo-0008-stub-unimplemented-abstract-methods.patch
Type: text/x-patch
Size: 4567 bytes
Desc: not available
URL: 

From ayoung at redhat.com Tue Oct 25 01:13:22 2011
From: ayoung at redhat.com (Adam Young)
Date: Mon, 24 Oct 2011 21:13:22 -0400
Subject: [Pki-devel] Slides from the Git and Eclipse talks
Message-ID: <4EA60D32.6080501@redhat.com>

Links to all of the talks I gave regarding Git, Tomcat, and Eclipse can be found on the wiki:

http://pki.fedoraproject.org/wiki/User:Admiyo

I'll make additional links to them from under the development process pages at some point as well.
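The bootstrap sequence described in the keytab thread above (single-use password, then a ticket-authenticated keytab fetch, then retiring the password) can be sketched as a toy model. Everything here is illustrative: the class, method names, and ticket string are assumptions, not IPA or Dogtag code.

```python
# Toy model of the replica bootstrap flow described above.
# Names are illustrative assumptions; real IPA would use ipa-getkeytab
# and Kerberos credentials, not plain strings.

import secrets


class ReplicaBootstrap:
    def __init__(self):
        # Step 1: the local DS instance starts with a generated single-use password.
        self.single_use_password = secrets.token_hex(16)
        self.password_enabled = True
        self.keytab = None

    def fetch_keytab(self, admin_ticket):
        # Step 2: the admin's ticket authenticates the keytab download.
        if not admin_ticket:
            raise PermissionError("admin Kerberos ticket required")
        self.keytab = f"keytab-for-{admin_ticket}"

    def establish_replication(self):
        # Step 3: with the keytab in place, the agreement is set up and
        # the single-use password is disabled.
        if self.keytab is None:
            raise RuntimeError("fetch a keytab first")
        self.password_enabled = False


replica = ReplicaBootstrap()
replica.fetch_keytab("admin@EXAMPLE.COM")
replica.establish_replication()
print(replica.password_enabled)  # False
```

The point of the model is only the ordering: once the keytab exists, the password never needs to be cached or kept enabled.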
From rcritten at redhat.com Tue Oct 25 13:10:01 2011
From: rcritten at redhat.com (Rob Crittenden)
Date: Tue, 25 Oct 2011 09:10:01 -0400
Subject: [Pki-devel] [Freeipa-devel] Keytab for talking to PKI CA from IPA
In-Reply-To: <4EA60640.80108@younglogic.com>
References: <4EA60640.80108@younglogic.com>
Message-ID: <4EA6B529.3060108@redhat.com>

Adam Young wrote:
> When setting up replication, it should not be necessary to cache any
> passwords anywhere until the replication agreements are set up, and
> then all caching should use known secure mechanisms.
>
> The two main repositories we care about are the Directory Server
> instances managed by IPA and the Certificate Authority. Currently, these
> are not in the same Dir Srv instance (although they could be) because
> we expect to replicate them separately. Specifically, we expect to have
> more IPA instances than CA instances, and we will not have a CA instance
> without a co-located IPA instance.

Not really. I created two instances to provide logical separation and prevent us from having to deal with multiple naming contexts. There was some serious investigation over the summer to see if we could consolidate into a single instance. It was determined to be too much too late in the cycle.

> During normal operations, the IPA instance should not need to talk to
> the directory server instance of the CA. All communication between IPA
> and the CA should happen via HTTPS, secured by certificates issued by
> the CA.

Right, this has never been in question AFAIK.

> Once the replication process starts, the file generated by ipa-prepare
> replicate should not need any passwords. Instead, when the replicated
> server is installed, the user performing the install should get a ticket
> as an administrative user. All authentication for the replication should
> be based on that ticket.
>
> The very first step is to install a new Directory server for IPA.
> For this, the replication process can generate a single-use password and
> use it as the Directory server password for the local instance. Next, the
> ticket for the administrative user should be used to download a keytab
> for the Directory Server to use when communicating back to the original
> IPA directory server. With this keytab in place, the replicated IPA DS
> should be able to talk to the original IPA DS and establish the
> replication agreement. At this point, the single-use password should be
> disabled.

I think you mean the first step is to create a 389-ds instance for the CA, right? Why not simply pre-create the keytab and ship it in the prepared file? There is no need to disable the random DM password. Once the installation is complete it will be unknown, so effectively disabled.

> Once the IPA Directory server has been replicated, we can either use the
> original keytab or download a second keytab to establish a replication
> agreement between the replicated CA directory server and the original CA
> directory server. The process would look the same as setting up the
> keytab for the IPA directory server.
>
> Why don't we do this now?

Because of permission issues writing the relevant entries to cn=config.

rob

From alee at redhat.com Tue Oct 25 16:31:10 2011
From: alee at redhat.com (Ade Lee)
Date: Tue, 25 Oct 2011 12:31:10 -0400
Subject: [Pki-devel] the state of client auth and the new proposed restful interface
Message-ID: <1319560272.13969.65.camel@localhost.localdomain>

Hi all,

Client cert authentication is central to how the CS manages access to its servlets. How we use and have used client auth is tricky, though, and has implications for the new and current interfaces. In this email, I hope to lay out the current state of client auth - what works, what doesn't - and some thoughts as to how this affects the currently proposed interface. More thoughts on how to solve some of the problems would be really helpful.
We use client auth in the following ways:

I. There are servlets which never require client auth (EE pages mostly) and some admin (mostly installation-related) pages.

II. There are servlets which always require client auth (agent pages).

III. There are servlets which may require client auth (admin pages from the console), which can be configured to require client auth.

IV. Enrollment profiles (executed from the EE pages, which do not require client auth) which, when executed, require client auth to be processed. Another example of this would be self-renewals. This is a case where we start off in a non-client-auth context and switch to client auth.

Here is a brief history of how we have implemented client auth to satisfy the above cases:

1. Shared Port Configuration: In the past, we did not have ports that were specified as client-auth/non-client-auth. Instead, if a servlet required client auth, we would ask Tomcat for the javax.x509.certificate attribute and - through tomcatjss and JSS - it would renegotiate the connection and get the relevant certificate. This handled cases I-IV.

2. Separated Ports: Recently we changed to a port-separated configuration. EE, admin, and agent operations occurred on separate ports, which were configured to either require (agent) or not require (ee, admin) client auth. For case III, if client auth were required, we configured the admin port to "want" - which would essentially ask for a cert and continue if one was not provided. For case IV, we continued to rely on Tomcat-initiated renegotiation.

3. Separated ports with MITM fix: For older clients without the newer renegotiation protocol that fixes MITM, renegotiation is disabled. This means that case IV would fail for certain profiles and renewals. The fix for this was to redirect the profile submission to a new client-auth-required EE port.

4. CA behind an Apache proxy.
In this case, the outside world talks to the CS through an httpd instance that talks to the CS through the AJP port. This is the current configuration used in IPA.

* The decision as to whether a port is client-auth/non-client-auth is made in the proxy by mapping URLs. URLs that begin with /agent, for example, require client auth. This handles cases I and II.

* III is more complicated but can be addressed in the same way. That is, we can tweak the URL matching rules to always permit the admin servlets that never need client auth (like installation servlets), while requiring client auth for other admin servlets.

* Case IV is currently broken. Tomcat still passes on the request for the javax.x509.certificate attribute - but AJP does not understand this request and drops it on the floor, resulting in a 5XX-type error being returned. As of now, I have been unable to find a way to configure AJP to pass the request on to Apache to have it renegotiate the connection.

5. The new proposed RESTful interface: For Dogtag 9+, we have been envisioning that the CS instances would use default ports and that they would still communicate with the outside world through the httpd proxy. At this point, though, the new RESTful interface does not have any indication as to whether the operation is agent/ee/admin - no indication of this in the URL - so it is impossible to filter at the httpd proxy based on URL. Moreover, case IV is still broken.

There is also an additional consideration, which, for reference, I am going to call case V. Some operations which currently have different output depending on whether they are executed as an agent or an EE user are mapped to the same servlet in the new proposed API. An example is /pki/profiles, which would return a list of profiles. This is a different list depending on whether it is executed as an agent or as an EE user.

Possible Solutions (so far):

1.
Whether we use client auth is closely tied to our conception of who is executing the command. We have defined roles - ee, agent, admin - in the CS. Perhaps we should add these to the proposed URI pattern? So we might have /pki/agent/certificates and /pki/ee/certificates, etc. This would allow URL filtering on the proxy along the same lines as now, solving cases I, II, III, and V. Case IV is still broken.

2. Another solution might be, once we have determined that a client cert is required (and has not been provided), to send a temporary redirect to, say, /pki/getClientCert with a 307 (temporary redirect) code. This servlet could be configured in the proxy to require client auth. This would presumably be the equivalent of renegotiating the connection, except using Apache rather than Tomcat to do it. And it would solve all five cases listed above.

3. If we can figure out how to get AJP to pass back a request to renegotiate the connection, then this might be the best solution yet. Then we do not need to change any CS code: when we determine that we need a client cert, the request will be passed back to the proxy, and the proxy will get the cert.

Ade

From alee at redhat.com Wed Oct 26 14:26:12 2011
From: alee at redhat.com (Ade Lee)
Date: Wed, 26 Oct 2011 10:26:12 -0400
Subject: [Pki-devel] Proposed API v 0.3
Message-ID: <1319639172.13969.73.camel@localhost.localdomain>

Hi all,

Removed some interfaces that will not be used, as per Jack's mention. Also added some controller objects to handle TKS operations (courtesy of Jack), and added some flows on page 2, so that we can see how the new interface would all work together. I took a quick stab at the key archival/recovery scenarios - but that probably needs more massaging from more experienced PKI folks.

Adam, please wikify.

Ade
-------------- next part --------------
A non-text attachment was scrubbed...
Name: rest.ods
Type: application/vnd.oasis.opendocument.spreadsheet
Size: 12180 bytes
Desc: not available
URL: 

From ayoung at redhat.com Wed Oct 26 17:34:33 2011
From: ayoung at redhat.com (Adam Young)
Date: Wed, 26 Oct 2011 13:34:33 -0400
Subject: [Pki-devel] Updated REST design pages on the Wiki
Message-ID: <4EA844A9.3050307@redhat.com>

Ade has done another pass over the REST design, taking feedback from several people. The direct links are:

http://pki.fedoraproject.org/wiki/REST
and
http://pki.fedoraproject.org/wiki/REST/flows

We'll shortly update http://pki.fedoraproject.org/wiki/Dogtag_Future_Directions with a fuller description of what we are planning.

From alee at redhat.com Wed Oct 26 19:44:11 2011
From: alee at redhat.com (Ade Lee)
Date: Wed, 26 Oct 2011 15:44:11 -0400
Subject: [Pki-devel] POST-a vs. POST-b
Message-ID: <1319658252.13969.123.camel@localhost.localdomain>

I was asked today what the difference in the interface doc was between POST-a and POST-b. Below is an excerpt from Restful Web Services (in the best-practices chapter) that explains it in more detail.

Basically, POST-a is using POST to add a new resource. You must use POST instead of PUT if the server is responsible for the final URI. For example, in creating a new request, the server is responsible for assigning a request_id and hence the link to the URI: /pki/request/$id. This is a good use of POST (as described in section 8.6.2).

Section 8.6.3 describes the overloaded use of POST, which is the result of the submission of a form. This is what I call POST-b - and what some call POST-p (for POST-processor). We have tried to reduce the number of cases where POST-b is used. It's not bad in and of itself, but it makes the interface less clear and less resource-oriented or RESTful. Sometimes we have to use it - as Jack specified for OCSP, for instance. In the case of CRLs, at this point I just do not understand how that interface is supposed to work (and am waiting for Andrew or Christina to tell me).
Hope this helps,
Ade

*********************************************

8.6.2. New Resources: PUT Versus POST

You can expose the creation of new resources through PUT, POST, or both. But a client can only use PUT to create resources when it can calculate the final URI of the new resource. In Amazon's S3 service, the URI path to a bucket is /{bucket-name}. Since the client chooses the bucket name, a client can create a bucket by constructing the corresponding URI and sending a PUT request to it. On the other hand, the URI to a resource in a typical Rails web service looks like /{database-table-name}/{database-ID}. The name of the database table is known in advance, but the ID of the new resource won't be known until the corresponding record is saved to the database. To create a resource, the client must POST to a "factory" resource, located at /{database-table-name}. The server chooses a URI for the new resource.

8.6.3. Overloading POST

POST isn't just for creating new resources and appending to representations. You can also use it to turn a resource into a tiny RPC-style message processor. A resource that receives an overloaded POST request can scan the incoming representation for additional method information, and carry out any task whatsoever. This gives the resource a wider vocabulary than one that supports only the uniform interface. This is how most web applications work. XML-RPC and SOAP/WSDL web services also run over overloaded POST. I strongly discourage the use of overloaded POST, because it ruins the uniform interface. If you're tempted to expose complex objects or processes through overloaded POST, try giving the objects or processes their own URIs, and exposing them as resources. I show several examples of this in Section 8.8 later in this chapter.

There are two noncontroversial uses for overloaded POST. The first is to simulate HTTP's uniform interface for clients like web browsers that don't support PUT or DELETE.
The second is to work around limits on the maximum length of a URI. The HTTP standard specifies no limit on how long a URI can get, but many clients and servers impose their own limits: Apache won't respond to requests for URIs longer than 8 KB. If a client can't make a GET request to http://www.example.com/numbers/1111111 because of URI length restrictions (imagine a million more ones there if you like), it can make a POST request to http://www.example.com/numbers?_method=GET and put "1111111" in the entity-body.

If you want to do without PUT and DELETE altogether, it's entirely RESTful to expose safe operations on resources through GET, and all other operations through overloaded POST. Doing this violates my Resource-Oriented Architecture, but it conforms to the less restrictive rules of REST. REST says you should use a uniform interface, but it doesn't say which one. If the uniform interface really doesn't work for you, or it's not worth the effort to make it work, then go ahead and overload POST, but don't lose the resource-oriented design. Every URI you expose should still be a resource: something a client might want to link to. A lot of web applications create new URIs for operations exposed through overloaded POST. You get URIs like /weblog/myweblog/rebuild-index. It doesn't make sense to link to that URI. Instead of putting method information in the URI, expose overloaded POST on your existing resources (/weblog/myweblog) and ask for method information in the incoming representation (method=rebuild-index). This way, /weblog/myweblog still acts like a resource, albeit one that doesn't totally conform to the uniform interface. It responds to GET, PUT, DELETE... and also "rebuild-index" through overloaded POST. It's still an object in the object-oriented sense.

A rule of thumb: if you're using overloaded POST, and you never expose GET and POST on the same URI, you're probably not exposing resources at all. You've probably got an RPC-style service.
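The POST-a versus POST-b distinction from the thread above can be sketched in a few lines. The URIs and field names here are illustrative, not Dogtag's actual API.

```python
# POST-a: POST to a "factory" resource; the server assigns the final URI,
# as with a new request getting /pki/request/$id.
# POST-b: overloaded POST, where the method information travels in the
# submitted form. All names here are illustrative assumptions.

requests_db = {}
_next_id = 0


def post_a(factory_uri, representation):
    """POST-a: create a new resource; the server picks the URI (201 + Location)."""
    global _next_id
    _next_id += 1
    new_uri = f"{factory_uri}/{_next_id}"
    requests_db[new_uri] = representation
    return 201, new_uri


def post_b(resource_uri, form):
    """POST-b (overloaded POST): scan the representation for method info."""
    if form.get("method") == "rebuild-index":
        return 200, f"rebuilt index of {resource_uri}"
    return 400, "unknown method"


status, location = post_a("/pki/request", {"type": "enrollment"})
print(status, location)  # 201 /pki/request/1
print(post_b("/weblog/myweblog", {"method": "rebuild-index"})[0])  # 200
```

Note how post_a keeps the interface uniform (the response is just a new resource URI), while post_b widens the resource's vocabulary with RPC-style method names, which is exactly why the excerpt discourages it.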
From ayoung at redhat.com Thu Oct 27 01:29:44 2011
From: ayoung at redhat.com (Adam Young)
Date: Wed, 26 Oct 2011 21:29:44 -0400
Subject: [Pki-devel] Choosing a REST implementation
Message-ID: <4EA8B408.3090003@redhat.com>

As we close in on the decision about REST, I'd like to mention one thing that Ade and I have been looking into: which REST implementation to choose. It seems like the world has settled on JAX-RS as the API of choice. This makes things a little easier for us, as it means we can program to a standard API and swap out the libraries.

I've done what I call a Spike (something less than a proof of concept, more like poking a hole in the ground to see if something comes out) using the reference implementation, Jersey. With this I can do:

curl -H "Accept: application/json" http://$IPASERVER:8080/PKIREST/pki/certificates/2

And get back:

{"algorithmID":"RSA","issuer":"Fedora PKA Server Cert","serialNumber":"2","version":"0"}

as well as

curl -H "Accept: application/xml" http://$IPASERVER:8080/PKIREST/pki/certificates

Which gets the same data as XML: RSA / Fedora PKA Server Cert / 2 / 0 and RSA / Fedora PKA CA / 1 / 0 (the markup itself was stripped by the list archive).

So we have a library and an approach that can handle both XML and JSON without much fuss. Note that this is just a Plain Old Java Object (POJO) that is a placeholder for a real certificate and not anything that ties in with our APIs. As I said, a spike.

The other REST implementations I've checked out are RESTlet and RESTEasy. Of them, RESTEasy is closer to home, as it is supported by the JBoss team, and is the JBoss REST implementation of choice. Both of them are complicated enough that it is not immediately clear how to go about integrating them into a pre-existing Servlet app. I'm sure it is not much different from the Jersey code base.

The questions then are: does the Jersey implementation do everything that we foresee needing, and how hard would it be to package?
I'll answer the packaging question first, as it is the easier one. In order to productize the Jersey implementation for shipping with PKI, I'd have to get the source for Jersey and make an RPM out of it: someone on the JPackage team has done this in the past, and it shouldn't be hard to move it forward to the latest Jersey implementation. The Jersey implementation from the Oracle site provides:

asm-3.1.jar
jackson-core-asl-1.8.3.jar
jackson-jaxrs-1.8.3.jar
jackson-mapper-asl-1.8.3.jar
jackson-xc-1.8.3.jar
jersey-client-1.9.1.jar
jersey-core-1.9.1.jar
jersey-json-1.9.1.jar
jersey-server-1.9.1.jar
jettison-1.1.jar
jsr311-api-1.1.1.jar

Most of these are not in Fedora yet; only jettison is. asm is at an older version (2), but we should be able to get version 3 in under the name asm3, which is what I think is already going to happen for Maven support. I suspect it would not be that hard to package the others and get them into Fedora. The source for all of them is in the Maven repo:

http://download.java.net/maven/2/com/sun/jersey/

and converting Maven POMs to spec files is straightforward enough that it can be automated.

The other REST implementations are in pretty much the same boat. None of them are packaged for Fedora, and most of them pull in quite a few other packages for building. Both RESTlet and RESTEasy are fairly heavyweight to build: they both go beyond providing a library that interfaces with the Servlet API to providing full servers, far more than we would want to support. Let's just say that getting them to build as RPMs is a task that I would not be willing to take on in support of PKI. There are a few others out there, but mostly they seem to be projects that were originally for web services and decided to add support for REST as well.

So then the question is: does Jersey do everything we think we need it to do? The short answer right now is that I don't know, but I suspect so.
REST basically makes heavy use of the HTTP definition, so really, the question is: does it let us make the most of things like redirects, response codes, renegotiation, and authentication? This is the next stage of investigation, and I'll keep you posted.

From adam at younglogic.com Tue Oct 25 00:43:44 2011
From: adam at younglogic.com (Adam Young)
Date: Mon, 24 Oct 2011 20:43:44 -0400
Subject: [Pki-devel] Keytab for talking to PKI CA from IPA
Message-ID: <4EA60640.80108@younglogic.com>

When setting up replication, it should not be necessary to cache any passwords anywhere until the replication agreements are set up, and then all caching should use known secure mechanisms.

The two main repositories we care about are the Directory Server instances managed by IPA and the Certificate Authority. Currently, these are not in the same Dir Srv instance (although they could be) because we expect to replicate them separately. Specifically, we expect to have more IPA instances than CA instances, and we will not have a CA instance without a co-located IPA instance.

During normal operations, the IPA instance should not need to talk to the directory server instance of the CA. All communication between IPA and the CA should happen via HTTPS, secured by certificates issued by the CA.

Once the replication process starts, the file generated by ipa-prepare replicate should not need any passwords. Instead, when the replicated server is installed, the user performing the install should get a ticket as an administrative user. All authentication for the replication should be based on that ticket.

The very first step is to install a new Directory server for IPA. For this, the replication process can generate a single-use password and use it as the Directory server password for the local instance.
Next, the ticket for the administrative user should be used to download a keytab for the Directory Server to use when communicating back to the original IPA directory server. With this keytab in place, the replicated IPA DS should be able to talk to the original IPA DS and establish the replication agreement. At this point, the single-use password should be disabled.

Once the IPA Directory server has been replicated, we can either use the original keytab or download a second keytab to establish a replication agreement between the replicated CA directory server and the original CA directory server. The process would look the same as setting up the keytab for the IPA directory server.

Why don't we do this now?

From alee at redhat.com Thu Oct 27 14:34:54 2011
From: alee at redhat.com (Ade Lee)
Date: Thu, 27 Oct 2011 10:34:54 -0400
Subject: [Pki-devel] Interface doc changed (version 0.4)
Message-ID: <1319726095.13969.129.camel@localhost.localdomain>

Hi all,

Based on feedback from Jack on some KRA servlets, I have made a few small modifications to the interface docs and flows. In particular, I added a couple of flows for key generation and storage on the DRM (server-side key gen), and for key recovery when nAgents=1. On the interface docs, I made the types of POST-a to /pki/keyrequest more explicit. See:

http://pki.fedoraproject.org/wiki/REST
http://pki.fedoraproject.org/wiki/REST/flows

Ade

From ayoung at redhat.com Thu Oct 27 18:04:03 2011
From: ayoung at redhat.com (Adam Young)
Date: Thu, 27 Oct 2011 14:04:03 -0400
Subject: [Pki-devel] [PATCH] 0009-add-tests-to-classpath
Message-ID: <4EA99D13.1000305@redhat.com>

Requires 0008 in order to not break the build in Eclipse.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dogtag-admiyo-0009-1-add-tests-to-classpath.patch
Type: text/x-patch
Size: 899 bytes
Desc: not available
URL: 

From ayoung at redhat.com Fri Oct 28 01:05:43 2011
From: ayoung at redhat.com (Adam Young)
Date: Thu, 27 Oct 2011 21:05:43 -0400
Subject: [Pki-devel] Client Authentication
Message-ID: <4EA9FFE7.20405@redhat.com>

Ade,

Your earlier email discussed the renegotiation challenge based on the Profiles:

http://pki.fedoraproject.org/wiki/REST#Profiles

For the case where a user points a browser (say, an Ajax request) at /pki/profiles, let's say that we have two cases: one where the user is authenticated and one where they are not. In both cases, they get back a collection, but in the unauthenticated case it will have significantly fewer entries. In this case, we would want the Java equivalent of mod_nss:

NSS_VerifyClient: Optional

I'm guessing this is a tomcatjss setting. In this case, if the user has the certificate, they can present it, but if they don't, the operation will still complete. I think this is what we want: we always ask for the certificate, but we say it is OK if you don't have it - you just don't get the data.

In the case where the user is asking for an object, say an actual profile, and they don't have sufficient privs, they get back a hard and fast error: probably 403.2.

http://en.wikipedia.org/wiki/HTTP_403

For something like CSRs, we probably want to restrict access to agents. In that case, if an unauthenticated user, or one without appropriate privs, attempts to access that URL, they also get a 403.2.

I don't know how this works in with renegotiation, but I am guessing that every time a user without a certificate hits an "Optional" page they will be asked for their cert. This might be chatty. No idea.
So in general, we tag the URLs with one of:

NSS_VerifyClient: Require if they must be authenticated to use them
NSS_VerifyClient: Optional if they see different results depending on whether they are authenticated
NSS_VerifyClient: None if they can view them unauthenticated and see the same results as everyone else

In the pki/WEB-INF/web.xml, this probably maps to something like this:

  <security-constraint>
    <web-resource-collection>
      <web-resource-name>Protected Resource</web-resource-name>
      <url-pattern>/*/profile</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <role-name>anonymous</role-name>
      <role-name>agent</role-name>
    </auth-constraint>
  </security-constraint>

I'm guessing that we want to specify a role for anonymous as opposed to no role.

  ...
  CLIENT-CERT
  Tomcat Manager Application
  PKICA
  ...

the PKICA Realm would be defined at the server level, in conf/server.xml. Something like:

  <Realm className="org.apache.catalina.realm.JNDIRealm"
         connectionURL="ldaps://localhost:8389"
         userPattern="uid={0},ou=people,dc=mycompany,dc=com"
         roleBase="ou=groups,dc=mycompany,dc=com"
         roleName="cn"
         roleSearch="(uniqueMember={0})"
  />

There is a class that almost does what we want: org.apache.catalina.realm.JNDIRealm. I suspect we can subclass it. It has two ways of doing the auth: Bind mode and Comparison mode. It might be possible to add a Client Cert mode in a subclass. Docs are here: http://tomcat.apache.org/tomcat-6.0-doc/realm-howto.html#JNDIRealm

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: void.gif Type: image/gif Size: 43 bytes Desc: not available URL:

From jmagne at redhat.com Fri Oct 28 01:33:08 2011 From: jmagne at redhat.com (Jack Magne) Date: Thu, 27 Oct 2011 18:33:08 -0700 Subject: [Pki-devel] Client Authentication In-Reply-To: <4EA9FFE7.20405@redhat.com> References: <4EA9FFE7.20405@redhat.com> Message-ID: <4EAA0654.5010003@redhat.com>

Adam Young wrote: > Ade, > > Your ealier emali discussed the renegotiation challenge based on the > Profiles. > > http://pki.fedoraproject.org/wiki/REST#Profiles > > For the case where a user points a browser (say and Ajax request) at > /pki/profiles lets say that we have two cases: one where the user > is authenticated and one where they are not. In both cases, they get > back a collection, but in the case of unauthenticated it will have > significantly fewer entries. 
> > In this case, we would want the Java equivalent of mod_nss: > > NSS_VerifyCLient: Optional > > I'm guessing this a tomcatjss setting.

clientAuth="want" for tomcatjss in server.xml.

The thing is, though, the way this behaves is that the user is asked for the cert every time. The server then lets it go if the user chooses not to send one, or uses it if they do.

> > In this case, if the user has the certificate, they can present it, > but if they don't, the operation will complete. I think this is what > we want. We always ask for the certificate, but we say it is OK if > you don't have it, you just don't get the data. > > In the case where the user is asking for an object, say an actual > profile, and they don't have sufficient privs, they get back a hard > and fast error: probably 403.2 > > http://en.wikipedia.org/wiki/HTTP_403 > > For something like CSRs, we probably want to restrict access to > agents. In that case, if an unauthenticated user, or one without > appropriate privs, attempts to access that URL, they also get a 403.2. > > I don't know how this works in with the renegotiate, but I am guessing > that every time the user without a certificate hits an "Optional" page > they will be asked for their cert. This might be chatty. No idea. 
Something like: > > > connectionURL="ldaps://localhost:8389" > userPattern="uid={0},ou=people,dc=mycompany,dc=com" > roleBase="ou=groups,dc=mycompany,dc=com" > roleName="cn" > roleSearch="(uniqueMember={0})" > /> > > There is a class that almost does what we want. > > *org.apache.catalina.realm.JNDIRealm*. > > I suspect we can subclass it. It has two ways of doing the auth : > Bind mode and Comparison mode. It might be possible to add a Client > Cert mode in a subclass. docs are here: > http://tomcat.apache.org/tomcat-6.0-doc/realm-howto.html#JNDIRealm > > > > > ------------------------------------------------------------------------ > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 43 bytes Desc: not available URL: From alee at redhat.com Fri Oct 28 02:02:52 2011 From: alee at redhat.com (Ade Lee) Date: Thu, 27 Oct 2011 22:02:52 -0400 Subject: [Pki-devel] Client Authentication In-Reply-To: <4EAA0654.5010003@redhat.com> References: <4EA9FFE7.20405@redhat.com> <4EAA0654.5010003@redhat.com> Message-ID: <1319767373.13969.131.camel@localhost.localdomain> On Thu, 2011-10-27 at 18:33 -0700, Jack Magne wrote: > Adam Young wrote: > > Ade, > > > > Your ealier emali discussed the renegotiation challenge based on the > > Profiles. > > > > http://pki.fedoraproject.org/wiki/REST#Profiles > > > > For the case where a user points a browser (say and Ajax request) > > at /pki/profiles lets say that we have two cases: one where the > > user is authenticated and one where they are not. In both cases, > > they get back a collection, but in the case of unauthenticated it > > will have significantly fewer entries. 
> > > > In this case, we would want the Java equivalent of mod_nss: > > > > NSS_VerifyCLient: Optional > > > > I'm guessing this a tomcatjss setting. > clientAuth="want" > > For tomcatjss in server.xml > > The thing is though, the way this behaves is the user is asked for the > cert every time. The server then lets it go if the user choses not to > send one, or uses it if they do. Which is precisely the point, Adam. We do not want end users to be prompted for a certificate when they do not need to be. In the UI, that will most likely be seen as a regression. > > > > In this case, if the user has the certificate, they can present it, > > but if they don't, the operation will complete. I think this is > > what we want. We always ask for the certificate, but we say it is > > OK if you don't have it, you just don't get the data. > > > > In the case where the user is asking for an object, say an actual > > profile, and they don't have sufficient privs, they get back a hard > > and fast error: probably 403.2 > > > > http://en.wikipedia.org/wiki/HTTP_403 > > > > For something like CSRs, we probably want to restrict access to > > agents. In that case, if an unauthenticated user, or one without > > appropriate privs, attempts to access that URL, they also get a > > 403.2. > > > > I don't know how this works in with the renegotiate, but I am > > guessing that every time the user without a certificate hits an > > "Optional" page they will be asked for their cert. This might be > > chatty. No idea. 
> > > > So in general, we tag the URLS either > > NSS_VerifyClient: Require if they must be authenticated to use them > > NSS_VerifyClient: Optional if they see different results based on > > authentication or not > > NSS_VerifyClient: None if they can view them unauthenticated and see > > the same results as everyone else > > > > > > IN the pki/WEB-INF/web.xml, this probably maps to something like > > this: > > > > > > Protected Resource > > /*/profile > > > > > > > > anonymous > > agent > > > > > > > > I'm guessing that we want to specify a role for anonymous as opposed > > to no role. > > > > > > ... > > > > > > CLIENT-CERT > > Tomcat Manager Application > > PKICA > > > > ... > > > > > > > > the PKICA Realm would be defined at the server level, in > > conf/server.xml. Something like: > > > > > > > > > connectionURL="ldaps://localhost:8389" > > userPattern="uid={0},ou=people,dc=mycompany,dc=com" > > roleBase="ou=groups,dc=mycompany,dc=com" > > roleName="cn" > > roleSearch="(uniqueMember={0})" > > /> > > > > There is a class that almost does what we want. > > > > org.apache.catalina.realm.JNDIRealm. > > > > I suspect we can subclass it. It has two ways of doing the auth : > > Bind mode and Comparison mode. It might be possible to add a > > Client Cert mode in a subclass. docs are here: > > http://tomcat.apache.org/tomcat-6.0-doc/realm-howto.html#JNDIRealm > > > > > > > > > > > > ____________________________________________________________________ > > > > _______________________________________________ > > Pki-devel mailing list > > Pki-devel at redhat.com > > https://www.redhat.com/mailman/listinfo/pki-devel > > > From mharmsen at redhat.com Sat Oct 29 00:03:45 2011 From: mharmsen at redhat.com (Matthew Harmsen) Date: Fri, 28 Oct 2011 17:03:45 -0700 Subject: [Pki-devel] Patch for review - Fix Java class conflicts when using Java 7 . . . Message-ID: <4EAB42E1.9090608@redhat.com> * *Bugzilla Bug #749927* - Java class conflicts using Java 7 in Fedora 17 (rawhide) . . 
. Please review the attached patch: * https://bugzilla.redhat.com/attachment.cgi?id=530756&action=diff&context=patch&collapsed=&headers=1&format=raw Thanks, -- Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5150 bytes Desc: S/MIME Cryptographic Signature URL: From awnuk at redhat.com Sat Oct 29 00:11:20 2011 From: awnuk at redhat.com (Andrew Wnuk) Date: Fri, 28 Oct 2011 17:11:20 -0700 Subject: [Pki-devel] Patch for review - Fix Java class conflicts when using Java 7 . . . In-Reply-To: <4EAB42E1.9090608@redhat.com> References: <4EAB42E1.9090608@redhat.com> Message-ID: <4EAB44A8.3050803@redhat.com> On 10/28/2011 05:03 PM, Matthew Harmsen wrote: > > * *Bugzilla Bug #749927* > -Java class > conflicts using Java 7 in Fedora 17 (rawhide) . . . > > Please review the attached patch: > > * https://bugzilla.redhat.com/attachment.cgi?id=530756&action=diff&context=patch&collapsed=&headers=1&format=raw > > Thanks, > -- Matt > > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel ACK. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mharmsen at redhat.com Sat Oct 29 01:07:06 2011 From: mharmsen at redhat.com (Matthew Harmsen) Date: Fri, 28 Oct 2011 18:07:06 -0700 Subject: [Pki-devel] Patch for review - Fix installation errors associated with CA, DRM, OCSP, and TKS . . . Message-ID: <4EAB51BA.6070408@redhat.com> * *Bugzilla Bug #749945* - Installation error reported during CA, DRM, OCSP, and TKS package installation . . . Please review the attached patch: * https://bugzilla.redhat.com/attachment.cgi?id=530759&action=diff&context=patch&collapsed=&headers=1&format=raw Thanks, -- Matt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5150 bytes Desc: S/MIME Cryptographic Signature URL: From jmagne at redhat.com Sat Oct 29 01:15:26 2011 From: jmagne at redhat.com (Jack Magne) Date: Fri, 28 Oct 2011 18:15:26 -0700 Subject: [Pki-devel] Patch for review - Fix installation errors associated with CA, DRM, OCSP, and TKS . . . In-Reply-To: <4EAB51BA.6070408@redhat.com> References: <4EAB51BA.6070408@redhat.com> Message-ID: <4EAB53AE.50107@redhat.com> Matthew Harmsen wrote: > > * *Bugzilla Bug #749945* > - > Installation error reported during CA, DRM, OCSP, and TKS > package installation . . . > > Please review the attached patch: > > * https://bugzilla.redhat.com/attachment.cgi?id=530759&action=diff&context=patch&collapsed=&headers=1&format=raw > > Thanks, > -- Matt > ------------------------------------------------------------------------ > > _______________________________________________ > Pki-devel mailing list > Pki-devel at redhat.com > https://www.redhat.com/mailman/listinfo/pki-devel > Ack From mharmsen at redhat.com Sat Oct 29 04:37:51 2011 From: mharmsen at redhat.com (Matthew Harmsen) Date: Fri, 28 Oct 2011 21:37:51 -0700 Subject: [Pki-devel] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . . Message-ID: <4EAB831F.2040706@redhat.com> Fixes have been made to allow Dogtag 9.0 to build successfully on Fedora 17! Please build the following components on Fedora 15, Fedora 16, and Fedora 17 (rawhide) in Koji . . . * pki-core-9.0.16-1.fc[15,16,17].src.rpm (pki-core) * pki-kra-9.0.9-1.fc[15,16,17].src.rpm (pki-kra) * pki-ocsp-9.0.8-1.fc[15,16,17].src.rpm (pki-ocsp) * pki-tks-9.0.8-1.fc[15,16,17].src.rpm (pki-tks) * dogtag-pki-9.0.0-8.fc[15,16,17].src.rpm (dogtag-pki) and please build the following components on Fedora 17 (rawhide) in Koji . . . 
* pki-ra-9.0.4-1.fc17.src.rpm (pki-ra) * pki-tps-9.0.7-1.fc17.src.rpm (pki-tps) All changes have been checked-in, and the official tarballs (for all three platforms) have been published to: * http://pki.fedoraproject.org/pki/sources/pki-core/pki-core-9.0.16.tar.gz (pki-core) * http://pki.fedoraproject.org/pki/sources/pki-kra/pki-kra-9.0.9.tar.gz (pki-kra) * http://pki.fedoraproject.org/pki/sources/pki-ocsp/pki-ocsp-9.0.8.tar.gz (pki-ocsp) * http://pki.fedoraproject.org/pki/sources/pki-tks/pki-tks-9.0.8.tar.gz (pki-tks) * N/A (dogtag-pki) likewise, the official tarballs for the additional Fedora 17 (rawhide) builds are available at: * http://pki.fedoraproject.org/pki/sources/pki-ra/pki-ra-9.0.4.tar.gz (pki-ra) * http://pki.fedoraproject.org/pki/sources/pki-tps/pki-tps-9.0.7.tar.gz (pki-tps) The official spec files (for all three platforms) are located at: * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-core.spec (pki-core) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-kra.spec (pki-kra) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-ocsp.spec (pki-ocsp) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-tks.spec (pki-tks) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/dogtag-pki.spec (dogtag-pki) likewise, the official spec files for the additional Fedora 17 (rawhide) builds are available at: * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA-17/pki-ra.spec (pki-ra) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA-17/pki-tps.spec (pki-tps) Thanks, -- Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5150 bytes Desc: S/MIME Cryptographic Signature URL: From release-engineering at redhat.com Sat Oct 29 04:37:58 2011 From: release-engineering at redhat.com (mharmsen@redhat.com via RT) Date: Sat, 29 Oct 2011 00:37:58 -0400 Subject: [Pki-devel] [engineering.redhat.com #127608] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . . In-Reply-To: <4EAB831F.2040706@redhat.com> References: <4EAB831F.2040706@redhat.com> Message-ID: Ticket Fixes have been made to allow Dogtag 9.0 to build successfully on Fedora 17! Please build the following components on Fedora 15, Fedora 16, and Fedora 17 (rawhide) in Koji . . . * pki-core-9.0.16-1.fc[15,16,17].src.rpm (pki-core) * pki-kra-9.0.9-1.fc[15,16,17].src.rpm (pki-kra) * pki-ocsp-9.0.8-1.fc[15,16,17].src.rpm (pki-ocsp) * pki-tks-9.0.8-1.fc[15,16,17].src.rpm (pki-tks) * dogtag-pki-9.0.0-8.fc[15,16,17].src.rpm (dogtag-pki) and please build the following components on Fedora 17 (rawhide) in Koji . . . 
* pki-ra-9.0.4-1.fc17.src.rpm (pki-ra) * pki-tps-9.0.7-1.fc17.src.rpm (pki-tps) All changes have been checked-in, and the official tarballs (for all three platforms) have been published to: * http://pki.fedoraproject.org/pki/sources/pki-core/pki-core-9.0.16.tar.gz (pki-core) * http://pki.fedoraproject.org/pki/sources/pki-kra/pki-kra-9.0.9.tar.gz (pki-kra) * http://pki.fedoraproject.org/pki/sources/pki-ocsp/pki-ocsp-9.0.8.tar.gz (pki-ocsp) * http://pki.fedoraproject.org/pki/sources/pki-tks/pki-tks-9.0.8.tar.gz (pki-tks) * N/A (dogtag-pki) likewise, the official tarballs for the additional Fedora 17 (rawhide) builds are available at: * http://pki.fedoraproject.org/pki/sources/pki-ra/pki-ra-9.0.4.tar.gz (pki-ra) * http://pki.fedoraproject.org/pki/sources/pki-tps/pki-tps-9.0.7.tar.gz (pki-tps) The official spec files (for all three platforms) are located at: * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-core.spec (pki-core) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-kra.spec (pki-kra) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-ocsp.spec (pki-ocsp) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/pki-tks.spec (pki-tks) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA/dogtag-pki.spec (dogtag-pki) likewise, the official spec files for the additional Fedora 17 (rawhide) builds are available at: * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA-17/pki-ra.spec (pki-ra) * https://alpha.dsdev.sjc.redhat.com/home/mharmsen/kwright/SPECS/FEDORA-17/pki-tps.spec (pki-tps) Thanks, -- Matt -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5150 bytes Desc: not available URL:

From release-engineering at redhat.com Mon Oct 31 01:46:58 2011 From: release-engineering at redhat.com (Kevin Wright via RT) Date: Sun, 30 Oct 2011 21:46:58 -0400 Subject: [Pki-devel] [engineering.redhat.com #127608] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . . In-Reply-To: <4EAB831F.2040706@redhat.com> References: <4EAB831F.2040706@redhat.com> Message-ID: Ticket

All builds completed. Closing.

From release-engineering at redhat.com Mon Oct 31 20:37:54 2011 From: release-engineering at redhat.com (Kevin Wright via RT) Date: Mon, 31 Oct 2011 16:37:54 -0400 Subject: [Pki-devel] [engineering.redhat.com #125396] Request to build the following PKI components on Fedora 15, Fedora 16, and Fedora 17 (Rawhide) . . . In-Reply-To: <4E8E4321.3000500@redhat.com> References: <4E8E4321.3000500@redhat.com> Message-ID: Ticket

This has been solved in 127608, when a new submission of pki-core for Fedora 17 was built.