From prmarino1 at gmail.com Sun Sep 1 19:55:23 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Sun, 1 Sep 2013 15:55:23 -0400 Subject: [rhos-list] horizon log in via https not working In-Reply-To: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> References: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BB06@P-EXMB2-DC21.corp.sgi.com> <5220823D.3040507@redhat.com> <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> Message-ID: do you have mod_ssl or mod_nss installed? On Fri, Aug 30, 2013 at 10:07 AM, David Raddatz wrote: > Hi, Matthias, > > Thanks for the response. Yeah - the doc.openstack.org is probably geared towards Ubuntu... > > I'll take a look at the red hat document (been looking at it this morning) but now I have some probably dumb questions as I'm a newbie in all this. BTW, it looks like the firewall is active and not listening to 443. > > I was under the impression that using packstack would make it easier to configure Red Hat OpenStack. Why does the HTTPS access method not work when the HTTPS URL is the URL that is given after the packstack installation completes? > > If I have to go back and manually configure Horizon to enable the SSL/HTTPS support, then what is the value of packstack in this particular case? Wouldn't specifying CONFIG_HORIZON_SSL=y tell packstack to configure things for me? > > Or, maybe I didn't have something set up in my answer file correctly? > > After all that - since this is a non-production test environment, I'm wondering is it even necessary to configure the HTTPS support? I mean, besides going through the process to gain the experience of configuring it, I probably don't care if I can't access Horizon using the HTTPS method - right? It was just confusing when I got the message after the installation to use that URL (and it didn't work) that took me down this path... > > Dave > -------------------------------------------- > Dave Raddatz > Big Data Solutions and Performance > Austin, TX > (512) 249-0210 > draddatz at sgi.com > > >> -----Original Message----- >> From: rhos-list-bounces at redhat.com [mailto:rhos-list- >> bounces at redhat.com] On Behalf Of Matthias Runge >> Sent: Friday, August 30, 2013 6:30 AM >> To: rhos-list at redhat.com >> Subject: Re: [rhos-list] horizon log in via https not working >> >> On 29/08/13 21:04, David Raddatz wrote: >> > Hello, >> > >> > >> > >> > I got my RHOS 3.0 packstack all-in-one install to complete without >> > errors but I can't use my browser to get to the horizon web interface >> > if I try to use the HTTPS address. If I use the HTTP address, then it >> > works. The INFO message after packstack finishes points me to the >> > HTTPS URL but that didn't work. I have CONFIG_HORIZON_SSL=y in my >> > answers file that I used during the installation. >> > >> > >> > >> > I looked at the docs.openstack.org doc where it talks about installing >> > openstack dashboard and enabling it for HTTPS, BUT it refers to files >> > that don't appear to exist on my system: >> > >> > |/etc/openstack-dashboard/local_settings.py| >> > >> > |/etc/apache2/ports.conf| >> > >> > |/etc/apache2/conf.d/openstack-dashboard.conf| >> > >> > >> Those files may exist on Ubuntu systems, and I think I have fixed that once at >> upstream docs. Sadly, upstream docs are organized in a confusing way here. 
>> >> You might want to check the docs for RHOS-3.0: >> https://access.redhat.com/site/documentation/en- >> US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Con >> figuring_Secured_Deployment_HTTPS.html >> >> Is a firewall active, and if yes, is port 443 open? >> Is your httpd listening on 443? >> >> Matthias >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From prmarino1 at gmail.com Sun Sep 1 20:09:56 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Sun, 1 Sep 2013 16:09:56 -0400 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question Message-ID: I have Gluster UFO installed as a back end for swift from here http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ with RDO 3 Its working well except for one thing. All of the tenants are seeing one Gluster volume which is some what nice, especially when compared to the old 3.3 behavior of creating one volume per tenant named after the tenant ID number. The problem is I expected to see is sub directory created under the volume root for each tenant but instead what in seeing is that all of the tenants can see the root of the Gluster volume. The result is that all of the tenants can access each others files and even delete them. even scarier is that the tennants can see and delete each others glance images and snapshots. Can any one suggest options to look at or documents to read to try to figure out how to modify the behavior? From Johnson.Dharmaraj at csscorp.com Mon Sep 2 05:28:57 2013 From: Johnson.Dharmaraj at csscorp.com (Johnson Dharmaraj) Date: Mon, 2 Sep 2013 05:28:57 +0000 Subject: [rhos-list] Live migration using WebUI Message-ID: <55E032C823220849AB1E328468CA57F553ADDAB2@INCHEAMVW033.ad.csscorp.com> Hi, I have a 2 node OpenStack setup running. I have setup live migration using with /var/lib/nova/instances/ mounted from an nfs share. The instances are getting migrated from one compute node to the other seamlessly while i run the command "nova live-migration " However when i use the webUI to migrate the instance nova-compute shows the following error ssh mkdir -p /var/lib/nova/instances/15ba85b1-a457-45f1-bcc3-b197480c0e31 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Exit code: 255 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Stdout: '' 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Stderr: 'Host key verification failed.\r\n' Have anyone rectified a similar problem? http://www.csscorp.com/common/email-disclaimer.php -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbellur at redhat.com Mon Sep 2 10:47:51 2013 From: vbellur at redhat.com (Vijay Bellur) Date: Mon, 02 Sep 2013 16:17:51 +0530 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: References: Message-ID: <52246CD7.6080503@redhat.com> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: > I have Gluster UFO installed as a back end for swift from here > http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ > with RDO 3 > > Its working well except for one thing. 
All of the tenants are seeing > one Gluster volume which is some what nice, especially when compared > to the old 3.3 behavior of creating one volume per tenant named after > the tenant ID number. > > The problem is I expected to see is sub directory created under the > volume root for each tenant but instead what in seeing is that all of > the tenants can see the root of the Gluster volume. The result is that > all of the tenants can access each others files and even delete them. > even scarier is that the tennants can see and delete each others > glance images and snapshots. > > Can any one suggest options to look at or documents to read to try to > figure out how to modify the behavior? > Adding gluster swift developers who might be able to help. -Vijay From pmyers at redhat.com Mon Sep 2 12:26:46 2013 From: pmyers at redhat.com (Perry Myers) Date: Mon, 02 Sep 2013 08:26:46 -0400 Subject: [rhos-list] Live migration using WebUI In-Reply-To: <55E032C823220849AB1E328468CA57F553ADDAB2@INCHEAMVW033.ad.csscorp.com> References: <55E032C823220849AB1E328468CA57F553ADDAB2@INCHEAMVW033.ad.csscorp.com> Message-ID: <52248406.9050504@redhat.com> On 09/02/2013 01:28 AM, Johnson Dharmaraj wrote: > Hi, > > I have a 2 node OpenStack setup running. > > I have setup live migration using with /var/lib/nova/instances/ mounted > from an nfs share. The instances are getting migrated from one compute > node to the other seamlessly while i run the command "nova > live-migration " > > However when i use the webUI to migrate the instance nova-compute shows > the following error The odd thing about this is that Horizon simply uses the same exact REST APIs that you would be using when you use the Nova python CLI. Horizon doesn't have a different interface than the CLI would. When you tried both the nova CLI and Horizon interface, were you migrating between the exact same hosts? Perhaps your SSH keys are set up properly between the hosts that were involved in the CLI migration, but are not set up properly between the hosts uses in the Horizon initiated migration? > ssh mkdir -p > /var/lib/nova/instances/15ba85b1-a457-45f1-bcc3-b197480c0e31 > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Exit > code: 255 > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp > Stdout: '' > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp > Stderr: 'Host key verification failed.\r\n' > > Have anyone rectified a similar problem? From rraja at redhat.com Tue Sep 3 00:55:08 2013 From: rraja at redhat.com (Ramana Raja) Date: Mon, 2 Sep 2013 20:55:08 -0400 (EDT) Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <52246CD7.6080503@redhat.com> References: <52246CD7.6080503@redhat.com> Message-ID: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> Hi Paul, Currently, gluster-swift doesn't support the feature of multiple accounts/tenants accessing the same volume. Each tenant still needs his own gluster volume. So I'm wondering how you were able to observe the reported behaviour. How did you prepare the ringfiles for the different tenants, which use the same gluster volume? Did you change the configuration of the servers? Also, how did you access the files that you mention? It'd be helpful if you could share the commands you used to perform these actions. 
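For comparison, on a stock UFO/gluster-swift install the ring files are normally built with one device entry per Gluster volume, roughly along these lines (just a sketch for a single volume; the volume name "vol1", the loopback IP and the SAIO-style ports below are placeholders, not values taken from your deployment):

    cd /etc/swift
    # one builder per ring; part_power/replicas/min_part_hours kept minimal here
    swift-ring-builder account.builder create 1 1 1
    swift-ring-builder account.builder add z1-127.0.0.1:6012/vol1 100
    swift-ring-builder account.builder rebalance
    # repeat for container.builder (port 6011) and object.builder (port 6010)

Seeing where your builder commands (or any helper script you ran) differ from that pattern would help us understand how all the tenants ended up mapped onto the same volume.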
Thanks, Ram ----- Original Message ----- From: "Vijay Bellur" To: "Paul Robert Marino" Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" , "Chetan Risbud" Sent: Monday, September 2, 2013 4:17:51 PM Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question On 09/02/2013 01:39 AM, Paul Robert Marino wrote: > I have Gluster UFO installed as a back end for swift from here > http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ > with RDO 3 > > Its working well except for one thing. All of the tenants are seeing > one Gluster volume which is some what nice, especially when compared > to the old 3.3 behavior of creating one volume per tenant named after > the tenant ID number. > > The problem is I expected to see is sub directory created under the > volume root for each tenant but instead what in seeing is that all of > the tenants can see the root of the Gluster volume. The result is that > all of the tenants can access each others files and even delete them. > even scarier is that the tennants can see and delete each others > glance images and snapshots. > > Can any one suggest options to look at or documents to read to try to > figure out how to modify the behavior? > Adding gluster swift developers who might be able to help. -Vijay From prmarino1 at gmail.com Tue Sep 3 01:58:54 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Mon, 02 Sep 2013 21:58:54 -0400 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> Message-ID: <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> An HTML attachment was scrubbed... URL: From jpichon at redhat.com Tue Sep 3 09:10:48 2013 From: jpichon at redhat.com (Julie Pichon) Date: Tue, 3 Sep 2013 05:10:48 -0400 (EDT) Subject: [rhos-list] Live migration using WebUI In-Reply-To: <52248406.9050504@redhat.com> References: <55E032C823220849AB1E328468CA57F553ADDAB2@INCHEAMVW033.ad.csscorp.com> <52248406.9050504@redhat.com> Message-ID: <1926579704.8659974.1378199448652.JavaMail.root@redhat.com> "Perry Myers" wrote: > On 09/02/2013 01:28 AM, Johnson Dharmaraj wrote: > > Hi, > > > > I have a 2 node OpenStack setup running. > > > > I have setup live migration using with /var/lib/nova/instances/ mounted > > from an nfs share. The instances are getting migrated from one compute > > node to the other seamlessly while i run the command "nova > > live-migration " > > > > However when i use the webUI to migrate the instance nova-compute shows > > the following error > > The odd thing about this is that Horizon simply uses the same exact REST > APIs that you would be using when you use the Nova python CLI. Horizon > doesn't have a different interface than the CLI would. Note that Horizon currently does not support live migrations. The "Migrate" command uses the "nova migrate" API which involves a reboot. Unfortunately I'm not familiar enough with the underlying nova mechanism to help debug the error. Julie > When you tried both the nova CLI and Horizon interface, were you > migrating between the exact same hosts? Perhaps your SSH keys are set > up properly between the hosts that were involved in the CLI migration, > but are not set up properly between the hosts uses in the Horizon > initiated migration? 
> > > ssh mkdir -p > > /var/lib/nova/instances/15ba85b1-a457-45f1-bcc3-b197480c0e31 > > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Exit > > code: 255 > > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp > > Stdout: '' > > 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp > > Stderr: 'Host key verification failed.\r\n' > > > > Have anyone rectified a similar problem? > > From ndipanov at redhat.com Tue Sep 3 11:27:57 2013 From: ndipanov at redhat.com (=?UTF-8?B?Tmlrb2xhIMSQaXBhbm92?=) Date: Tue, 03 Sep 2013 13:27:57 +0200 Subject: [rhos-list] Live migration using WebUI In-Reply-To: <1926579704.8659974.1378199448652.JavaMail.root@redhat.com> References: <55E032C823220849AB1E328468CA57F553ADDAB2@INCHEAMVW033.ad.csscorp.com> <52248406.9050504@redhat.com> <1926579704.8659974.1378199448652.JavaMail.root@redhat.com> Message-ID: <5225C7BD.6070007@redhat.com> On 03/09/13 11:10, Julie Pichon wrote: > "Perry Myers" wrote: >> On 09/02/2013 01:28 AM, Johnson Dharmaraj wrote: >>> Hi, >>> >>> I have a 2 node OpenStack setup running. >>> >>> I have setup live migration using with /var/lib/nova/instances/ mounted >>> from an nfs share. The instances are getting migrated from one compute >>> node to the other seamlessly while i run the command "nova >>> live-migration " >>> >>> However when i use the webUI to migrate the instance nova-compute shows >>> the following error >> >> The odd thing about this is that Horizon simply uses the same exact REST >> APIs that you would be using when you use the Nova python CLI. Horizon >> doesn't have a different interface than the CLI would. > > Note that Horizon currently does not support live migrations. > The "Migrate" command uses the "nova migrate" API which involves > a reboot. > > Unfortunately I'm not familiar enough with the underlying nova > mechanism to help debug the error. > Thanks for the info Julie! If that is indeed the case - then we are looking at a missing feature (in Horizon) since the APIs are different for migrate(resize) and live_migrate (both are part of the admin_actions extension), and they are in fact two different functionalities. As for the error - it is worth pointing out that even if you do have shared storage - you will still need to have ssh keys set up (nova uses ssh to confirm that storage is indeed shared) for the cold migration (so the one supported by Horizon) but not for the live migration (the check is done via RPC calls). I cannot claim this with 100% accuracy as I am not sure from the email which version is Johnson running - but if it is a later Grizzly or Havana - this is indeed the case. Solution to your problem would be: 1) Distribute ssh keys properly for the cold migration (also know as resize) to work. 2) Raise a feature request for live-migration to be added in horizon. Thanks, N. > Julie > >> When you tried both the nova CLI and Horizon interface, were you >> migrating between the exact same hosts? Perhaps your SSH keys are set >> up properly between the hosts that were involved in the CLI migration, >> but are not set up properly between the hosts uses in the Horizon >> initiated migration? 
>> >>> ssh mkdir -p >>> /var/lib/nova/instances/15ba85b1-a457-45f1-bcc3-b197480c0e31 >>> 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp Exit >>> code: 255 >>> 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp >>> Stdout: '' >>> 2013-09-01 22:22:06.216 31208 TRACE nova.openstack.common.rpc.amqp >>> Stderr: 'Host key verification failed.\r\n' >>> >>> Have anyone rectified a similar problem? >> >> From lchristoph at arago.de Tue Sep 3 16:03:05 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 3 Sep 2013 16:03:05 +0000 Subject: [rhos-list] Parallel Cinder access? Message-ID: Hi! I can't google up any good answer for the question if it is possible to have multiple Cinder Volume instances access the same underlying storage (especially interesting is plain old LVM). The idea is to run Cinder Volume on the compute nodes and eliminate one trip over the network for iSCSI storage. So the Cinder Volume instances need to see the same volumes, and some central Cinder service (scheduler? API?) has to know that each compute node has its own local Cinder Volume service. The next step would of course be to eliminate the iSCSI export/import of the volume as it can be accessed through /dev/mapper. Since I haven't found any reference to this kind of architecture, I presume that it isn't viable (yet?). But then I would very much appreciate a "No. Won't work." from this list to convince some people around here. Incidentally, we will be evaluating Datacore to provide the block storage Cinder will carve into volumes. I would be very interested in hearing from anybody with Datacore experience. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Tue Sep 3 16:27:34 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 03 Sep 2013 12:27:34 -0400 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: Message-ID: <52260df9.2a49310a.5b54.ffffd3d0@mx.google.com> An HTML attachment was scrubbed... URL: From draddatz at sgi.com Tue Sep 3 18:19:33 2013 From: draddatz at sgi.com (David Raddatz) Date: Tue, 3 Sep 2013 18:19:33 +0000 Subject: [rhos-list] horizon log in via https not working In-Reply-To: References: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BB06@P-EXMB2-DC21.corp.sgi.com> <5220823D.3040507@redhat.com> <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> Message-ID: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BFB9@P-EXMB2-DC21.corp.sgi.com> I have mod_ssl but not mod_nss. Dave > -----Original Message----- > From: Paul Robert Marino [mailto:prmarino1 at gmail.com] > Sent: Sunday, September 01, 2013 2:55 PM > To: David Raddatz > Cc: Matthias Runge; rhos-list at redhat.com > Subject: Re: [rhos-list] horizon log in via https not working > > do you have mod_ssl or mod_nss installed? 
> > On Fri, Aug 30, 2013 at 10:07 AM, David Raddatz wrote: > > Hi, Matthias, > > > > Thanks for the response. Yeah - the doc.openstack.org is probably geared > towards Ubuntu... > > > > I'll take a look at the red hat document (been looking at it this morning) but > now I have some probably dumb questions as I'm a newbie in all this. BTW, it > looks like the firewall is active and not listening to 443. > > > > I was under the impression that using packstack would make it easier to > configure Red Hat OpenStack. Why does the HTTPS access method not work > when the HTTPS URL is the URL that is given after the packstack installation > completes? > > > > If I have to go back and manually configure Horizon to enable the > SSL/HTTPS support, then what is the value of packstack in this particular > case? Wouldn't specifying CONFIG_HORIZON_SSL=y tell packstack to > configure things for me? > > > > Or, maybe I didn't have something set up in my answer file correctly? > > > > After all that - since this is a non-production test environment, I'm > wondering is it even necessary to configure the HTTPS support? I mean, > besides going through the process to gain the experience of configuring it, I > probably don't care if I can't access Horizon using the HTTPS method - right? > It was just confusing when I got the message after the installation to use that > URL (and it didn't work) that took me down this path... > > > > Dave > > -------------------------------------------- > > Dave Raddatz > > Big Data Solutions and Performance > > Austin, TX > > (512) 249-0210 > > draddatz at sgi.com > > > > > >> -----Original Message----- > >> From: rhos-list-bounces at redhat.com [mailto:rhos-list- > >> bounces at redhat.com] On Behalf Of Matthias Runge > >> Sent: Friday, August 30, 2013 6:30 AM > >> To: rhos-list at redhat.com > >> Subject: Re: [rhos-list] horizon log in via https not working > >> > >> On 29/08/13 21:04, David Raddatz wrote: > >> > Hello, > >> > > >> > > >> > > >> > I got my RHOS 3.0 packstack all-in-one install to complete without > >> > errors but I can't use my browser to get to the horizon web > >> > interface if I try to use the HTTPS address. If I use the HTTP > >> > address, then it works. The INFO message after packstack finishes > >> > points me to the HTTPS URL but that didn't work. I have > >> > CONFIG_HORIZON_SSL=y in my answers file that I used during the > installation. > >> > > >> > > >> > > >> > I looked at the docs.openstack.org doc where it talks about > >> > installing openstack dashboard and enabling it for HTTPS, BUT it > >> > refers to files that don't appear to exist on my system: > >> > > >> > |/etc/openstack-dashboard/local_settings.py| > >> > > >> > |/etc/apache2/ports.conf| > >> > > >> > |/etc/apache2/conf.d/openstack-dashboard.conf| > >> > > >> > > >> Those files may exist on Ubuntu systems, and I think I have fixed > >> that once at upstream docs. Sadly, upstream docs are organized in a > confusing way here. > >> > >> You might want to check the docs for RHOS-3.0: > >> https://access.redhat.com/site/documentation/en- > >> > US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Con > >> figuring_Secured_Deployment_HTTPS.html > >> > >> Is a firewall active, and if yes, is port 443 open? > >> Is your httpd listening on 443? 
> >> > >> Matthias > >> > >> > >> _______________________________________________ > >> rhos-list mailing list > >> rhos-list at redhat.com > >> https://www.redhat.com/mailman/listinfo/rhos-list > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list From mrunge at redhat.com Wed Sep 4 08:53:07 2013 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 04 Sep 2013 10:53:07 +0200 Subject: [rhos-list] horizon log in via https not working In-Reply-To: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> References: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BB06@P-EXMB2-DC21.corp.sgi.com> <5220823D.3040507@redhat.com> <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> Message-ID: <5226F4F3.4040209@redhat.com> On 30/08/13 16:07, David Raddatz wrote: > Hi, Matthias, > > Thanks for the response. Yeah - the doc.openstack.org is probably > geared towards Ubuntu... > > I'll take a look at the red hat document (been looking at it this > morning) but now I have some probably dumb questions as I'm a newbie > in all this. BTW, it looks like the firewall is active and not > listening to 443. > > I was under the impression that using packstack would make it easier > to configure Red Hat OpenStack. Why does the HTTPS access method not > work when the HTTPS URL is the URL that is given after the packstack > installation completes? > David, ah, sorry, for the long delay. Yes. It's intended that way. So, if packstack didn't configure this right, that might be a bug. Matthias From mrunge at redhat.com Wed Sep 4 09:09:47 2013 From: mrunge at redhat.com (Matthias Runge) Date: Wed, 04 Sep 2013 11:09:47 +0200 Subject: [rhos-list] horizon log in via https not working In-Reply-To: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> References: <18CF1869BE7AB04DB1E4CC93FD43702A1B71BB06@P-EXMB2-DC21.corp.sgi.com> <5220823D.3040507@redhat.com> <18CF1869BE7AB04DB1E4CC93FD43702A1B71BC02@P-EXMB2-DC21.corp.sgi.com> Message-ID: <5226F8DB.7050001@redhat.com> On 30/08/13 16:07, David Raddatz wrote: > Hi, Matthias, > > Thanks for the response. Yeah - the doc.openstack.org is probably > geared towards Ubuntu... > > I'll take a look at the red hat document (been looking at it this > morning) but now I have some probably dumb questions as I'm a newbie > in all this. BTW, it looks like the firewall is active and not > listening to 443. OK; two things to check: lsof -i :443 | grep apache (to check, if your apache is listening on port 443) if it does: by chance, it may be possible, adding port 443 to the firewall setting failed. You should edit /etc/sysconfig/iptables and locate a rule: -A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT If it already contains --dports 80,443 , you're all set here. if not, please add ',443' to dports and restart the firewall: service iptables restart. As a side note, lokkit -s https also adds port 443 to the firewall configuration. Matthias From sgordon at redhat.com Wed Sep 4 16:33:47 2013 From: sgordon at redhat.com (Steve Gordon) Date: Wed, 4 Sep 2013 12:33:47 -0400 (EDT) Subject: [rhos-list] start over using packstack? 
In-Reply-To: <521B3B7B.80201@redhat.com> References: <18CF1869BE7AB04DB1E4CC93FD43702A1B71B1CE@P-EXMB2-DC21.corp.sgi.com> <1377207759.8743.26.camel@sideshowbob> <521B1AA4.3070700@redhat.com> <521B3B7B.80201@redhat.com> Message-ID: <130629095.8487154.1378312427948.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Perry Myers" > To: "Tomas Von Veschler" , "Steve Gordon" , "Bruce Reeler" > > Cc: "Dave Maley" , rhos-list at redhat.com > Sent: Monday, August 26, 2013 7:26:51 AM > Subject: Re: [rhos-list] start over using packstack? > > On 08/26/2013 05:06 AM, Tomas Von Veschler wrote: > > On 08/22/2013 11:42 PM, Dave Maley wrote: > >> Hi David, > >> > >>> Is there a way to un-install RHOS 3.0 and start over using packstack? > >> > >> There are 2 processes for this covered in the RHOS Getting Started > >> Guide: > >> https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html/Getting_Started_Guide/appe-Getting_Started_Guide-Removing_PackStack_Deployments.html > >> > >> > > > > Note the guide suggests: > > > > $ yum -y remove '*openstack*' > > > > This will match: > > kernel-firmware-2.6.32-358.114.1.openstack.el6.gre.2.noarch > > > > And without firmwares your nic most likely won't work. > > That particular kernel doesn't exist in RHOS yet, it's only in RDO. > > But yes, once it does exist in RHOS, the removal instructions will need > to be amended. > > But a complete removal of RHOS should actually remove this, because this > package is RHOS/RDO specific. But what should be done in this case is > to remove: > > > [admin at el6-mgmt ~]$ s yum remove > > kernel-firmware-2.6.32-358.114.1.openstack.el6.gre.2.noarch > > kernel-2.6.32-358.114.1.openstack.el6.gre.2.x86_64 > > But first you need to be booted into another kernel. The system won't > let you remove the running kernel. > > And then after removing the openstack.gre kernel and firmware, you'll > need to reinstall the latest kernel-firmware available from the RHEL 6.4 > repos. > > So the instructions should look like: > > * Reboot to non-RHOS/RDO kernel > * yum remove *openstack* > (Now that you're not booted to the OpenStack kernel, you can remove > it) > * Disable RHOS/RDO yum repos > * yum install kernel-firmware > > Or something like that. I have raised a documentation bug [1] but would like a lucky volunteer to assist with updating this procedure, perhaps on the RDO wiki page which provided the original source material [2]. I believe Derek originally wrote this but it appears to also be missing Heat and Ceilometer removal, probably also other bits and pieces that have since been added. Thanks, Steve [1] https://bugzilla.redhat.com/show_bug.cgi?id=1004457 [2] http://openstack.redhat.com/Uninstalling_RDO From andrey at xdel.ru Wed Sep 4 20:39:32 2013 From: andrey at xdel.ru (Andrey Korolyov) Date: Thu, 5 Sep 2013 00:39:32 +0400 Subject: [rhos-list] RBD support in current/upcoming qemu-kvm Message-ID: Hello, Can we have a small hope that the qemu packages which will be shipped with 6.5 will get official Ceph support? It will even more awesome if we can have ``official'' packages for 6.4. From dhkarimi at sei.cmu.edu Fri Sep 6 12:42:24 2013 From: dhkarimi at sei.cmu.edu (Derrick H. 
Karimi) Date: Fri, 6 Sep 2013 12:42:24 +0000 Subject: [rhos-list] Cirros VM image DHCP issues In-Reply-To: <9C3E8F27-E3BB-4AEA-B650-00967544BC2D@redhat.com> References: <521B3BFA.70109@redhat.com> <018AC617-CF5E-4079-A186-56AD36A2FC57@redhat.com> <521B9438.2080902@redhat.com> <9C3E8F27-E3BB-4AEA-B650-00967544BC2D@redhat.com> Message-ID: <421EE192CD0C6C49A23B97027914202B158236AD@marathon> I have seen the similar issue (also happens with some Ubuntu images), this solved it for me iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill taken from here http://serverfault.com/questions/448347/instances-in-openstack-are-not-getting-dhcp-leases --Derrick H. Karimi --Software Developer, SEI Emerging Technology Center --Carnegie Mellon University -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Rhys Oxenham Sent: Tuesday, August 27, 2013 12:48 AM To: Perry Myers Cc: Livnat Peer; Maru Newby; rhos-list at redhat.com Subject: Re: [rhos-list] Cirros VM image DHCP issues On 26 Aug 2013, at 20:45, Perry Myers wrote: > On 08/26/2013 01:28 PM, Prashanth Prahalad wrote: >> Thanks Rhys/Perry. >> >> To answer your questions: >> >> Rhys - I downloaded the cirros image you mentioned and that did the >> trick ! The only issue I see is that with the new Cirros 3.1, the VM >> takes a long time to come up (whereas 3.0 was pretty snappy). If you >> happen to know, could you point me to the bug or the link which >> mentions about this issue in Cirros. Great! Are you running the metadata service? I suspect that if you're not it is waiting on the metadata service and will eventually time out. Does it give any indication of this upon boot? Thanks Rhys >> >> Perry - I'm running - >> >> [root at controller ~(keystone_admin)]# uname -a >> Linux controller 2.6.32-358.114.1.openstack.el6.x86_64 #1 SMP Wed >> Jul 3 02:11:25 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux >> >> >> Do you think this kernel could have the bug which you were mentioned ? >> Could you please reference the bug for this. I would really >> appreciate this. > > I don't know the bz# offhand, but that kernel is the latest released > kernel for RHOS and it does have some bugs around using Quantum. I > was never able to get guests to get dhcp addresses when using that kernel. > > Instead I have been using: > http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6 > /kernel-2.6.32-358.114.1.openstack.el6.gre.2.x86_64.rpm > > Which unfortunately is only available on RDO right now. But we will > be releasing a new RHOS kernel in 3.0 repos in the next few weeks > which will have the same fixes as what is in RDO. > >> Interestingly, I'm running the same bits on the compute node and that >> seems fine (the VM picks up the DHCP reply just fine) _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From pmyers at redhat.com Fri Sep 6 14:25:13 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 06 Sep 2013 10:25:13 -0400 Subject: [rhos-list] Cirros VM image DHCP issues In-Reply-To: <421EE192CD0C6C49A23B97027914202B158236AD@marathon> References: <521B3BFA.70109@redhat.com> <018AC617-CF5E-4079-A186-56AD36A2FC57@redhat.com> <521B9438.2080902@redhat.com> <9C3E8F27-E3BB-4AEA-B650-00967544BC2D@redhat.com> <421EE192CD0C6C49A23B97027914202B158236AD@marathon> Message-ID: <5229E5C9.1070506@redhat.com> On 09/06/2013 08:42 AM, Derrick H. 
Karimi wrote: > I have seen the similar issue (also happens with some Ubuntu images), this solved it for me > > iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill Is that a rule you put in the guest or on the host? Do you only see this issue on Cirros images or by chance do you see it with Fedora or RHEL images as well? > taken from here > http://serverfault.com/questions/448347/instances-in-openstack-are-not-getting-dhcp-leases There was some talk on this thread that if virtio networking is enabled, this behavior doesn't occur... What did you find? Perry From vkarani1 at in.ibm.com Fri Sep 6 22:48:01 2013 From: vkarani1 at in.ibm.com (Velayutham Karani1) Date: Sat, 7 Sep 2013 04:18:01 +0530 Subject: [rhos-list] AUTO: Velayutham Karani1 is out of the office (returning 10-09-2013) Message-ID: I am out of the office until 10-09-2013. I am out of office with no access to email and limited access to mobile. I can reply to your mail on my return and respond to sms as and when possible. Thanks, Velu. Note: This is an automated response to your message "rhos-list Digest, Vol 14, Issue 5" sent on 06/09/2013 21:30:04. This is the only notification you will receive while this person is away. From acathrow at redhat.com Sun Sep 8 15:28:48 2013 From: acathrow at redhat.com (Andrew Cathrow) Date: Sun, 8 Sep 2013 11:28:48 -0400 (EDT) Subject: [rhos-list] Cirros VM image DHCP issues In-Reply-To: <5229E5C9.1070506@redhat.com> References: <521B3BFA.70109@redhat.com> <018AC617-CF5E-4079-A186-56AD36A2FC57@redhat.com> <521B9438.2080902@redhat.com> <9C3E8F27-E3BB-4AEA-B650-00967544BC2D@redhat.com> <421EE192CD0C6C49A23B97027914202B158236AD@marathon> <5229E5C9.1070506@redhat.com> Message-ID: <1117889607.16245926.1378654128775.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Perry Myers" > To: "Derrick H. Karimi" , "Livnat Peer" , "Maru Newby" , > "Brent Eagles" , "Robert Kukura" > Cc: rhos-list at redhat.com > Sent: Friday, September 6, 2013 10:25:13 AM > Subject: Re: [rhos-list] Cirros VM image DHCP issues > > On 09/06/2013 08:42 AM, Derrick H. Karimi wrote: > > I have seen the similar issue (also happens with some Ubuntu images), this > > solved it for me > > > > iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM > > --checksum-fill > > Is that a rule you put in the guest or on the host? > > Do you only see this issue on Cirros images or by chance do you see it > with Fedora or RHEL images as well? > > > taken from here > > http://serverfault.com/questions/448347/instances-in-openstack-are-not-getting-dhcp-leases > > There was some talk on this thread that if virtio networking is enabled, > this behavior doesn't occur... What did you find? I learned, from a smarter man than I, that ... It's an old bug in dhclient on Linux - dhclient doesn't work well with RX checksum offloading enabled. Recent Red Hat and Ubuntu derivatives have fixed it, debian didn't yet ... http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=652739 http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=671707 > > Perry > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From gfidente at redhat.com Mon Sep 9 08:56:27 2013 From: gfidente at redhat.com (Giulio Fidente) Date: Mon, 09 Sep 2013 10:56:27 +0200 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: References: Message-ID: <522D8D3B.4040409@redhat.com> On 09/03/2013 06:03 PM, Lutz Christoph wrote: > Hi! 
> > I can't google up any good answer for the question if it is possible to > have multiple Cinder Volume instances access the same underlying > storage (especially interesting is plain old LVM). > > The idea is to run Cinder Volume on the compute nodes and eliminate one > trip over the network for iSCSI storage. So the Cinder Volume instances > need to see the same volumes, and some central Cinder service > (scheduler? API?) has to know that each compute node has its own local > Cinder Volume service. The next step would of course be to eliminate the > iSCSI export/import of the volume as it can be accessed through /dev/mapper. > > Since I haven't found any reference to this kind of architecture, I > presume that it isn't viable (yet?). But then I would very much > appreciate a "No. Won't work." from this list to convince some people > around here. you can actually deploy multiple instances of the cinder-volume service, on different nodes, controlled by a single cinder-{api,scheduler} I wrote a small blog post[1] about how to setup such a topology. I would void the use of the local compute disks for cinder-volume though, as that will consume your CPU for the disks I/O. You can attach some external storage to each compute node instead (and maybe in that case even use a specific cinder driver rather than lvm). 1. http://giuliofidente.com/2013/04/openstack-cinder-add-more-volume-nodes.html -- Giulio Fidente GPG KEY: 08D733BA | IRC: giulivo From lchristoph at arago.de Mon Sep 9 13:41:53 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Mon, 9 Sep 2013 13:41:53 +0000 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: <522D8D3B.4040409@redhat.com> References: , <522D8D3B.4040409@redhat.com> Message-ID: <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com> Hi! Thanks for the post. I had found your instructions while trying to find some that address the question if multiple Cinder Volume instances can access the same storage space in parallel. Alas, you didn't address that. And it seems this isn't possible. Incidentally: I'm currently trying to lobby my local powers to use storage that avoids the additional round trip required by anything that isn't integrated with both Cinder and libvirtd. Nexenta uses iSCSI volumes of its own rather than one exported from Cinder Volume, so it allows libvirtd to access the volumes directly rather than via an intermediate. Scality has code in libvirtd that uses its API. Ceph would be nice, too, but it currently does not play well with RHEL6. It's only useful for libvirtd. Which may also be true for Scality, I'm still trying to find out. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________________ Von: Giulio Fidente Gesendet: Montag, 9. September 2013 10:56 An: Lutz Christoph Cc: rhos-list at redhat.com Betreff: Re: [rhos-list] Parallel Cinder access? On 09/03/2013 06:03 PM, Lutz Christoph wrote: > Hi! 
> > I can't google up any good answer for the question if it is possible to > have multiple Cinder Volume instances access the same underlying > storage (especially interesting is plain old LVM). > > The idea is to run Cinder Volume on the compute nodes and eliminate one > trip over the network for iSCSI storage. So the Cinder Volume instances > need to see the same volumes, and some central Cinder service > (scheduler? API?) has to know that each compute node has its own local > Cinder Volume service. The next step would of course be to eliminate the > iSCSI export/import of the volume as it can be accessed through /dev/mapper. > > Since I haven't found any reference to this kind of architecture, I > presume that it isn't viable (yet?). But then I would very much > appreciate a "No. Won't work." from this list to convince some people > around here. you can actually deploy multiple instances of the cinder-volume service, on different nodes, controlled by a single cinder-{api,scheduler} I wrote a small blog post[1] about how to setup such a topology. I would void the use of the local compute disks for cinder-volume though, as that will consume your CPU for the disks I/O. You can attach some external storage to each compute node instead (and maybe in that case even use a specific cinder driver rather than lvm). 1. http://giuliofidente.com/2013/04/openstack-cinder-add-more-volume-nodes.html -- Giulio Fidente GPG KEY: 08D733BA | IRC: giulivo From gfidente at redhat.com Mon Sep 9 14:17:32 2013 From: gfidente at redhat.com (Giulio Fidente) Date: Mon, 09 Sep 2013 16:17:32 +0200 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com> References: , <522D8D3B.4040409@redhat.com> <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com> Message-ID: <522DD87C.4030503@redhat.com> On 09/09/2013 03:41 PM, Lutz Christoph wrote: > Hi! > > Thanks for the post. I had found your instructions while trying to find some that address the question if multiple Cinder Volume instances can access the same storage space in parallel. > > Alas, you didn't address that. And it seems this isn't possible. hi Lutz, I have probably misread your first email but I think your conclusions about this not being possible are wrong. See below. > Incidentally: > > I'm currently trying to lobby my local powers to use storage that avoids the additional round trip required by anything that isn't integrated with both Cinder and libvirtd. Nexenta uses iSCSI volumes of its own rather than one exported from Cinder Volume, so it allows libvirtd to access the volumes directly rather than via an intermediate. Scality has code in libvirtd that uses its API. > > Ceph would be nice, too, but it currently does not play well with RHEL6. It's only useful for libvirtd. Which may also be true for Scality, I'm still trying to find out. You can have multiple cinder-volume services deployed on different nodes, all controlled by a single cinder-api (as per the blog post). The many cinder-volume instances can insist on the same backing storage if that allows for concurrency, as it happens with GlusterFS or NFS drivers. Different drivers (like the EMC and NetApp drivers) also allow for such a configuration, the storage tasks parallelization is managed by their external SMI and DFM software on the storage. 
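To give a concrete idea of what adding another cinder-volume node amounts to in practice, the key point is that it talks to the same database and message broker as the existing cinder-api/cinder-scheduler. A minimal sketch of /etc/cinder/cinder.conf on such a node follows, with hostnames, passwords and addresses as placeholders rather than values from any real setup:

    [DEFAULT]
    sql_connection = mysql://cinder:CINDER_DBPASS@controller/cinder
    # same broker the controller services use (rabbit_host/rabbit_password if on RabbitMQ)
    qpid_hostname = controller
    volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    # a volume group local to this node, not shared with other volume nodes
    volume_group = cinder-volumes
    # this node's own address, advertised in the iSCSI targets it exports
    iscsi_ip_address = 192.168.0.12

Once openstack-cinder-volume is started there, the scheduler simply sees one more volume service it can place new volumes on.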
All these drivers also allow for the compute nodes to mount the iSCSI/NFS volumes from the storage instead of accessing the hosts running cinder-volume (this seems to be the particular setup you're looking for). Ceph, as you suggest, is yet another option and there could be others. The problem with plain old LVM is that concurrent operations initiated by different cinder-volume nodes can corrupt the metadata but this is mostly a limit of the LVM driver. -- Giulio Fidente GPG KEY: 08D733BA | IRC: giulivo From matthias.pfuetzner at redhat.com Mon Sep 9 14:19:29 2013 From: matthias.pfuetzner at redhat.com (=?ISO-8859-1?Q?Matthias_Pf=FCtzner?=) Date: Mon, 09 Sep 2013 16:19:29 +0200 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com> References: , <522D8D3B.4040409@redhat.com> <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com> Message-ID: <522DD8F1.9000105@redhat.com> Lutz, what about Red Hat Storage? Did you look into that? It offers Cinder, Swift as well as Glance interfaces. Curious, Matthias On 09/09/2013 03:41 PM, Lutz Christoph wrote: > Hi! > > Thanks for the post. I had found your instructions while trying to find some that address the question if multiple Cinder Volume instances can access the same storage space in parallel. > > Alas, you didn't address that. And it seems this isn't possible. > > Incidentally: > > I'm currently trying to lobby my local powers to use storage that avoids the additional round trip required by anything that isn't integrated with both Cinder and libvirtd. Nexenta uses iSCSI volumes of its own rather than one exported from Cinder Volume, so it allows libvirtd to access the volumes directly rather than via an intermediate. Scality has code in libvirtd that uses its API. > > Ceph would be nice, too, but it currently does not play well with RHEL6. It's only useful for libvirtd. Which may also be true for Scality, I'm still trying to find out. > > > Best regards / Mit freundlichen Gr??en > Lutz Christoph > -- Red Hat GmbH Matthias Pf?tzner Solution Architect, Cloud MesseTurm 60308 Frankfurt/Main phone: +49 69 365051 031 mobile: +49 172 7724032 fax: +49 69 365051 001 email: matthias.pfuetzner at redhat.com ___________________________________________________________________________ Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11 -15, 85630 Grasbrunn Handelsregister: Amtsgericht Muenchen HRB 153243 Geschaeftsfuehrer: Charles Cachera, Michael Cunningham, Mark Hegarty, Charlie Peters From lchristoph at arago.de Tue Sep 10 09:05:44 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 10 Sep 2013 09:05:44 +0000 Subject: [rhos-list] Parallel Cinder access? In-Reply-To: <522DD8F1.9000105@redhat.com> References: , <522D8D3B.4040409@redhat.com> <4daa16994ac94075b9a5a16c61d9d69f@AMSPR07MB145.eurprd07.prod.outlook.com>, <522DD8F1.9000105@redhat.com> Message-ID: <8ccf5cc652c649a188fc383ae6febfa7@AMSPR07MB145.eurprd07.prod.outlook.com> Hello! I had a look at GlusterFS when I tested RHEV and liked it. I just didn't want to overdo it with test licenses - testing here requires running something that looks interesting for several months. Now, with OpenStack, GlusterFS is missing support for quite a few Cinder functions, some of them crucial for us like snapshots. 
See https://wiki.openstack.org/wiki/CinderSupportMatrix Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________________ Von: rhos-list-bounces at redhat.com im Auftrag von Matthias Pf?tzner Gesendet: Montag, 9. September 2013 16:19 An: rhos-list at redhat.com Betreff: Re: [rhos-list] Parallel Cinder access? Lutz, what about Red Hat Storage? Did you look into that? It offers Cinder, Swift as well as Glance interfaces. Curious, Matthias On 09/09/2013 03:41 PM, Lutz Christoph wrote: > Hi! > > Thanks for the post. I had found your instructions while trying to find some that address the question if multiple Cinder Volume instances can access the same storage space in parallel. > > Alas, you didn't address that. And it seems this isn't possible. > > Incidentally: > > I'm currently trying to lobby my local powers to use storage that avoids the additional round trip required by anything that isn't integrated with both Cinder and libvirtd. Nexenta uses iSCSI volumes of its own rather than one exported from Cinder Volume, so it allows libvirtd to access the volumes directly rather than via an intermediate. Scality has code in libvirtd that uses its API. > > Ceph would be nice, too, but it currently does not play well with RHEL6. It's only useful for libvirtd. Which may also be true for Scality, I'm still trying to find out. > > > Best regards / Mit freundlichen Gr??en > Lutz Christoph > -- Red Hat GmbH Matthias Pf?tzner Solution Architect, Cloud MesseTurm 60308 Frankfurt/Main phone: +49 69 365051 031 mobile: +49 172 7724032 fax: +49 69 365051 001 email: matthias.pfuetzner at redhat.com ___________________________________________________________________________ Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11 -15, 85630 Grasbrunn Handelsregister: Amtsgericht Muenchen HRB 153243 Geschaeftsfuehrer: Charles Cachera, Michael Cunningham, Mark Hegarty, Charlie Peters _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From yiimao.y at gmail.com Tue Sep 10 09:29:40 2013 From: yiimao.y at gmail.com (yimao) Date: Tue, 10 Sep 2013 17:29:40 +0800 Subject: [rhos-list] How can I start br-ex interface across reboots Message-ID: Hi,all I have tried Red Hat Enterprise Linux OpenStack Platform,But IP address on the br-ex interface is not persistent acress reboots.How can I start br-ex, and is there some auto method for this problem? When i re-run packstack as :packstack --answer-file=~/packstack-answers-*.txt, the packstack doesn't run correctly. Thanks yiimao -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Tue Sep 10 10:24:28 2013 From: dneary at redhat.com (Dave Neary) Date: Tue, 10 Sep 2013 12:24:28 +0200 Subject: [rhos-list] How can I start br-ex interface across reboots In-Reply-To: References: Message-ID: <522EF35C.6010003@redhat.com> Hi yimao, I documented this in the "Networking" page on openstack.redhat.com. 
In /etc/sysconfig/network-scripts/ifcfg-br-ex, put: DEVICE=br-ex TYPE=Ethernet BOOTPROTO=static ONBOOT=yes IPADDR= NETMASK=255.255.255.0 GATEWAY= DNS1= DNS2= DOMAIN= NAME="System br-ex" I associated eth0 with br-ex, si I also had to remove the IP configuration from the equivalent ifcfg-eth0 file. I have: DEVICE=eth0 TYPE=Ethernet UUID=88f7adf3-7bca-4076-8d06-98790a778f2e ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=static NAME="System eth0" Once I associated eth0 with br-ex, I was set. Cheers, Dave. On 09/10/2013 11:29 AM, yimao wrote: > Hi,all > > I have tried Red Hat Enterprise Linux OpenStack Platform > ,But IP > address on the br-ex interface is not persistent acress reboots.How can > I start br-ex, and is there some auto method for this problem? > When i re-run packstack as :packstack > --answer-file=~/packstack-answers-*.txt, the packstack doesn't run > correctly. > > Thanks > yiimao > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From mehbhatt at cisco.com Tue Sep 10 11:12:51 2013 From: mehbhatt at cisco.com (Mehul Bhatt (mehbhatt)) Date: Tue, 10 Sep 2013 11:12:51 +0000 Subject: [rhos-list] subscription manager / yum working on one system but not the other In-Reply-To: <520122DD.4040009@redhat.com> References: <5200EECA.9010106@redhat.com> <520122DD.4040009@redhat.com> Message-ID: BTW, guys, this was repeated on a new node today. I figured that 6.4 beta .iso installer sometimes misses /etc/yum.repos.d/redhat.repo file. Manually copying the file from a healthy node fixed the problem. Though, I believe this is a workaround - shouldn't yum automatically create the file if it doesn't exist? -Mehul. -----Original Message----- From: Bryan Kearney [mailto:bkearney at redhat.com] Sent: Tuesday, August 06, 2013 9:53 PM To: Mehul Bhatt (mehbhatt) Cc: rhos-list at redhat.com Subject: Re: [rhos-list] subscription manager / yum working on one system but not the other On 08/06/2013 12:20 PM, Mehul Bhatt (mehbhatt) wrote: > Doesn't look anything different - other than the fact that the bad one doesn't have RHOS still installed. I also increased yum debug level and see the difference between yum logs on two machines - nothing specific catches my eyes. > > vi /etc/yum.conf > > [main] > cachedir=/var/cache/yum/$basearch/$releasever > keepcache=0 > debuglevel=5 <<-- changed this > > Logs has this: > This system is receiving updates from Red Hat Subscription Management. 
> Config time: 2.269 > Yum Version: 3.2.29 > COMMAND: yum install -y yum-utils > Installroot: / > Ext Commands: > > yum-utils > Setting up Package Sacks > Reading Local RPMDB > rpmdb time: 0.000 > Setting up Install Process > Setting up Package Sacks > Checking for virtual provide or file-provide for yum-utils Setting up > Package Sacks Nothing to do > > > > And BTW, here's the difference between "subscription-manager list --installed" : > > > > On good one: > > [root at rhos-node2 ~]# subscription-manager list --installed > +-------------------------------------------+ > Installed Product Status > +-------------------------------------------+ > Product Name: Red Hat Enterprise Linux Server > Product ID: 69 > Version: 6.3 > Arch: x86_64 > Status: Subscribed > Starts: 07/23/2013 > Ends: 09/21/2013 > > Product Name: Red Hat OpenStack > Product ID: 191 > Version: 3.0 > Arch: x86_64 > Status: Subscribed > Starts: 07/23/2013 > Ends: 09/21/2013 > > > On bad one: > > [root at rhos-node1 ~]# subscription-manager list --installed > +-------------------------------------------+ > Installed Product Status > +-------------------------------------------+ > Product Name: Red Hat Enterprise Linux Server > Product ID: 69 > Version: 6.4 Beta > Arch: x86_64 > Status: Subscribed > Starts: 07/23/2013 > Ends: 09/21/2013 > > [root at rhos-node1 ~]# > > I wonder if beta is the issue? That may lock you out of prod bits. Are there any enabled repos (subscription-manager repos --list) -- bk From mmosesohn at mirantis.com Tue Sep 10 12:06:59 2013 From: mmosesohn at mirantis.com (Matthew Mosesohn) Date: Tue, 10 Sep 2013 16:06:59 +0400 Subject: [rhos-list] subscription manager / yum working on one system but not the other In-Reply-To: References: <5200EECA.9010106@redhat.com> <520122DD.4040009@redhat.com> Message-ID: Mehul, After registering and activating any product, the next yum operation should generate /etc/yum.repos.d/redhat.repo. You can run yum repolist to generate it and test that it exists. On Tue, Sep 10, 2013 at 3:12 PM, Mehul Bhatt (mehbhatt) wrote: > BTW, guys, this was repeated on a new node today. > > I figured that 6.4 beta .iso installer sometimes misses /etc/yum.repos.d/redhat.repo file. Manually copying the file from a healthy node fixed the problem. Though, I believe this is a workaround - shouldn't yum automatically create the file if it doesn't exist? > > -Mehul. > > -----Original Message----- > From: Bryan Kearney [mailto:bkearney at redhat.com] > Sent: Tuesday, August 06, 2013 9:53 PM > To: Mehul Bhatt (mehbhatt) > Cc: rhos-list at redhat.com > Subject: Re: [rhos-list] subscription manager / yum working on one system but not the other > > On 08/06/2013 12:20 PM, Mehul Bhatt (mehbhatt) wrote: >> Doesn't look anything different - other than the fact that the bad one doesn't have RHOS still installed. I also increased yum debug level and see the difference between yum logs on two machines - nothing specific catches my eyes. >> >> vi /etc/yum.conf >> >> [main] >> cachedir=/var/cache/yum/$basearch/$releasever >> keepcache=0 >> debuglevel=5 <<-- changed this >> >> Logs has this: >> This system is receiving updates from Red Hat Subscription Management. 
>> Config time: 2.269 >> Yum Version: 3.2.29 >> COMMAND: yum install -y yum-utils >> Installroot: / >> Ext Commands: >> >> yum-utils >> Setting up Package Sacks >> Reading Local RPMDB >> rpmdb time: 0.000 >> Setting up Install Process >> Setting up Package Sacks >> Checking for virtual provide or file-provide for yum-utils Setting up >> Package Sacks Nothing to do >> >> >> >> And BTW, here's the difference between "subscription-manager list --installed" : >> >> >> >> On good one: >> >> [root at rhos-node2 ~]# subscription-manager list --installed >> +-------------------------------------------+ >> Installed Product Status >> +-------------------------------------------+ >> Product Name: Red Hat Enterprise Linux Server >> Product ID: 69 >> Version: 6.3 >> Arch: x86_64 >> Status: Subscribed >> Starts: 07/23/2013 >> Ends: 09/21/2013 >> >> Product Name: Red Hat OpenStack >> Product ID: 191 >> Version: 3.0 >> Arch: x86_64 >> Status: Subscribed >> Starts: 07/23/2013 >> Ends: 09/21/2013 >> >> >> On bad one: >> >> [root at rhos-node1 ~]# subscription-manager list --installed >> +-------------------------------------------+ >> Installed Product Status >> +-------------------------------------------+ >> Product Name: Red Hat Enterprise Linux Server >> Product ID: 69 >> Version: 6.4 Beta >> Arch: x86_64 >> Status: Subscribed >> Starts: 07/23/2013 >> Ends: 09/21/2013 >> >> [root at rhos-node1 ~]# >> >> > > I wonder if beta is the issue? That may lock you out of prod bits. Are there any enabled repos (subscription-manager repos --list) > > -- bk > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From pmyers at redhat.com Tue Sep 10 12:08:03 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 10 Sep 2013 08:08:03 -0400 Subject: [rhos-list] subscription manager / yum working on one system but not the other In-Reply-To: References: <5200EECA.9010106@redhat.com> <520122DD.4040009@redhat.com> Message-ID: <522F0BA3.4070500@redhat.com> On 09/10/2013 07:12 AM, Mehul Bhatt (mehbhatt) wrote: > BTW, guys, this was repeated on a new node today. > > I figured that 6.4 beta .iso installer sometimes misses /etc/yum.repos.d/redhat.repo file. Manually copying the file from a healthy node fixed the problem. Though, I believe this is a workaround - shouldn't yum automatically create the file if it doesn't exist? redhat.repo is created dynamically by subscription manager after the following two steps: 1) register the system to RHN/CDN and subscribe to a valid RHEL entitlement 2) run yum repolist (or any other valid yum command) Until you do that 2nd step, the redhat.repo file is usually blank From bkearney at redhat.com Tue Sep 10 12:22:24 2013 From: bkearney at redhat.com (Bryan Kearney) Date: Tue, 10 Sep 2013 08:22:24 -0400 Subject: [rhos-list] subscription manager / yum working on one system but not the other In-Reply-To: <522F0BA3.4070500@redhat.com> References: <5200EECA.9010106@redhat.com> <520122DD.4040009@redhat.com> <522F0BA3.4070500@redhat.com> Message-ID: <522F0F00.6070509@redhat.com> On 09/10/2013 08:08 AM, Perry Myers wrote: > On 09/10/2013 07:12 AM, Mehul Bhatt (mehbhatt) wrote: >> BTW, guys, this was repeated on a new node today. >> >> I figured that 6.4 beta .iso installer sometimes misses /etc/yum.repos.d/redhat.repo file. Manually copying the file from a healthy node fixed the problem. Though, I believe this is a workaround - shouldn't yum automatically create the file if it doesn't exist? 
> > redhat.repo is created dynamically by subscription manager after the > following two steps: > > 1) register the system to RHN/CDN and subscribe to a valid RHEL entitlement > 2) run yum repolist (or any other valid yum command) > > Until you do that 2nd step, the redhat.repo file is usually blank > They are correct. The subscription provides the content for the file, but it is not created until it is needed the first time. -- bk From johnmark at redhat.com Tue Sep 10 13:20:28 2013 From: johnmark at redhat.com (John Mark Walker) Date: Tue, 10 Sep 2013 09:20:28 -0400 (EDT) Subject: [rhos-list] Blog post: Why GlusterFS Should not be Integrated with OpenStack In-Reply-To: <1676442903.17581691.1378818865578.JavaMail.root@redhat.com> Message-ID: <548864891.17599218.1378819228599.JavaMail.root@redhat.com> Found this in my inbox this morning. If you're on the main OpenStack mailing list, you saw it too: https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack Essentially says that we don't have all the features to make us fully ready for OpenStack, specifically Cinder integration. Some of the things mentioned: - no snapshotting - no auth for Gluster volumes - no glance integration - Cinder can't use Gluster's native failover - no support for CoW volumes And he ends with this: "How can we make things better? GlusterFS is not going to be our first choice as Cinder back-end. It might be very useful for multi-attach volumes but it's lack of basic operations on volumes forces to search for additional tools like qcow image format. Fortunately, there are other solutions. From our experience one of the most promising open source storage systems is Ceph. It not only provides all functionality which OpenStack/GlusterFS lacks but is also much better designed which makes working with it very pleasant. For more information we recommend Sebastien Han's Blog together with Ceph's documentation." We need to respond to this. If it's factually correct, then we need to recognize that and say that the cinder integration is very new and rapidly changing. If parts of it are wrong, then we need to (nicely) thank him for writing it up and point out any errors. -JM From johnmark at redhat.com Tue Sep 10 13:23:42 2013 From: johnmark at redhat.com (John Mark Walker) Date: Tue, 10 Sep 2013 09:23:42 -0400 (EDT) Subject: [rhos-list] [storage-blr] Blog post: Why GlusterFS Should not be Integrated with OpenStack In-Reply-To: <1430905043.17600600.1378819369435.JavaMail.root@redhat.com> Message-ID: <751677102.17601530.1378819422590.JavaMail.root@redhat.com> On another note - we need to think of a way to say that cloud storage != block storage. Right now, block storage gets all the glory. That's the conversation we need to change. -JM ----- Original Message ----- > Found this in my inbox this morning. If you're on the main OpenStack mailing > list, you saw it too: > > https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack > > > Essentially says that we don't have all the features to make us fully ready > for OpenStack, specifically Cinder integration. Some of the things > mentioned: > > - no snapshotting > - no auth for Gluster volumes > - no glance integration > - Cinder can't use Gluster's native failover > - no support for CoW volumes > > > And he ends with this: > > "How can we make things better? > > GlusterFS is not going to be our first choice as Cinder back-end. 
It might be > very useful for multi-attach volumes but it's lack of basic operations on > volumes forces to search for additional tools like qcow image format. > > Fortunately, there are other solutions. From our experience one of the most > promising open source storage systems is Ceph. It not only provides all > functionality which OpenStack/GlusterFS lacks but is also much better > designed which makes working with it very pleasant. For more information we > recommend Sebastien Han's Blog together with Ceph's documentation." > > > We need to respond to this. If it's factually correct, then we need to > recognize that and say that the cinder integration is very new and rapidly > changing. If parts of it are wrong, then we need to (nicely) thank him for > writing it up and point out any errors. > > -JM > > From jowalker at redhat.com Tue Sep 10 13:22:49 2013 From: jowalker at redhat.com (John Mark Walker) Date: Tue, 10 Sep 2013 09:22:49 -0400 (EDT) Subject: [rhos-list] Blog post: Why GlusterFS Should not be Integrated with OpenStack In-Reply-To: <548864891.17599218.1378819228599.JavaMail.root@redhat.com> References: <548864891.17599218.1378819228599.JavaMail.root@redhat.com> Message-ID: <1430905043.17600600.1378819369435.JavaMail.root@redhat.com> On another note - we need to think of a way to say that cloud storage != block storage. Right now, block storage gets all the glory. That's the conversation we need to change. -JM ----- Original Message ----- > Found this in my inbox this morning. If you're on the main OpenStack mailing > list, you saw it too: > > https://shellycloud.com/blog/2013/09/why-glusterfs-should-not-be-implemented-with-openstack > > > Essentially says that we don't have all the features to make us fully ready > for OpenStack, specifically Cinder integration. Some of the things > mentioned: > > - no snapshotting > - no auth for Gluster volumes > - no glance integration > - Cinder can't use Gluster's native failover > - no support for CoW volumes > > > And he ends with this: > > "How can we make things better? > > GlusterFS is not going to be our first choice as Cinder back-end. It might be > very useful for multi-attach volumes but it's lack of basic operations on > volumes forces to search for additional tools like qcow image format. > > Fortunately, there are other solutions. From our experience one of the most > promising open source storage systems is Ceph. It not only provides all > functionality which OpenStack/GlusterFS lacks but is also much better > designed which makes working with it very pleasant. For more information we > recommend Sebastien Han's Blog together with Ceph's documentation." > > > We need to respond to this. If it's factually correct, then we need to > recognize that and say that the cinder integration is very new and rapidly > changing. If parts of it are wrong, then we need to (nicely) thank him for > writing it up and point out any errors. > > -JM > > From aydinp at destek.as Mon Sep 16 11:58:13 2013 From: aydinp at destek.as (Aydin PAYKOC) Date: Mon, 16 Sep 2013 11:58:13 +0000 Subject: [rhos-list] cinder-volumes vg not recognized by packstack Message-ID: <8B91F67B8509F34A8588E7EFD4005BEC031CD03B@ANKPHEXC.destekas.local> Hi, I am trying to install openstack by running packstack interactively. 
The version of openstack is Grizzly kernel : Linux openstack 2.6.32-358.118.1.openstack.el6.x86_64 It will be a 1 machine installation I have created a vg named cinder-volumes and vgscan shows it; Found volume group "cinder-volumes" using metadata type lvm2 Found volume group "vg_openstack" using metadata type lvm2 When I run packstack interactively it still asks if I want to create volume group for cinder. Should Cinder's volumes group be created (for proof-of-concept installation)? [y|n] [y] : As far as I know it should look for "cinder-volumes" vg and if found should continue without asking the above question. vg cinder-volumes is on a different disk, (dev/sdb1) but I do not think this should make any difference Can anybody tell me what could be the problem Regards, Aydin ________________________________ Bu E-posta mesaji gizlidir. Ayrica sadece yukarida adi ge?en kisiye ?zel bilgi i?eriyor olabilir. Mesajin g?nderilmek istendigi kisi siz degilseniz hi?bir kismini kopyalayamaz baskasina g?nderemez baskasina a?iklayamaz veya kullanamazsiniz. Eger bu mesaj size yanlislikla ulasmissa l?tfen mesaji ve t?m kopyalarini sisteminizden silin ve g?nderen kisiyi E-posta yolu ile bilgilendirin. Internet iletisiminde zamaninda g?venli hatasiz ya da vir?ss?z g?nderim garanti edilemez. G?nderen taraf hata veya unutmalardan sorumluluk kabul etmez. This E-mail is confidential. It may also be legally privileged. If you are not the addressee you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return E-mail. Internet communications cannot be guaranteed to be timely, secure, error or virus-free. The sender does not accept liability for any errors or omissions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aydinp at destek.as Mon Sep 16 12:45:11 2013 From: aydinp at destek.as (Aydin PAYKOC) Date: Mon, 16 Sep 2013 12:45:11 +0000 Subject: [rhos-list] cinder-volumes vg not recognized by packstack In-Reply-To: <8B91F67B8509F34A8588E7EFD4005BEC031CD03B@ANKPHEXC.destekas.local> References: <8B91F67B8509F34A8588E7EFD4005BEC031CD03B@ANKPHEXC.destekas.local> Message-ID: <8B91F67B8509F34A8588E7EFD4005BEC031CD065@ANKPHEXC.destekas.local> Hi all, I have figured out that if you say "no" when packstack asks if you want to create a "cinder-volumes" vg, and continue with the installation it checks if "cinder-volumes" vg has already been present during the installation phase. Which is fine but I think this could be misleading, wouldn't be better if it checks "cinder-volumes" first and brings up the question only if it could not find "cinder-volumes" vg. Regards, Aydin ________________________________ From: Aydin PAYKOC Sent: Monday, September 16, 2013 14:58 To: rhos-list at redhat.com Subject: cinder-volumes vg not recognized by packstack Hi, I am trying to install openstack by running packstack interactively. The version of openstack is Grizzly kernel : Linux openstack 2.6.32-358.118.1.openstack.el6.x86_64 It will be a 1 machine installation I have created a vg named cinder-volumes and vgscan shows it; Found volume group "cinder-volumes" using metadata type lvm2 Found volume group "vg_openstack" using metadata type lvm2 When I run packstack interactively it still asks if I want to create volume group for cinder. Should Cinder's volumes group be created (for proof-of-concept installation)? 
[y|n] [y] : As far as I know it should look for "cinder-volumes" vg and if found should continue without asking the above question. vg cinder-volumes is on a different disk, (dev/sdb1) but I do not think this should make any difference Can anybody tell me what could be the problem Regards, Aydin ________________________________ Bu E-posta mesaji gizlidir. Ayrica sadece yukarida adi ge?en kisiye ?zel bilgi i?eriyor olabilir. Mesajin g?nderilmek istendigi kisi siz degilseniz hi?bir kismini kopyalayamaz baskasina g?nderemez baskasina a?iklayamaz veya kullanamazsiniz. Eger bu mesaj size yanlislikla ulasmissa l?tfen mesaji ve t?m kopyalarini sisteminizden silin ve g?nderen kisiyi E-posta yolu ile bilgilendirin. Internet iletisiminde zamaninda g?venli hatasiz ya da vir?ss?z g?nderim garanti edilemez. G?nderen taraf hata veya unutmalardan sorumluluk kabul etmez. This E-mail is confidential. It may also be legally privileged. If you are not the addressee you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return E-mail. Internet communications cannot be guaranteed to be timely, secure, error or virus-free. The sender does not accept liability for any errors or omissions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Mon Sep 16 21:02:10 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Mon, 16 Sep 2013 17:02:10 -0400 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> Message-ID: Sorry for the delay on reporting the details. I got temporarily pulled off the project and dedicated to a different project which was considered higher priority by my employer. I'm just getting back to doing my normal work today. 
first here are the rpms I have installed " rpm -qa |grep -P -i '(gluster|swift)' glusterfs-libs-3.4.0-8.el6.x86_64 glusterfs-server-3.4.0-8.el6.x86_64 openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch openstack-swift-proxy-1.8.0-2.el6.noarch glusterfs-3.4.0-8.el6.x86_64 glusterfs-cli-3.4.0-8.el6.x86_64 glusterfs-geo-replication-3.4.0-8.el6.x86_64 glusterfs-api-3.4.0-8.el6.x86_64 openstack-swift-1.8.0-2.el6.noarch openstack-swift-container-1.8.0-2.el6.noarch openstack-swift-object-1.8.0-2.el6.noarch glusterfs-fuse-3.4.0-8.el6.x86_64 glusterfs-rdma-3.4.0-8.el6.x86_64 openstack-swift-account-1.8.0-2.el6.noarch glusterfs-ufo-3.4.0-8.el6.noarch glusterfs-vim-3.2.7-1.el6.x86_64 python-swiftclient-1.4.0-1.el6.noarch here are some key config files note I've changed the passwords I'm using and hostnames " cat /etc/swift/account-server.conf [DEFAULT] mount_check = true bind_port = 6012 user = root log_facility = LOG_LOCAL2 devices = /swift/tenants/ [pipeline:main] pipeline = account-server [app:account-server] use = egg:gluster_swift_ufo#account log_name = account-server log_level = DEBUG log_requests = true [account-replicator] vm_test_mode = yes [account-auditor] [account-reaper] " " cat /etc/swift/container-server.conf [DEFAULT] devices = /swift/tenants/ mount_check = true bind_port = 6011 user = root log_facility = LOG_LOCAL2 [pipeline:main] pipeline = container-server [app:container-server] use = egg:gluster_swift_ufo#container [container-replicator] vm_test_mode = yes [container-updater] [container-auditor] [container-sync] " " cat /etc/swift/object-server.conf [DEFAULT] mount_check = true bind_port = 6010 user = root log_facility = LOG_LOCAL2 devices = /swift/tenants/ [pipeline:main] pipeline = object-server [app:object-server] use = egg:gluster_swift_ufo#object [object-replicator] vm_test_mode = yes [object-updater] [object-auditor] " " cat /etc/swift/proxy-server.conf [DEFAULT] bind_port = 8080 user = root log_facility = LOG_LOCAL1 log_name = swift log_level = DEBUG log_headers = True [pipeline:main] pipeline = healthcheck cache authtoken keystone proxy-server [app:proxy-server] use = egg:gluster_swift_ufo#proxy allow_account_management = true account_autocreate = true [filter:tempauth] use = egg:swift#tempauth # Here you need to add users explicitly. See the OpenStack Swift Deployment # Guide for more information. The user and user64 directives take the # following form: # user__ = [group] [group] [...] [storage_url] # user64__ = [group] [group] [...] [storage_url] # Where you use user64 for accounts and/or usernames that include underscores. # # NOTE (and WARNING): The account name must match the device name specified # when generating the account, container, and object build rings. # # E.g. 
# user_ufo0_admin = abc123 .admin [filter:healthcheck] use = egg:swift#healthcheck [filter:cache] use = egg:swift#memcache [filter:keystone] use = egg:swift#keystoneauth #paste.filter_factory = keystone.middleware.swift_auth:filter_factory operator_roles = Member,admin,swiftoperator [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory auth_host = keystone01.vip.my.net auth_port = 35357 auth_protocol = http admin_user = swift admin_password = PASSWORD admin_tenant_name = service signing_dir = /var/cache/swift service_port = 5000 service_host = keystone01.vip.my.net [filter:swiftauth] use = egg:keystone#swiftauth auth_host = keystone01.vip.my.net auth_port = 35357 auth_protocol = http keystone_url = https://keystone01.vip.my.net:5000/v2.0 admin_user = swift admin_password = PASSWORD admin_tenant_name = service signing_dir = /var/cache/swift keystone_swift_operator_roles = Member,admin,swiftoperator keystone_tenant_user_admin = true [filter:catch_errors] use = egg:swift#catch_errors " " cat /etc/swift/swift.conf [DEFAULT] [swift-hash] # random unique string that can never change (DO NOT LOSE) swift_hash_path_suffix = gluster #3d60c9458bb77abe # The swift-constraints section sets the basic constraints on data # saved in the swift cluster. [swift-constraints] # max_file_size is the largest "normal" object that can be saved in # the cluster. This is also the limit on the size of each segment of # a "large" object when using the large object manifest support. # This value is set in bytes. Setting it to lower than 1MiB will cause # some tests to fail. It is STRONGLY recommended to leave this value at # the default (5 * 2**30 + 2). # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting # web service handle such a size? I think with UFO, we need to keep with the # default size from Swift and encourage users to research what size their web # services infrastructure can handle. max_file_size = 18446744073709551616 # max_meta_name_length is the max number of bytes in the utf8 encoding # of the name portion of a metadata header. #max_meta_name_length = 128 # max_meta_value_length is the max number of bytes in the utf8 encoding # of a metadata value #max_meta_value_length = 256 # max_meta_count is the max number of metadata keys that can be stored # on a single account, container, or object #max_meta_count = 90 # max_meta_overall_size is the max number of bytes in the utf8 encoding # of the metadata (keys + values) #max_meta_overall_size = 4096 # max_object_name_length is the max number of bytes in the utf8 encoding of an # object name: Gluster FS can handle much longer file names, but the length # between the slashes of the URL is handled below. Remember that most web # clients can't handle anything greater than 2048, and those that do are # rather clumsy. max_object_name_length = 2048 # max_object_name_component_length (GlusterFS) is the max number of bytes in # the utf8 encoding of an object name component (the part between the # slashes); this is a limit imposed by the underlying file system (for XFS it # is 255 bytes). 
max_object_name_component_length = 255 # container_listing_limit is the default (and max) number of items # returned for a container listing request #container_listing_limit = 10000 # account_listing_limit is the default (and max) number of items returned # for an account listing request #account_listing_limit = 10000 # max_account_name_length is the max number of bytes in the utf8 encoding of # an account name: Gluster FS Filename limit (XFS limit?), must be the same # size as max_object_name_component_length above. max_account_name_length = 255 # max_container_name_length is the max number of bytes in the utf8 encoding # of a container name: Gluster FS Filename limit (XFS limit?), must be the same # size as max_object_name_component_length above. max_container_name_length = 255 " The volumes " gluster volume list cindervol unified-storage-vol a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc f6da0a8151ff43b7be10d961a20c94d6 " if I run the command " gluster-swift-gen-builders unified-storage-vol a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc f6da0a8151ff43b7be10d961a20c94d6 " because of a change in the script in this version as compaired to the version I got from http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the gluster-swift-gen-builders script only takes the first option and ignores the rest. other than the location of the config files none of the changes Ive made are functionally different than the ones mentioned in http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ The result is that the first volume named "unified-storage-vol" winds up being used for every thing regardless of the tenant, and users and see and manage each others objects regardless of what tenant they are members of. through the swift command or via horizon. In a way this is a good thing for me it simplifies thing significantly and would be fine if it just created a directory for each tenant and only allow the user to access the individual directories, not the whole gluster volume. by the way seeing every thing includes the service tenants data so unprivileged users can delete glance images without being a member of the service group. On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino wrote: > Well I'll give you the full details in the morning but simply I used the > stock cluster ring builder script that came with the 3.4 rpms and the old > version from 3.3 took the list of volumes and would add all of them the > version with 3.4 only takes the first one. > > Well I ran the script expecting the same behavior but instead they all used > the first volume in the list. > > Now I knew from the docs I read that the per tenant directories in a single > volume were one possible plan for 3.4 to deal with the scalding issue with a > large number of tenants, so when I saw the difference in the script and that > it worked I just assumed that this was done and I missed something. > > > > -- Sent from my HP Pre3 > > ________________________________ > On Sep 2, 2013 20:55, Ramana Raja wrote: > > Hi Paul, > > Currently, gluster-swift doesn't support the feature of multiple > accounts/tenants accessing the same volume. Each tenant still needs his own > gluster volume. So I'm wondering how you were able to observe the reported > behaviour. > > How did you prepare the ringfiles for the different tenants, which use the > same gluster volume? Did you change the configuration of the servers? Also, > how did you access the files that you mention? 
It'd be helpful if you could > share the commands you used to perform these actions. > > Thanks, > > Ram > > > ----- Original Message ----- > From: "Vijay Bellur" > To: "Paul Robert Marino" > Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" > , "Chetan Risbud" > Sent: Monday, September 2, 2013 4:17:51 PM > Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question > > On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >> I have Gluster UFO installed as a back end for swift from here >> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >> with RDO 3 >> >> Its working well except for one thing. All of the tenants are seeing >> one Gluster volume which is some what nice, especially when compared >> to the old 3.3 behavior of creating one volume per tenant named after >> the tenant ID number. >> >> The problem is I expected to see is sub directory created under the >> volume root for each tenant but instead what in seeing is that all of >> the tenants can see the root of the Gluster volume. The result is that >> all of the tenants can access each others files and even delete them. >> even scarier is that the tennants can see and delete each others >> glance images and snapshots. >> >> Can any one suggest options to look at or documents to read to try to >> figure out how to modify the behavior? >> > > Adding gluster swift developers who might be able to help. > > -Vijay From lpabon at redhat.com Tue Sep 17 14:10:25 2013 From: lpabon at redhat.com (Luis Pabon) Date: Tue, 17 Sep 2013 10:10:25 -0400 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> Message-ID: <523862D1.5040108@redhat.com> First thing I can see is that you have Essex based gluster-ufo-* which has been replaced by the gluster-swift project. We are currently in progress of replacing the gluster-ufo-* with RPMs from the gluster-swift project in Fedora. Please checkout the following quickstart guide which show how to download the Grizzly version of gluster-swift: https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md . For more information please visit: https://launchpad.net/gluster-swift - Luis On 09/16/2013 05:02 PM, Paul Robert Marino wrote: > Sorry for the delay on reporting the details. I got temporarily pulled > off the project and dedicated to a different project which was > considered higher priority by my employer. I'm just getting back to > doing my normal work today. 
> > first here are the rpms I have installed > " > rpm -qa |grep -P -i '(gluster|swift)' > glusterfs-libs-3.4.0-8.el6.x86_64 > glusterfs-server-3.4.0-8.el6.x86_64 > openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch > openstack-swift-proxy-1.8.0-2.el6.noarch > glusterfs-3.4.0-8.el6.x86_64 > glusterfs-cli-3.4.0-8.el6.x86_64 > glusterfs-geo-replication-3.4.0-8.el6.x86_64 > glusterfs-api-3.4.0-8.el6.x86_64 > openstack-swift-1.8.0-2.el6.noarch > openstack-swift-container-1.8.0-2.el6.noarch > openstack-swift-object-1.8.0-2.el6.noarch > glusterfs-fuse-3.4.0-8.el6.x86_64 > glusterfs-rdma-3.4.0-8.el6.x86_64 > openstack-swift-account-1.8.0-2.el6.noarch > glusterfs-ufo-3.4.0-8.el6.noarch > glusterfs-vim-3.2.7-1.el6.x86_64 > python-swiftclient-1.4.0-1.el6.noarch > > here are some key config files note I've changed the passwords I'm > using and hostnames > " > cat /etc/swift/account-server.conf > [DEFAULT] > mount_check = true > bind_port = 6012 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = account-server > > [app:account-server] > use = egg:gluster_swift_ufo#account > log_name = account-server > log_level = DEBUG > log_requests = true > > [account-replicator] > vm_test_mode = yes > > [account-auditor] > > [account-reaper] > > " > > " > cat /etc/swift/container-server.conf > [DEFAULT] > devices = /swift/tenants/ > mount_check = true > bind_port = 6011 > user = root > log_facility = LOG_LOCAL2 > > [pipeline:main] > pipeline = container-server > > [app:container-server] > use = egg:gluster_swift_ufo#container > > [container-replicator] > vm_test_mode = yes > > [container-updater] > > [container-auditor] > > [container-sync] > " > > " > cat /etc/swift/object-server.conf > [DEFAULT] > mount_check = true > bind_port = 6010 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = object-server > > [app:object-server] > use = egg:gluster_swift_ufo#object > > [object-replicator] > vm_test_mode = yes > > [object-updater] > > [object-auditor] > " > > " > cat /etc/swift/proxy-server.conf > [DEFAULT] > bind_port = 8080 > user = root > log_facility = LOG_LOCAL1 > log_name = swift > log_level = DEBUG > log_headers = True > > [pipeline:main] > pipeline = healthcheck cache authtoken keystone proxy-server > > [app:proxy-server] > use = egg:gluster_swift_ufo#proxy > allow_account_management = true > account_autocreate = true > > [filter:tempauth] > use = egg:swift#tempauth > # Here you need to add users explicitly. See the OpenStack Swift Deployment > # Guide for more information. The user and user64 directives take the > # following form: > # user__ = [group] [group] [...] [storage_url] > # user64__ = [group] [group] > [...] [storage_url] > # Where you use user64 for accounts and/or usernames that include underscores. > # > # NOTE (and WARNING): The account name must match the device name specified > # when generating the account, container, and object build rings. > # > # E.g. 
> # user_ufo0_admin = abc123 .admin > > [filter:healthcheck] > use = egg:swift#healthcheck > > [filter:cache] > use = egg:swift#memcache > > > [filter:keystone] > use = egg:swift#keystoneauth > #paste.filter_factory = keystone.middleware.swift_auth:filter_factory > operator_roles = Member,admin,swiftoperator > > > [filter:authtoken] > paste.filter_factory = keystone.middleware.auth_token:filter_factory > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > service_port = 5000 > service_host = keystone01.vip.my.net > > [filter:swiftauth] > use = egg:keystone#swiftauth > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > keystone_url = https://keystone01.vip.my.net:5000/v2.0 > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > keystone_swift_operator_roles = Member,admin,swiftoperator > keystone_tenant_user_admin = true > > [filter:catch_errors] > use = egg:swift#catch_errors > " > > " > cat /etc/swift/swift.conf > [DEFAULT] > > > [swift-hash] > # random unique string that can never change (DO NOT LOSE) > swift_hash_path_suffix = gluster > #3d60c9458bb77abe > > > # The swift-constraints section sets the basic constraints on data > # saved in the swift cluster. > > [swift-constraints] > > # max_file_size is the largest "normal" object that can be saved in > # the cluster. This is also the limit on the size of each segment of > # a "large" object when using the large object manifest support. > # This value is set in bytes. Setting it to lower than 1MiB will cause > # some tests to fail. It is STRONGLY recommended to leave this value at > # the default (5 * 2**30 + 2). > > # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting > # web service handle such a size? I think with UFO, we need to keep with the > # default size from Swift and encourage users to research what size their web > # services infrastructure can handle. > > max_file_size = 18446744073709551616 > > > # max_meta_name_length is the max number of bytes in the utf8 encoding > # of the name portion of a metadata header. > > #max_meta_name_length = 128 > > > # max_meta_value_length is the max number of bytes in the utf8 encoding > # of a metadata value > > #max_meta_value_length = 256 > > > # max_meta_count is the max number of metadata keys that can be stored > # on a single account, container, or object > > #max_meta_count = 90 > > > # max_meta_overall_size is the max number of bytes in the utf8 encoding > # of the metadata (keys + values) > > #max_meta_overall_size = 4096 > > > # max_object_name_length is the max number of bytes in the utf8 encoding of an > # object name: Gluster FS can handle much longer file names, but the length > # between the slashes of the URL is handled below. Remember that most web > # clients can't handle anything greater than 2048, and those that do are > # rather clumsy. > > max_object_name_length = 2048 > > # max_object_name_component_length (GlusterFS) is the max number of bytes in > # the utf8 encoding of an object name component (the part between the > # slashes); this is a limit imposed by the underlying file system (for XFS it > # is 255 bytes). 
> > max_object_name_component_length = 255 > > # container_listing_limit is the default (and max) number of items > # returned for a container listing request > > #container_listing_limit = 10000 > > > # account_listing_limit is the default (and max) number of items returned > # for an account listing request > > #account_listing_limit = 10000 > > > # max_account_name_length is the max number of bytes in the utf8 encoding of > # an account name: Gluster FS Filename limit (XFS limit?), must be the same > # size as max_object_name_component_length above. > > max_account_name_length = 255 > > > # max_container_name_length is the max number of bytes in the utf8 encoding > # of a container name: Gluster FS Filename limit (XFS limit?), must be the same > # size as max_object_name_component_length above. > > max_container_name_length = 255 > > " > > > The volumes > " > gluster volume list > cindervol > unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 > bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > if I run the command > " > gluster-swift-gen-builders unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > because of a change in the script in this version as compaired to the > version I got from > http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the > gluster-swift-gen-builders script only takes the first option and > ignores the rest. > > other than the location of the config files none of the changes Ive > made are functionally different than the ones mentioned in > http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ > > The result is that the first volume named "unified-storage-vol" winds > up being used for every thing regardless of the tenant, and users and > see and manage each others objects regardless of what tenant they are > members of. > through the swift command or via horizon. > > In a way this is a good thing for me it simplifies thing significantly > and would be fine if it just created a directory for each tenant and > only allow the user to access the individual directories, not the > whole gluster volume. > by the way seeing every thing includes the service tenants data so > unprivileged users can delete glance images without being a member of > the service group. > > > > > On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino wrote: >> Well I'll give you the full details in the morning but simply I used the >> stock cluster ring builder script that came with the 3.4 rpms and the old >> version from 3.3 took the list of volumes and would add all of them the >> version with 3.4 only takes the first one. >> >> Well I ran the script expecting the same behavior but instead they all used >> the first volume in the list. >> >> Now I knew from the docs I read that the per tenant directories in a single >> volume were one possible plan for 3.4 to deal with the scalding issue with a >> large number of tenants, so when I saw the difference in the script and that >> it worked I just assumed that this was done and I missed something. >> >> >> >> -- Sent from my HP Pre3 >> >> ________________________________ >> On Sep 2, 2013 20:55, Ramana Raja wrote: >> >> Hi Paul, >> >> Currently, gluster-swift doesn't support the feature of multiple >> accounts/tenants accessing the same volume. Each tenant still needs his own >> gluster volume. So I'm wondering how you were able to observe the reported >> behaviour. 
>> >> How did you prepare the ringfiles for the different tenants, which use the >> same gluster volume? Did you change the configuration of the servers? Also, >> how did you access the files that you mention? It'd be helpful if you could >> share the commands you used to perform these actions. >> >> Thanks, >> >> Ram >> >> >> ----- Original Message ----- >> From: "Vijay Bellur" >> To: "Paul Robert Marino" >> Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" >> , "Chetan Risbud" >> Sent: Monday, September 2, 2013 4:17:51 PM >> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question >> >> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >>> I have Gluster UFO installed as a back end for swift from here >>> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >>> with RDO 3 >>> >>> Its working well except for one thing. All of the tenants are seeing >>> one Gluster volume which is some what nice, especially when compared >>> to the old 3.3 behavior of creating one volume per tenant named after >>> the tenant ID number. >>> >>> The problem is I expected to see is sub directory created under the >>> volume root for each tenant but instead what in seeing is that all of >>> the tenants can see the root of the Gluster volume. The result is that >>> all of the tenants can access each others files and even delete them. >>> even scarier is that the tennants can see and delete each others >>> glance images and snapshots. >>> >>> Can any one suggest options to look at or documents to read to try to >>> figure out how to modify the behavior? >>> >> Adding gluster swift developers who might be able to help. >> >> -Vijay -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Tue Sep 17 15:13:09 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 17 Sep 2013 11:13:09 -0400 Subject: [rhos-list] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <523862D1.5040108@redhat.com> References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> Message-ID: Luis well thats intresting because it was my impression that Gluster UFO 3.4 was based on the Grizzly version of Swift. Also I was previously unaware of this new rpm which doesnt seem to be in a repo any where. also there is a line in this new howto that is extreamly unclear " /usr/bin/gluster-swift-gen-builders test " in place of "test" what should go there is it the tenant ID string, the tenant name, or just a generic volume you can name whatever you want? in other words how should the Gluster volumes be named? On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: > First thing I can see is that you have Essex based gluster-ufo-* which has > been replaced by the gluster-swift project. We are currently in progress of > replacing the gluster-ufo-* with RPMs from the gluster-swift project in > Fedora. > > Please checkout the following quickstart guide which show how to download > the Grizzly version of gluster-swift: > https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md > . > > For more information please visit: https://launchpad.net/gluster-swift > > - Luis > > > On 09/16/2013 05:02 PM, Paul Robert Marino wrote: > > Sorry for the delay on reporting the details. I got temporarily pulled > off the project and dedicated to a different project which was > considered higher priority by my employer. 
I'm just getting back to > doing my normal work today. > > first here are the rpms I have installed > " > rpm -qa |grep -P -i '(gluster|swift)' > glusterfs-libs-3.4.0-8.el6.x86_64 > glusterfs-server-3.4.0-8.el6.x86_64 > openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch > openstack-swift-proxy-1.8.0-2.el6.noarch > glusterfs-3.4.0-8.el6.x86_64 > glusterfs-cli-3.4.0-8.el6.x86_64 > glusterfs-geo-replication-3.4.0-8.el6.x86_64 > glusterfs-api-3.4.0-8.el6.x86_64 > openstack-swift-1.8.0-2.el6.noarch > openstack-swift-container-1.8.0-2.el6.noarch > openstack-swift-object-1.8.0-2.el6.noarch > glusterfs-fuse-3.4.0-8.el6.x86_64 > glusterfs-rdma-3.4.0-8.el6.x86_64 > openstack-swift-account-1.8.0-2.el6.noarch > glusterfs-ufo-3.4.0-8.el6.noarch > glusterfs-vim-3.2.7-1.el6.x86_64 > python-swiftclient-1.4.0-1.el6.noarch > > here are some key config files note I've changed the passwords I'm > using and hostnames > " > cat /etc/swift/account-server.conf > [DEFAULT] > mount_check = true > bind_port = 6012 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = account-server > > [app:account-server] > use = egg:gluster_swift_ufo#account > log_name = account-server > log_level = DEBUG > log_requests = true > > [account-replicator] > vm_test_mode = yes > > [account-auditor] > > [account-reaper] > > " > > " > cat /etc/swift/container-server.conf > [DEFAULT] > devices = /swift/tenants/ > mount_check = true > bind_port = 6011 > user = root > log_facility = LOG_LOCAL2 > > [pipeline:main] > pipeline = container-server > > [app:container-server] > use = egg:gluster_swift_ufo#container > > [container-replicator] > vm_test_mode = yes > > [container-updater] > > [container-auditor] > > [container-sync] > " > > " > cat /etc/swift/object-server.conf > [DEFAULT] > mount_check = true > bind_port = 6010 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = object-server > > [app:object-server] > use = egg:gluster_swift_ufo#object > > [object-replicator] > vm_test_mode = yes > > [object-updater] > > [object-auditor] > " > > " > cat /etc/swift/proxy-server.conf > [DEFAULT] > bind_port = 8080 > user = root > log_facility = LOG_LOCAL1 > log_name = swift > log_level = DEBUG > log_headers = True > > [pipeline:main] > pipeline = healthcheck cache authtoken keystone proxy-server > > [app:proxy-server] > use = egg:gluster_swift_ufo#proxy > allow_account_management = true > account_autocreate = true > > [filter:tempauth] > use = egg:swift#tempauth > # Here you need to add users explicitly. See the OpenStack Swift Deployment > # Guide for more information. The user and user64 directives take the > # following form: > # user__ = [group] [group] [...] [storage_url] > # user64__ = [group] [group] > [...] [storage_url] > # Where you use user64 for accounts and/or usernames that include > underscores. > # > # NOTE (and WARNING): The account name must match the device name specified > # when generating the account, container, and object build rings. > # > # E.g. 
> # user_ufo0_admin = abc123 .admin > > [filter:healthcheck] > use = egg:swift#healthcheck > > [filter:cache] > use = egg:swift#memcache > > > [filter:keystone] > use = egg:swift#keystoneauth > #paste.filter_factory = keystone.middleware.swift_auth:filter_factory > operator_roles = Member,admin,swiftoperator > > > [filter:authtoken] > paste.filter_factory = keystone.middleware.auth_token:filter_factory > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > service_port = 5000 > service_host = keystone01.vip.my.net > > [filter:swiftauth] > use = egg:keystone#swiftauth > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > keystone_url = https://keystone01.vip.my.net:5000/v2.0 > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > keystone_swift_operator_roles = Member,admin,swiftoperator > keystone_tenant_user_admin = true > > [filter:catch_errors] > use = egg:swift#catch_errors > " > > " > cat /etc/swift/swift.conf > [DEFAULT] > > > [swift-hash] > # random unique string that can never change (DO NOT LOSE) > swift_hash_path_suffix = gluster > #3d60c9458bb77abe > > > # The swift-constraints section sets the basic constraints on data > # saved in the swift cluster. > > [swift-constraints] > > # max_file_size is the largest "normal" object that can be saved in > # the cluster. This is also the limit on the size of each segment of > # a "large" object when using the large object manifest support. > # This value is set in bytes. Setting it to lower than 1MiB will cause > # some tests to fail. It is STRONGLY recommended to leave this value at > # the default (5 * 2**30 + 2). > > # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting > # web service handle such a size? I think with UFO, we need to keep with the > # default size from Swift and encourage users to research what size their > web > # services infrastructure can handle. > > max_file_size = 18446744073709551616 > > > # max_meta_name_length is the max number of bytes in the utf8 encoding > # of the name portion of a metadata header. > > #max_meta_name_length = 128 > > > # max_meta_value_length is the max number of bytes in the utf8 encoding > # of a metadata value > > #max_meta_value_length = 256 > > > # max_meta_count is the max number of metadata keys that can be stored > # on a single account, container, or object > > #max_meta_count = 90 > > > # max_meta_overall_size is the max number of bytes in the utf8 encoding > # of the metadata (keys + values) > > #max_meta_overall_size = 4096 > > > # max_object_name_length is the max number of bytes in the utf8 encoding of > an > # object name: Gluster FS can handle much longer file names, but the length > # between the slashes of the URL is handled below. Remember that most web > # clients can't handle anything greater than 2048, and those that do are > # rather clumsy. > > max_object_name_length = 2048 > > # max_object_name_component_length (GlusterFS) is the max number of bytes in > # the utf8 encoding of an object name component (the part between the > # slashes); this is a limit imposed by the underlying file system (for XFS > it > # is 255 bytes). 
> > max_object_name_component_length = 255 > > # container_listing_limit is the default (and max) number of items > # returned for a container listing request > > #container_listing_limit = 10000 > > > # account_listing_limit is the default (and max) number of items returned > # for an account listing request > > #account_listing_limit = 10000 > > > # max_account_name_length is the max number of bytes in the utf8 encoding of > # an account name: Gluster FS Filename limit (XFS limit?), must be the same > # size as max_object_name_component_length above. > > max_account_name_length = 255 > > > # max_container_name_length is the max number of bytes in the utf8 encoding > # of a container name: Gluster FS Filename limit (XFS limit?), must be the > same > # size as max_object_name_component_length above. > > max_container_name_length = 255 > > " > > > The volumes > " > gluster volume list > cindervol > unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 > bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > if I run the command > " > gluster-swift-gen-builders unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > because of a change in the script in this version as compaired to the > version I got from > http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the > gluster-swift-gen-builders script only takes the first option and > ignores the rest. > > other than the location of the config files none of the changes Ive > made are functionally different than the ones mentioned in > http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ > > The result is that the first volume named "unified-storage-vol" winds > up being used for every thing regardless of the tenant, and users and > see and manage each others objects regardless of what tenant they are > members of. > through the swift command or via horizon. > > In a way this is a good thing for me it simplifies thing significantly > and would be fine if it just created a directory for each tenant and > only allow the user to access the individual directories, not the > whole gluster volume. > by the way seeing every thing includes the service tenants data so > unprivileged users can delete glance images without being a member of > the service group. > > > > > On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino > wrote: > > Well I'll give you the full details in the morning but simply I used the > stock cluster ring builder script that came with the 3.4 rpms and the old > version from 3.3 took the list of volumes and would add all of them the > version with 3.4 only takes the first one. > > Well I ran the script expecting the same behavior but instead they all used > the first volume in the list. > > Now I knew from the docs I read that the per tenant directories in a single > volume were one possible plan for 3.4 to deal with the scalding issue with a > large number of tenants, so when I saw the difference in the script and that > it worked I just assumed that this was done and I missed something. > > > > -- Sent from my HP Pre3 > > ________________________________ > On Sep 2, 2013 20:55, Ramana Raja wrote: > > Hi Paul, > > Currently, gluster-swift doesn't support the feature of multiple > accounts/tenants accessing the same volume. Each tenant still needs his own > gluster volume. So I'm wondering how you were able to observe the reported > behaviour. 
> > How did you prepare the ringfiles for the different tenants, which use the > same gluster volume? Did you change the configuration of the servers? Also, > how did you access the files that you mention? It'd be helpful if you could > share the commands you used to perform these actions. > > Thanks, > > Ram > > > ----- Original Message ----- > From: "Vijay Bellur" > To: "Paul Robert Marino" > Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" > , "Chetan Risbud" > Sent: Monday, September 2, 2013 4:17:51 PM > Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question > > On 09/02/2013 01:39 AM, Paul Robert Marino wrote: > > I have Gluster UFO installed as a back end for swift from here > http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ > with RDO 3 > > Its working well except for one thing. All of the tenants are seeing > one Gluster volume which is some what nice, especially when compared > to the old 3.3 behavior of creating one volume per tenant named after > the tenant ID number. > > The problem is I expected to see is sub directory created under the > volume root for each tenant but instead what in seeing is that all of > the tenants can see the root of the Gluster volume. The result is that > all of the tenants can access each others files and even delete them. > even scarier is that the tennants can see and delete each others > glance images and snapshots. > > Can any one suggest options to look at or documents to read to try to > figure out how to modify the behavior? > > Adding gluster swift developers who might be able to help. > > -Vijay > > From lpabon at redhat.com Tue Sep 17 17:52:12 2013 From: lpabon at redhat.com (Luis Pabon) Date: Tue, 17 Sep 2013 13:52:12 -0400 Subject: [rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> Message-ID: <523896CC.70105@redhat.com> On 09/17/2013 11:13 AM, Paul Robert Marino wrote: > Luis > well thats intresting because it was my impression that Gluster UFO > 3.4 was based on the Grizzly version of Swift. [LP] Sorry, the gluster-ufo RPM is Essex only. > Also I was previously unaware of this new rpm which doesnt seem to be > in a repo any where. [LP] gluster-swift project RPMs have been submitted to Fedora and are currently being reviewed. > also there is a line in this new howto that is extreamly unclear > > " > /usr/bin/gluster-swift-gen-builders test > " > in place of "test" what should go there is it the tenant ID string, > the tenant name, or just a generic volume you can name whatever you > want? > in other words how should the Gluster volumes be named? [LP] We will clarify that in the quick start guide. Thank you for pointing it out. While we update the community site, please refer to the documentation available here http://goo.gl/bQFI8o for a usage guide. As for the tool, the format is: gluster-swift-gen-buildes [VOLUME] [VOLUME...] Where VOLUME is the name of the GlusterFS volume to use for object storage. For example if the following two GlusterFS volumes, volume1 and volume2, need to be accessed over Swift, then you can type the following: # gluster-swift-gen-builders volume1 volume2 For more information please read: http://goo.gl/gd8LkW Let us know if you have any more questions or comments. 
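On the naming question raised above (tenant ID vs. tenant name vs. arbitrary name): with keystoneauth in the proxy pipeline each tenant is served from the account AUTH_<tenant id>, so if the per-tenant layout from the 3.3 setup described earlier in the thread is still the goal, the volumes passed to the tool would be named after the Keystone tenant IDs rather than the tenant names. That mapping is an inference from the 3.3 behaviour reported in this thread, not something the quick start guide states, so treat the following as a sketch using the tenant-ID volumes already shown above:

    # one GlusterFS volume per Keystone tenant, named after the tenant ID
    gluster volume list
    a07d2f39117c4e5abdeba722cf245828
    bd74a005f08541b9989e392a689be2fc
    f6da0a8151ff43b7be10d961a20c94d6

    # generate account/container/object rings with one device entry per volume
    gluster-swift-gen-builders a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc f6da0a8151ff43b7be10d961a20c94d6

    # restart the swift services so the new rings are picked up
    swift-init main restart
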
- Luis > > > On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: >> First thing I can see is that you have Essex based gluster-ufo-* which has >> been replaced by the gluster-swift project. We are currently in progress of >> replacing the gluster-ufo-* with RPMs from the gluster-swift project in >> Fedora. >> >> Please checkout the following quickstart guide which show how to download >> the Grizzly version of gluster-swift: >> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md >> . >> >> For more information please visit: https://launchpad.net/gluster-swift >> >> - Luis >> >> >> On 09/16/2013 05:02 PM, Paul Robert Marino wrote: >> >> Sorry for the delay on reporting the details. I got temporarily pulled >> off the project and dedicated to a different project which was >> considered higher priority by my employer. I'm just getting back to >> doing my normal work today. >> >> first here are the rpms I have installed >> " >> rpm -qa |grep -P -i '(gluster|swift)' >> glusterfs-libs-3.4.0-8.el6.x86_64 >> glusterfs-server-3.4.0-8.el6.x86_64 >> openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch >> openstack-swift-proxy-1.8.0-2.el6.noarch >> glusterfs-3.4.0-8.el6.x86_64 >> glusterfs-cli-3.4.0-8.el6.x86_64 >> glusterfs-geo-replication-3.4.0-8.el6.x86_64 >> glusterfs-api-3.4.0-8.el6.x86_64 >> openstack-swift-1.8.0-2.el6.noarch >> openstack-swift-container-1.8.0-2.el6.noarch >> openstack-swift-object-1.8.0-2.el6.noarch >> glusterfs-fuse-3.4.0-8.el6.x86_64 >> glusterfs-rdma-3.4.0-8.el6.x86_64 >> openstack-swift-account-1.8.0-2.el6.noarch >> glusterfs-ufo-3.4.0-8.el6.noarch >> glusterfs-vim-3.2.7-1.el6.x86_64 >> python-swiftclient-1.4.0-1.el6.noarch >> >> here are some key config files note I've changed the passwords I'm >> using and hostnames >> " >> cat /etc/swift/account-server.conf >> [DEFAULT] >> mount_check = true >> bind_port = 6012 >> user = root >> log_facility = LOG_LOCAL2 >> devices = /swift/tenants/ >> >> [pipeline:main] >> pipeline = account-server >> >> [app:account-server] >> use = egg:gluster_swift_ufo#account >> log_name = account-server >> log_level = DEBUG >> log_requests = true >> >> [account-replicator] >> vm_test_mode = yes >> >> [account-auditor] >> >> [account-reaper] >> >> " >> >> " >> cat /etc/swift/container-server.conf >> [DEFAULT] >> devices = /swift/tenants/ >> mount_check = true >> bind_port = 6011 >> user = root >> log_facility = LOG_LOCAL2 >> >> [pipeline:main] >> pipeline = container-server >> >> [app:container-server] >> use = egg:gluster_swift_ufo#container >> >> [container-replicator] >> vm_test_mode = yes >> >> [container-updater] >> >> [container-auditor] >> >> [container-sync] >> " >> >> " >> cat /etc/swift/object-server.conf >> [DEFAULT] >> mount_check = true >> bind_port = 6010 >> user = root >> log_facility = LOG_LOCAL2 >> devices = /swift/tenants/ >> >> [pipeline:main] >> pipeline = object-server >> >> [app:object-server] >> use = egg:gluster_swift_ufo#object >> >> [object-replicator] >> vm_test_mode = yes >> >> [object-updater] >> >> [object-auditor] >> " >> >> " >> cat /etc/swift/proxy-server.conf >> [DEFAULT] >> bind_port = 8080 >> user = root >> log_facility = LOG_LOCAL1 >> log_name = swift >> log_level = DEBUG >> log_headers = True >> >> [pipeline:main] >> pipeline = healthcheck cache authtoken keystone proxy-server >> >> [app:proxy-server] >> use = egg:gluster_swift_ufo#proxy >> allow_account_management = true >> account_autocreate = true >> >> [filter:tempauth] >> use = egg:swift#tempauth >> # Here you need 
to add users explicitly. See the OpenStack Swift Deployment >> # Guide for more information. The user and user64 directives take the >> # following form: >> # user__ = [group] [group] [...] [storage_url] >> # user64__ = [group] [group] >> [...] [storage_url] >> # Where you use user64 for accounts and/or usernames that include >> underscores. >> # >> # NOTE (and WARNING): The account name must match the device name specified >> # when generating the account, container, and object build rings. >> # >> # E.g. >> # user_ufo0_admin = abc123 .admin >> >> [filter:healthcheck] >> use = egg:swift#healthcheck >> >> [filter:cache] >> use = egg:swift#memcache >> >> >> [filter:keystone] >> use = egg:swift#keystoneauth >> #paste.filter_factory = keystone.middleware.swift_auth:filter_factory >> operator_roles = Member,admin,swiftoperator >> >> >> [filter:authtoken] >> paste.filter_factory = keystone.middleware.auth_token:filter_factory >> auth_host = keystone01.vip.my.net >> auth_port = 35357 >> auth_protocol = http >> admin_user = swift >> admin_password = PASSWORD >> admin_tenant_name = service >> signing_dir = /var/cache/swift >> service_port = 5000 >> service_host = keystone01.vip.my.net >> >> [filter:swiftauth] >> use = egg:keystone#swiftauth >> auth_host = keystone01.vip.my.net >> auth_port = 35357 >> auth_protocol = http >> keystone_url = https://keystone01.vip.my.net:5000/v2.0 >> admin_user = swift >> admin_password = PASSWORD >> admin_tenant_name = service >> signing_dir = /var/cache/swift >> keystone_swift_operator_roles = Member,admin,swiftoperator >> keystone_tenant_user_admin = true >> >> [filter:catch_errors] >> use = egg:swift#catch_errors >> " >> >> " >> cat /etc/swift/swift.conf >> [DEFAULT] >> >> >> [swift-hash] >> # random unique string that can never change (DO NOT LOSE) >> swift_hash_path_suffix = gluster >> #3d60c9458bb77abe >> >> >> # The swift-constraints section sets the basic constraints on data >> # saved in the swift cluster. >> >> [swift-constraints] >> >> # max_file_size is the largest "normal" object that can be saved in >> # the cluster. This is also the limit on the size of each segment of >> # a "large" object when using the large object manifest support. >> # This value is set in bytes. Setting it to lower than 1MiB will cause >> # some tests to fail. It is STRONGLY recommended to leave this value at >> # the default (5 * 2**30 + 2). >> >> # FIXME: Really? Gluster can handle a 2^64 sized file? And can the fronting >> # web service handle such a size? I think with UFO, we need to keep with the >> # default size from Swift and encourage users to research what size their >> web >> # services infrastructure can handle. >> >> max_file_size = 18446744073709551616 >> >> >> # max_meta_name_length is the max number of bytes in the utf8 encoding >> # of the name portion of a metadata header. 
>> >> #max_meta_name_length = 128 >> >> >> # max_meta_value_length is the max number of bytes in the utf8 encoding >> # of a metadata value >> >> #max_meta_value_length = 256 >> >> >> # max_meta_count is the max number of metadata keys that can be stored >> # on a single account, container, or object >> >> #max_meta_count = 90 >> >> >> # max_meta_overall_size is the max number of bytes in the utf8 encoding >> # of the metadata (keys + values) >> >> #max_meta_overall_size = 4096 >> >> >> # max_object_name_length is the max number of bytes in the utf8 encoding of >> an >> # object name: Gluster FS can handle much longer file names, but the length >> # between the slashes of the URL is handled below. Remember that most web >> # clients can't handle anything greater than 2048, and those that do are >> # rather clumsy. >> >> max_object_name_length = 2048 >> >> # max_object_name_component_length (GlusterFS) is the max number of bytes in >> # the utf8 encoding of an object name component (the part between the >> # slashes); this is a limit imposed by the underlying file system (for XFS >> it >> # is 255 bytes). >> >> max_object_name_component_length = 255 >> >> # container_listing_limit is the default (and max) number of items >> # returned for a container listing request >> >> #container_listing_limit = 10000 >> >> >> # account_listing_limit is the default (and max) number of items returned >> # for an account listing request >> >> #account_listing_limit = 10000 >> >> >> # max_account_name_length is the max number of bytes in the utf8 encoding of >> # an account name: Gluster FS Filename limit (XFS limit?), must be the same >> # size as max_object_name_component_length above. >> >> max_account_name_length = 255 >> >> >> # max_container_name_length is the max number of bytes in the utf8 encoding >> # of a container name: Gluster FS Filename limit (XFS limit?), must be the >> same >> # size as max_object_name_component_length above. >> >> max_container_name_length = 255 >> >> " >> >> >> The volumes >> " >> gluster volume list >> cindervol >> unified-storage-vol >> a07d2f39117c4e5abdeba722cf245828 >> bd74a005f08541b9989e392a689be2fc >> f6da0a8151ff43b7be10d961a20c94d6 >> " >> >> if I run the command >> " >> gluster-swift-gen-builders unified-storage-vol >> a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc >> f6da0a8151ff43b7be10d961a20c94d6 >> " >> >> because of a change in the script in this version as compaired to the >> version I got from >> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the >> gluster-swift-gen-builders script only takes the first option and >> ignores the rest. >> >> other than the location of the config files none of the changes Ive >> made are functionally different than the ones mentioned in >> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ >> >> The result is that the first volume named "unified-storage-vol" winds >> up being used for every thing regardless of the tenant, and users and >> see and manage each others objects regardless of what tenant they are >> members of. >> through the swift command or via horizon. >> >> In a way this is a good thing for me it simplifies thing significantly >> and would be fine if it just created a directory for each tenant and >> only allow the user to access the individual directories, not the >> whole gluster volume. 
>> by the way seeing every thing includes the service tenants data so >> unprivileged users can delete glance images without being a member of >> the service group. >> >> >> >> >> On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino >> wrote: >> >> Well I'll give you the full details in the morning but simply I used the >> stock cluster ring builder script that came with the 3.4 rpms and the old >> version from 3.3 took the list of volumes and would add all of them the >> version with 3.4 only takes the first one. >> >> Well I ran the script expecting the same behavior but instead they all used >> the first volume in the list. >> >> Now I knew from the docs I read that the per tenant directories in a single >> volume were one possible plan for 3.4 to deal with the scalding issue with a >> large number of tenants, so when I saw the difference in the script and that >> it worked I just assumed that this was done and I missed something. >> >> >> >> -- Sent from my HP Pre3 >> >> ________________________________ >> On Sep 2, 2013 20:55, Ramana Raja wrote: >> >> Hi Paul, >> >> Currently, gluster-swift doesn't support the feature of multiple >> accounts/tenants accessing the same volume. Each tenant still needs his own >> gluster volume. So I'm wondering how you were able to observe the reported >> behaviour. >> >> How did you prepare the ringfiles for the different tenants, which use the >> same gluster volume? Did you change the configuration of the servers? Also, >> how did you access the files that you mention? It'd be helpful if you could >> share the commands you used to perform these actions. >> >> Thanks, >> >> Ram >> >> >> ----- Original Message ----- >> From: "Vijay Bellur" >> To: "Paul Robert Marino" >> Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" >> , "Chetan Risbud" >> Sent: Monday, September 2, 2013 4:17:51 PM >> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question >> >> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >> >> I have Gluster UFO installed as a back end for swift from here >> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >> with RDO 3 >> >> Its working well except for one thing. All of the tenants are seeing >> one Gluster volume which is some what nice, especially when compared >> to the old 3.3 behavior of creating one volume per tenant named after >> the tenant ID number. >> >> The problem is I expected to see is sub directory created under the >> volume root for each tenant but instead what in seeing is that all of >> the tenants can see the root of the Gluster volume. The result is that >> all of the tenants can access each others files and even delete them. >> even scarier is that the tennants can see and delete each others >> glance images and snapshots. >> >> Can any one suggest options to look at or documents to read to try to >> figure out how to modify the behavior? >> >> Adding gluster swift developers who might be able to help. >> >> -Vijay >> >> From prmarino1 at gmail.com Tue Sep 17 20:38:48 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Tue, 17 Sep 2013 16:38:48 -0400 Subject: [rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <523896CC.70105@redhat.com> References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> <523896CC.70105@redhat.com> Message-ID: Luis Thanks for the timely response. 
On Tue, Sep 17, 2013 at 1:52 PM, Luis Pabon wrote: > > On 09/17/2013 11:13 AM, Paul Robert Marino wrote: >> >> Luis >> well thats intresting because it was my impression that Gluster UFO >> 3.4 was based on the Grizzly version of Swift. > > [LP] Sorry, the gluster-ufo RPM is Essex only. [PRM] The source of my confusion was here http://www.gluster.org/community/documentation/index.php/Features34 and here http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces/ These pages on the gluster site should probably be updated to reflect the changes. > > >> Also I was previously unaware of this new rpm which doesnt seem to be >> in a repo any where. > > [LP] gluster-swift project RPMs have been submitted to Fedora and are > currently being reviewed. [PRM] Cool if they are in the EPEL testing repo Ill look for them there because I would rather pull the properly EPEL signed RPMs if they exist just to make node deployments easier. If not Ill ask some of my friends offline if they can help expedite it. > > >> also there is a line in this new howto that is extreamly unclear >> >> " >> /usr/bin/gluster-swift-gen-builders test >> " >> in place of "test" what should go there is it the tenant ID string, >> the tenant name, or just a generic volume you can name whatever you >> want? >> in other words how should the Gluster volumes be named? > > [LP] We will clarify that in the quick start guide. Thank you for pointing > it out. While we update the community site, please refer to the > documentation available here http://goo.gl/bQFI8o for a usage guide. > > As for the tool, the format is: > gluster-swift-gen-buildes [VOLUME] [VOLUME...] > > Where VOLUME is the name of the GlusterFS volume to use for object storage. > For example > if the following two GlusterFS volumes, volume1 and volume2, need to be > accessed over Swift, > then you can type the following: > > # gluster-swift-gen-builders volume1 volume2 [PRM] That part I understood however it doesn't answer the question exactly. Correct me if I'm wrong but looking over the code briefly it looks as though the volume name needs to be the same as the tenant ID number like it did with Gluster UFO 3.3. so for example if I do a " keystone tenant-list" and a see tenant1 with an id of "f6da0a8151ff43b7be10d961a20c94d6" then I would need to create a volume named f6da0a8151ff43b7be10d961a20c94d6 If I can name the volumes whatever I want or give them the same name as the tenant that would be great because it makes it easier for other SA's who are not directly working with OpenStack but may need to mount the volumes to comprehend, but its not urgently needed. One thing I was glad to see is that with Gluster UFO 3.3 I had to add mount points to /etc/fstab for each volume and manually create the directories for the mount points this looks to have been corrected in Gluster-Swift. > > For more information please read: http://goo.gl/gd8LkW > > Let us know if you have any more questions or comments. [PRM] I may fork the Github repo and add some changes that may be beneficial so they can be reviewed and possibly merged. for example it would be nice if the gluster-swift-gen-buildes script used the value of the mount_ip field in /etc/swift/fs.conf instead of 127.0.0.1 if its defined. also I might make a more robust version that allows create, add, remove, and list options. Ill do testing tomorrow and let everyone know how it goes. 
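A minimal sketch of the per-tenant flow being discussed above, assuming admin keystone credentials are already exported, a pair of Gluster servers, and illustrative brick paths (server1, server2 and /bricks are placeholders, not taken from this thread):

"
# each keystone tenant ID becomes the name of a Gluster volume
keystone tenant-list

# create and start a volume for one tenant ID (repeat per tenant)
TENANT_ID=f6da0a8151ff43b7be10d961a20c94d6
gluster volume create $TENANT_ID replica 2 \
    server1:/bricks/$TENANT_ID server2:/bricks/$TENANT_ID
gluster volume start $TENANT_ID

# rebuild the ring files with every tenant volume listed at once
gluster-swift-gen-builders \
    a07d2f39117c4e5abdeba722cf245828 \
    bd74a005f08541b9989e392a689be2fc \
    f6da0a8151ff43b7be10d961a20c94d6

# restart the swift services so the new rings take effect
swift-init main restart
"

The tenant IDs above are the ones listed earlier in the thread; in practice each new tenant needs its volume created and the rings regenerated before its users can store objects.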
> > - Luis > >> >> >> On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: >>> >>> First thing I can see is that you have Essex based gluster-ufo-* which >>> has >>> been replaced by the gluster-swift project. We are currently in progress >>> of >>> replacing the gluster-ufo-* with RPMs from the gluster-swift project in >>> Fedora. >>> >>> Please checkout the following quickstart guide which show how to download >>> the Grizzly version of gluster-swift: >>> >>> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md >>> . >>> >>> For more information please visit: https://launchpad.net/gluster-swift >>> >>> - Luis >>> >>> >>> On 09/16/2013 05:02 PM, Paul Robert Marino wrote: >>> >>> Sorry for the delay on reporting the details. I got temporarily pulled >>> off the project and dedicated to a different project which was >>> considered higher priority by my employer. I'm just getting back to >>> doing my normal work today. >>> >>> first here are the rpms I have installed >>> " >>> rpm -qa |grep -P -i '(gluster|swift)' >>> glusterfs-libs-3.4.0-8.el6.x86_64 >>> glusterfs-server-3.4.0-8.el6.x86_64 >>> openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch >>> openstack-swift-proxy-1.8.0-2.el6.noarch >>> glusterfs-3.4.0-8.el6.x86_64 >>> glusterfs-cli-3.4.0-8.el6.x86_64 >>> glusterfs-geo-replication-3.4.0-8.el6.x86_64 >>> glusterfs-api-3.4.0-8.el6.x86_64 >>> openstack-swift-1.8.0-2.el6.noarch >>> openstack-swift-container-1.8.0-2.el6.noarch >>> openstack-swift-object-1.8.0-2.el6.noarch >>> glusterfs-fuse-3.4.0-8.el6.x86_64 >>> glusterfs-rdma-3.4.0-8.el6.x86_64 >>> openstack-swift-account-1.8.0-2.el6.noarch >>> glusterfs-ufo-3.4.0-8.el6.noarch >>> glusterfs-vim-3.2.7-1.el6.x86_64 >>> python-swiftclient-1.4.0-1.el6.noarch >>> >>> here are some key config files note I've changed the passwords I'm >>> using and hostnames >>> " >>> cat /etc/swift/account-server.conf >>> [DEFAULT] >>> mount_check = true >>> bind_port = 6012 >>> user = root >>> log_facility = LOG_LOCAL2 >>> devices = /swift/tenants/ >>> >>> [pipeline:main] >>> pipeline = account-server >>> >>> [app:account-server] >>> use = egg:gluster_swift_ufo#account >>> log_name = account-server >>> log_level = DEBUG >>> log_requests = true >>> >>> [account-replicator] >>> vm_test_mode = yes >>> >>> [account-auditor] >>> >>> [account-reaper] >>> >>> " >>> >>> " >>> cat /etc/swift/container-server.conf >>> [DEFAULT] >>> devices = /swift/tenants/ >>> mount_check = true >>> bind_port = 6011 >>> user = root >>> log_facility = LOG_LOCAL2 >>> >>> [pipeline:main] >>> pipeline = container-server >>> >>> [app:container-server] >>> use = egg:gluster_swift_ufo#container >>> >>> [container-replicator] >>> vm_test_mode = yes >>> >>> [container-updater] >>> >>> [container-auditor] >>> >>> [container-sync] >>> " >>> >>> " >>> cat /etc/swift/object-server.conf >>> [DEFAULT] >>> mount_check = true >>> bind_port = 6010 >>> user = root >>> log_facility = LOG_LOCAL2 >>> devices = /swift/tenants/ >>> >>> [pipeline:main] >>> pipeline = object-server >>> >>> [app:object-server] >>> use = egg:gluster_swift_ufo#object >>> >>> [object-replicator] >>> vm_test_mode = yes >>> >>> [object-updater] >>> >>> [object-auditor] >>> " >>> >>> " >>> cat /etc/swift/proxy-server.conf >>> [DEFAULT] >>> bind_port = 8080 >>> user = root >>> log_facility = LOG_LOCAL1 >>> log_name = swift >>> log_level = DEBUG >>> log_headers = True >>> >>> [pipeline:main] >>> pipeline = healthcheck cache authtoken keystone proxy-server >>> >>> [app:proxy-server] >>> use = 
egg:gluster_swift_ufo#proxy >>> allow_account_management = true >>> account_autocreate = true >>> >>> [filter:tempauth] >>> use = egg:swift#tempauth >>> # Here you need to add users explicitly. See the OpenStack Swift >>> Deployment >>> # Guide for more information. The user and user64 directives take the >>> # following form: >>> # user__ = [group] [group] [...] >>> [storage_url] >>> # user64__ = [group] [group] >>> [...] [storage_url] >>> # Where you use user64 for accounts and/or usernames that include >>> underscores. >>> # >>> # NOTE (and WARNING): The account name must match the device name >>> specified >>> # when generating the account, container, and object build rings. >>> # >>> # E.g. >>> # user_ufo0_admin = abc123 .admin >>> >>> [filter:healthcheck] >>> use = egg:swift#healthcheck >>> >>> [filter:cache] >>> use = egg:swift#memcache >>> >>> >>> [filter:keystone] >>> use = egg:swift#keystoneauth >>> #paste.filter_factory = keystone.middleware.swift_auth:filter_factory >>> operator_roles = Member,admin,swiftoperator >>> >>> >>> [filter:authtoken] >>> paste.filter_factory = keystone.middleware.auth_token:filter_factory >>> auth_host = keystone01.vip.my.net >>> auth_port = 35357 >>> auth_protocol = http >>> admin_user = swift >>> admin_password = PASSWORD >>> admin_tenant_name = service >>> signing_dir = /var/cache/swift >>> service_port = 5000 >>> service_host = keystone01.vip.my.net >>> >>> [filter:swiftauth] >>> use = egg:keystone#swiftauth >>> auth_host = keystone01.vip.my.net >>> auth_port = 35357 >>> auth_protocol = http >>> keystone_url = https://keystone01.vip.my.net:5000/v2.0 >>> admin_user = swift >>> admin_password = PASSWORD >>> admin_tenant_name = service >>> signing_dir = /var/cache/swift >>> keystone_swift_operator_roles = Member,admin,swiftoperator >>> keystone_tenant_user_admin = true >>> >>> [filter:catch_errors] >>> use = egg:swift#catch_errors >>> " >>> >>> " >>> cat /etc/swift/swift.conf >>> [DEFAULT] >>> >>> >>> [swift-hash] >>> # random unique string that can never change (DO NOT LOSE) >>> swift_hash_path_suffix = gluster >>> #3d60c9458bb77abe >>> >>> >>> # The swift-constraints section sets the basic constraints on data >>> # saved in the swift cluster. >>> >>> [swift-constraints] >>> >>> # max_file_size is the largest "normal" object that can be saved in >>> # the cluster. This is also the limit on the size of each segment of >>> # a "large" object when using the large object manifest support. >>> # This value is set in bytes. Setting it to lower than 1MiB will cause >>> # some tests to fail. It is STRONGLY recommended to leave this value at >>> # the default (5 * 2**30 + 2). >>> >>> # FIXME: Really? Gluster can handle a 2^64 sized file? And can the >>> fronting >>> # web service handle such a size? I think with UFO, we need to keep with >>> the >>> # default size from Swift and encourage users to research what size their >>> web >>> # services infrastructure can handle. >>> >>> max_file_size = 18446744073709551616 >>> >>> >>> # max_meta_name_length is the max number of bytes in the utf8 encoding >>> # of the name portion of a metadata header. 
>>> >>> #max_meta_name_length = 128 >>> >>> >>> # max_meta_value_length is the max number of bytes in the utf8 encoding >>> # of a metadata value >>> >>> #max_meta_value_length = 256 >>> >>> >>> # max_meta_count is the max number of metadata keys that can be stored >>> # on a single account, container, or object >>> >>> #max_meta_count = 90 >>> >>> >>> # max_meta_overall_size is the max number of bytes in the utf8 encoding >>> # of the metadata (keys + values) >>> >>> #max_meta_overall_size = 4096 >>> >>> >>> # max_object_name_length is the max number of bytes in the utf8 encoding >>> of >>> an >>> # object name: Gluster FS can handle much longer file names, but the >>> length >>> # between the slashes of the URL is handled below. Remember that most web >>> # clients can't handle anything greater than 2048, and those that do are >>> # rather clumsy. >>> >>> max_object_name_length = 2048 >>> >>> # max_object_name_component_length (GlusterFS) is the max number of bytes >>> in >>> # the utf8 encoding of an object name component (the part between the >>> # slashes); this is a limit imposed by the underlying file system (for >>> XFS >>> it >>> # is 255 bytes). >>> >>> max_object_name_component_length = 255 >>> >>> # container_listing_limit is the default (and max) number of items >>> # returned for a container listing request >>> >>> #container_listing_limit = 10000 >>> >>> >>> # account_listing_limit is the default (and max) number of items returned >>> # for an account listing request >>> >>> #account_listing_limit = 10000 >>> >>> >>> # max_account_name_length is the max number of bytes in the utf8 encoding >>> of >>> # an account name: Gluster FS Filename limit (XFS limit?), must be the >>> same >>> # size as max_object_name_component_length above. >>> >>> max_account_name_length = 255 >>> >>> >>> # max_container_name_length is the max number of bytes in the utf8 >>> encoding >>> # of a container name: Gluster FS Filename limit (XFS limit?), must be >>> the >>> same >>> # size as max_object_name_component_length above. >>> >>> max_container_name_length = 255 >>> >>> " >>> >>> >>> The volumes >>> " >>> gluster volume list >>> cindervol >>> unified-storage-vol >>> a07d2f39117c4e5abdeba722cf245828 >>> bd74a005f08541b9989e392a689be2fc >>> f6da0a8151ff43b7be10d961a20c94d6 >>> " >>> >>> if I run the command >>> " >>> gluster-swift-gen-builders unified-storage-vol >>> a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc >>> f6da0a8151ff43b7be10d961a20c94d6 >>> " >>> >>> because of a change in the script in this version as compaired to the >>> version I got from >>> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the >>> gluster-swift-gen-builders script only takes the first option and >>> ignores the rest. >>> >>> other than the location of the config files none of the changes Ive >>> made are functionally different than the ones mentioned in >>> >>> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ >>> >>> The result is that the first volume named "unified-storage-vol" winds >>> up being used for every thing regardless of the tenant, and users and >>> see and manage each others objects regardless of what tenant they are >>> members of. >>> through the swift command or via horizon. >>> >>> In a way this is a good thing for me it simplifies thing significantly >>> and would be fine if it just created a directory for each tenant and >>> only allow the user to access the individual directories, not the >>> whole gluster volume. 
>>> by the way seeing every thing includes the service tenants data so >>> unprivileged users can delete glance images without being a member of >>> the service group. >>> >>> >>> >>> >>> On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino >>> wrote: >>> >>> Well I'll give you the full details in the morning but simply I used the >>> stock cluster ring builder script that came with the 3.4 rpms and the old >>> version from 3.3 took the list of volumes and would add all of them the >>> version with 3.4 only takes the first one. >>> >>> Well I ran the script expecting the same behavior but instead they all >>> used >>> the first volume in the list. >>> >>> Now I knew from the docs I read that the per tenant directories in a >>> single >>> volume were one possible plan for 3.4 to deal with the scalding issue >>> with a >>> large number of tenants, so when I saw the difference in the script and >>> that >>> it worked I just assumed that this was done and I missed something. >>> >>> >>> >>> -- Sent from my HP Pre3 >>> >>> ________________________________ >>> On Sep 2, 2013 20:55, Ramana Raja wrote: >>> >>> Hi Paul, >>> >>> Currently, gluster-swift doesn't support the feature of multiple >>> accounts/tenants accessing the same volume. Each tenant still needs his >>> own >>> gluster volume. So I'm wondering how you were able to observe the >>> reported >>> behaviour. >>> >>> How did you prepare the ringfiles for the different tenants, which use >>> the >>> same gluster volume? Did you change the configuration of the servers? >>> Also, >>> how did you access the files that you mention? It'd be helpful if you >>> could >>> share the commands you used to perform these actions. >>> >>> Thanks, >>> >>> Ram >>> >>> >>> ----- Original Message ----- >>> From: "Vijay Bellur" >>> To: "Paul Robert Marino" >>> Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" >>> , "Chetan Risbud" >>> Sent: Monday, September 2, 2013 4:17:51 PM >>> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question >>> >>> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >>> >>> I have Gluster UFO installed as a back end for swift from here >>> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >>> with RDO 3 >>> >>> Its working well except for one thing. All of the tenants are seeing >>> one Gluster volume which is some what nice, especially when compared >>> to the old 3.3 behavior of creating one volume per tenant named after >>> the tenant ID number. >>> >>> The problem is I expected to see is sub directory created under the >>> volume root for each tenant but instead what in seeing is that all of >>> the tenants can see the root of the Gluster volume. The result is that >>> all of the tenants can access each others files and even delete them. >>> even scarier is that the tennants can see and delete each others >>> glance images and snapshots. >>> >>> Can any one suggest options to look at or documents to read to try to >>> figure out how to modify the behavior? >>> >>> Adding gluster swift developers who might be able to help. 
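Once per-tenant volumes are in place, a rough way to confirm that the isolation problem described above is gone, assuming keystone v2 auth and two test users in different tenants (tenant1:user1, tenant2:user2 and PASSWORD are placeholders):

"
# upload an object as a user in the first tenant
swift -V 2.0 -A http://keystone01.vip.my.net:5000/v2.0 \
    -U tenant1:user1 -K PASSWORD upload test-container /etc/hosts

# list containers as a user in the second tenant;
# with working isolation this should not show test-container
swift -V 2.0 -A http://keystone01.vip.my.net:5000/v2.0 \
    -U tenant2:user2 -K PASSWORD list
"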
>>> >>> -Vijay >>> >>> > From prmarino1 at gmail.com Thu Sep 19 21:23:01 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 19 Sep 2013 17:23:01 -0400 Subject: [rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> <523896CC.70105@redhat.com> Message-ID: Thank you every one for your help especially Louis I tested the RPM and it went well every thing is working now. I did have to use the tenant ID's as the volume names. I may submit an update to the documentation to clarify this for people So in other words the volume names have to match the output of " keystone tenant-list|grep -v + | \ grep -v -P '^\|\s+id\s+\|\s+name\s+\|\s+enabled\s+\|$' | \ grep -v -P '^\w+:' | awk '{print $2}' " I've created an updated copy of gluster-swift-gen-builders that grabs the value of mount_ip from /etc/swift/fs.conf and posted it on github. you should see a pull request on the site for the submission. of the change Thank you every one for your help On Tue, Sep 17, 2013 at 4:38 PM, Paul Robert Marino wrote: > Luis > Thanks for the timely response. > > On Tue, Sep 17, 2013 at 1:52 PM, Luis Pabon wrote: >> >> On 09/17/2013 11:13 AM, Paul Robert Marino wrote: >>> >>> Luis >>> well thats intresting because it was my impression that Gluster UFO >>> 3.4 was based on the Grizzly version of Swift. >> >> [LP] Sorry, the gluster-ufo RPM is Essex only. > > [PRM] The source of my confusion was here > http://www.gluster.org/community/documentation/index.php/Features34 > and here http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces/ > These pages on the gluster site should probably be updated to reflect > the changes. > > >> >> >>> Also I was previously unaware of this new rpm which doesnt seem to be >>> in a repo any where. >> >> [LP] gluster-swift project RPMs have been submitted to Fedora and are >> currently being reviewed. > > [PRM] Cool if they are in the EPEL testing repo Ill look for them > there because I would rather pull the properly EPEL signed RPMs if > they exist just to make node deployments easier. If not Ill ask some > of my friends offline if they can help expedite it. > >> >> >>> also there is a line in this new howto that is extreamly unclear >>> >>> " >>> /usr/bin/gluster-swift-gen-builders test >>> " >>> in place of "test" what should go there is it the tenant ID string, >>> the tenant name, or just a generic volume you can name whatever you >>> want? >>> in other words how should the Gluster volumes be named? >> >> [LP] We will clarify that in the quick start guide. Thank you for pointing >> it out. While we update the community site, please refer to the >> documentation available here http://goo.gl/bQFI8o for a usage guide. >> >> As for the tool, the format is: >> gluster-swift-gen-buildes [VOLUME] [VOLUME...] >> >> Where VOLUME is the name of the GlusterFS volume to use for object storage. >> For example >> if the following two GlusterFS volumes, volume1 and volume2, need to be >> accessed over Swift, >> then you can type the following: >> >> # gluster-swift-gen-builders volume1 volume2 > > [PRM] That part I understood however it doesn't answer the question exactly. > > Correct me if I'm wrong but looking over the code briefly it looks as > though the volume name needs to be the same as the tenant ID number > like it did with Gluster UFO 3.3. 
> so for example > if I do a " keystone tenant-list" and a see tenant1 with an id of > "f6da0a8151ff43b7be10d961a20c94d6" then I would need to create a > volume named f6da0a8151ff43b7be10d961a20c94d6 > > If I can name the volumes whatever I want or give them the same name > as the tenant that would be great because it makes it easier for other > SA's who are not directly working with OpenStack but may need to mount > the volumes to comprehend, but its not urgently needed. > > One thing I was glad to see is that with Gluster UFO 3.3 I had to add > mount points to /etc/fstab for each volume and manually create the > directories for the mount points this looks to have been corrected in > Gluster-Swift. > >> >> For more information please read: http://goo.gl/gd8LkW >> >> Let us know if you have any more questions or comments. > > [PRM] I may fork the Github repo and add some changes that may be > beneficial so they can be reviewed and possibly merged. > for example it would be nice if the gluster-swift-gen-buildes script > used the value of the mount_ip field in /etc/swift/fs.conf instead of > 127.0.0.1 if its defined. > also I might make a more robust version that allows create, add, > remove, and list options. > > > Ill do testing tomorrow and let everyone know how it goes. > > >> >> - Luis >> >>> >>> >>> On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: >>>> >>>> First thing I can see is that you have Essex based gluster-ufo-* which >>>> has >>>> been replaced by the gluster-swift project. We are currently in progress >>>> of >>>> replacing the gluster-ufo-* with RPMs from the gluster-swift project in >>>> Fedora. >>>> >>>> Please checkout the following quickstart guide which show how to download >>>> the Grizzly version of gluster-swift: >>>> >>>> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md >>>> . >>>> >>>> For more information please visit: https://launchpad.net/gluster-swift >>>> >>>> - Luis >>>> >>>> >>>> On 09/16/2013 05:02 PM, Paul Robert Marino wrote: >>>> >>>> Sorry for the delay on reporting the details. I got temporarily pulled >>>> off the project and dedicated to a different project which was >>>> considered higher priority by my employer. I'm just getting back to >>>> doing my normal work today. 
>>>> >>>> first here are the rpms I have installed >>>> " >>>> rpm -qa |grep -P -i '(gluster|swift)' >>>> glusterfs-libs-3.4.0-8.el6.x86_64 >>>> glusterfs-server-3.4.0-8.el6.x86_64 >>>> openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch >>>> openstack-swift-proxy-1.8.0-2.el6.noarch >>>> glusterfs-3.4.0-8.el6.x86_64 >>>> glusterfs-cli-3.4.0-8.el6.x86_64 >>>> glusterfs-geo-replication-3.4.0-8.el6.x86_64 >>>> glusterfs-api-3.4.0-8.el6.x86_64 >>>> openstack-swift-1.8.0-2.el6.noarch >>>> openstack-swift-container-1.8.0-2.el6.noarch >>>> openstack-swift-object-1.8.0-2.el6.noarch >>>> glusterfs-fuse-3.4.0-8.el6.x86_64 >>>> glusterfs-rdma-3.4.0-8.el6.x86_64 >>>> openstack-swift-account-1.8.0-2.el6.noarch >>>> glusterfs-ufo-3.4.0-8.el6.noarch >>>> glusterfs-vim-3.2.7-1.el6.x86_64 >>>> python-swiftclient-1.4.0-1.el6.noarch >>>> >>>> here are some key config files note I've changed the passwords I'm >>>> using and hostnames >>>> " >>>> cat /etc/swift/account-server.conf >>>> [DEFAULT] >>>> mount_check = true >>>> bind_port = 6012 >>>> user = root >>>> log_facility = LOG_LOCAL2 >>>> devices = /swift/tenants/ >>>> >>>> [pipeline:main] >>>> pipeline = account-server >>>> >>>> [app:account-server] >>>> use = egg:gluster_swift_ufo#account >>>> log_name = account-server >>>> log_level = DEBUG >>>> log_requests = true >>>> >>>> [account-replicator] >>>> vm_test_mode = yes >>>> >>>> [account-auditor] >>>> >>>> [account-reaper] >>>> >>>> " >>>> >>>> " >>>> cat /etc/swift/container-server.conf >>>> [DEFAULT] >>>> devices = /swift/tenants/ >>>> mount_check = true >>>> bind_port = 6011 >>>> user = root >>>> log_facility = LOG_LOCAL2 >>>> >>>> [pipeline:main] >>>> pipeline = container-server >>>> >>>> [app:container-server] >>>> use = egg:gluster_swift_ufo#container >>>> >>>> [container-replicator] >>>> vm_test_mode = yes >>>> >>>> [container-updater] >>>> >>>> [container-auditor] >>>> >>>> [container-sync] >>>> " >>>> >>>> " >>>> cat /etc/swift/object-server.conf >>>> [DEFAULT] >>>> mount_check = true >>>> bind_port = 6010 >>>> user = root >>>> log_facility = LOG_LOCAL2 >>>> devices = /swift/tenants/ >>>> >>>> [pipeline:main] >>>> pipeline = object-server >>>> >>>> [app:object-server] >>>> use = egg:gluster_swift_ufo#object >>>> >>>> [object-replicator] >>>> vm_test_mode = yes >>>> >>>> [object-updater] >>>> >>>> [object-auditor] >>>> " >>>> >>>> " >>>> cat /etc/swift/proxy-server.conf >>>> [DEFAULT] >>>> bind_port = 8080 >>>> user = root >>>> log_facility = LOG_LOCAL1 >>>> log_name = swift >>>> log_level = DEBUG >>>> log_headers = True >>>> >>>> [pipeline:main] >>>> pipeline = healthcheck cache authtoken keystone proxy-server >>>> >>>> [app:proxy-server] >>>> use = egg:gluster_swift_ufo#proxy >>>> allow_account_management = true >>>> account_autocreate = true >>>> >>>> [filter:tempauth] >>>> use = egg:swift#tempauth >>>> # Here you need to add users explicitly. See the OpenStack Swift >>>> Deployment >>>> # Guide for more information. The user and user64 directives take the >>>> # following form: >>>> # user__ = [group] [group] [...] >>>> [storage_url] >>>> # user64__ = [group] [group] >>>> [...] [storage_url] >>>> # Where you use user64 for accounts and/or usernames that include >>>> underscores. >>>> # >>>> # NOTE (and WARNING): The account name must match the device name >>>> specified >>>> # when generating the account, container, and object build rings. >>>> # >>>> # E.g. 
>>>> # user_ufo0_admin = abc123 .admin >>>> >>>> [filter:healthcheck] >>>> use = egg:swift#healthcheck >>>> >>>> [filter:cache] >>>> use = egg:swift#memcache >>>> >>>> >>>> [filter:keystone] >>>> use = egg:swift#keystoneauth >>>> #paste.filter_factory = keystone.middleware.swift_auth:filter_factory >>>> operator_roles = Member,admin,swiftoperator >>>> >>>> >>>> [filter:authtoken] >>>> paste.filter_factory = keystone.middleware.auth_token:filter_factory >>>> auth_host = keystone01.vip.my.net >>>> auth_port = 35357 >>>> auth_protocol = http >>>> admin_user = swift >>>> admin_password = PASSWORD >>>> admin_tenant_name = service >>>> signing_dir = /var/cache/swift >>>> service_port = 5000 >>>> service_host = keystone01.vip.my.net >>>> >>>> [filter:swiftauth] >>>> use = egg:keystone#swiftauth >>>> auth_host = keystone01.vip.my.net >>>> auth_port = 35357 >>>> auth_protocol = http >>>> keystone_url = https://keystone01.vip.my.net:5000/v2.0 >>>> admin_user = swift >>>> admin_password = PASSWORD >>>> admin_tenant_name = service >>>> signing_dir = /var/cache/swift >>>> keystone_swift_operator_roles = Member,admin,swiftoperator >>>> keystone_tenant_user_admin = true >>>> >>>> [filter:catch_errors] >>>> use = egg:swift#catch_errors >>>> " >>>> >>>> " >>>> cat /etc/swift/swift.conf >>>> [DEFAULT] >>>> >>>> >>>> [swift-hash] >>>> # random unique string that can never change (DO NOT LOSE) >>>> swift_hash_path_suffix = gluster >>>> #3d60c9458bb77abe >>>> >>>> >>>> # The swift-constraints section sets the basic constraints on data >>>> # saved in the swift cluster. >>>> >>>> [swift-constraints] >>>> >>>> # max_file_size is the largest "normal" object that can be saved in >>>> # the cluster. This is also the limit on the size of each segment of >>>> # a "large" object when using the large object manifest support. >>>> # This value is set in bytes. Setting it to lower than 1MiB will cause >>>> # some tests to fail. It is STRONGLY recommended to leave this value at >>>> # the default (5 * 2**30 + 2). >>>> >>>> # FIXME: Really? Gluster can handle a 2^64 sized file? And can the >>>> fronting >>>> # web service handle such a size? I think with UFO, we need to keep with >>>> the >>>> # default size from Swift and encourage users to research what size their >>>> web >>>> # services infrastructure can handle. >>>> >>>> max_file_size = 18446744073709551616 >>>> >>>> >>>> # max_meta_name_length is the max number of bytes in the utf8 encoding >>>> # of the name portion of a metadata header. >>>> >>>> #max_meta_name_length = 128 >>>> >>>> >>>> # max_meta_value_length is the max number of bytes in the utf8 encoding >>>> # of a metadata value >>>> >>>> #max_meta_value_length = 256 >>>> >>>> >>>> # max_meta_count is the max number of metadata keys that can be stored >>>> # on a single account, container, or object >>>> >>>> #max_meta_count = 90 >>>> >>>> >>>> # max_meta_overall_size is the max number of bytes in the utf8 encoding >>>> # of the metadata (keys + values) >>>> >>>> #max_meta_overall_size = 4096 >>>> >>>> >>>> # max_object_name_length is the max number of bytes in the utf8 encoding >>>> of >>>> an >>>> # object name: Gluster FS can handle much longer file names, but the >>>> length >>>> # between the slashes of the URL is handled below. Remember that most web >>>> # clients can't handle anything greater than 2048, and those that do are >>>> # rather clumsy. 
>>>> >>>> max_object_name_length = 2048 >>>> >>>> # max_object_name_component_length (GlusterFS) is the max number of bytes >>>> in >>>> # the utf8 encoding of an object name component (the part between the >>>> # slashes); this is a limit imposed by the underlying file system (for >>>> XFS >>>> it >>>> # is 255 bytes). >>>> >>>> max_object_name_component_length = 255 >>>> >>>> # container_listing_limit is the default (and max) number of items >>>> # returned for a container listing request >>>> >>>> #container_listing_limit = 10000 >>>> >>>> >>>> # account_listing_limit is the default (and max) number of items returned >>>> # for an account listing request >>>> >>>> #account_listing_limit = 10000 >>>> >>>> >>>> # max_account_name_length is the max number of bytes in the utf8 encoding >>>> of >>>> # an account name: Gluster FS Filename limit (XFS limit?), must be the >>>> same >>>> # size as max_object_name_component_length above. >>>> >>>> max_account_name_length = 255 >>>> >>>> >>>> # max_container_name_length is the max number of bytes in the utf8 >>>> encoding >>>> # of a container name: Gluster FS Filename limit (XFS limit?), must be >>>> the >>>> same >>>> # size as max_object_name_component_length above. >>>> >>>> max_container_name_length = 255 >>>> >>>> " >>>> >>>> >>>> The volumes >>>> " >>>> gluster volume list >>>> cindervol >>>> unified-storage-vol >>>> a07d2f39117c4e5abdeba722cf245828 >>>> bd74a005f08541b9989e392a689be2fc >>>> f6da0a8151ff43b7be10d961a20c94d6 >>>> " >>>> >>>> if I run the command >>>> " >>>> gluster-swift-gen-builders unified-storage-vol >>>> a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc >>>> f6da0a8151ff43b7be10d961a20c94d6 >>>> " >>>> >>>> because of a change in the script in this version as compaired to the >>>> version I got from >>>> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the >>>> gluster-swift-gen-builders script only takes the first option and >>>> ignores the rest. >>>> >>>> other than the location of the config files none of the changes Ive >>>> made are functionally different than the ones mentioned in >>>> >>>> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ >>>> >>>> The result is that the first volume named "unified-storage-vol" winds >>>> up being used for every thing regardless of the tenant, and users and >>>> see and manage each others objects regardless of what tenant they are >>>> members of. >>>> through the swift command or via horizon. >>>> >>>> In a way this is a good thing for me it simplifies thing significantly >>>> and would be fine if it just created a directory for each tenant and >>>> only allow the user to access the individual directories, not the >>>> whole gluster volume. >>>> by the way seeing every thing includes the service tenants data so >>>> unprivileged users can delete glance images without being a member of >>>> the service group. >>>> >>>> >>>> >>>> >>>> On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino >>>> wrote: >>>> >>>> Well I'll give you the full details in the morning but simply I used the >>>> stock cluster ring builder script that came with the 3.4 rpms and the old >>>> version from 3.3 took the list of volumes and would add all of them the >>>> version with 3.4 only takes the first one. >>>> >>>> Well I ran the script expecting the same behavior but instead they all >>>> used >>>> the first volume in the list. 
>>>> >>>> Now I knew from the docs I read that the per tenant directories in a >>>> single >>>> volume were one possible plan for 3.4 to deal with the scalding issue >>>> with a >>>> large number of tenants, so when I saw the difference in the script and >>>> that >>>> it worked I just assumed that this was done and I missed something. >>>> >>>> >>>> >>>> -- Sent from my HP Pre3 >>>> >>>> ________________________________ >>>> On Sep 2, 2013 20:55, Ramana Raja wrote: >>>> >>>> Hi Paul, >>>> >>>> Currently, gluster-swift doesn't support the feature of multiple >>>> accounts/tenants accessing the same volume. Each tenant still needs his >>>> own >>>> gluster volume. So I'm wondering how you were able to observe the >>>> reported >>>> behaviour. >>>> >>>> How did you prepare the ringfiles for the different tenants, which use >>>> the >>>> same gluster volume? Did you change the configuration of the servers? >>>> Also, >>>> how did you access the files that you mention? It'd be helpful if you >>>> could >>>> share the commands you used to perform these actions. >>>> >>>> Thanks, >>>> >>>> Ram >>>> >>>> >>>> ----- Original Message ----- >>>> From: "Vijay Bellur" >>>> To: "Paul Robert Marino" >>>> Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" >>>> , "Chetan Risbud" >>>> Sent: Monday, September 2, 2013 4:17:51 PM >>>> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question >>>> >>>> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >>>> >>>> I have Gluster UFO installed as a back end for swift from here >>>> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >>>> with RDO 3 >>>> >>>> Its working well except for one thing. All of the tenants are seeing >>>> one Gluster volume which is some what nice, especially when compared >>>> to the old 3.3 behavior of creating one volume per tenant named after >>>> the tenant ID number. >>>> >>>> The problem is I expected to see is sub directory created under the >>>> volume root for each tenant but instead what in seeing is that all of >>>> the tenants can see the root of the Gluster volume. The result is that >>>> all of the tenants can access each others files and even delete them. >>>> even scarier is that the tennants can see and delete each others >>>> glance images and snapshots. >>>> >>>> Can any one suggest options to look at or documents to read to try to >>>> figure out how to modify the behavior? >>>> >>>> Adding gluster swift developers who might be able to help. >>>> >>>> -Vijay >>>> >>>> >> From draddatz at sgi.com Fri Sep 20 15:47:17 2013 From: draddatz at sgi.com (David Raddatz) Date: Fri, 20 Sep 2013 15:47:17 +0000 Subject: [rhos-list] removing OpenStack script update Message-ID: <18CF1869BE7AB04DB1E4CC93FD43702A1B72D4FA@P-EXMB2-DC21.corp.sgi.com> Hello, Sharing this in case someone else needs it... I was trying to use the script found in "A.2. Removing only OpenStack specific application data and packages" section of the getting started guide to remove OpenStack. It was failing on removing the cinder-volumes command (vgremove -f cinder-volumes). I found out that the tgtd service was still "using" the volume so I had to do the following: Edit /etc/tgt/targets.conf Comment out or remove the line with "include /etc/cinder/volumes/*" service tgtd restart Then I ran the vgremove command by hand and it worked. 
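The same workaround as a short root shell sequence, assuming the stock /etc/tgt/targets.conf laid down by packstack and the default cinder-volumes volume group (the sed pattern is a sketch; commenting the include line out by hand works just as well):

"
# stop tgtd from holding the cinder logical volumes open
sed -i 's|^include /etc/cinder/volumes/\*|#&|' /etc/tgt/targets.conf
service tgtd restart

# the volume group can then be removed
vgremove -f cinder-volumes
"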
Dave -------------------------------------------- Dave Raddatz Big Data Solutions and Performance Austin, TX (512) 249-0210 draddatz at sgi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpabon at redhat.com Fri Sep 20 20:11:41 2013 From: lpabon at redhat.com (Luis Pabon) Date: Fri, 20 Sep 2013 16:11:41 -0400 Subject: [rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> <523896CC.70105@redhat.com> Message-ID: <523CABFD.2080607@redhat.com> This is great! Thank you. The only issue is that we use github as a public repo and we do not use github for our normal development workflow. Do you mind resubmitting the change using the development workflow as described here:https://github.com/gluster/gluster-swift/blob/master/doc/markdown/dev_guide.md. More information can be found at https://launchpad.net/gluster-swift - Luis On 09/19/2013 05:23 PM, Paul Robert Marino wrote: > Thank you every one for your help especially Louis > > I tested the RPM and it went well every thing is working now. > > I did have to use the tenant ID's as the volume names. I may submit an > update to the documentation to clarify this for people > > So in other words the volume names have to match the output of > > " > keystone tenant-list|grep -v + | \ > grep -v -P '^\|\s+id\s+\|\s+name\s+\|\s+enabled\s+\|$' | \ > grep -v -P '^\w+:' | awk '{print $2}' > " > > > I've created an updated copy of gluster-swift-gen-builders that grabs > the value of mount_ip from /etc/swift/fs.conf and posted it on github. > you should see a pull request on the site for the submission. of the > change > > Thank you every one for your help > > On Tue, Sep 17, 2013 at 4:38 PM, Paul Robert Marino wrote: >> Luis >> Thanks for the timely response. >> >> On Tue, Sep 17, 2013 at 1:52 PM, Luis Pabon wrote: >>> On 09/17/2013 11:13 AM, Paul Robert Marino wrote: >>>> Luis >>>> well thats intresting because it was my impression that Gluster UFO >>>> 3.4 was based on the Grizzly version of Swift. >>> [LP] Sorry, the gluster-ufo RPM is Essex only. >> [PRM] The source of my confusion was here >> http://www.gluster.org/community/documentation/index.php/Features34 >> and here http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces/ >> These pages on the gluster site should probably be updated to reflect >> the changes. >> >> >>> >>>> Also I was previously unaware of this new rpm which doesnt seem to be >>>> in a repo any where. >>> [LP] gluster-swift project RPMs have been submitted to Fedora and are >>> currently being reviewed. >> [PRM] Cool if they are in the EPEL testing repo Ill look for them >> there because I would rather pull the properly EPEL signed RPMs if >> they exist just to make node deployments easier. If not Ill ask some >> of my friends offline if they can help expedite it. >> >>> >>>> also there is a line in this new howto that is extreamly unclear >>>> >>>> " >>>> /usr/bin/gluster-swift-gen-builders test >>>> " >>>> in place of "test" what should go there is it the tenant ID string, >>>> the tenant name, or just a generic volume you can name whatever you >>>> want? >>>> in other words how should the Gluster volumes be named? >>> [LP] We will clarify that in the quick start guide. Thank you for pointing >>> it out. 
While we update the community site, please refer to the >>> documentation available here http://goo.gl/bQFI8o for a usage guide. >>> >>> As for the tool, the format is: >>> gluster-swift-gen-buildes [VOLUME] [VOLUME...] >>> >>> Where VOLUME is the name of the GlusterFS volume to use for object storage. >>> For example >>> if the following two GlusterFS volumes, volume1 and volume2, need to be >>> accessed over Swift, >>> then you can type the following: >>> >>> # gluster-swift-gen-builders volume1 volume2 >> [PRM] That part I understood however it doesn't answer the question exactly. >> >> Correct me if I'm wrong but looking over the code briefly it looks as >> though the volume name needs to be the same as the tenant ID number >> like it did with Gluster UFO 3.3. >> so for example >> if I do a " keystone tenant-list" and a see tenant1 with an id of >> "f6da0a8151ff43b7be10d961a20c94d6" then I would need to create a >> volume named f6da0a8151ff43b7be10d961a20c94d6 >> >> If I can name the volumes whatever I want or give them the same name >> as the tenant that would be great because it makes it easier for other >> SA's who are not directly working with OpenStack but may need to mount >> the volumes to comprehend, but its not urgently needed. >> >> One thing I was glad to see is that with Gluster UFO 3.3 I had to add >> mount points to /etc/fstab for each volume and manually create the >> directories for the mount points this looks to have been corrected in >> Gluster-Swift. >> >>> For more information please read: http://goo.gl/gd8LkW >>> >>> Let us know if you have any more questions or comments. >> [PRM] I may fork the Github repo and add some changes that may be >> beneficial so they can be reviewed and possibly merged. >> for example it would be nice if the gluster-swift-gen-buildes script >> used the value of the mount_ip field in /etc/swift/fs.conf instead of >> 127.0.0.1 if its defined. >> also I might make a more robust version that allows create, add, >> remove, and list options. >> >> >> Ill do testing tomorrow and let everyone know how it goes. >> >> >>> - Luis >>> >>>> >>>> On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: >>>>> First thing I can see is that you have Essex based gluster-ufo-* which >>>>> has >>>>> been replaced by the gluster-swift project. We are currently in progress >>>>> of >>>>> replacing the gluster-ufo-* with RPMs from the gluster-swift project in >>>>> Fedora. >>>>> >>>>> Please checkout the following quickstart guide which show how to download >>>>> the Grizzly version of gluster-swift: >>>>> >>>>> https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md >>>>> . >>>>> >>>>> For more information please visit: https://launchpad.net/gluster-swift >>>>> >>>>> - Luis >>>>> >>>>> >>>>> On 09/16/2013 05:02 PM, Paul Robert Marino wrote: >>>>> >>>>> Sorry for the delay on reporting the details. I got temporarily pulled >>>>> off the project and dedicated to a different project which was >>>>> considered higher priority by my employer. I'm just getting back to >>>>> doing my normal work today. 
>>>>> >>>>> first here are the rpms I have installed >>>>> " >>>>> rpm -qa |grep -P -i '(gluster|swift)' >>>>> glusterfs-libs-3.4.0-8.el6.x86_64 >>>>> glusterfs-server-3.4.0-8.el6.x86_64 >>>>> openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch >>>>> openstack-swift-proxy-1.8.0-2.el6.noarch >>>>> glusterfs-3.4.0-8.el6.x86_64 >>>>> glusterfs-cli-3.4.0-8.el6.x86_64 >>>>> glusterfs-geo-replication-3.4.0-8.el6.x86_64 >>>>> glusterfs-api-3.4.0-8.el6.x86_64 >>>>> openstack-swift-1.8.0-2.el6.noarch >>>>> openstack-swift-container-1.8.0-2.el6.noarch >>>>> openstack-swift-object-1.8.0-2.el6.noarch >>>>> glusterfs-fuse-3.4.0-8.el6.x86_64 >>>>> glusterfs-rdma-3.4.0-8.el6.x86_64 >>>>> openstack-swift-account-1.8.0-2.el6.noarch >>>>> glusterfs-ufo-3.4.0-8.el6.noarch >>>>> glusterfs-vim-3.2.7-1.el6.x86_64 >>>>> python-swiftclient-1.4.0-1.el6.noarch >>>>> >>>>> here are some key config files note I've changed the passwords I'm >>>>> using and hostnames >>>>> " >>>>> cat /etc/swift/account-server.conf >>>>> [DEFAULT] >>>>> mount_check = true >>>>> bind_port = 6012 >>>>> user = root >>>>> log_facility = LOG_LOCAL2 >>>>> devices = /swift/tenants/ >>>>> >>>>> [pipeline:main] >>>>> pipeline = account-server >>>>> >>>>> [app:account-server] >>>>> use = egg:gluster_swift_ufo#account >>>>> log_name = account-server >>>>> log_level = DEBUG >>>>> log_requests = true >>>>> >>>>> [account-replicator] >>>>> vm_test_mode = yes >>>>> >>>>> [account-auditor] >>>>> >>>>> [account-reaper] >>>>> >>>>> " >>>>> >>>>> " >>>>> cat /etc/swift/container-server.conf >>>>> [DEFAULT] >>>>> devices = /swift/tenants/ >>>>> mount_check = true >>>>> bind_port = 6011 >>>>> user = root >>>>> log_facility = LOG_LOCAL2 >>>>> >>>>> [pipeline:main] >>>>> pipeline = container-server >>>>> >>>>> [app:container-server] >>>>> use = egg:gluster_swift_ufo#container >>>>> >>>>> [container-replicator] >>>>> vm_test_mode = yes >>>>> >>>>> [container-updater] >>>>> >>>>> [container-auditor] >>>>> >>>>> [container-sync] >>>>> " >>>>> >>>>> " >>>>> cat /etc/swift/object-server.conf >>>>> [DEFAULT] >>>>> mount_check = true >>>>> bind_port = 6010 >>>>> user = root >>>>> log_facility = LOG_LOCAL2 >>>>> devices = /swift/tenants/ >>>>> >>>>> [pipeline:main] >>>>> pipeline = object-server >>>>> >>>>> [app:object-server] >>>>> use = egg:gluster_swift_ufo#object >>>>> >>>>> [object-replicator] >>>>> vm_test_mode = yes >>>>> >>>>> [object-updater] >>>>> >>>>> [object-auditor] >>>>> " >>>>> >>>>> " >>>>> cat /etc/swift/proxy-server.conf >>>>> [DEFAULT] >>>>> bind_port = 8080 >>>>> user = root >>>>> log_facility = LOG_LOCAL1 >>>>> log_name = swift >>>>> log_level = DEBUG >>>>> log_headers = True >>>>> >>>>> [pipeline:main] >>>>> pipeline = healthcheck cache authtoken keystone proxy-server >>>>> >>>>> [app:proxy-server] >>>>> use = egg:gluster_swift_ufo#proxy >>>>> allow_account_management = true >>>>> account_autocreate = true >>>>> >>>>> [filter:tempauth] >>>>> use = egg:swift#tempauth >>>>> # Here you need to add users explicitly. See the OpenStack Swift >>>>> Deployment >>>>> # Guide for more information. The user and user64 directives take the >>>>> # following form: >>>>> # user__ = [group] [group] [...] >>>>> [storage_url] >>>>> # user64__ = [group] [group] >>>>> [...] [storage_url] >>>>> # Where you use user64 for accounts and/or usernames that include >>>>> underscores. >>>>> # >>>>> # NOTE (and WARNING): The account name must match the device name >>>>> specified >>>>> # when generating the account, container, and object build rings. 
>>>>> # >>>>> # E.g. >>>>> # user_ufo0_admin = abc123 .admin >>>>> >>>>> [filter:healthcheck] >>>>> use = egg:swift#healthcheck >>>>> >>>>> [filter:cache] >>>>> use = egg:swift#memcache >>>>> >>>>> >>>>> [filter:keystone] >>>>> use = egg:swift#keystoneauth >>>>> #paste.filter_factory = keystone.middleware.swift_auth:filter_factory >>>>> operator_roles = Member,admin,swiftoperator >>>>> >>>>> >>>>> [filter:authtoken] >>>>> paste.filter_factory = keystone.middleware.auth_token:filter_factory >>>>> auth_host = keystone01.vip.my.net >>>>> auth_port = 35357 >>>>> auth_protocol = http >>>>> admin_user = swift >>>>> admin_password = PASSWORD >>>>> admin_tenant_name = service >>>>> signing_dir = /var/cache/swift >>>>> service_port = 5000 >>>>> service_host = keystone01.vip.my.net >>>>> >>>>> [filter:swiftauth] >>>>> use = egg:keystone#swiftauth >>>>> auth_host = keystone01.vip.my.net >>>>> auth_port = 35357 >>>>> auth_protocol = http >>>>> keystone_url = https://keystone01.vip.my.net:5000/v2.0 >>>>> admin_user = swift >>>>> admin_password = PASSWORD >>>>> admin_tenant_name = service >>>>> signing_dir = /var/cache/swift >>>>> keystone_swift_operator_roles = Member,admin,swiftoperator >>>>> keystone_tenant_user_admin = true >>>>> >>>>> [filter:catch_errors] >>>>> use = egg:swift#catch_errors >>>>> " >>>>> >>>>> " >>>>> cat /etc/swift/swift.conf >>>>> [DEFAULT] >>>>> >>>>> >>>>> [swift-hash] >>>>> # random unique string that can never change (DO NOT LOSE) >>>>> swift_hash_path_suffix = gluster >>>>> #3d60c9458bb77abe >>>>> >>>>> >>>>> # The swift-constraints section sets the basic constraints on data >>>>> # saved in the swift cluster. >>>>> >>>>> [swift-constraints] >>>>> >>>>> # max_file_size is the largest "normal" object that can be saved in >>>>> # the cluster. This is also the limit on the size of each segment of >>>>> # a "large" object when using the large object manifest support. >>>>> # This value is set in bytes. Setting it to lower than 1MiB will cause >>>>> # some tests to fail. It is STRONGLY recommended to leave this value at >>>>> # the default (5 * 2**30 + 2). >>>>> >>>>> # FIXME: Really? Gluster can handle a 2^64 sized file? And can the >>>>> fronting >>>>> # web service handle such a size? I think with UFO, we need to keep with >>>>> the >>>>> # default size from Swift and encourage users to research what size their >>>>> web >>>>> # services infrastructure can handle. >>>>> >>>>> max_file_size = 18446744073709551616 >>>>> >>>>> >>>>> # max_meta_name_length is the max number of bytes in the utf8 encoding >>>>> # of the name portion of a metadata header. >>>>> >>>>> #max_meta_name_length = 128 >>>>> >>>>> >>>>> # max_meta_value_length is the max number of bytes in the utf8 encoding >>>>> # of a metadata value >>>>> >>>>> #max_meta_value_length = 256 >>>>> >>>>> >>>>> # max_meta_count is the max number of metadata keys that can be stored >>>>> # on a single account, container, or object >>>>> >>>>> #max_meta_count = 90 >>>>> >>>>> >>>>> # max_meta_overall_size is the max number of bytes in the utf8 encoding >>>>> # of the metadata (keys + values) >>>>> >>>>> #max_meta_overall_size = 4096 >>>>> >>>>> >>>>> # max_object_name_length is the max number of bytes in the utf8 encoding >>>>> of >>>>> an >>>>> # object name: Gluster FS can handle much longer file names, but the >>>>> length >>>>> # between the slashes of the URL is handled below. Remember that most web >>>>> # clients can't handle anything greater than 2048, and those that do are >>>>> # rather clumsy. 
>>>>> >>>>> max_object_name_length = 2048 >>>>> >>>>> # max_object_name_component_length (GlusterFS) is the max number of bytes >>>>> in >>>>> # the utf8 encoding of an object name component (the part between the >>>>> # slashes); this is a limit imposed by the underlying file system (for >>>>> XFS >>>>> it >>>>> # is 255 bytes). >>>>> >>>>> max_object_name_component_length = 255 >>>>> >>>>> # container_listing_limit is the default (and max) number of items >>>>> # returned for a container listing request >>>>> >>>>> #container_listing_limit = 10000 >>>>> >>>>> >>>>> # account_listing_limit is the default (and max) number of items returned >>>>> # for an account listing request >>>>> >>>>> #account_listing_limit = 10000 >>>>> >>>>> >>>>> # max_account_name_length is the max number of bytes in the utf8 encoding >>>>> of >>>>> # an account name: Gluster FS Filename limit (XFS limit?), must be the >>>>> same >>>>> # size as max_object_name_component_length above. >>>>> >>>>> max_account_name_length = 255 >>>>> >>>>> >>>>> # max_container_name_length is the max number of bytes in the utf8 >>>>> encoding >>>>> # of a container name: Gluster FS Filename limit (XFS limit?), must be >>>>> the >>>>> same >>>>> # size as max_object_name_component_length above. >>>>> >>>>> max_container_name_length = 255 >>>>> >>>>> " >>>>> >>>>> >>>>> The volumes >>>>> " >>>>> gluster volume list >>>>> cindervol >>>>> unified-storage-vol >>>>> a07d2f39117c4e5abdeba722cf245828 >>>>> bd74a005f08541b9989e392a689be2fc >>>>> f6da0a8151ff43b7be10d961a20c94d6 >>>>> " >>>>> >>>>> if I run the command >>>>> " >>>>> gluster-swift-gen-builders unified-storage-vol >>>>> a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc >>>>> f6da0a8151ff43b7be10d961a20c94d6 >>>>> " >>>>> >>>>> because of a change in the script in this version as compaired to the >>>>> version I got from >>>>> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the >>>>> gluster-swift-gen-builders script only takes the first option and >>>>> ignores the rest. >>>>> >>>>> other than the location of the config files none of the changes Ive >>>>> made are functionally different than the ones mentioned in >>>>> >>>>> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ >>>>> >>>>> The result is that the first volume named "unified-storage-vol" winds >>>>> up being used for every thing regardless of the tenant, and users and >>>>> see and manage each others objects regardless of what tenant they are >>>>> members of. >>>>> through the swift command or via horizon. >>>>> >>>>> In a way this is a good thing for me it simplifies thing significantly >>>>> and would be fine if it just created a directory for each tenant and >>>>> only allow the user to access the individual directories, not the >>>>> whole gluster volume. >>>>> by the way seeing every thing includes the service tenants data so >>>>> unprivileged users can delete glance images without being a member of >>>>> the service group. >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino >>>>> wrote: >>>>> >>>>> Well I'll give you the full details in the morning but simply I used the >>>>> stock cluster ring builder script that came with the 3.4 rpms and the old >>>>> version from 3.3 took the list of volumes and would add all of them the >>>>> version with 3.4 only takes the first one. >>>>> >>>>> Well I ran the script expecting the same behavior but instead they all >>>>> used >>>>> the first volume in the list. 
>>>>> >>>>> Now I knew from the docs I read that the per tenant directories in a >>>>> single >>>>> volume were one possible plan for 3.4 to deal with the scalding issue >>>>> with a >>>>> large number of tenants, so when I saw the difference in the script and >>>>> that >>>>> it worked I just assumed that this was done and I missed something. >>>>> >>>>> >>>>> >>>>> -- Sent from my HP Pre3 >>>>> >>>>> ________________________________ >>>>> On Sep 2, 2013 20:55, Ramana Raja wrote: >>>>> >>>>> Hi Paul, >>>>> >>>>> Currently, gluster-swift doesn't support the feature of multiple >>>>> accounts/tenants accessing the same volume. Each tenant still needs his >>>>> own >>>>> gluster volume. So I'm wondering how you were able to observe the >>>>> reported >>>>> behaviour. >>>>> >>>>> How did you prepare the ringfiles for the different tenants, which use >>>>> the >>>>> same gluster volume? Did you change the configuration of the servers? >>>>> Also, >>>>> how did you access the files that you mention? It'd be helpful if you >>>>> could >>>>> share the commands you used to perform these actions. >>>>> >>>>> Thanks, >>>>> >>>>> Ram >>>>> >>>>> >>>>> ----- Original Message ----- >>>>> From: "Vijay Bellur" >>>>> To: "Paul Robert Marino" >>>>> Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" >>>>> , "Chetan Risbud" >>>>> Sent: Monday, September 2, 2013 4:17:51 PM >>>>> Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question >>>>> >>>>> On 09/02/2013 01:39 AM, Paul Robert Marino wrote: >>>>> >>>>> I have Gluster UFO installed as a back end for swift from here >>>>> http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ >>>>> with RDO 3 >>>>> >>>>> Its working well except for one thing. All of the tenants are seeing >>>>> one Gluster volume which is some what nice, especially when compared >>>>> to the old 3.3 behavior of creating one volume per tenant named after >>>>> the tenant ID number. >>>>> >>>>> The problem is I expected to see is sub directory created under the >>>>> volume root for each tenant but instead what in seeing is that all of >>>>> the tenants can see the root of the Gluster volume. The result is that >>>>> all of the tenants can access each others files and even delete them. >>>>> even scarier is that the tennants can see and delete each others >>>>> glance images and snapshots. >>>>> >>>>> Can any one suggest options to look at or documents to read to try to >>>>> figure out how to modify the behavior? >>>>> >>>>> Adding gluster swift developers who might be able to help. >>>>> >>>>> -Vijay >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Mon Sep 23 14:59:08 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Mon, 23 Sep 2013 10:59:08 -0400 Subject: [rhos-list] [gluster-swift] Gluster UFO 3.4 swift Multi tenant question In-Reply-To: <523CABFD.2080607@redhat.com> References: <31398813.7155780.1378169708207.JavaMail.root@redhat.com> <5225425e.8482e00a.1bfd.ffff9dcc@mx.google.com> <523862D1.5040108@redhat.com> <523896CC.70105@redhat.com> <523CABFD.2080607@redhat.com> Message-ID: done it should have the tag GET-MOUNT_IP-FOR-GEN-BUILDERS-SCRIPT in Gerrit. On Fri, Sep 20, 2013 at 4:11 PM, Luis Pabon wrote: > This is great! Thank you. The only issue is that we use github as a public > repo and we do not use github for our normal development workflow. 
Do you > mind resubmitting the change using the development workflow as described > here:https://github.com/gluster/gluster-swift/blob/master/doc/markdown/dev_guide.md. > More information can be found at https://launchpad.net/gluster-swift > > - Luis > > > On 09/19/2013 05:23 PM, Paul Robert Marino wrote: > > Thank you every one for your help especially Louis > > I tested the RPM and it went well every thing is working now. > > I did have to use the tenant ID's as the volume names. I may submit an > update to the documentation to clarify this for people > > So in other words the volume names have to match the output of > > " > keystone tenant-list|grep -v + | \ > grep -v -P '^\|\s+id\s+\|\s+name\s+\|\s+enabled\s+\|$' | \ > grep -v -P '^\w+:' | awk '{print $2}' > " > > > I've created an updated copy of gluster-swift-gen-builders that grabs > the value of mount_ip from /etc/swift/fs.conf and posted it on github. > you should see a pull request on the site for the submission. of the > change > > Thank you every one for your help > > On Tue, Sep 17, 2013 at 4:38 PM, Paul Robert Marino > wrote: > > Luis > Thanks for the timely response. > > On Tue, Sep 17, 2013 at 1:52 PM, Luis Pabon wrote: > > On 09/17/2013 11:13 AM, Paul Robert Marino wrote: > > Luis > well thats intresting because it was my impression that Gluster UFO > 3.4 was based on the Grizzly version of Swift. > > [LP] Sorry, the gluster-ufo RPM is Essex only. > > [PRM] The source of my confusion was here > http://www.gluster.org/community/documentation/index.php/Features34 > and here > http://www.gluster.org/2013/06/glusterfs-3-4-and-swift-where-are-all-the-pieces/ > These pages on the gluster site should probably be updated to reflect > the changes. > > > > Also I was previously unaware of this new rpm which doesnt seem to be > in a repo any where. > > [LP] gluster-swift project RPMs have been submitted to Fedora and are > currently being reviewed. > > [PRM] Cool if they are in the EPEL testing repo Ill look for them > there because I would rather pull the properly EPEL signed RPMs if > they exist just to make node deployments easier. If not Ill ask some > of my friends offline if they can help expedite it. > > > also there is a line in this new howto that is extreamly unclear > > " > /usr/bin/gluster-swift-gen-builders test > " > in place of "test" what should go there is it the tenant ID string, > the tenant name, or just a generic volume you can name whatever you > want? > in other words how should the Gluster volumes be named? > > [LP] We will clarify that in the quick start guide. Thank you for pointing > it out. While we update the community site, please refer to the > documentation available here http://goo.gl/bQFI8o for a usage guide. > > As for the tool, the format is: > gluster-swift-gen-buildes [VOLUME] [VOLUME...] > > Where VOLUME is the name of the GlusterFS volume to use for object storage. > For example > if the following two GlusterFS volumes, volume1 and volume2, need to be > accessed over Swift, > then you can type the following: > > # gluster-swift-gen-builders volume1 volume2 > > [PRM] That part I understood however it doesn't answer the question exactly. > > Correct me if I'm wrong but looking over the code briefly it looks as > though the volume name needs to be the same as the tenant ID number > like it did with Gluster UFO 3.3. 
> so for example > if I do a " keystone tenant-list" and a see tenant1 with an id of > "f6da0a8151ff43b7be10d961a20c94d6" then I would need to create a > volume named f6da0a8151ff43b7be10d961a20c94d6 > > If I can name the volumes whatever I want or give them the same name > as the tenant that would be great because it makes it easier for other > SA's who are not directly working with OpenStack but may need to mount > the volumes to comprehend, but its not urgently needed. > > One thing I was glad to see is that with Gluster UFO 3.3 I had to add > mount points to /etc/fstab for each volume and manually create the > directories for the mount points this looks to have been corrected in > Gluster-Swift. > > For more information please read: http://goo.gl/gd8LkW > > Let us know if you have any more questions or comments. > > [PRM] I may fork the Github repo and add some changes that may be > beneficial so they can be reviewed and possibly merged. > for example it would be nice if the gluster-swift-gen-buildes script > used the value of the mount_ip field in /etc/swift/fs.conf instead of > 127.0.0.1 if its defined. > also I might make a more robust version that allows create, add, > remove, and list options. > > > Ill do testing tomorrow and let everyone know how it goes. > > > - Luis > > > On Tue, Sep 17, 2013 at 10:10 AM, Luis Pabon wrote: > > First thing I can see is that you have Essex based gluster-ufo-* which > has > been replaced by the gluster-swift project. We are currently in progress > of > replacing the gluster-ufo-* with RPMs from the gluster-swift project in > Fedora. > > Please checkout the following quickstart guide which show how to download > the Grizzly version of gluster-swift: > > https://github.com/gluster/gluster-swift/blob/master/doc/markdown/quick_start_guide.md > . > > For more information please visit: https://launchpad.net/gluster-swift > > - Luis > > > On 09/16/2013 05:02 PM, Paul Robert Marino wrote: > > Sorry for the delay on reporting the details. I got temporarily pulled > off the project and dedicated to a different project which was > considered higher priority by my employer. I'm just getting back to > doing my normal work today. 
> > first here are the rpms I have installed > " > rpm -qa |grep -P -i '(gluster|swift)' > glusterfs-libs-3.4.0-8.el6.x86_64 > glusterfs-server-3.4.0-8.el6.x86_64 > openstack-swift-plugin-swift3-1.0.0-0.20120711git.el6.noarch > openstack-swift-proxy-1.8.0-2.el6.noarch > glusterfs-3.4.0-8.el6.x86_64 > glusterfs-cli-3.4.0-8.el6.x86_64 > glusterfs-geo-replication-3.4.0-8.el6.x86_64 > glusterfs-api-3.4.0-8.el6.x86_64 > openstack-swift-1.8.0-2.el6.noarch > openstack-swift-container-1.8.0-2.el6.noarch > openstack-swift-object-1.8.0-2.el6.noarch > glusterfs-fuse-3.4.0-8.el6.x86_64 > glusterfs-rdma-3.4.0-8.el6.x86_64 > openstack-swift-account-1.8.0-2.el6.noarch > glusterfs-ufo-3.4.0-8.el6.noarch > glusterfs-vim-3.2.7-1.el6.x86_64 > python-swiftclient-1.4.0-1.el6.noarch > > here are some key config files note I've changed the passwords I'm > using and hostnames > " > cat /etc/swift/account-server.conf > [DEFAULT] > mount_check = true > bind_port = 6012 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = account-server > > [app:account-server] > use = egg:gluster_swift_ufo#account > log_name = account-server > log_level = DEBUG > log_requests = true > > [account-replicator] > vm_test_mode = yes > > [account-auditor] > > [account-reaper] > > " > > " > cat /etc/swift/container-server.conf > [DEFAULT] > devices = /swift/tenants/ > mount_check = true > bind_port = 6011 > user = root > log_facility = LOG_LOCAL2 > > [pipeline:main] > pipeline = container-server > > [app:container-server] > use = egg:gluster_swift_ufo#container > > [container-replicator] > vm_test_mode = yes > > [container-updater] > > [container-auditor] > > [container-sync] > " > > " > cat /etc/swift/object-server.conf > [DEFAULT] > mount_check = true > bind_port = 6010 > user = root > log_facility = LOG_LOCAL2 > devices = /swift/tenants/ > > [pipeline:main] > pipeline = object-server > > [app:object-server] > use = egg:gluster_swift_ufo#object > > [object-replicator] > vm_test_mode = yes > > [object-updater] > > [object-auditor] > " > > " > cat /etc/swift/proxy-server.conf > [DEFAULT] > bind_port = 8080 > user = root > log_facility = LOG_LOCAL1 > log_name = swift > log_level = DEBUG > log_headers = True > > [pipeline:main] > pipeline = healthcheck cache authtoken keystone proxy-server > > [app:proxy-server] > use = egg:gluster_swift_ufo#proxy > allow_account_management = true > account_autocreate = true > > [filter:tempauth] > use = egg:swift#tempauth > # Here you need to add users explicitly. See the OpenStack Swift > Deployment > # Guide for more information. The user and user64 directives take the > # following form: > # user__ = [group] [group] [...] > [storage_url] > # user64__ = [group] [group] > [...] [storage_url] > # Where you use user64 for accounts and/or usernames that include > underscores. > # > # NOTE (and WARNING): The account name must match the device name > specified > # when generating the account, container, and object build rings. > # > # E.g. 
> # user_ufo0_admin = abc123 .admin > > [filter:healthcheck] > use = egg:swift#healthcheck > > [filter:cache] > use = egg:swift#memcache > > > [filter:keystone] > use = egg:swift#keystoneauth > #paste.filter_factory = keystone.middleware.swift_auth:filter_factory > operator_roles = Member,admin,swiftoperator > > > [filter:authtoken] > paste.filter_factory = keystone.middleware.auth_token:filter_factory > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > service_port = 5000 > service_host = keystone01.vip.my.net > > [filter:swiftauth] > use = egg:keystone#swiftauth > auth_host = keystone01.vip.my.net > auth_port = 35357 > auth_protocol = http > keystone_url = https://keystone01.vip.my.net:5000/v2.0 > admin_user = swift > admin_password = PASSWORD > admin_tenant_name = service > signing_dir = /var/cache/swift > keystone_swift_operator_roles = Member,admin,swiftoperator > keystone_tenant_user_admin = true > > [filter:catch_errors] > use = egg:swift#catch_errors > " > > " > cat /etc/swift/swift.conf > [DEFAULT] > > > [swift-hash] > # random unique string that can never change (DO NOT LOSE) > swift_hash_path_suffix = gluster > #3d60c9458bb77abe > > > # The swift-constraints section sets the basic constraints on data > # saved in the swift cluster. > > [swift-constraints] > > # max_file_size is the largest "normal" object that can be saved in > # the cluster. This is also the limit on the size of each segment of > # a "large" object when using the large object manifest support. > # This value is set in bytes. Setting it to lower than 1MiB will cause > # some tests to fail. It is STRONGLY recommended to leave this value at > # the default (5 * 2**30 + 2). > > # FIXME: Really? Gluster can handle a 2^64 sized file? And can the > fronting > # web service handle such a size? I think with UFO, we need to keep with > the > # default size from Swift and encourage users to research what size their > web > # services infrastructure can handle. > > max_file_size = 18446744073709551616 > > > # max_meta_name_length is the max number of bytes in the utf8 encoding > # of the name portion of a metadata header. > > #max_meta_name_length = 128 > > > # max_meta_value_length is the max number of bytes in the utf8 encoding > # of a metadata value > > #max_meta_value_length = 256 > > > # max_meta_count is the max number of metadata keys that can be stored > # on a single account, container, or object > > #max_meta_count = 90 > > > # max_meta_overall_size is the max number of bytes in the utf8 encoding > # of the metadata (keys + values) > > #max_meta_overall_size = 4096 > > > # max_object_name_length is the max number of bytes in the utf8 encoding > of > an > # object name: Gluster FS can handle much longer file names, but the > length > # between the slashes of the URL is handled below. Remember that most web > # clients can't handle anything greater than 2048, and those that do are > # rather clumsy. > > max_object_name_length = 2048 > > # max_object_name_component_length (GlusterFS) is the max number of bytes > in > # the utf8 encoding of an object name component (the part between the > # slashes); this is a limit imposed by the underlying file system (for > XFS > it > # is 255 bytes). 
> > max_object_name_component_length = 255 > > # container_listing_limit is the default (and max) number of items > # returned for a container listing request > > #container_listing_limit = 10000 > > > # account_listing_limit is the default (and max) number of items returned > # for an account listing request > > #account_listing_limit = 10000 > > > # max_account_name_length is the max number of bytes in the utf8 encoding > of > # an account name: Gluster FS Filename limit (XFS limit?), must be the > same > # size as max_object_name_component_length above. > > max_account_name_length = 255 > > > # max_container_name_length is the max number of bytes in the utf8 > encoding > # of a container name: Gluster FS Filename limit (XFS limit?), must be > the > same > # size as max_object_name_component_length above. > > max_container_name_length = 255 > > " > > > The volumes > " > gluster volume list > cindervol > unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 > bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > if I run the command > " > gluster-swift-gen-builders unified-storage-vol > a07d2f39117c4e5abdeba722cf245828 bd74a005f08541b9989e392a689be2fc > f6da0a8151ff43b7be10d961a20c94d6 > " > > because of a change in the script in this version as compaired to the > version I got from > http://repos.fedorapeople.org/repos/kkeithle/glusterfs/ the > gluster-swift-gen-builders script only takes the first option and > ignores the rest. > > other than the location of the config files none of the changes Ive > made are functionally different than the ones mentioned in > > http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ > > The result is that the first volume named "unified-storage-vol" winds > up being used for every thing regardless of the tenant, and users and > see and manage each others objects regardless of what tenant they are > members of. > through the swift command or via horizon. > > In a way this is a good thing for me it simplifies thing significantly > and would be fine if it just created a directory for each tenant and > only allow the user to access the individual directories, not the > whole gluster volume. > by the way seeing every thing includes the service tenants data so > unprivileged users can delete glance images without being a member of > the service group. > > > > > On Mon, Sep 2, 2013 at 9:58 PM, Paul Robert Marino > wrote: > > Well I'll give you the full details in the morning but simply I used the > stock cluster ring builder script that came with the 3.4 rpms and the old > version from 3.3 took the list of volumes and would add all of them the > version with 3.4 only takes the first one. > > Well I ran the script expecting the same behavior but instead they all > used > the first volume in the list. > > Now I knew from the docs I read that the per tenant directories in a > single > volume were one possible plan for 3.4 to deal with the scalding issue > with a > large number of tenants, so when I saw the difference in the script and > that > it worked I just assumed that this was done and I missed something. > > > > -- Sent from my HP Pre3 > > ________________________________ > On Sep 2, 2013 20:55, Ramana Raja wrote: > > Hi Paul, > > Currently, gluster-swift doesn't support the feature of multiple > accounts/tenants accessing the same volume. Each tenant still needs his > own > gluster volume. So I'm wondering how you were able to observe the > reported > behaviour. 
> > How did you prepare the ringfiles for the different tenants, which use > the > same gluster volume? Did you change the configuration of the servers? > Also, > how did you access the files that you mention? It'd be helpful if you > could > share the commands you used to perform these actions. > > Thanks, > > Ram > > > ----- Original Message ----- > From: "Vijay Bellur" > To: "Paul Robert Marino" > Cc: rhos-list at redhat.com, "Luis Pabon" , "Ramana Raja" > , "Chetan Risbud" > Sent: Monday, September 2, 2013 4:17:51 PM > Subject: Re: [rhos-list] Gluster UFO 3.4 swift Multi tenant question > > On 09/02/2013 01:39 AM, Paul Robert Marino wrote: > > I have Gluster UFO installed as a back end for swift from here > http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/RHEL/epel-6/ > with RDO 3 > > Its working well except for one thing. All of the tenants are seeing > one Gluster volume which is some what nice, especially when compared > to the old 3.3 behavior of creating one volume per tenant named after > the tenant ID number. > > The problem is I expected to see is sub directory created under the > volume root for each tenant but instead what in seeing is that all of > the tenants can see the root of the Gluster volume. The result is that > all of the tenants can access each others files and even delete them. > even scarier is that the tennants can see and delete each others > glance images and snapshots. > > Can any one suggest options to look at or documents to read to try to > figure out how to modify the behavior? > > Adding gluster swift developers who might be able to help. > > -Vijay > > >
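
The working setup that emerges from this thread, then, is: create one GlusterFS volume per Keystone tenant, name each volume with the tenant ID, and pass all of those volume names to the gluster-swift-gen-builders script from the Grizzly-based gluster-swift packages (not the Essex-era gluster-ufo copy, which only honours its first argument). Below is a minimal sketch of that workflow; the brick path, hostname, single-brick-per-volume layout, and init-script names are illustrative assumptions rather than details taken from the thread, and the tenant-ID extraction simply mirrors the keystone tenant-list pipeline Paul posted above.

"
#!/bin/bash
# Sketch only: assumes the keystone admin credentials (OS_* variables) are
# already exported, and that each volume gets a single local brick under
# /bricks -- both are assumptions, not details from the thread.

# 1. Collect the tenant IDs, skipping the table borders and header row.
tenant_ids=$(keystone tenant-list | awk -F'|' '$2 ~ /^ *[0-9a-f]+ *$/ {gsub(/ /, "", $2); print $2}')

# 2. One GlusterFS volume per tenant, named with the tenant ID.
for id in $tenant_ids; do
    gluster volume create "$id" "$(hostname -f):/bricks/$id" force
    gluster volume start "$id"
done

# 3. Rebuild the swift rings for all of those volumes in one pass
#    (left unquoted on purpose so the list word-splits into arguments).
gluster-swift-gen-builders $tenant_ids

# 4. Restart the swift services so the new rings are picked up.
for svc in proxy account container object; do
    service openstack-swift-$svc restart
done
"

With the rings rebuilt this way, a tenant authenticating through the keystone middleware configured in proxy-server.conf is served from the volume named after its own tenant ID, which is what closes the cross-tenant access hole described at the top of the thread.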