From gcheng at salesforce.com  Mon Jul 1 22:59:37 2013
From: gcheng at salesforce.com (Guolin Cheng)
Date: Mon, 1 Jul 2013 15:59:37 -0700
Subject: [rhos-list] Retire openstack cinder hosts, storage backends?
Message-ID: 

Hi all,

Anyone know where we can find the steps on removing cinder hosts, or one or
more storage backends from a cinder volume host? Please shed some light.

I followed the steps in the Red Hat admin guide to successfully add LVM & NFS
storage backends, but when I tried to remove the storage backends, there were
no steps/procedure to follow.

I tried to edit cinder.conf to comment out the local LVM backend, which is on
a loopback device, and restarted the openstack-cinder-volume service, but
that didn't help. Restarting cinder api and cinder scheduler didn't help
either.

For the 2nd cinder volume storage server (LVM storage backend), I stopped the
cinder volume service, but it still shows up in the cinder-manage output as
well.

[root@cloudmaster ~(keystone_admin)]# cinder-manage host list
host                                              zone
cloudmaster.example.com                           nova
cloudmaster.example.com at cinder-volumes-1-driver  nova
cloudmaster.example.com at cinder-volumes-2-driver  nova
cinderdisk01.example.com                          nova
[root@cloudmaster ~(keystone_admin)]# vi /etc/cinder/cinder.conf
[root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-scheduler restart
Stopping openstack-cinder-scheduler:                       [  OK  ]
Starting openstack-cinder-scheduler:                       [  OK  ]
[root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-api restart
Stopping openstack-cinder-api:                             [  OK  ]
Starting openstack-cinder-api:                             [  OK  ]
[root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart
Stopping openstack-cinder-volume:                          [  OK  ]
Starting openstack-cinder-volume:                          [  OK  ]
[root@cloudmaster ~(keystone_admin)]# cinder-manage host list
host                                              zone
cloudmaster.example.com                           nova
cloudmaster.example.com at cinder-volumes-1-driver  nova
cloudmaster.example.com at cinder-volumes-2-driver  nova
cinderdisk01.example.com                          nova
[root@cloudmaster ~(keystone_admin)]#

Thanks.
Guolin

From sgordon at redhat.com  Mon Jul 1 23:22:28 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Mon, 1 Jul 2013 19:22:28 -0400 (EDT)
Subject: [rhos-list] Retire openstack cinder hosts, storage backends?
In-Reply-To: 
References: 
Message-ID: <1744527640.31475997.1372720948197.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Guolin Cheng" 
> To: rhos-list at redhat.com
> Sent: Monday, July 1, 2013 6:59:37 PM
> Subject: [rhos-list] Retire openstack cinder hosts, storage backends?
> 
> Hi all,
> 
> Anyone know where we can find the steps on removing cinder hosts, or
> one or more storage backends from a cinder volume host? Please shed some
> light.
> 
> I followed the steps in the Red Hat admin guide to successfully add LVM & NFS
> storage backends, but when I tried to remove the storage backends, there
> were no steps/procedure to follow.
> 
> I tried to edit cinder.conf to comment out the local LVM backend, which is
> on a loopback device, and restarted the openstack-cinder-volume service,
> but that didn't help. Restarting cinder api and cinder scheduler didn't
> help either.
> 
> For the 2nd cinder volume storage server (LVM storage backend), I stopped
> the cinder volume service, but it still shows up in the cinder-manage
> output as well.
> 
> [root@cloudmaster ~(keystone_admin)]# cinder-manage host list
> host                                              zone
> cloudmaster.example.com                           nova
> cloudmaster.example.com at cinder-volumes-1-driver  nova
> cloudmaster.example.com at cinder-volumes-2-driver  nova
> cinderdisk01.example.com                          nova
> [root@cloudmaster ~(keystone_admin)]# vi /etc/cinder/cinder.conf
> [root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-scheduler restart
> Stopping openstack-cinder-scheduler:                       [  OK  ]
> Starting openstack-cinder-scheduler:                       [  OK  ]
> [root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-api restart
> Stopping openstack-cinder-api:                             [  OK  ]
> Starting openstack-cinder-api:                             [  OK  ]
> [root@cloudmaster ~(keystone_admin)]# /etc/init.d/openstack-cinder-volume restart
> Stopping openstack-cinder-volume:                          [  OK  ]
> Starting openstack-cinder-volume:                          [  OK  ]
> [root@cloudmaster ~(keystone_admin)]# cinder-manage host list
> host                                              zone
> cloudmaster.example.com                           nova
> cloudmaster.example.com at cinder-volumes-1-driver  nova
> cloudmaster.example.com at cinder-volumes-2-driver  nova
> cinderdisk01.example.com                          nova
> [root@cloudmaster ~(keystone_admin)]#
> 
> Thanks.
> Guolin

I originally scoped documenting such a process into RHOS 3.0 [1] but the
response I got from engineering at the time indicated that there was no
straightforward/supportable way of doing it for us to recommend to
customers. Adding Eric to the thread in case he has any suggestions?

Thanks,

-- 
Steve Gordon, RHCE
Documentation Lead, Red Hat OpenStack
Engineering Content Services
Red Hat Canada (Toronto, Ontario)

[1] https://bugzilla.redhat.com/show_bug.cgi?id=959470
[2] http://post-office.corp.redhat.com/archives/rh-openstack-dev/2013-June/msg00067.html

From andrey at xdel.ru  Mon Jul 8 10:04:43 2013
From: andrey at xdel.ru (Andrey Korolyov)
Date: Mon, 8 Jul 2013 14:04:43 +0400
Subject: [rhos-list] Issues with the OVS and RDO kernel
Message-ID: 

Hello,

There are some strange issues with kernels placed in [fedora epel-6].
First of all, OVS datapath shipped with RHEL/RedPatch kernel seems to
be broken in all possible ways - connectivity isn't working within two
bridges connected via patchport nor via gre tunnels, both parts of
openstack network topology (of course tunnels can be replaced with the
vlan tags, but it'll not fix connectivity).

To summarize current problems:
- 2.6.32-358.111.1.openstack.el6.x86_64 does not work with the
third-party OVS module well ('openvswitch: cannot register gre
protocol handler' kernel message) and built-in implementation does not
work at all,
- as RDO does not include glibc support for namespaces, seems that the
144e6ce1679a768e987230efb4afa402a5ab58ac cannot be easily included,
but bug with missing mounts is present - I observed that the
namespace was held at least once by metadata agent when no entries were
matched over /proc/pid/mounts.

Any thoughts on how I may get working gre tunnels on the latest
namespace-patched kernel will be highly appreciated. Previous one
seems to be working in some way with tunnels, but not in the topology
provided by openstack.
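For anyone chasing the 'openvswitch: cannot register gre protocol handler'
message: it normally appears when something else already owns the GRE
protocol handler on the running kernel. A rough first check - a sketch only,
assuming the stock RHEL config layout and the usual in-tree module names,
not a supported procedure:

  # is the GRE demux available on this kernel, built-in or modular?
  grep CONFIG_NET_IPGRE_DEMUX /boot/config-$(uname -r)
  # which modules currently claim GRE?
  lsmod | egrep '^ip_gre|^gre |^openvswitch'
  # unloading the in-tree GRE handler before loading the OVS kmod may
  # clear the registration conflict (untested assumption):
  rmmod ip_gre gre 2>/dev/null
  modprobe openvswitch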
From matthias.pfuetzner at redhat.com  Tue Jul 2 11:00:29 2013
From: matthias.pfuetzner at redhat.com (Matthias Pfützner)
Date: Tue, 02 Jul 2013 13:00:29 +0200
Subject: [rhos-list] novaclient issue
Message-ID: <51D2B2CD.5020701@redhat.com>

Just as an FYI:

A colleague of ours did add an install manual to github:

https://github.com/rdoxenham/openstack-training/blob/master/documentation/openstack-manual.md

That might possibly help in setting things up...

HTH,
	Matthias
-- 
Red Hat GmbH
Matthias Pfützner
Solution Architect, Cloud
MesseTurm
60308 Frankfurt/Main
phone: +49 69 365051 031
mobile: +49 172 7724032
fax: +49 69 365051 001
email: matthias.pfuetzner at redhat.com
___________________________________________________________________________
Reg. Adresse: Red Hat GmbH, Werner-von-Siemens-Ring 11 -15, 85630 Grasbrunn
Handelsregister: Amtsgericht Muenchen HRB 153243
Geschaeftsfuehrer: Charles Cachera, Michael Cunningham, Mark Hegarty, Charlie Peters

From prmarino1 at gmail.com  Mon Jul 8 12:41:07 2013
From: prmarino1 at gmail.com (Paul Robert Marino)
Date: Mon, 08 Jul 2013 08:41:07 -0400
Subject: [rhos-list] Issues with the OVS and RDO kernel
In-Reply-To: 
Message-ID: <51dab365.829c420a.642f.0ea9@mx.google.com>

gre tunnels are not currently supported yet in RHEL not even in the
openstack kernel variant last I heard.

-- Sent from my HP Pre3

From pmyers at redhat.com  Mon Jul 8 17:06:42 2013
From: pmyers at redhat.com (Perry Myers)
Date: Mon, 08 Jul 2013 13:06:42 -0400
Subject: [rhos-list] Got inconsistent mac addresses for some nova instances.
In-Reply-To: <51CDC992.6070503@redhat.com>
References: <51CDC26B.6070108@redhat.com> <51CDC992.6070503@redhat.com>
Message-ID: <51DAF1A2.3020103@redhat.com>

taking to rhos-list and adding neutron team

On 06/28/2013 01:36 PM, jhsiao at redhat.com wrote:
> Hi Lon,
> 
> This is a weird case --- Got inconsistent mac addresses for some nova
> instances.
> 
> Booted up four instances and all came up from the second compute node.
> 
> All of them are reachable by the dhcp-agent via "ip netns exec".
> 
> But, when accessed via virt-manager, two of them failed on "ifup eth1"
> to get dhcp addresses. My investigation showed that each of them had
> a different mac address than what the dhcp-agent got from a ssh session.
> 
> Please see the first two cases below.
> 
> Thanks!
> Jean
> 
> -------- Original Message --------
> Subject: Wrong mac addresses
> Date: Fri, 28 Jun 2013 13:05:47 -0400
> From: jhsiao at redhat.com
> Reply-To: jhsiao at redhat.com
> Organization: Red Hat
> To: Jean Hsiao 
> 
> 
> 
> 172.16.1.5
> # data from /var/lib/nova/instances/08b04d7e-250d-4d90-b8e3-853bb2c0bad2
> # same as seen by virt-manager console
> [root at qe-dell-ovs3 08b04d7e-250d-4d90-b8e3-853bb2c0bad2]# !gr
> grep "mac addr" libvirt.xml
> 
> # data from dhcp-agent ssh session
> 2: eth1: mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether fa:16:3e:50:20:f7 brd ff:ff:ff:ff:ff:ff
> inet 172.16.1.5/24 brd 172.16.1.255 scope global eth1
> inet6 fe80::f816:3eff:fe50:20f7/64 scope link tentative dadfailed
> valid_lft forever preferred_lft forever
> 
> 
> 172.16.1.6
> [root at qe-dell-ovs3 c58f0944-6559-4d8c-9230-5d2db0a14290]# grep "mac
> addr" libvirt.xml
> 
> 2: eth1: mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether fa:16:3e:51:c4:d2 brd ff:ff:ff:ff:ff:ff
> inet 172.16.1.6/24 brd 172.16.1.255 scope global eth1
> inet6 fe80::f816:3eff:fe51:c4d2/64 scope link tentative dadfailed
> valid_lft forever preferred_lft forever
> 
> 
> 172.16.1.7
> [root at qe-dell-ovs3 d09d4de4-54b3-48ee-bef5-f8ac3204ccee]# !gr
> grep "mac addr" libvirt.xml
> 
> 2: eth1: mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether fa:16:3e:22:6a:da brd ff:ff:ff:ff:ff:ff
> inet 172.16.1.7/24 brd 172.16.1.255 scope global eth1
> inet6 fe80::f816:3eff:fe22:6ada/64 scope link tentative dadfailed
> 
> 172.16.1.8
> [root at qe-dell-ovs3 f83b696d-8f88-4cdb-95b0-dac01674c122]# !gr
> grep "mac addr" libvirt.xml
> 
> 2: eth1: mtu 1500 qdisc pfifo_fast
> state UP qlen 1000
> link/ether fa:16:3e:6b:50:04 brd ff:ff:ff:ff:ff:ff
> inet 172.16.1.8/24 brd 172.16.1.255 scope global eth1
> inet6 fe80::f816:3eff:fe6b:5004/64 scope link tentative dadfailed
> valid_lft forever preferred_lft forever

From pmyers at redhat.com  Mon Jul 8 17:24:10 2013
From: pmyers at redhat.com (Perry Myers)
Date: Mon, 08 Jul 2013 13:24:10 -0400
Subject: [rhos-list] Issues with the OVS and RDO kernel
In-Reply-To: <51dab365.829c420a.642f.0ea9@mx.google.com>
References: <51dab365.829c420a.642f.0ea9@mx.google.com>
Message-ID: <51DAF5BA.8000500@redhat.com>

On 07/08/2013 08:41 AM, Paul Robert Marino wrote:
> gre tunnels are not currently supported yet in RHEL not even in the
> openstack kernel variant last I heard.

That is correct in part.

The netns enabled kernel we provide in RDO for RHEL does have kernel
support for gre and vxlan tunnels.

However, openvswitch does not yet support using the in-tree gre/vxlan
tunnel mechanism. It only supports using the out-of-tree tunnels
provided in the openvswitch kmod from upstream openvswitch.org git repo.

What needs to happen is openvswitch needs to change to understand how
to manipulate the in-tree tunnels. Until that happens, we can't use
gre/vxlan tunnels via openvswitch and therefore neutron/quantum.

At least this is my understanding of things. I've added some neutron
and ovs devs to comment.

Perry

> 
> 
> -- Sent from my HP Pre3
> 
> ------------------------------------------------------------------------
> On Jul 8, 2013 6:06 AM, Andrey Korolyov wrote:
> 
> Hello,
> 
> There are some strange issues with kernels placed in [fedora epel-6].
> First of all, OVS datapath shipped with RHEL/RedPatch kernel seems to
> be broken in all possible ways - connectivity isn't working within two
> bridges connected via patchport nor via gre tunnels, both parts of
> openstack network topology (of course tunnels can be replaced with the
> vlan tags, but it'll not fix connectivity).
> 
> To summarize current problems:
> - 2.6.32-358.111.1.openstack.el6.x86_64 does not work with the
> third-party OVS module well ('openvswitch: cannot register gre
> protocol handler' kernel message) and built-in implementation does not
> work at all,
> - as RDO does not include glibc support for namespaces, seems that the
> 144e6ce1679a768e987230efb4afa402a5ab58ac cannot be easily included,
> but bug with missing mounts is present - I observed that the
> namespace was held at least once by metadata agent when no entries were
> matched over /proc/pid/mounts.
> 
> Any thoughts on how I may get working gre tunnels on the latest
> namespace-patched kernel will be highly appreciated. Previous one
> seems to be working in some way with tunnels, but not in the topology
> provided by openstack.

From andrey at xdel.ru  Mon Jul 8 17:29:34 2013
From: andrey at xdel.ru (Andrey Korolyov)
Date: Mon, 8 Jul 2013 21:29:34 +0400
Subject: [rhos-list] Issues with the OVS and RDO kernel
In-Reply-To: <51dab365.829c420a.642f.0ea9@mx.google.com>
References: <51dab365.829c420a.642f.0ea9@mx.google.com>
Message-ID: 

Seems that setting CONFIG_NET_IPGRE_DEMUX to a module allows OVS to work
with its own (shipped with the OVS tree, not in-kernel) module. Please
rebuild the current kernel with such a fix if possible.

Also, if someone finds time to track and backport the snippet of code in
iproute2 that allows deleting namespaces regardless of processes already
bound to them, it'll be quite awesome.

Thanks!

On Mon, Jul 8, 2013 at 4:41 PM, Paul Robert Marino wrote:
> gre tunnels are not currently supported yet in RHEL not even in the
> openstack kernel variant last I heard.
> 
> 
> 
> -- Sent from my HP Pre3
> 
> ________________________________
> On Jul 8, 2013 6:06 AM, Andrey Korolyov wrote:
> 
> Hello,
> 
> There are some strange issues with kernels placed in [fedora epel-6].
> First of all, OVS datapath shipped with RHEL/RedPatch kernel seems to
> be broken in all possible ways - connectivity isn't working within two
> bridges connected via patchport nor via gre tunnels, both parts of
> openstack network topology (of course tunnels can be replaced with the
> vlan tags, but it'll not fix connectivity).
> 
> To summarize current problems:
> - 2.6.32-358.111.1.openstack.el6.x86_64 does not work with the
> third-party OVS module well ('openvswitch: cannot register gre
> protocol handler' kernel message) and built-in implementation does not
> work at all,
> - as RDO does not include glibc support for namespaces, seems that the
> 144e6ce1679a768e987230efb4afa402a5ab58ac cannot be easily included,
> but bug with missing mounts is present - I observed that the
> namespace was held at least once by metadata agent when no entries were
> matched over /proc/pid/mounts.
> 
> Any thoughts on how I may get working gre tunnels on the latest
> namespace-patched kernel will be highly appreciated. Previous one
> seems to be working in some way with tunnels, but not in the topology
> provided by openstack.
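Until such an iproute2 backport exists, the usual workaround for a namespace
that refuses to delete is to stop whatever still holds it first. A minimal
sketch, assuming the holder is a dnsmasq or metadata-proxy process left
behind by the agents (the namespace name is a placeholder):

  # find the pid of whatever is still listening inside the namespace
  ip netns exec qdhcp-<network-id> netstat -nlp
  kill <pid-of-holder>
  ip netns delete qdhcp-<network-id>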
From pmyers at redhat.com  Mon Jul 8 17:33:14 2013
From: pmyers at redhat.com (Perry Myers)
Date: Mon, 08 Jul 2013 13:33:14 -0400
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <146367725.5705675.1372935798045.JavaMail.root@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com>
Message-ID: <51DAF7DA.6010007@redhat.com>

On 07/04/2013 07:03 AM, Ofer Blaut wrote:
> Hi
> 
> By default egress security group is allow all.
> 
> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
> 
> Since there are no deny actions, i expect once first egress rule is applied, all other traffic will be dropped
> 
> I have tried it with add SSH to egress still ping worked
> 
> http://pastebin.test.redhat.com/150744

Ofer,

So what you're saying is that there should be a deny all rule added once
the user adds the first real egress rule.

Otherwise the egress rules serve no purpose really...

That seems to make sense to me.  What do the neutron folks think?

Perry

From marun at redhat.com  Mon Jul 8 17:45:10 2013
From: marun at redhat.com (Maru Newby)
Date: Mon, 8 Jul 2013 13:45:10 -0400
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <51DAF7DA.6010007@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com>
Message-ID: 

On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:

> On 07/04/2013 07:03 AM, Ofer Blaut wrote:
>> Hi
>> 
>> By default egress security group is allow all.
>> 
>> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
>> 
>> Since there are no deny actions, i expect once first egress rule is applied, all other traffic will be dropped
>> 
>> I have tried it with add SSH to egress still ping worked
>> 
>> http://pastebin.test.redhat.com/150744
> 
> Ofer,
> 
> So what you're saying is that there should be a deny all rule added once
> the user adds the first real egress rule.

If a user wants to manage egress traffic, the first step is removing the
default 'allow all egress' rule.  This is by design, and there would need
to be a good reason (convenience is not it) for it to be changed.


m.

> 
> Otherwise the egress rules serve no purpose really...
> 
> That seems to make sense to me.  What do the neutron folks think?
> 
> Perry

From Hao.Chen at NRCan-RNCan.gc.ca  Mon Jul 8 17:55:19 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Mon, 8 Jul 2013 17:55:19 +0000
Subject: [rhos-list] Swift validation error
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca>

Greetings,

An error occurred when validating the Object Storage Service Installation.

[root at cloud1 ~(keystone_admin)]# swift list
Authorization Failure. Authorization Failed: Unable to communicate with identity service: {"error": {"message": "Malformed endpoint URL (see ERROR log for details): http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s", "code": 500, "title": "Internal Server Error"}}.
(HTTP 500)
2013-07-08 10:06:53 ERROR [keystone.catalog.core] Malformed endpoint http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s - unknown key u'15e2cf851e5446e89c5ee7dd38ddd67a'

[root at cloud1 ~(keystone_admin)]# keystone service-list
Authorization Failed: Unable to communicate with identity service: {"error": {"message": "Malformed endpoint URL (see ERROR log for details): http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s", "code": 500, "title": "Internal Server Error"}}. (HTTP 500)

In the log file, it shows "unknown key ..."

2013-07-08 10:06:53 ERROR [keystone.catalog.core] Malformed endpoint http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s - unknown key u'15e2cf851e5446e89c5ee7dd38ddd67a'

Can anyone help to identify where I have gone wrong in the following installation and configuration processes.

Many thanks,
Hao

(1) Keystone tables are as below.

[root at cloud1 ~(keystone_admin)]# keystone user-list
+----------------------------------+-------+---------+-------+
|                id                | name  | enabled | email |
+----------------------------------+-------+---------+-------+
| f22063c121b949a8a5b86df453b75a33 | admin | True    |       |
| ...                              | ...   | ...     |       |
| 43e37a506d1e4c7ea6e6147a555628bd | swift | True    |       |
+----------------------------------+-------+---------+-------+

[root at cloud1 ~(keystone_admin)]# keystone role-list
+----------------------------------+----------+
|                id                |   name   |
+----------------------------------+----------+
| 4ea34250eda8420dacf34b635ba88920 | Member   |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| c3de86d7ddee494db7126bec3f4e20a1 | admin    |
+----------------------------------+----------+

[root at cloud1 ~(keystone_admin)]# keystone tenant-list
+----------------------------------+----------+---------+
|                id                |   name   | enabled |
+----------------------------------+----------+---------+
| 35caeda3b4d84d1582e675c2f871a00c | admin    | True    |
| 215764a7940147a9980c34fc2d50fb49 | nfis     | True    |
| 15e2cf851e5446e89c5ee7dd38ddd67a | services | True    |
+----------------------------------+----------+---------+

[root at cloud1 ~(keystone_admin)]# keystone service-list
+----------------------------------+----------+--------------+---------------------------+
|                id                |   name   |     type     |        description        |
+----------------------------------+----------+--------------+---------------------------+
| f5c1154f8d094e7ca231aeb96eb4b860 | keystone | identity     | Keystone Identity Service |
| 1ec51ed71b8941fe9151ae5c8a2819bd | swift    | object-store | Swift Storage Service     |
+----------------------------------+----------+--------------+---------------------------+

(2) Add the Swift user to the Services tenant with the Admin role.
[root at cloud1 ~(keystone_admin)]# keystone user-role-add --role-id c3de86d7ddee494db7126bec3f4e20a1 --tenant-id 15e2cf851e5446e89c5ee7dd38ddd67a --user-id 43e37a506d1e4c7ea6e6147a555628bd

(3) Followed Page 65 and set up endpoints with service ID and tenant ID

[root at cloud1 ~(keystone_admin)]# keystone endpoint-create --service_id 1ec51ed71b8941fe9151ae5c8a2819bd
--publicurl "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s"
--adminurl "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s"
--internalurl "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s"
+-------------+---------------------------------------------------------------------+
|   Property  | Value                                                               |
+-------------+---------------------------------------------------------------------+
| adminurl    | http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s |
| id          | 49135295e97d4ff38ec54dc0c41b4f78                                    |
| internalurl | http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s |
| publicurl   | http://10.2.0.196:8080/v1/AUTH_$(15e2cf851e5446e89c5ee7dd38ddd67a)s |
| region      | regionOne                                                           |
| service_id  | 1ec51ed71b8941fe9151ae5c8a2819bd                                    |
+-------------+---------------------------------------------------------------------+

__________________________________________________
Hao Chen
Physical Scientist / Spécialiste des sciences physiques
Natural Resources Canada / Ressources naturelles Canada
Canadian Forest Service / Service canadien des forêts
Pacific Forestry Centre / Centre de foresterie du Pacifique
506 W. Burnside Road / 506 rue Burnside Ouest
Victoria, BC V8Z 1M5 / Victoria, C-B V8Z 1M5
Tel: (250) 298-2405 Facs: (250) 363-0775
Email: hchen at nrcan.gc.ca
___________________________________________________

From oblaut at redhat.com  Mon Jul 8 18:19:52 2013
From: oblaut at redhat.com (Ofer Blaut)
Date: Mon, 8 Jul 2013 14:19:52 -0400 (EDT)
Subject: [rhos-list] Quantum security group egress
In-Reply-To: 
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com>
Message-ID: <1788376591.525551.1373307592387.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Maru Newby" 
> To: "Perry Myers" 
> Cc: "Ofer Blaut" , "rhos-list" , "Robert Kukura" , "Maru
> Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris Wright"
> , "Terry Wilson" 
> Sent: Monday, July 8, 2013 8:45:10 PM
> Subject: Re: Quantum security group egress
> 
> 
> On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:
> 
> > On 07/04/2013 07:03 AM, Ofer Blaut wrote:
> >> Hi
> >> 
> >> By default egress security group is allow all.
> >> 
> >> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
> >> 
> >> Since there are no deny actions, i expect once first egress rule is
> >> applied, all other traffic will be dropped
> >> 
> >> I have tried it with add SSH to egress still ping worked
> >> 
> >> http://pastebin.test.redhat.com/150744
> > 
> > Ofer,
> > 
> > So what you're saying is that there should be a deny all rule added once
> > the user adds the first real egress rule.
> 
> If a user wants to manage egress traffic, the first step is removing the
> default 'allow all egress' rule. This is by design, and there would need
> to be a good reason (convenience is not it) for it to be changed.
> 
> 
> m.
Hi Maru

Ingress security group works the same as CISCO ACLs and other products

http://www.cisco.com/en/US/docs/ios/12_2/security/configuration/guide/scfacls.html
" The Implied "Deny All Traffic" Criteria Statement
At the end of every access list is an implied "deny all traffic" criteria
statement. Therefore, if a packet does not match any of your criteria
statements, the packet will be blocked.
Note For most protocols, if you define an inbound access list for traffic
filtering, you should include explicit access list criteria statements to
permit routing updates. If you do not, you might effectively lose
communication from the interface when routing updates are blocked by the
implicit "deny all traffic" statement at the end of the access list. "

I didn't find any info about what should be expected in blueprints

Below is an example of egress and ingress rules of the default security-group;
how can one tell the difference between ingress (deny all) and egress (allow all)?
The output seems the same.

ofer

[root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-list
+--------------------------------------+----------------+-----------+----------+------------------+--------------+
| id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
+--------------------------------------+----------------+-----------+----------+------------------+--------------+
| 80dd52fb-80b4-4a26-94ae-4e052f128fef | default        | egress    |          |                  |              |
| 9acac0e0-9900-4d42-8ce9-813bd80592d3 | default        | ingress   |          |                  | default      |
| 9bbe9a67-2b35-443b-ad3a-8338ede82fcd | default        | ingress   |          |                  | default      |
| abc1bfa4-0a35-482f-a8ce-4443e3444532 | default        | egress    | tcp      |                  | default      |
| fdb1f4c9-2260-450f-b4e7-196564870d79 | default        | egress    |          |                  |              |
+--------------------------------------+----------------+-----------+----------+------------------+--------------+

[root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 9acac0e0-9900-4d42-8ce9-813bd80592d3
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 9acac0e0-9900-4d42-8ce9-813bd80592d3 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          |                                      |
| remote_group_id   | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
| remote_ip_prefix  |                                      |
| security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
| tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
+-------------------+--------------------------------------+
[root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 80dd52fb-80b4-4a26-94ae-4e052f128fef
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | egress                               |
| ethertype         | IPv4                                 |
| id                | 80dd52fb-80b4-4a26-94ae-4e052f128fef |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          |                                      |
| remote_group_id   |                                      |
| remote_ip_prefix  |                                      |
| security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
| tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
+-------------------+--------------------------------------+

> 
> > 
> > Otherwise the egress rules serve no purpose really...
> > 
> > That seems to make sense to me.  What do the neutron folks think?
> > 
> > Perry
> 
> 

From oblaut at redhat.com  Mon Jul 8 18:34:21 2013
From: oblaut at redhat.com (Ofer Blaut)
Date: Mon, 8 Jul 2013 14:34:21 -0400 (EDT)
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <1788376591.525551.1373307592387.JavaMail.root@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com> <1788376591.525551.1373307592387.JavaMail.root@redhat.com>
Message-ID: <799963265.530792.1373308461417.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Ofer Blaut" 
> To: "Maru Newby" 
> Cc: "Perry Myers" , "rhos-list" , "Robert Kukura" ,
> "Maru Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris
> Wright" , "Terry Wilson" 
> Sent: Monday, July 8, 2013 9:19:52 PM
> Subject: Re: Quantum security group egress
> 
> 
> 
> ----- Original Message -----
> > From: "Maru Newby" 
> > To: "Perry Myers" 
> > Cc: "Ofer Blaut" , "rhos-list" , "Robert Kukura" , "Maru
> > Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris Wright"
> > , "Terry Wilson" 
> > Sent: Monday, July 8, 2013 8:45:10 PM
> > Subject: Re: Quantum security group egress
> > 
> > 
> > On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:
> > 
> > > On 07/04/2013 07:03 AM, Ofer Blaut wrote:
> > >> Hi
> > >> 
> > >> By default egress security group is allow all.
> > >> 
> > >> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
> > >> 
> > >> Since there are no deny actions, i expect once first egress rule is
> > >> applied, all other traffic will be dropped
> > >> 
> > >> I have tried it with add SSH to egress still ping worked
> > >> 
> > >> http://pastebin.test.redhat.com/150744
> > > 
> > > Ofer,
> > > 
> > > So what you're saying is that there should be a deny all rule added once
> > > the user adds the first real egress rule.
> > 
> > If a user wants to manage egress traffic, the first step is removing the
> > default 'allow all egress' rule. This is by design, and there would need
> > to be a good reason (convenience is not it) for it to be changed.
> > 
> > 
> > m.
> Hi Maru
> 
> Ingress security group works the same as CISCO ACLs and other products
> 
> http://www.cisco.com/en/US/docs/ios/12_2/security/configuration/guide/scfacls.html
> " The Implied "Deny All Traffic" Criteria Statement
> At the end of every access list is an implied "deny all traffic" criteria
> statement. Therefore, if a packet does not match any of your criteria
> statements, the packet will be blocked.
> Note For most protocols, if you define an inbound access list for traffic
> filtering, you should include explicit access list criteria statements to
> permit routing updates. If you do not, you might effectively lose
> communication from the interface when routing updates are blocked by the
> implicit "deny all traffic" statement at the end of the access list. "
> 
> 
> I didn't find any info about what should be expected in blueprints
> 
> 
> Below is an example of egress and ingress rules of the default security-group;
> how can one tell the difference between ingress (deny all) and egress (allow all)?
> The output seems the same.

Adding RHOS Dev

> ofer
> 
> 
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-list
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> | id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> | 80dd52fb-80b4-4a26-94ae-4e052f128fef | default        | egress    |          |                  |              |
> | 9acac0e0-9900-4d42-8ce9-813bd80592d3 | default        | ingress   |          |                  | default      |
> | 9bbe9a67-2b35-443b-ad3a-8338ede82fcd | default        | ingress   |          |                  | default      |
> | abc1bfa4-0a35-482f-a8ce-4443e3444532 | default        | egress    | tcp      |                  | default      |
> | fdb1f4c9-2260-450f-b4e7-196564870d79 | default        | egress    |          |                  |              |
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> 
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 9acac0e0-9900-4d42-8ce9-813bd80592d3
> +-------------------+--------------------------------------+
> | Field             | Value                                |
> +-------------------+--------------------------------------+
> | direction         | ingress                              |
> | ethertype         | IPv4                                 |
> | id                | 9acac0e0-9900-4d42-8ce9-813bd80592d3 |
> | port_range_max    |                                      |
> | port_range_min    |                                      |
> | protocol          |                                      |
> | remote_group_id   | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | remote_ip_prefix  |                                      |
> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> +-------------------+--------------------------------------+
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 80dd52fb-80b4-4a26-94ae-4e052f128fef
> +-------------------+--------------------------------------+
> | Field             | Value                                |
> +-------------------+--------------------------------------+
> | direction         | egress                               |
> | ethertype         | IPv4                                 |
> | id                | 80dd52fb-80b4-4a26-94ae-4e052f128fef |
> | port_range_max    |                                      |
> | port_range_min    |                                      |
> | protocol          |                                      |
> | remote_group_id   |                                      |
> | remote_ip_prefix  |                                      |
> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> +-------------------+--------------------------------------+
> 
> 
> > 
> > > 
> > > Otherwise the egress rules serve no purpose really...
> > > 
> > > That seems to make sense to me.  What do the neutron folks think?
> > > 
> > > Perry
> > 
> > 

From marun at redhat.com  Mon Jul 8 19:28:01 2013
From: marun at redhat.com (Maru Newby)
Date: Mon, 8 Jul 2013 15:28:01 -0400
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <1788376591.525551.1373307592387.JavaMail.root@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com> <1788376591.525551.1373307592387.JavaMail.root@redhat.com>
Message-ID: <1B786413-05BF-497B-BA40-213196FD0053@redhat.com>

On Jul 8, 2013, at 2:19 PM, Ofer Blaut wrote:

> 
> 
> ----- Original Message -----
>> From: "Maru Newby" 
>> To: "Perry Myers" 
>> Cc: "Ofer Blaut" , "rhos-list" , "Robert Kukura" , "Maru
>> Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris Wright"
>> , "Terry Wilson" 
>> Sent: Monday, July 8, 2013 8:45:10 PM
>> Subject: Re: Quantum security group egress
>> 
>> 
>> On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:
>> 
>>> On 07/04/2013 07:03 AM, Ofer Blaut wrote:
>>>> Hi
>>>> 
>>>> By default egress security group is allow all.
>>>> 
>>>> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
>>>> 
>>>> Since there are no deny actions, i expect once first egress rule is
>>>> applied, all other traffic will be dropped
>>>> 
>>>> I have tried it with add SSH to egress still ping worked
>>>> 
>>>> http://pastebin.test.redhat.com/150744
>>> 
>>> Ofer,
>>> 
>>> So what you're saying is that there should be a deny all rule added once
>>> the user adds the first real egress rule.
>> 
>> If a user wants to manage egress traffic, the first step is removing the
>> default 'allow all egress' rule. This is by design, and there would need
>> to be a good reason (convenience is not it) for it to be changed.
>> 
>> 
>> m.
> Hi Maru
> 
> Ingress security group works the same as CISCO ACLs and other products
> 
> http://www.cisco.com/en/US/docs/ios/12_2/security/configuration/guide/scfacls.html
> " The Implied "Deny All Traffic" Criteria Statement
> At the end of every access list is an implied "deny all traffic" criteria statement. Therefore, if a packet does not match any of your criteria statements, the packet will be blocked.
> Note For most protocols, if you define an inbound access list for traffic filtering, you should include explicit access list criteria statements to permit routing updates. If you do not, you might effectively lose communication from the interface when routing updates are blocked by the implicit "deny all traffic" statement at the end of the access list. "
> 

You are correct, and the implication is that it is only possible to create
a security group rule that allows traffic, not one that denies traffic.

> I didn't find any info about what should be expected in blueprints

The purpose of security groups is described in the opening sentence in the
docs [1] (emphasis mine):

"Security groups and security group rules allows administrators and tenants
the ability to specify the type of traffic and direction (ingress/egress)
that is ALLOWED to pass through a port."

> 
> Below is an example of egress and ingress rules of the default security-group;
> how can one tell the difference between ingress (deny all) and egress (allow all)?
> The output seems the same.

Security groups work the same for both ingress and egress.  If traffic
matches a security group rule, it is passed.  If it does not match a rule,
it is dropped (default deny).  Given this, the examples you have below are
both 'allow all', since 'deny' rules are not supported.


m.
[1]: http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html

> ofer
> 
> 
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-list
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> | id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> | 80dd52fb-80b4-4a26-94ae-4e052f128fef | default        | egress    |          |                  |              |
> | 9acac0e0-9900-4d42-8ce9-813bd80592d3 | default        | ingress   |          |                  | default      |
> | 9bbe9a67-2b35-443b-ad3a-8338ede82fcd | default        | ingress   |          |                  | default      |
> | abc1bfa4-0a35-482f-a8ce-4443e3444532 | default        | egress    | tcp      |                  | default      |
> | fdb1f4c9-2260-450f-b4e7-196564870d79 | default        | egress    |          |                  |              |
> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> 
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 9acac0e0-9900-4d42-8ce9-813bd80592d3
> +-------------------+--------------------------------------+
> | Field             | Value                                |
> +-------------------+--------------------------------------+
> | direction         | ingress                              |
> | ethertype         | IPv4                                 |
> | id                | 9acac0e0-9900-4d42-8ce9-813bd80592d3 |
> | port_range_max    |                                      |
> | port_range_min    |                                      |
> | protocol          |                                      |
> | remote_group_id   | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | remote_ip_prefix  |                                      |
> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> +-------------------+--------------------------------------+
> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 80dd52fb-80b4-4a26-94ae-4e052f128fef
> +-------------------+--------------------------------------+
> | Field             | Value                                |
> +-------------------+--------------------------------------+
> | direction         | egress                               |
> | ethertype         | IPv4                                 |
> | id                | 80dd52fb-80b4-4a26-94ae-4e052f128fef |
> | port_range_max    |                                      |
> | port_range_min    |                                      |
> | protocol          |                                      |
> | remote_group_id   |                                      |
> | remote_ip_prefix  |                                      |
> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> +-------------------+--------------------------------------+
> 
> 
>> 
>>> 
>>> Otherwise the egress rules serve no purpose really...
>>> 
>>> That seems to make sense to me.  What do the neutron folks think?
>>> 
>>> Perry
>> 
>> 
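In practice that means managing egress starts with deleting the two default
allow-all egress rules, then adding the specific rule. A sketch with the
quantum client, reusing the rule ids from the listing quoted above (after
this, only TCP/22 egress would match, so the ping in the original test
should stop; DNS and anything else outbound would need rules of their own):

  quantum security-group-rule-delete 80dd52fb-80b4-4a26-94ae-4e052f128fef
  quantum security-group-rule-delete fdb1f4c9-2260-450f-b4e7-196564870d79
  quantum security-group-rule-create --direction egress --protocol tcp \
      --port-range-min 22 --port-range-max 22 default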
From zaitcev at redhat.com  Mon Jul 8 21:34:06 2013
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Mon, 8 Jul 2013 15:34:06 -0600
Subject: [rhos-list] Swift validation error
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <20130708153406.05bc3c77@lembas.zaitcev.lan>

On Mon, 8 Jul 2013 17:55:19 +0000
"Chen, Hao" wrote:

> (3) Followed Page 65 and set up endpoints with service ID and tenant ID
> # keystone endpoint-create --service_id 1ec51ed71b8941fe9151ae5c8a2819bd
>   --publicurl "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s"

This is clearly malformed and should be

 --publicurl "http://10.2.0.196:8080/v1/AUTH_\$(tenant_id)s"

The substitution of the tenant_id into the template occurs when the
request is processed, not when the endpoint is defined. All tenants
share the same endpoint, so it stands to reason that they use different
storage URLs according to their IDs.

What document are we discussing here?
You mentioned "page 65", but I do not know where to look to verify.
A URL would help.

-- Pete

From oblaut at redhat.com  Tue Jul 9 05:56:07 2013
From: oblaut at redhat.com (Ofer Blaut)
Date: Tue, 9 Jul 2013 01:56:07 -0400 (EDT)
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <1B786413-05BF-497B-BA40-213196FD0053@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com> <1788376591.525551.1373307592387.JavaMail.root@redhat.com> <1B786413-05BF-497B-BA40-213196FD0053@redhat.com>
Message-ID: <1118190407.708791.1373349367308.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Maru Newby" 
> To: "Ofer Blaut" 
> Cc: "Perry Myers" , "rhos-list" , "Robert Kukura" ,
> "Maru Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris
> Wright" , "Terry Wilson" 
> Sent: Monday, July 8, 2013 10:28:01 PM
> Subject: Re: Quantum security group egress
> 
> 
> On Jul 8, 2013, at 2:19 PM, Ofer Blaut wrote:
> 
> > 
> > 
> > ----- Original Message -----
> >> From: "Maru Newby" 
> >> To: "Perry Myers" 
> >> Cc: "Ofer Blaut" , "rhos-list" ,
> >> "Robert Kukura" , "Maru
> >> Newby" , "Brent Eagles" , "Ryan
> >> O'Hara" , "Chris Wright"
> >> , "Terry Wilson" 
> >> Sent: Monday, July 8, 2013 8:45:10 PM
> >> Subject: Re: Quantum security group egress
> >> 
> >> 
> >> On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:
> >> 
> >>> On 07/04/2013 07:03 AM, Ofer Blaut wrote:
> >>>> Hi
> >>>> 
> >>>> By default egress security group is allow all.
> >>>> 
> >>>> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
> >>>> 
> >>>> Since there are no deny actions, i expect once first egress rule is
> >>>> applied, all other traffic will be dropped
> >>>> 
> >>>> I have tried it with add SSH to egress still ping worked
> >>>> 
> >>>> http://pastebin.test.redhat.com/150744
> >>> 
> >>> Ofer,
> >>> 
> >>> So what you're saying is that there should be a deny all rule added once
> >>> the user adds the first real egress rule.
> >> 
> >> If a user wants to manage egress traffic, the first step is removing the
> >> default 'allow all egress' rule. This is by design, and there would need
> >> to be a good reason (convenience is not it) for it to be changed.
> >> 
> >> 
> >> m.
> > Hi Maru
> > 
> > Ingress security group works the same as CISCO ACLs and other products
> > 
> > http://www.cisco.com/en/US/docs/ios/12_2/security/configuration/guide/scfacls.html
> > " The Implied "Deny All Traffic" Criteria Statement
> > At the end of every access list is an implied "deny all traffic" criteria
> > statement. Therefore, if a packet does not match any of your criteria
> > statements, the packet will be blocked.
> > Note For most protocols, if you define an inbound access list for traffic
> > filtering, you should include explicit access list criteria statements to
> > permit routing updates. If you do not, you might effectively lose
> > communication from the interface when routing updates are blocked by the
> > implicit "deny all traffic" statement at the end of the access list. "
> > 
> 
> You are correct, and the implication is that it is only possible to create
> a security group rule that allows traffic, not one that denies traffic.
> 
> 
> > I didn't find any info about what should be expected in blueprints
> 
> The purpose of security groups is described in the opening sentence in the
> docs [1] (emphasis mine):
> 
> "Security groups and security group rules allows administrators and tenants
> the ability to specify the type of traffic and direction (ingress/egress)
> that is ALLOWED to pass through a port."
> 
> 
> > 
> > Below is an example of egress and ingress rules of the default security-group;
> > how can one tell the difference between ingress (deny all) and egress (allow all)?
> > The output seems the same.
> 
> Security groups work the same for both ingress and egress.  If traffic
> matches a security group rule, it is passed.  If it does not match a rule,
> it is dropped (default deny).  Given this, the examples you have below are
> both 'allow all', since 'deny' rules are not supported.
> 
> 
> m.

Hi Maru

Thanks for your explanation

When creating a new tenant and network, a new default security group is
created. Both ingress and egress rules contain the same values as in the
example below (please check the rule attributes).

The ingress rule is acting as deny all, and the egress rule as allow all.

You can "see" it by checking quantum security-group-rule-show; both rules
are identical (port_range_max = none, port_range_min = none, protocol = none).

IMHO we should remove all default ingress rules because they are misleading

Thanks

Ofer

> 
> [1]: http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
> 
> > ofer
> > 
> > 
> > [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-list
> > +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> > | id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
> > +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> > | 80dd52fb-80b4-4a26-94ae-4e052f128fef | default        | egress    |          |                  |              |
> > | 9acac0e0-9900-4d42-8ce9-813bd80592d3 | default        | ingress   |          |                  | default      |
> > | 9bbe9a67-2b35-443b-ad3a-8338ede82fcd | default        | ingress   |          |                  | default      |
> > | abc1bfa4-0a35-482f-a8ce-4443e3444532 | default        | egress    | tcp      |                  | default      |
> > | fdb1f4c9-2260-450f-b4e7-196564870d79 | default        | egress    |          |                  |              |
> > +--------------------------------------+----------------+-----------+----------+------------------+--------------+
> > 
> > [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 9acac0e0-9900-4d42-8ce9-813bd80592d3
> > +-------------------+--------------------------------------+
> > | Field             | Value                                |
> > +-------------------+--------------------------------------+
> > | direction         | ingress                              |
> > | ethertype         | IPv4                                 |
> > | id                | 9acac0e0-9900-4d42-8ce9-813bd80592d3 |
> > | port_range_max    |                                      |
> > | port_range_min    |                                      |
> > | protocol          |                                      |
> > | remote_group_id   | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> > | remote_ip_prefix  |                                      |
> > | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> > | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> > +-------------------+--------------------------------------+
> > [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 80dd52fb-80b4-4a26-94ae-4e052f128fef
> > +-------------------+--------------------------------------+
> > | Field             | Value                                |
> > +-------------------+--------------------------------------+
> > | direction         | egress                               |
> > | ethertype         | IPv4                                 |
> > | id                | 80dd52fb-80b4-4a26-94ae-4e052f128fef |
> > | port_range_max    |                                      |
> > | port_range_min    |                                      |
> > | protocol          |                                      |
> > | remote_group_id   |                                      |
> > | remote_ip_prefix  |                                      |
> > | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
> > | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
> > +-------------------+--------------------------------------+
> > 
> > 
> >> 
> >>> 
> >>> Otherwise the egress rules serve no purpose really...
> >>> 
> >>> That seems to make sense to me.  What do the neutron folks think?
> >>> 
> >>> Perry
> >> 
> >> 

From marun at redhat.com  Tue Jul 9 15:26:30 2013
From: marun at redhat.com (Maru Newby)
Date: Tue, 9 Jul 2013 11:26:30 -0400
Subject: [rhos-list] Quantum security group egress
In-Reply-To: <1118190407.708791.1373349367308.JavaMail.root@redhat.com>
References: <146367725.5705675.1372935798045.JavaMail.root@redhat.com> <51DAF7DA.6010007@redhat.com> <1788376591.525551.1373307592387.JavaMail.root@redhat.com> <1B786413-05BF-497B-BA40-213196FD0053@redhat.com> <1118190407.708791.1373349367308.JavaMail.root@redhat.com>
Message-ID: <9935D785-1CDF-4ECE-89F8-5FCB60F202FE@redhat.com>

On Jul 9, 2013, at 1:56 AM, Ofer Blaut wrote:

> 
> 
> ----- Original Message -----
>> From: "Maru Newby" 
>> To: "Ofer Blaut" 
>> Cc: "Perry Myers" , "rhos-list" , "Robert Kukura" ,
>> "Maru Newby" , "Brent Eagles" , "Ryan O'Hara" , "Chris
>> Wright" , "Terry Wilson" 
>> Sent: Monday, July 8, 2013 10:28:01 PM
>> Subject: Re: Quantum security group egress
>> 
>> 
>> On Jul 8, 2013, at 2:19 PM, Ofer Blaut wrote:
>> 
>>> 
>>> 
>>> ----- Original Message -----
>>>> From: "Maru Newby" 
>>>> To: "Perry Myers" 
>>>> Cc: "Ofer Blaut" , "rhos-list" ,
>>>> "Robert Kukura" , "Maru
>>>> Newby" , "Brent Eagles" , "Ryan
>>>> O'Hara" , "Chris Wright"
>>>> , "Terry Wilson" 
>>>> Sent: Monday, July 8, 2013 8:45:10 PM
>>>> Subject: Re: Quantum security group egress
>>>> 
>>>> 
>>>> On Jul 8, 2013, at 1:33 PM, Perry Myers wrote:
>>>> 
>>>>> On 07/04/2013 07:03 AM, Ofer Blaut wrote:
>>>>>> Hi
>>>>>> 
>>>>>> By default egress security group is allow all.
>>>>>> 
>>>>>> http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
>>>>>> 
>>>>>> Since there are no deny actions, i expect once first egress rule is
>>>>>> applied, all other traffic will be dropped
>>>>>> 
>>>>>> I have tried it with add SSH to egress still ping worked
>>>>>> 
>>>>>> http://pastebin.test.redhat.com/150744
>>>>> 
>>>>> Ofer,
>>>>> 
>>>>> So what you're saying is that there should be a deny all rule added once
>>>>> the user adds the first real egress rule.
>>>> 
>>>> If a user wants to manage egress traffic, the first step is removing the
>>>> default 'allow all egress' rule. This is by design, and there would need
>>>> to be a good reason (convenience is not it) for it to be changed.
>>>> 
>>>> 
>>>> m.
>>> Hi Maru
>>> 
>>> Ingress security group works the same as CISCO ACLs and other products
>>> 
>>> http://www.cisco.com/en/US/docs/ios/12_2/security/configuration/guide/scfacls.html
>>> " The Implied "Deny All Traffic" Criteria Statement
>>> At the end of every access list is an implied "deny all traffic" criteria
>>> statement. Therefore, if a packet does not match any of your criteria
>>> statements, the packet will be blocked.
>>> Note For most protocols, if you define an inbound access list for traffic
>>> filtering, you should include explicit access list criteria statements to
>>> permit routing updates. If you do not, you might effectively lose
>>> communication from the interface when routing updates are blocked by the
>>> implicit "deny all traffic" statement at the end of the access list. "
>>> 
>> 
>> You are correct, and the implication is that it is only possible to create
>> a security group rule that allows traffic, not one that denies traffic.
>> 
>> 
>>> I didn't find any info about what should be expected in blueprints
>> 
>> The purpose of security groups is described in the opening sentence in the
>> docs [1] (emphasis mine):
>> 
>> "Security groups and security group rules allows administrators and tenants
>> the ability to specify the type of traffic and direction (ingress/egress)
>> that is ALLOWED to pass through a port."
>> 
>> 
>>> 
>>> Below is an example of egress and ingress rules of the default security-group;
>>> how can one tell the difference between ingress (deny all) and egress (allow all)?
>>> The output seems the same.
>> 
>> Security groups work the same for both ingress and egress.  If traffic
>> matches a security group rule, it is passed.  If it does not match a rule,
>> it is dropped (default deny).  Given this, the examples you have below are
>> both 'allow all', since 'deny' rules are not supported.
>> 
>> 
>> m.
> Hi Maru
> 
> Thanks for your explanation
> 
> When creating a new tenant and network, a new default security group is
> created. Both ingress and egress rules contain the same values as in the
> example below (please check the rule attributes).
> 
> The ingress rule is acting as deny all, and the egress rule as allow all.
> 
> You can "see" it by checking quantum security-group-rule-show; both rules
> are identical (port_range_max = none, port_range_min = none, protocol = none).
> 
> IMHO we should remove all default ingress rules because they are misleading

The rules are not identical.  On the ingress rule, the remote_group_id
field matches the id of the default security group.  On the egress rule,
remote_group_id is empty.

A rule for which remote_group_id is set will allow traffic only from ports
associated with the specified security group.  This implies that the
default ingress rule allows ingress traffic between VMs that are associated
with the default security group, and combined with the 'allow all' egress
rule, ensures that inter-VM communication is possible by default.

m.

> 
> Thanks
> 
> Ofer
> 
> 
>> 
>> [1]: http://docs.openstack.org/trunk/openstack-network/admin/content/securitygroups.html
>> 
>>> ofer
>>> 
>>> 
>>> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-list
>>> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
>>> | id                                   | security_group | direction | protocol | remote_ip_prefix | remote_group |
>>> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
>>> | 80dd52fb-80b4-4a26-94ae-4e052f128fef | default        | egress    |          |                  |              |
>>> | 9acac0e0-9900-4d42-8ce9-813bd80592d3 | default        | ingress   |          |                  | default      |
>>> | 9bbe9a67-2b35-443b-ad3a-8338ede82fcd | default        | ingress   |          |                  | default      |
>>> | abc1bfa4-0a35-482f-a8ce-4443e3444532 | default        | egress    | tcp      |                  | default      |
>>> | fdb1f4c9-2260-450f-b4e7-196564870d79 | default        | egress    |          |                  |              |
>>> +--------------------------------------+----------------+-----------+----------+------------------+--------------+
>>> 
>>> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 9acac0e0-9900-4d42-8ce9-813bd80592d3
>>> +-------------------+--------------------------------------+
>>> | Field             | Value                                |
>>> +-------------------+--------------------------------------+
>>> | direction         | ingress                              |
>>> | ethertype         | IPv4                                 |
>>> | id                | 9acac0e0-9900-4d42-8ce9-813bd80592d3 |
>>> | port_range_max    |                                      |
>>> | port_range_min    |                                      |
>>> | protocol          |                                      |
>>> | remote_group_id   | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
>>> | remote_ip_prefix  |                                      |
>>> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
>>> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
>>> +-------------------+--------------------------------------+
>>> [root at puma04 ~(keystone_admin_tenant1)]$quantum security-group-rule-show 80dd52fb-80b4-4a26-94ae-4e052f128fef
>>> +-------------------+--------------------------------------+
>>> | Field             | Value                                |
>>> +-------------------+--------------------------------------+
>>> | direction         | egress                               |
>>> | ethertype         | IPv4                                 |
>>> | id                | 80dd52fb-80b4-4a26-94ae-4e052f128fef |
>>> | port_range_max    |                                      |
>>> | port_range_min    |                                      |
>>> | protocol          |                                      |
>>> | remote_group_id   |                                      |
>>> | remote_ip_prefix  |                                      |
>>> | security_group_id | b0dc4c19-cb7f-4d08-a2d4-e315a4169f09 |
>>> | tenant_id         | fdf9ecab37d340e98b915bcd32504621     |
>>> +-------------------+--------------------------------------+
>>> 
>>> 
>>>> 
>>>>> 
>>>>> Otherwise the egress rules serve no purpose really...
>>>>> 
>>>>> That seems to make sense to me.  What do the neutron folks think?
>>>>> 
>>>>> Perry
>>>> 
>>>> 

From lchristoph at arago.de  Tue Jul 9 17:25:52 2013
From: lchristoph at arago.de (Lutz Christoph)
Date: Tue, 9 Jul 2013 17:25:52 +0000
Subject: [rhos-list] quantum router-gateway-set using deliberate IP address
Message-ID: <131275537ff34b36b26ce81f077acd08@DB3PR07MB010.eurprd07.prod.outlook.com>

Hi!
quantum router-gateway-set "service router" 4b264867-a7ad-4910-8bbd-2e74a6853bb3 gives me this port: +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | edbf831f-3924-4907-a51c-78d785b9c51d | | fa:16:3e:97:43:af | {"subnet_id": "b6b77735-a7be-4703-801c-ce5e6961998f", "ip_address": "192.168.101.2"} | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ (Internal port deleted from the output) I need to use a specific address for this router prescribed by our network plan. How can I specify this or change the 192.168.101.2? Googling turned up only instructions that use the default. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcheng at salesforce.com Tue Jul 9 17:47:52 2013 From: gcheng at salesforce.com (Guolin Cheng) Date: Tue, 9 Jul 2013 10:47:52 -0700 Subject: [rhos-list] router interface always in DOWN status Message-ID: evaluation openstack environment is setup with packstack 'all-in-one' installation method with quantum networking selected. After reboot I created network/subnets, routers and attach router to subnets, then create instances. instances on the the same subnets(VLAN) can talk to each other, but instances on diff subnets(VLAN) can not talk with each other via the internal router between. I am afraid that this is because the router's interfaces are in DOWN status all the time. The setup is simple as: subnet1(vlan1) <-> router <-> subnet2(vlan2) Any ideas? Thanks. 
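One thing to rule out first, offered as a general pointer rather than a diagnosis of this particular install: with namespaces enabled, the router's qr- interfaces exist only inside the l3-agent's qrouter-<ROUTER_UUID> namespace, and the port will sit in DOWN if quantum-l3-agent is not running at all. A quick check, assuming the iproute/namespace tooling that ships with RDO Grizzly (the router UUID is the device_id in the port-show output below):

  service quantum-l3-agent status
  ip netns list
  ip netns exec qrouter-c7a1545a-38a0-4414-ad01-065de632016d ip addr show

If the namespace is missing, the agent never plugged the port, which matches a permanently DOWN status.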
(quantum) router-port-list router01 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ | 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 | | fa:16:3e:55:28:08 | {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.1"} | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ (quantum) port-show 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 +----------------------+-----------------------------------------------------------------------------------+ | Field | Value | +----------------------+-----------------------------------------------------------------------------------+ | admin_state_up | True | | binding:capabilities | {"port_filter": true} | | binding:vif_type | ovs | | device_id | c7a1545a-38a0-4414-ad01-065de632016d | | device_owner | network:router_interface | | fixed_ips | {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.1"} | | id | 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 | | mac_address | fa:16:3e:55:28:08 | | name | | | network_id | 6dff9ce0-735e-4536-89f1-b175a763470b | | security_groups | | | status | DOWN | | tenant_id | 4d4579565933453494c9bf25c3a5a847 | +----------------------+-----------------------------------------------------------------------------------+ (quantum) --Guolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gcheng at salesforce.com Tue Jul 9 17:57:18 2013 From: gcheng at salesforce.com (Guolin Cheng) Date: Tue, 9 Jul 2013 10:57:18 -0700 Subject: [rhos-list] router interface always in DOWN status In-Reply-To: References: Message-ID: interestingly, the MAC address of the router is not shown in the ifconfig output of the holding host: 172.17.0.1 is the router's interface, 172.17.0.3 is the dhcp port, 172.17.0.2 is an instance. Note: the quantum's port-list reports a slightly different MAC address: the first byte is 'fa', while 'ifconfig -a' on the holding host reports the first byte as 'FE' for some ports, not sure why this happens neither. Please shed a light if possible. Thanks. 
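On the fa:/fe: discrepancy, an explanation from general KVM/libvirt behavior rather than anything verified on this host: libvirt creates each guest tap device with the guest's MAC but flips the first octet to 0xFE so the bridge never adopts the guest's own fa:16:3e:... address as its own; inside the guest the MAC still reads fa:16:3e:.... The ns-* DHCP port keeps FA because it is created locally by the dhcp-agent, not as a guest tap. A rough way to line kernel devices up against quantum ports is to match on the MAC tail (the ifconfig greps in the output below do essentially this):

  ip link show | grep -i -B1 '3e:55:28:08'

If that turns up nothing in the root namespace, repeating it inside the qrouter namespace mentioned above is the next step.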
[root at evaluation01 (keystone_admin)]# quantum port-list +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ | 38273477-a174-4e4e-b0d5-308bc7f06244 | | fa:16:3e:53:54:1a | {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.2"} | | 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 | | fa:16:3e:55:28:08 | {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.1"} | | 6606b162-44d0-4fdd-9a59-78ce5ad64a32 | | fa:16:3e:f5:fa:28 | {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.3"} | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ [root at evaluation01 (keystone_admin)]# ifconfig -a | grep -i 54:1a tap38273477-a1 Link encap:Ethernet HWaddr FE:16:3E:53:54:1A [root at evaluation01 (keystone_admin)]# ifconfig -a | grep -i 28:08 [root at evaluation01 (keystone_admin)]# ifconfig -a | grep -i fa:28 ns-6606b162-44 Link encap:Ethernet HWaddr FA:16:3E:F5:FA:28 [root at evaluation01 (keystone_admin)]# On Tue, Jul 9, 2013 at 10:47 AM, Guolin Cheng wrote: > evaluation openstack environment is setup with packstack 'all-in-one' > installation method with quantum networking selected. > > After reboot I created network/subnets, routers and attach router to > subnets, then create instances. > > instances on the the same subnets(VLAN) can talk to each other, but > instances on diff subnets(VLAN) can not talk with each other via the > internal router between. > > I am afraid that this is because the router's interfaces are in DOWN > status all the time. > > The setup is simple as: > > subnet1(vlan1) <-> router <-> subnet2(vlan2) > > Any ideas? Thanks. 
> > (quantum) router-port-list router01 > > +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ > | id | name | mac_address | > fixed_ips > | > > +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ > | 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 | | fa:16:3e:55:28:08 | > {"subnet_id": "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": > "172.17.0.1"} | > > +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------+ > (quantum) port-show 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 > > +----------------------+-----------------------------------------------------------------------------------+ > | Field | > Value > | > > +----------------------+-----------------------------------------------------------------------------------+ > | admin_state_up | > True > | > | binding:capabilities | {"port_filter": > true} | > | binding:vif_type | > ovs > | > | device_id | > c7a1545a-38a0-4414-ad01-065de632016d > | > | device_owner | > network:router_interface > | > | fixed_ips | {"subnet_id": > "bf217534-4be8-4d35-8e28-5d671dfa7b33", "ip_address": "172.17.0.1"} | > | id | > 3adbd1dd-6b30-4e87-bec2-ce208fd2cd07 > | > | mac_address | > fa:16:3e:55:28:08 > | > | name > | > | > | network_id | > 6dff9ce0-735e-4536-89f1-b175a763470b > | > | security_groups > | > | > | status | > DOWN > | > | tenant_id | > 4d4579565933453494c9bf25c3a5a847 > | > > +----------------------+-----------------------------------------------------------------------------------+ > (quantum) > > --Guolin > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lchristoph at arago.de Tue Jul 9 19:14:51 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 9 Jul 2013 19:14:51 +0000 Subject: [rhos-list] quantum router-gateway-set using deliberate IP address In-Reply-To: <131275537ff34b36b26ce81f077acd08@DB3PR07MB010.eurprd07.prod.outlook.com> References: <131275537ff34b36b26ce81f077acd08@DB3PR07MB010.eurprd07.prod.outlook.com> Message-ID: <2b12f72bcfc945e3a80fe18c45255210@DB3PR07MB010.eurprd07.prod.outlook.com> Hello! Found out how to fix that gateway address to a set of addresses, potentially just one - use the allocation pool normally used for DHCP. And set --enable_dhcp=False, we don't want to have that interfering. E.g. quantum subnet-create ext_net --allocation-pool start=7.7.7.130,end=7.7.7.150 \ --gateway 7.7.7.1 7.7.7.0/24 -- --enable_dhcp=False quantum router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID (Snarfed from http://docs.openstack.org/trunk/openstack-network/admin/content/demo_logical_network_config.html ) Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. 
DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________ Von: rhos-list-bounces at redhat.com im Auftrag von Lutz Christoph Gesendet: Dienstag, 9. Juli 2013 19:25 An: rhos-list Betreff: [rhos-list] quantum router-gateway-set using deliberate IP address Hi! quantum router-gateway-set "service router" 4b264867-a7ad-4910-8bbd-2e74a6853bb3 gives me this port: +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | edbf831f-3924-4907-a51c-78d785b9c51d | | fa:16:3e:97:43:af | {"subnet_id": "b6b77735-a7be-4703-801c-ce5e6961998f", "ip_address": "192.168.101.2"} | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ (Internal port deleted from the output) I need to use a specific address for this router prescribed by our network plan. How can I specify this or change the 192.168.101.2? Googling turned up only instructions that use the default. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Wed Jul 10 05:07:21 2013 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 10 Jul 2013 15:07:21 +1000 Subject: [rhos-list] br-ex bridge and persistent MAC address Message-ID: <20130710050721.GB2167@fedora19> Hi, I installed RDO on RHEL, and have br-ex and br-int bridges setup by Quantum I presume. This mini-server only has one NIC, so I want to put em1 into the br-ex bridge and give the bridge the same address on the network. So I tried the following --- /etc/sysconfig/nework-scripts/ifcfg-br-ex DEVICE=br-ex ONBOOT=no DEVICETYPE=ovs TYPE=OVSBridge OVSBOOTPROTO=dhcp OVSDHCPINTERFACES=em1 OVS_EXTRA="set Bridge br-ex other-config:hwaddr=\"xx:xx\"" --- --- /etc/sysconfig/network-scripts/ifcfg-em1 --- DEVICE="em1" BOOTPROTO="none" DEVICETYPE="ovs TYPE=OVSPort OVS_BRIDGE=br-ex NM_CONTROLLED=no ONBOOT="yes" --- so "ifup em1" brings the physical NIC device up, automatically brings the bridge up and gets a dhcp address -- the problem being the MAC address of br-ex is constantly changing so my dhcp server can't assign it the right address. 
I can see that it knows about it the hwaddr config option --- # ovs-vsctl list bridge br-ex _uuid : 067c69b1-fed6-49ec-9180-b3d9e3c631aa controller : [] datapath_id : "000026b0a55452c8" datapath_type : "" external_ids : {} fail_mode : [] flood_vlans : [] flow_tables : {} mirrors : [] name : br-ex netflow : [] other_config : {hwaddr="26:b0:a5:54:52:c8"} ports : [25a0c131-b259-4a2d-94a8-ed8ac95323cd, 49e60140-868b-42dd-bbc3-9431796668c1] protocols : [] sflow : [] status : {} stp_enable : false --- but yet the MAC address remains randomly generated --- # ifconfig br-ex br-ex Link encap:Ethernet HWaddr BA:41:5A:A5:33:82 inet addr:192.168.0.22 Bcast:192.168.0.255 Mask:255.255.255.0 --- Any suggestions welcome! -i From prmarino1 at gmail.com Wed Jul 10 16:08:38 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Wed, 10 Jul 2013 12:08:38 -0400 Subject: [rhos-list] Gluster Swift Question Message-ID: Im trying to get Gluster UFO to work with RDO Grizzly on EL 6 I configured it per the documentation here http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ but when I try to stat the proxy using the following command " swift -V 2.0 -A http://keystonehost.my.domain:5000/v2.0 -U admin:Admin -K 'password' stat " the account server seems to have errors. I get a stream of errors like so in syslog and a bunch of simmilar messaged on the console " account-server ERROR __call__ error with HEAD /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828 : #012Traceback (most recent call last):#012 File "/usr/lib/python2.6/site-packages/swift/account/server.py", line 377, in __call__#012 res = method(req)#012 File "/usr/lib/python2.6/site-packages/swift/common/utils.py", line 1350, in wrapped#012 return func(*a, **kw)#012 File "/usr/lib/python2.6/site-packages/swift/account/server.py", line 177, in HEAD#012 broker = self._get_account_broker(drive, part, account)#012 File "/usr/lib/python2.6/site-packages/gluster/swift/account/server.py", line 38, in _get_account_broker#012 return DiskAccount(self.root, drive, account, self.logger)#012 File "/usr/lib/python2.6/site-packages/gluster/swift/common/DiskDir.py", line 419, in __init__#012 assert self.dir_exists#012AssertionError (txn: tx7a4fec44585649ca83f34b6c57bc1667) account-server xxx.xxx.xxx.xxx - - [10/Jul/2013:15:48:52 +0000] "HEAD /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828" 500 713 "tx7a4fec44585649ca83f34b6c57bc1667" "-" "-" 0.0008 "" >swift ERROR 500 From Account Server xxx.xxx.xxx.xxx:6012 (txn: tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: yyy.yyy.yyy.yyy) swift Account HEAD returning 503 for [500] (txn: tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: xxx.xxx.xxx.xxx) " does any one have any ideas about what might be causing it? 
here are the current RPMs for gluster and swift I have installed glusterfs-swift-3.3.1-15.el6.noarch glusterfs-swift-container-3.3.1-15.el6.noarch glusterfs-rdma-3.3.1-15.el6.x86_64 glusterfs-3.3.1-15.el6.x86_64 glusterfs-swift-proxy-3.3.1-15.el6.noarch glusterfs-swift-object-3.3.1-15.el6.noarch glusterfs-fuse-3.3.1-15.el6.x86_64 glusterfs-server-3.3.1-15.el6.x86_64 glusterfs-geo-replication-3.3.1-15.el6.x86_64 glusterfs-swift-account-3.3.1-15.el6.noarch glusterfs-vim-3.2.7-1.el6.x86_64 glusterfs-ufo-3.3.1-15.el6.noarch from the repo here http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-6/ From prmarino1 at gmail.com Wed Jul 10 16:18:12 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Wed, 10 Jul 2013 12:18:12 -0400 Subject: [rhos-list] Gluster Swift Question In-Reply-To: References: Message-ID: correction I just solved my Issue it looks like I needed to add an fstab entry for the volume which resolved the issue On Wed, Jul 10, 2013 at 12:08 PM, Paul Robert Marino wrote: > Im trying to get Gluster UFO to work with RDO Grizzly on EL 6 > > I configured it per the documentation here > http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ > > > but when I try to stat the proxy using the following command > " > swift -V 2.0 -A http://keystonehost.my.domain:5000/v2.0 -U admin:Admin > -K 'password' stat > " > the account server seems to have errors. > > I get a stream of errors like so in syslog and a bunch of simmilar > messaged on the console > > " > account-server ERROR __call__ error with HEAD > /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828 > : #012Traceback (most recent call last):#012 File > "/usr/lib/python2.6/site-packages/swift/account/server.py", line 377, > in __call__#012 res = method(req)#012 File > "/usr/lib/python2.6/site-packages/swift/common/utils.py", line 1350, > in wrapped#012 return func(*a, **kw)#012 File > "/usr/lib/python2.6/site-packages/swift/account/server.py", line 177, > in HEAD#012 broker = self._get_account_broker(drive, part, > account)#012 File > "/usr/lib/python2.6/site-packages/gluster/swift/account/server.py", > line 38, in _get_account_broker#012 return DiskAccount(self.root, > drive, account, self.logger)#012 File > "/usr/lib/python2.6/site-packages/gluster/swift/common/DiskDir.py", > line 419, in __init__#012 assert self.dir_exists#012AssertionError > (txn: tx7a4fec44585649ca83f34b6c57bc1667) > > account-server xxx.xxx.xxx.xxx - - [10/Jul/2013:15:48:52 +0000] "HEAD > /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828" > 500 713 "tx7a4fec44585649ca83f34b6c57bc1667" "-" "-" 0.0008 "" >>swift ERROR 500 From Account Server xxx.xxx.xxx.xxx:6012 (txn: tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: yyy.yyy.yyy.yyy) > swift Account HEAD returning 503 for [500] (txn: > tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: xxx.xxx.xxx.xxx) > " > does any one have any ideas about what might be causing it? 
> > > here are the current RPMs for gluster and swift I have installed > > glusterfs-swift-3.3.1-15.el6.noarch > glusterfs-swift-container-3.3.1-15.el6.noarch > glusterfs-rdma-3.3.1-15.el6.x86_64 > glusterfs-3.3.1-15.el6.x86_64 > glusterfs-swift-proxy-3.3.1-15.el6.noarch > glusterfs-swift-object-3.3.1-15.el6.noarch > glusterfs-fuse-3.3.1-15.el6.x86_64 > glusterfs-server-3.3.1-15.el6.x86_64 > glusterfs-geo-replication-3.3.1-15.el6.x86_64 > glusterfs-swift-account-3.3.1-15.el6.noarch > glusterfs-vim-3.2.7-1.el6.x86_64 > glusterfs-ufo-3.3.1-15.el6.noarch > > from the repo here > http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-6/ From Michael.Luksch at lcsystems.at Thu Jul 11 10:09:35 2013 From: Michael.Luksch at lcsystems.at (Michael Luksch) Date: Thu, 11 Jul 2013 10:09:35 +0000 Subject: [rhos-list] iptables checksum fix for DHCP with quantum/neutron (grizzly) Message-ID: Hello, there is/was this infamous bug in that DHCP reply packets have an incorrect checksum when sent from the KVM hypervisor host to a VM using the virtio network adapter type. As a result the DHCP client drops the response, and never sets the offered IP address. As a general "fix" for this an iptables mangle rule is used which sets a "correct" checksum by using the CHECKSUM target. As an example how common this fix is, just have a look on an default rhel/centos install of KVM. As you create a network with the virsh/virt-manager tools a rule like the following will be added: iptables -t mangle -A POSTROUTING -o -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill It had been reported to and implemented in openstack as well. (+1 year ago) see [1] The fix is in nova/network/linux_net.py [2] But in my install this rule is never created, and as such my VMs (cirros/other busybox-udhcpc based ones) failed in acquiring IPs via DHCP. I wonder if the code in nova/network/linux_net.py is ever called when using quantum/neutron. If not I guess a bug has to be created / re-opened. Or is the problem as a whole seen as fixed by using either other virtual NICs (performance?) or other DHCP-clients? If last sentence is true, what would be the "best" place to have my individual iptables-rules applied dynamically in the right namespace? I am using quantum with openvswitch-plugin and provider-network mapped VLANs in a multi-host environment. My dnsmasq instances are running in a separate namespace. ATM im not using any l3-agent at all. I had to add the iptables checksum rule by hand inside the dnsmasq namespace. I am using: CentOS 6.4 Grizzly ( 2013.1.2 ) Kernel 2.6.32-358.6.2.openstack.el6.x86_64 Dnsmasq 2.65 [1] https://bugzilla.redhat.com/show_bug.cgi?id=910619#c6 [2] https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L902 Thanks. Michael Luksch From Hao.Chen at NRCan-RNCan.gc.ca Thu Jul 11 17:09:14 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Thu, 11 Jul 2013 17:09:14 +0000 Subject: [rhos-list] Swift validation error In-Reply-To: <20130708153406.05bc3c77@lembas.zaitcev.lan> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> Thanks Pete. The document is Red Hat OpenStack Red Hat OpenStack 3.0 (Grizzly) Preview Installation and Configuration Guide. 
Page 63:

(1) When running "swift-ring-builder /etc/swift/object_builder_file create part_power replica_count min_part_hours", the name of "object_builder_file" has to be "object", same as "account" and "container". Any different names will cause openstack-swift-proxy to stop working.

(2) When running "swift-ring-builder /etc/swift/account_builder_file add zX-node_ip_address:600Y/device_mountpoint partition_count", if using an IP address other than 127.0.0.1, "bind_ip" in account-server.conf/container-server.conf/object-server.conf has to be changed accordingly.

(3) An error occurred when running

[root at cloud1 ~(keystone_admin)]# head -c 1024 /dev/urandom > data.file ; swift upload cotainer data.file
Error trying to create container 'cotainer': 503 Internal Server Error: <html><h1>Service Unavailable</h1><p>The server is currently
Object PUT failed: http://10.2.0.196:8080/v2.0/AUTH_35caeda3b4d84d1582e675c2f871a00c/cotainer/data.file 503 Service Unavailable [first 60 chars of response] <html><h1>Service Unavailable</h1><p>
The server is currently But "c1" has been created. It was able to create the container but failed to load the file to the container. [root at cloud1 ~(keystone_admin)]# swift list c1 [root at cloud1 ~(keystone_admin)]# swift list c1 Thanks in advance for any advise and suggestions. Hao -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Pete Zaitcev Sent: July 8, 2013 14:34 To: Chen, Hao Cc: rhos-list at redhat.com Subject: Re: [rhos-list] Swift validation error On Mon, 8 Jul 2013 17:55:19 +0000 "Chen, Hao" wrote: > (3) Followed Page 65 and set up endpoints with service ID and tenant > ID # keystone endpoint-create --service_id 1ec51ed71b8941fe9151ae5c8a2819bd --publicurl "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s" This is clearly malformed and should be --publicurl "http://10.2.0.196:8080/v1/AUTH_\$(tenant_id)s" The substitution of the tenant_id into the template occurs when the requiest is processed, not when endpoint is defined. All tenants share the same endpoint, so it stands to reason that they use different storage URLs accoring to their IDs. What document are we discussing here? You mentioned "page 65", but I do not know where to look to verify. A URL would help. -- Pete _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From sgordon at redhat.com Thu Jul 11 17:29:10 2013 From: sgordon at redhat.com (Steve Gordon) Date: Thu, 11 Jul 2013 13:29:10 -0400 (EDT) Subject: [rhos-list] Swift validation error In-Reply-To: <20130708153406.05bc3c77@lembas.zaitcev.lan> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> Message-ID: <949588234.3282252.1373563750485.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Pete Zaitcev" > To: "Hao Chen" > Cc: rhos-list at redhat.com > Sent: Monday, July 8, 2013 5:34:06 PM > Subject: Re: [rhos-list] Swift validation error > > On Mon, 8 Jul 2013 17:55:19 +0000 > "Chen, Hao" wrote: > > > (3) Followed Page 65 and set up endpoints with service ID and tenant ID > > # keystone endpoint-create --service_id 1ec51ed71b8941fe9151ae5c8a2819bd > > --publicurl > > "http://10.2.0.196:8080/v1/AUTH_\$(15e2cf851e5446e89c5ee7dd38ddd67a)s" > > This is clearly malformed and should be > --publicurl "http://10.2.0.196:8080/v1/AUTH_\$(tenant_id)s" > The substitution of the tenant_id into the template occurs when the > requiest is processed, not when endpoint is defined. All tenants > share the same endpoint, so it stands to reason that they use > different storage URLs accoring to their IDs. > > What document are we discussing here? You mentioned "page 65", but > I do not know where to look to verify. A URL would help. I believe the direct link would be: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/sect-Configuring_the_Object_Storage_Service.html#Configuraing_Keystone_for_the_Object_Storage_Service The relevant extract is: """ Create the swift endpoint entry: $ keystone endpoint-create --service_id SERVICEID \ --publicurl "http://IP:8080/v1/AUTH_\$(tenant_id)s" \ --adminurl "http://IP:8080/v1" \ --internalurl "http://IP:8080/v1/AUTH_\$(tenant_id)s" Replace SERVICEID with the identifier returned by the keystone service-create command. 
Replace IP with the IP address of fully qualified domain name of the system hosting the Object Storage Proxy service. """ Note that in the example as rendered in the actual guide SERVICEID and IP are marked up as replaceable (bold italics), and coupled with the descriptive text suggesting they be replaced. We did not do this with \$(tenant_id) because we did not intend for it to be replaced. Thanks, -- Steve Gordon, RHCE Documentation Lead, Red Hat OpenStack Engineering Content Services Red Hat Canada (Toronto, Ontario) From zaitcev at redhat.com Thu Jul 11 17:42:24 2013 From: zaitcev at redhat.com (Pete Zaitcev) Date: Thu, 11 Jul 2013 11:42:24 -0600 Subject: [rhos-list] Swift validation error In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <20130711114224.7444017e@lembas.zaitcev.lan> On Thu, 11 Jul 2013 17:09:14 +0000 "Chen, Hao" wrote: > Thanks Pete. The document is Red Hat OpenStack Red Hat OpenStack 3.0 (Grizzly) Preview Installation and Configuration Guide. Thanks. I verified that it has the correct syntax for --publicurl. However, I used the currently published version, because it appears we may be reading different versions. > (1) When running "swift-ring-builder /etc/swift/object_builder_file create part_power replica_count min_part_hours", the name of "object_builder_file" has to be "object", same as "account" and "container". Any different names will cause openstack-swift-proxy stop working. Okay, this sounds about correct, if perhaps needlessly complex. However, I do not see any such phraseology in this document: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html What is the URL where you got your version, or if that is not available, what is the version? It has to say something like "Revision 3-29" at the very end of the document. > (2) When running "swift-ring-builder /etc/swift/account_builder_file add zX-node_ip_address:600Y/device_mountpoint partition_count", if using a different IP address other than 127.0.0.1, "bind_ip" in account-server.conf/container-server.conf/object-server.conf has to be changes accordingly. True. Well, default is actually 0.0.0.0, but you need to verify that Packstack or other tool didn't overwrite bind_ip with 127.0.0.1. > (3) An error occurred when > [root at cloud1 ~(keystone_admin)]# head -c 1024 /dev/urandom > data.file ; swift upload cotainer data.file > Error trying to create container 'cotainer': 503 Internal Server Error:

> <html><h1>Service Unavailable</h1><p>The server is currently
> Object PUT failed: http://10.2.0.196:8080/v2.0/AUTH_35caeda3b4d84d1582e675c2f871a00c/cotainer/data.file 503 Service Unavailable [first 60 chars of response]
> <html><h1>Service Unavailable</h1><p>
The server is currently Wait a moment, this can't possibly be right, because prefix is "/v2.0" instead of "/v1" as proper for Swift. Authentication has various versions there, but Swift does not, at least not yet. The authenticated account pattern looks good. > But "c1" has been created. It was able to create the container but failed to load the file to the container. Not sure how that happened. Perhaps it stayed there before changes in Keystone. -- Pete From dneary at redhat.com Thu Jul 11 20:15:15 2013 From: dneary at redhat.com (Dave Neary) Date: Thu, 11 Jul 2013 22:15:15 +0200 Subject: [rhos-list] Gluster Swift Question In-Reply-To: References: Message-ID: <51DF1253.4040107@redhat.com> Hi Paul, Would you care to add an item to the Troubleshooting page on openstack.redhat.com to leave a breadcrumb trail for others coming after you who may have the same issue? Thanks! Dave. On 07/10/2013 06:18 PM, Paul Robert Marino wrote: > correction I just solved my Issue it looks like I needed to add an > fstab entry for the volume which resolved the issue > > On Wed, Jul 10, 2013 at 12:08 PM, Paul Robert Marino > wrote: >> Im trying to get Gluster UFO to work with RDO Grizzly on EL 6 >> >> I configured it per the documentation here >> http://www.gluster.org/2012/09/howto-using-ufo-swift-a-quick-and-dirty-setup-guide/ >> >> >> but when I try to stat the proxy using the following command >> " >> swift -V 2.0 -A http://keystonehost.my.domain:5000/v2.0 -U admin:Admin >> -K 'password' stat >> " >> the account server seems to have errors. >> >> I get a stream of errors like so in syslog and a bunch of simmilar >> messaged on the console >> >> " >> account-server ERROR __call__ error with HEAD >> /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828 >> : #012Traceback (most recent call last):#012 File >> "/usr/lib/python2.6/site-packages/swift/account/server.py", line 377, >> in __call__#012 res = method(req)#012 File >> "/usr/lib/python2.6/site-packages/swift/common/utils.py", line 1350, >> in wrapped#012 return func(*a, **kw)#012 File >> "/usr/lib/python2.6/site-packages/swift/account/server.py", line 177, >> in HEAD#012 broker = self._get_account_broker(drive, part, >> account)#012 File >> "/usr/lib/python2.6/site-packages/gluster/swift/account/server.py", >> line 38, in _get_account_broker#012 return DiskAccount(self.root, >> drive, account, self.logger)#012 File >> "/usr/lib/python2.6/site-packages/gluster/swift/common/DiskDir.py", >> line 419, in __init__#012 assert self.dir_exists#012AssertionError >> (txn: tx7a4fec44585649ca83f34b6c57bc1667) >> >> account-server xxx.xxx.xxx.xxx - - [10/Jul/2013:15:48:52 +0000] "HEAD >> /a07d2f39117c4e5abdeba722cf245828/0/AUTH_a07d2f39117c4e5abdeba722cf245828" >> 500 713 "tx7a4fec44585649ca83f34b6c57bc1667" "-" "-" 0.0008 "" >>> swift ERROR 500 From Account Server xxx.xxx.xxx.xxx:6012 (txn: tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: yyy.yyy.yyy.yyy) >> swift Account HEAD returning 503 for [500] (txn: >> tx7a4fec44585649ca83f34b6c57bc1667) (client_ip: xxx.xxx.xxx.xxx) >> " >> does any one have any ideas about what might be causing it? 
>> >> >> here are the current RPMs for gluster and swift I have installed >> >> glusterfs-swift-3.3.1-15.el6.noarch >> glusterfs-swift-container-3.3.1-15.el6.noarch >> glusterfs-rdma-3.3.1-15.el6.x86_64 >> glusterfs-3.3.1-15.el6.x86_64 >> glusterfs-swift-proxy-3.3.1-15.el6.noarch >> glusterfs-swift-object-3.3.1-15.el6.noarch >> glusterfs-fuse-3.3.1-15.el6.x86_64 >> glusterfs-server-3.3.1-15.el6.x86_64 >> glusterfs-geo-replication-3.3.1-15.el6.x86_64 >> glusterfs-swift-account-3.3.1-15.el6.noarch >> glusterfs-vim-3.2.7-1.el6.x86_64 >> glusterfs-ufo-3.3.1-15.el6.noarch >> >> from the repo here >> http://repos.fedorapeople.org/repos/kkeithle/glusterfs/epel-6/ > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From gcheng at salesforce.com Thu Jul 11 22:11:51 2013 From: gcheng at salesforce.com (Guolin Cheng) Date: Thu, 11 Jul 2013 15:11:51 -0700 Subject: [rhos-list] openstack nova-novncproxy failed when setup ssl encryption. Message-ID: I tried to create a pair of self-signed http cert/key with openssl, and config it in nova.conf then restart the service /etc/init.d/openstack-nova-novncproxy. The service failed immediately with no output. Please shed a light on how to setup it correctly. Thanks a lot. The only changes are below. ... # Disallow non-encrypted connections (boolean value) #ssl_only=false ssl_only=true # Source is ipv6 (boolean value) #source_is_ipv6=false # SSL certificate file (string value) cert=/etc/nova/nova.crt # SSL key file (if separate from cert) (string value) key=/etc/nova/nova.key ... [root at controller nova]# ls -alFrt /etc/nova/nova.key /etc/nova/nova.crt -rw------- 1 nova nova 1704 Jul 11 13:47 /etc/nova/nova.key -rw-r--r-- 1 nova nova 1456 Jul 11 13:47 /etc/nova/nova.crt [root at controller nova]# Thanks. Guolin Thanks. Guolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hao.Chen at NRCan-RNCan.gc.ca Thu Jul 11 23:54:43 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Thu, 11 Jul 2013 23:54:43 +0000 Subject: [rhos-list] Swift validation error In-Reply-To: <20130711114224.7444017e@lembas.zaitcev.lan> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130711114224.7444017e@lembas.zaitcev.lan> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> Hi Pete, Attached the PDF I downloaded through my redhat account. There are the differences between this one and the online document. May have to switch to the online guide. (1) Recreated the endpoints for swift (v1), but the problem still there. [root at cloud1 ~(keystone_admin)]# swift upload c2 data.file Error trying to create container 'c2': 503 Internal Server Error:

<html><h1>Service Unavailable</h1><p>The server is currently
Object PUT failed: http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/data.file 503 Service Unavailable [first 60 chars of response] <html><h1>Service Unavailable</h1><p>The server is currently
DEBUG:swiftclient:RESP STATUS: 503
DEBUG:swiftclient:RESP BODY: <html><h1>Service Unavailable</h1><p>The server is currently unavailable. Please try again at a later time.</p></html>
Object PUT failed: http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/data.file 503 Service Unavailable Looks like still not pointing to the right location or the service is not running? (2) In https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html, it says "The device_mountpoint is the directory under /srv/node/ that your device is mounted at". Should that be "/srv/node/my_directory" or just "my_directory"? Thanks, Hao -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Pete Zaitcev Sent: July 11, 2013 10:42 To: Chen, Hao Cc: rhos-list at redhat.com Subject: Re: [rhos-list] Swift validation error On Thu, 11 Jul 2013 17:09:14 +0000 "Chen, Hao" wrote: > Thanks Pete. The document is Red Hat OpenStack Red Hat OpenStack 3.0 (Grizzly) Preview Installation and Configuration Guide. Thanks. I verified that it has the correct syntax for --publicurl. However, I used the currently published version, because it appears we may be reading different versions. > (1) When running "swift-ring-builder /etc/swift/object_builder_file create part_power replica_count min_part_hours", the name of "object_builder_file" has to be "object", same as "account" and "container". Any different names will cause openstack-swift-proxy stop working. Okay, this sounds about correct, if perhaps needlessly complex. However, I do not see any such phraseology in this document: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html What is the URL where you got your version, or if that is not available, what is the version? It has to say something like "Revision 3-29" at the very end of the document. > (2) When running "swift-ring-builder /etc/swift/account_builder_file add zX-node_ip_address:600Y/device_mountpoint partition_count", if using a different IP address other than 127.0.0.1, "bind_ip" in account-server.conf/container-server.conf/object-server.conf has to be changes accordingly. True. Well, default is actually 0.0.0.0, but you need to verify that Packstack or other tool didn't overwrite bind_ip with 127.0.0.1. > (3) An error occurred when > [root at cloud1 ~(keystone_admin)]# head -c 1024 /dev/urandom > data.file > ; swift upload cotainer data.file Error trying to create container > 'cotainer': 503 Internal Server Error:

> <html><h1>Service Unavailable</h1><p>The server is currently
> Object PUT failed: http://10.2.0.196:8080/v2.0/AUTH_35caeda3b4d84d1582e675c2f871a00c/cotainer/data.file 503 Service Unavailable [first 60 chars of response]
> <html><h1>Service Unavailable</h1><p>
The server is currently Wait a moment, this can't possibly be right, because prefix is "/v2.0" instead of "/v1" as proper for Swift. Authentication has various versions there, but Swift does not, at least not yet. The authenticated account pattern looks good. > But "c1" has been created. It was able to create the container but failed to load the file to the container. Not sure how that happened. Perhaps it stayed there before changes in Keystone. -- Pete _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- A non-text attachment was scrubbed... Name: Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf Type: application/pdf Size: 1665321 bytes Desc: Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf URL: From lchristoph at arago.de Fri Jul 12 08:33:36 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Fri, 12 Jul 2013 08:33:36 +0000 Subject: [rhos-list] tenant_network_type vlan not working on RHEL 6.4/RDO Message-ID: <6aa429b647cb4df98b18cc7522a8af08@DB3PR07MB010.eurprd07.prod.outlook.com> Hi! I have set up a simple test case for VLAN networks - just one Neutron network and one VLAN: /etc/quantum/plugin.ini [DATABASE] sql_connection = mysql://quantum:71884b6791004319 at 192.168.104.62/ovs_quantum sql_max_retries = 10 reconnect_interval = 2 [OVS] integration_bridge=br-int tenant_network_type = vlan enable_tunneling=False network_vlan_ranges = install:40:40 bridge_mappings = install:br-vlan [AGENT] polling_interval = 2 [SECURITYGROUP] firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 531c4059-68ce-4051-a049-f7b715a3aa61 | | name | install-vlan | | provider:network_type | vlan | | provider:physical_network | install | | provider:segmentation_id | 40 | | router:external | False | | shared | False | | status | ACTIVE | | subnets | d961c242-004e-4502-a6bb-59ce3157040c | | tenant_id | 91108a7377204cd78eed2cf22a978475 | +---------------------------+--------------------------------------+ # ovs-vsctl show e10c8833-e54d-4693-a657-b2034c5b244f Bridge br-vlan Port "tapb07358d3-48" Interface "tapb07358d3-48" Port br-vlan Interface br-vlan type: internal Port "eth4" Interface "eth4" Port phy-br-vlan Interface phy-br-vlan Bridge br-int Port "tap3158d4cc-20" tag: 1 Interface "tap3158d4cc-20" Port "tap0fae4e60-f1" tag: 1 Interface "tap0fae4e60-f1" Port "qvo942326c5-68" tag: 2 Interface "qvo942326c5-68" Port br-int Interface br-int type: internal Port int-br-vlan Interface int-br-vlan Port int-br-ex Interface int-br-ex Port "tapd446cc89-be" tag: 2 Interface "tapd446cc89-be" Port "qvo7b22401b-89" tag: 2 Interface "qvo7b22401b-89" Bridge br-ex Port br-ex Interface br-ex type: internal ovs_version: "1.10.0" I created an instance that runs GRML and got it an address (192.168.102.2) and tried to ping the router via the VLAN. The instance does not receive the ARP replies. 
This is a tcpdump from the trunk interface: # tcpdump -n -i eth4 -e arp tcpdump: WARNING: eth4: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth4, link-type EN10MB (Ethernet), capture size 65535 bytes 10:24:59.032932 fa:16:3e:a4:7e:d4 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 40, p 0, ethertype ARP, Request who-has 192.168.102.1 tell 192.168.102.2, length 28 10:24:59.033341 00:00:5e:00:01:01 > fa:16:3e:a4:7e:d4, ethertype ARP (0x0806), length 60: Reply 192.168.102.1 is-at 00:00:5e:00:01:01, length 46 10:24:59.253619 90:e2:ba:3d:f6:5d > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.101.6 tell 192.168.101.7, length 46 10:25:00.030796 fa:16:3e:a4:7e:d4 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 40, p 0, ethertype ARP, Request who-has 192.168.102.1 tell 192.168.102.2, length 28 10:25:00.032549 00:00:5e:00:01:01 > fa:16:3e:a4:7e:d4, ethertype ARP (0x0806), length 60: Reply 192.168.102.1 is-at 00:00:5e:00:01:01, length 46 10:25:00.253355 90:e2:ba:3d:f6:5d > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.101.6 tell 192.168.101.7, length 46 10:25:01.030887 fa:16:3e:a4:7e:d4 > Broadcast, ethertype 802.1Q (0x8100), length 46: vlan 40, p 0, ethertype ARP, Request who-has 192.168.102.1 tell 192.168.102.2, length 28 10:25:01.031237 00:00:5e:00:01:01 > fa:16:3e:a4:7e:d4, ethertype ARP (0x0806), length 60: Reply 192.168.102.1 is-at 00:00:5e:00:01:01, length 46 Curious thing on the side - the replies come with a VLAN tag when I create an eth4.40 interface. Nothing changes on the OpenStack/OVS side, though. Here are the flows and port numbers for the two bridges: # ovs-ofctl show br-vlan OFPT_FEATURES_REPLY (xid=0x2): dpid:00000025b502001b n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 1(eth4): addr:00:25:b5:02:00:1b config: 0 state: 0 current: 10GB-FD FIBER advertised: 10GB-FD FIBER supported: 10GB-FD FIBER speed: 10000 Mbps now, 10000 Mbps max 2(phy-br-vlan): addr:72:55:3c:e6:ae:6b config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max 3(tapb07358d3-48): addr:d6:35:64:a1:87:15 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max LOCAL(br-vlan): addr:c2:98:c5:81:f9:f2 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 # ovs-ofctl dump-flows br-vlan NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1057.205s, table=0, n_packets=343, n_bytes=25981, idle_age=0, priority=4,in_port=2,dl_vlan=2 actions=mod_vlan_vid:40,NORMAL cookie=0x0, duration=1065.475s, table=0, n_packets=32, n_bytes=2384, idle_age=1052, priority=2,in_port=2 actions=drop cookie=0x0, duration=1067.066s, table=0, n_packets=5515, n_bytes=366100, idle_age=0, priority=1 actions=NORMAL # ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x2): dpid:0000beab831a754e n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 1(int-br-vlan): addr:52:1f:bd:18:93:90 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max 2(tap0fae4e60-f1): addr:f6:42:b0:32:da:73 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 
Mbps max 3(tap3158d4cc-20): addr:12:4e:1d:fd:e1:8b config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max 4(tapd446cc89-be): addr:f2:66:2a:aa:d2:c6 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max 5(qvo942326c5-68): addr:16:cd:bc:df:c7:69 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max 6(qvo7b22401b-89): addr:be:3b:ee:cb:47:33 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 0 Mbps max LOCAL(br-int): addr:3a:ff:87:95:03:0b config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=1083.622s, table=0, n_packets=0, n_bytes=0, idle_age=1083, priority=3,in_port=1,dl_vlan=40 actions=mod_vlan_vid:2,NORMAL cookie=0x0, duration=1092.217s, table=0, n_packets=5659, n_bytes=375809, idle_age=0, priority=2,in_port=1 actions=drop cookie=0x0, duration=1093.975s, table=0, n_packets=457, n_bytes=39432, idle_age=0, priority=1 actions=NORMAL As you can see, the br-int flows don't see VLAN 40 and drop all packets from port 1, i.e. the br-vlan bridge. I'm out of ideas how to fix this. Before I install Ubuntu 13.04 and rebuild this thing, I thought somebody on this list understands what's happening. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michael.Luksch at lcsystems.at Fri Jul 12 09:35:46 2013 From: Michael.Luksch at lcsystems.at (Michael Luksch) Date: Fri, 12 Jul 2013 09:35:46 +0000 Subject: [rhos-list] tenant_network_type vlan not working on RHEL 6.4/RDO In-Reply-To: <6aa429b647cb4df98b18cc7522a8af08@DB3PR07MB010.eurprd07.prod.outlook.com> References: <6aa429b647cb4df98b18cc7522a8af08@DB3PR07MB010.eurprd07.prod.outlook.com> Message-ID: (not quoting question, because of HTML) Hi, Two things: 1. Your host needs at least one "vlan interface" up, otherwise packets get dropped. This is an bug in rhel/centos 6.4. The vlan itself does not really have to exist. Common workaround is to set up a dummy vlan interface on the physical interface transporting the VLANs. Dummy meaning here a nonexistent, like: cat /etc/sysconfig/network-scripts/ifcfg-ethX.666 DEVICE=eth1.666 ONBOOT=yes TYPE=Ethernet VLAN=yes I guess this is what you are experiencing when adding a eth4.40. 2. AFAIK you need to have use_veth = True in your quantum.conf Documentation on that is a little bit lacking last time I checked. hf, Michael Luksch From lchristoph at arago.de Fri Jul 12 09:42:51 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Fri, 12 Jul 2013 09:42:51 +0000 Subject: [rhos-list] tenant_network_type vlan not working on RHEL 6.4/RDO In-Reply-To: References: <6aa429b647cb4df98b18cc7522a8af08@DB3PR07MB010.eurprd07.prod.outlook.com>, Message-ID: Hello! Your part one did it. Grmbl, what a Stoopid(tm) bug. Many thanks, I owe you a beer. 
I have ovs_use_veth = True set in /etc/quantum/quantum.conf, that seems to do it for all parts of Quantum/Neutron. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________________ Von: Michael Luksch Gesendet: Freitag, 12. Juli 2013 11:35 An: Lutz Christoph; rhos-list at redhat.com Cc: Holger Schulz Betreff: AW: tenant_network_type vlan not working on RHEL 6.4/RDO (not quoting question, because of HTML) Hi, Two things: 1. Your host needs at least one "vlan interface" up, otherwise packets get dropped. This is an bug in rhel/centos 6.4. The vlan itself does not really have to exist. Common workaround is to set up a dummy vlan interface on the physical interface transporting the VLANs. Dummy meaning here a nonexistent, like: cat /etc/sysconfig/network-scripts/ifcfg-ethX.666 DEVICE=eth1.666 ONBOOT=yes TYPE=Ethernet VLAN=yes I guess this is what you are experiencing when adding a eth4.40. 2. AFAIK you need to have use_veth = True in your quantum.conf Documentation on that is a little bit lacking last time I checked. hf, Michael Luksch From prmarino1 at gmail.com Sun Jul 14 19:45:04 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Sun, 14 Jul 2013 15:45:04 -0400 Subject: [rhos-list] Swift validation error In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130711114224.7444017e@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: there are two things I see 1) I format my endpoints like this and it seems to work https://:8080/v1/AUTH_%(tenant_id)s 2) make sure you are using the keystone URI for the swift auth URI If neither of those work then enable verbose and debug logging in keystone and swift proxy it might be a key signing issue one common issue is if the signing_dir isn't explicitly set in the proxy-server.conf and defaults to a directory that doesn't exist or is owned by root instead of the swift user. On Thu, Jul 11, 2013 at 7:54 PM, Chen, Hao wrote: > Hi Pete, > > Attached the PDF I downloaded through my redhat account. There are the differences between this one and the online document. May have to switch to the online guide. > > (1) Recreated the endpoints for swift (v1), but the problem still there. > [root at cloud1 ~(keystone_admin)]# swift upload c2 data.file > Error trying to create container 'c2': 503 Internal Server Error:

> <html><h1>Service Unavailable</h1><p>The server is currently
> Object PUT failed: http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/data.file 503 Service Unavailable [first 60 chars of response]
> <html><h1>Service Unavailable</h1><p>The server is currently
> DEBUG:swiftclient:RESP STATUS: 503
> DEBUG:swiftclient:RESP BODY: <html><h1>Service Unavailable</h1><p>The server is currently unavailable. Please try again at a later time.</p></html>
> Object PUT failed: http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/data.file 503 Service Unavailable > > Looks like still not pointing to the right location or the service is not running? > > (2) In https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html, it says "The device_mountpoint is the directory under /srv/node/ that your device is mounted at". Should that be "/srv/node/my_directory" or just "my_directory"? > > Thanks, > Hao > > > -----Original Message----- > From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Pete Zaitcev > Sent: July 11, 2013 10:42 > To: Chen, Hao > Cc: rhos-list at redhat.com > Subject: Re: [rhos-list] Swift validation error > > On Thu, 11 Jul 2013 17:09:14 +0000 > "Chen, Hao" wrote: > >> Thanks Pete. The document is Red Hat OpenStack Red Hat OpenStack 3.0 (Grizzly) Preview Installation and Configuration Guide. > > Thanks. I verified that it has the correct syntax for --publicurl. > However, I used the currently published version, because it appears we may be reading different versions. > >> (1) When running "swift-ring-builder /etc/swift/object_builder_file create part_power replica_count min_part_hours", the name of "object_builder_file" has to be "object", same as "account" and "container". Any different names will cause openstack-swift-proxy stop working. > > Okay, this sounds about correct, if perhaps needlessly complex. > However, I do not see any such phraseology in this document: > https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html > > What is the URL where you got your version, or if that is not available, what is the version? It has to say something like "Revision 3-29" at the very end of the document. > >> (2) When running "swift-ring-builder /etc/swift/account_builder_file add zX-node_ip_address:600Y/device_mountpoint partition_count", if using a different IP address other than 127.0.0.1, "bind_ip" in account-server.conf/container-server.conf/object-server.conf has to be changes accordingly. > > True. Well, default is actually 0.0.0.0, but you need to verify that Packstack or other tool didn't overwrite bind_ip with 127.0.0.1. > >> (3) An error occurred when >> [root at cloud1 ~(keystone_admin)]# head -c 1024 /dev/urandom > data.file >> ; swift upload cotainer data.file Error trying to create container >> 'cotainer': 503 Internal Server Error:

>> Service Unavailable
>> The server is currently unavailable. Please try again at a later time.
>> Object PUT failed: http://10.2.0.196:8080/v2.0/AUTH_35caeda3b4d84d1582e675c2f871a00c/cotainer/data.file 503 Service Unavailable [first 60 chars of response]
>> Service Unavailable
The server is currently > > Wait a moment, this can't possibly be right, because prefix is "/v2.0" > instead of "/v1" as proper for Swift. Authentication has various versions there, but Swift does not, at least not yet. > > The authenticated account pattern looks good. > >> But "c1" has been created. It was able to create the container but failed to load the file to the container. > > Not sure how that happened. Perhaps it stayed there before changes in Keystone. > > -- Pete > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From prashanth.prahal at gmail.com Mon Jul 15 07:22:11 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Mon, 15 Jul 2013 12:52:11 +0530 Subject: [rhos-list] packets lost with openvswitch Message-ID: Hi Folks, I'm hoping someone could help me with this :-). I've the configuration below and when I boot up a VM (port 44 below), the network interface fails to come up because the DHCP requests are getting dropped somewhere in ovs. Note I do see the DHCP replies on br-eth5, but it never makes it to br-int and the VM interface. I've an exact configuration on a different test box and things work just fine on that. This is a quick dump of OVS information on the box : system at br-eth5: lookups: hit:11591 missed:113626 lost:0 flows: 4 port 0: br-eth5 (internal) port 4: eth5 port 17: phy-br-eth5 system at br-int: lookups: hit:10308 missed:101900 lost:0 flows: 4 port 0: br-int (internal) port 44: qvobfaa5b3c-81 port 45: int-br-eth5 ovs-dpctl show br-eth5 in_port(17),eth(src=86:d0:78:f6:2a:c9,dst=01:00:5e:00:00:01),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=0.0.0.0,dst=224.0.0.1,proto=2,tos=0xc0,ttl=1,frag=no)), packets:0, bytes:0, used:never, actions:pop_vlan,push_vlan(vid=3000,pcp=0),4,0 in_port(17),eth(src=fa:16:3e:6e:ac:0e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x0800),ipv4(src=0.0.0.0,dst=255.255.255.255,proto=17,tos=0,ttl=64,frag=no),udp(src=68,dst=67)), packets:2, bytes:652, used:4.527s, actions:pop_vlan,push_vlan(vid=3000,pcp=0),4,0 in_port(17),eth(src=86:d0:78:f6:2a:c9,dst=33:33:00:00:00:01),eth_type(0x8100),vlan(vid=1,pcp=0),encap(eth_type(0x86dd),ipv6(src=100:0:600:0:78fb:100::,dst=ff02::1,label=0,proto=58,tclass=0,hlimit=1,frag=no),icmpv6(type=130,code=0)), packets:0, bytes:0, used:never, actions:pop_vlan,push_vlan(vid=3000,pcp=0),4,0 in_port(4),eth(src=66:0e:94:bc:51:5b,dst=fa:16:3e:6e:ac:0e),eth_type(0x0800),ipv4(src=120.9.8.1,dst=120.9.8.4,proto=17,tos=0x10,ttl=128,frag=no),udp(src=67,dst=68), packets:2, bytes:684, used:4.527s, actions:17,0 ovs-ofctl dump-flows br-eth5 NXST_FLOW reply (xid=0x4): cookie=0x0, duration=40.923s, table=0, n_packets=6, n_bytes=468, idle_age=31, priority=2,in_port=17 actions=drop cookie=0x0, duration=42.473s, table=0, n_packets=13, n_bytes=1636, idle_age=4, priority=1 actions=NORMAL cookie=0x0, duration=39.964s, table=0, n_packets=6, n_bytes=1200, idle_age=4, priority=4,in_port=17,dl_vlan=1 actions=mod_vlan_vid:3000,NORMAL ovs-dpctl dump-flows br-int in_port(44),eth(src=86:d0:78:f6:2a:c9,dst=33:33:00:00:00:01),eth_type(0x86dd),ipv6(src=100:0:600:0:78fb:100::,dst=ff02::1,label=0,proto=58,tclass=0,hlimit=1,frag=no),icmpv6(type=130,code=0), packets:0, bytes:0, used:never, actions:push_vlan(vid=1,pcp=0),45,0 
in_port(45),eth(src=66:0e:94:bc:51:5b,dst=fa:16:3e:6e:ac:0e),eth_type(0x0800),ipv4(src=120.9.8.1,dst=120.9.8.4,proto=17,tos=0x10,ttl=128,frag=no),udp(src=67,dst=68), packets:2, bytes:684, used:4.534s, actions:drop in_port(44),eth(src=fa:16:3e:6e:ac:0e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0800),ipv4(src=0.0.0.0,dst=255.255.255.255,proto=17,tos=0,ttl=64,frag=no),udp(src=68,dst=67), packets:2, bytes:644, used:4.534s, actions:push_vlan(vid=1,pcp=0),45,0 in_port(44),eth(src=86:d0:78:f6:2a:c9,dst=01:00:5e:00:00:01),eth_type(0x0800),ipv4(src=0.0.0.0,dst=224.0.0.1,proto=2,tos=0xc0,ttl=1,frag=no), packets:0, bytes:0, used:never, actions:push_vlan(vid=1,pcp=0),45,0 ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=41.043s, table=0, n_packets=19, n_bytes=2104, idle_age=4, priority=2,in_port=45 actions=drop cookie=0x0, duration=42.709s, table=0, n_packets=6, n_bytes=1176, idle_age=4, priority=1 actions=NORMAL cookie=0x0, duration=39.851s, table=0, n_packets=0, n_bytes=0, idle_age=39, priority=3,in_port=45,dl_vlan=3000 actions=mod_vlan_vid:1,NORMAL This is the plugin configuration: [DATABASE] sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum [OVS] tenant_network_type = vlan network_vlan_ranges = physnet5:3000:3999 bridge_mappings = physnet5:br-eth5 [AGENT] Any ideas ? Thanks ! Prashanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgordon at redhat.com Mon Jul 15 15:14:09 2013 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 15 Jul 2013 11:14:09 -0400 (EDT) Subject: [rhos-list] Packstack answer files In-Reply-To: References: Message-ID: <1985318454.820618.1373901249874.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Paul T Cherrier" > To: sgordon at redhat.com > Sent: Monday, July 15, 2013 10:59:45 AM > Subject: Packstack answer files > > Mr. Gordon, > > I am currently an intern working on an OpenStack Proof of Concept and I was > just curious on one of the example packstack answer file entries. In section > 4.3.2 Editing a PackStack Answer File of the Getting Started Guide for > OpenStack Grizzly release, the CONFIG_SWIFT-INSTALL ----os-swift-install > is set to 'n' and I was just curious as of why that is. All other > components of OpenStack are set to 'y' and I was wondering if this would > cause an issue of some sort or is like that for a reason. If you could let > me know I'd appreciate it.Thanks. As I understand it Object Storage service (Swift) installation currently defaults to no because although it is a core OpenStack service it is not strictly required for a basic installation that is capable of running virtual machine instances. The Object Storage service (Swift) can be used as the storage backend by the Image Storage service (Glance), that just isn't the case in the current "basic" deployment provided by PackStack as far as I know. I have CC'd rhos-list in case anybody cares to correct me :). This is a public mailing list where this kind of question is possibly better directed to ensure that it gets the most visibility, you can subscribe here: http://www.redhat.com/mailman/listinfo/rhos-list Thanks, Steve > P.S. > This is for a single node deployment > > > --Paul C. > > The information in this email is confidential and may be legally privileged > against disclosure other than to the intended recipient. It is intended > solely for the addressee. Access to this email by anyone else is > unauthorized. 
If you are not the intended recipient, any disclosure, > copying, distribution or any action taken or omitted to be taken in reliance > on it, is prohibited and may be unlawful. Please immediately delete this > message and inform the sender of this error. > -- Steve Gordon, RHCE Documentation Lead, Red Hat OpenStack Engineering Content Services Red Hat Canada (Toronto, Ontario) From lchristoph at arago.de Mon Jul 15 18:15:25 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Mon, 15 Jul 2013 18:15:25 +0000 Subject: [rhos-list] VLANs and metadata Message-ID: Hello! I've set up a simple case (detailed in a previous mail) of a VLANed network. The router is in the switch. Of course it doesn't know anything about 169.254.169.254. But it gets the packets for that address: 19:48:18.399719 fa:16:3e:84:be:da > 00:00:5e:00:01:01, ethertype 802.1Q (0x8100), length 78: vlan 40, p 0, ethertype IPv4, 192.168.102.2.59526 > 169.254.169.254.http: Flags [S], seq 3099670947, win 14600, options [mss 1460,sackOK,TS val 121280 ecr 0,nop,wscale 6], length 0 If a router was involved, the iptables associated with that router would take care of this. But this is using a VLAN, so it only goes through OpenVSwitch, hitting (AFAIK) iptables between the two bridges. So how does one redirect 169.254.169.254:80 to the metadata agent in this situation? I found some advice by googling that involves creating a 169.254.0.0/16 network, putting the metadata stuff there. But it didn't go beyond that, and while I may be able to do the network, I have no idea what to do on the Quantum, excuse me, Neutron side. I guess the metadata agent is not needed, only the Nova metadata proxy. Or maybe just redirect the request there in the global iptables? Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From roxenham at redhat.com Mon Jul 15 19:21:00 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Mon, 15 Jul 2013 15:21:00 -0400 (EDT) Subject: [rhos-list] VLANs and metadata In-Reply-To: References: Message-ID: <7F425980-0D52-408D-ACFF-EEE6B2A63358@redhat.com> Hi Lutz, I recently wrote a lab on this exact setup, metadata with Open vSwitch in VLAN tenant networks. If you skip to Lab 11 in this unfinished guide you should be able to set it up. By the way, the port doesn't have to be 8700, I just chose this port to keep it separate from similar ports used by Nova. https://github.com/rdoxenham/openstack-training/blob/master/documentation/openstack-manual.md Let us know if you have any questions. Thanks Rhys. Sent from my mobile device On 15 Jul 2013, at 19:15, Lutz Christoph wrote: > Hello! > > I've set up a simple case (detailed in a previous mail) of a VLANed network. The router is in the switch. Of course it doesn't know anything about 169.254.169.254. 
But it gets the packets for that address: > > 19:48:18.399719 fa:16:3e:84:be:da > 00:00:5e:00:01:01, ethertype 802.1Q (0x8100), length 78: vlan 40, p 0, ethertype IPv4, 192.168.102.2.59526 > 169.254.169.254.http: Flags [S], seq 3099670947, win 14600, options [mss 1460,sackOK,TS val 121280 ecr 0,nop,wscale 6], length 0 > > If a router was involved, the iptables associated with that router would take care of this. But this is using a VLAN, so it only goes through OpenVSwitch, hitting (AFAIK) iptables between the two bridges. > > So how does one redirect 169.254.169.254:80 to the metadata agent in this situation? I found some advice by googling that involves creating a 169.254.0.0/16 network, putting the metadata stuff there. But it didn't go beyond that, and while I may be able to do the network, I have no idea what to do on the Quantum, excuse me, Neutron side. I guess the metadata agent is not needed, only the Nova metadata proxy. Or maybe just redirect the request there in the global iptables? > > > Best regards / Mit freundlichen Gr??en > Lutz Christoph > > -- > > Lutz Christoph > > arago Institut f?r komplexes Datenmanagement AG > > Eschersheimer Landstra?e 526 - 532 > 60433 Frankfurt am Main > > eMail: lchristoph at arago.de - www: http://www.arago.de > Tel: 0172/6301004 > Mobil: 0172/6301004 > > > > -- > Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 > Vorstand: Hans-Christian Boos, Martin Friedrich > Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther > Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts > Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From lchristoph at arago.de Tue Jul 16 08:26:43 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 16 Jul 2013 08:26:43 +0000 Subject: [rhos-list] VLANs and metadata In-Reply-To: <7F425980-0D52-408D-ACFF-EEE6B2A63358@redhat.com> References: , <7F425980-0D52-408D-ACFF-EEE6B2A63358@redhat.com> Message-ID: Hello! Thanks for the link to your labs. I will have to take some time to read the other labs, but skimming through them, they look very useful. But I have to tell you that except for a missing metadata_ip (which does not seem to make a difference in my case, probably because it defaults to 127.0.0.1) I have all settings you list. So, alas, nothing has changed. The VM still does not get a reply from 169.254.169.254 because it does not exist, and nothing redirects there. Since what actually fails is an ARP request for that address, I doubt a redirect rule alone will do the job. I would need a gateway route to avoid the ARP *and* a redirect. I will change the scenario to a routed network (with the L3 agent) to see if that works OK, then see what I can learn for the VLAN case with the external router. I would be grateful for any further ideas. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. 
Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________ Von: Rhys Oxenham Gesendet: Montag, 15. Juli 2013 21:21 An: Lutz Christoph Cc: rhos-list at redhat.com Betreff: Re: [rhos-list] VLANs and metadata Hi Lutz, I recently wrote a lab on this exact setup, metadata with Open vSwitch in VLAN tenant networks. If you skip to Lab 11 in this unfinished guide you should be able to set it up. By the way, the port doesn't have to be 8700, I just chose this port to keep it separate from similar ports used by Nova. https://github.com/rdoxenham/openstack-training/blob/master/documentation/openstack-manual.md Let us know if you have any questions. Thanks Rhys. Sent from my mobile device On 15 Jul 2013, at 19:15, Lutz Christoph > wrote: Hello! I've set up a simple case (detailed in a previous mail) of a VLANed network. The router is in the switch. Of course it doesn't know anything about 169.254.169.254. But it gets the packets for that address: 19:48:18.399719 fa:16:3e:84:be:da > 00:00:5e:00:01:01, ethertype 802.1Q (0x8100), length 78: vlan 40, p 0, ethertype IPv4, 192.168.102.2.59526 > 169.254.169.254.http: Flags [S], seq 3099670947, win 14600, options [mss 1460,sackOK,TS val 121280 ecr 0,nop,wscale 6], length 0 If a router was involved, the iptables associated with that router would take care of this. But this is using a VLAN, so it only goes through OpenVSwitch, hitting (AFAIK) iptables between the two bridges. So how does one redirect 169.254.169.254:80 to the metadata agent in this situation? I found some advice by googling that involves creating a 169.254.0.0/16 network, putting the metadata stuff there. But it didn't go beyond that, and while I may be able to do the network, I have no idea what to do on the Quantum, excuse me, Neutron side. I guess the metadata agent is not needed, only the Nova metadata proxy. Or maybe just redirect the request there in the global iptables? Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From tgraf at redhat.com Tue Jul 16 12:50:14 2013 From: tgraf at redhat.com (Thomas Graf) Date: Tue, 16 Jul 2013 14:50:14 +0200 Subject: [rhos-list] Issues with the OVS and RDO kernel In-Reply-To: <51DAF5BA.8000500@redhat.com> References: <51dab365.829c420a.642f.0ea9@mx.google.com> <51DAF5BA.8000500@redhat.com> Message-ID: <51E54186.6060505@redhat.com> On 07/08/2013 07:24 PM, Perry Myers wrote: > On 07/08/2013 08:41 AM, Paul Robert Marino wrote: >> gre tunnels are not currently supported yet in RHEL not even in the >> openstack kernel variant last I heard. > > That is correct in part. 
> > The netns enabled kernel we provide in RDO for RHEL does have kernel > support for gre and vxlan tunnels. > > However, openvswitch does not yet support using the in-tree gre/vxlan > tunnel mechanism. It only supports using the out-of-tree tunnels > provided in the openvswitch kmod from upstream openvswitch.org git repo. > > What needs to happen is openvswitch needs to change to understand how to > manipulate the in-tree tunnels. Until that happens, we can't use > gre/vxlan tunnels via openvswitch and therefore neutron/quantum > > At least this is my understanding of things. I've added some neutron > and ovs devs to comment This is 100% accurate. Work is underway for OVS to support the in-tree tunnel implementation, the kernel side has already been partially merged, see upstream netdev and ovs-dev mailing lists for additional details. From zaitcev at redhat.com Tue Jul 16 15:15:30 2013 From: zaitcev at redhat.com (Pete Zaitcev) Date: Tue, 16 Jul 2013 09:15:30 -0600 Subject: [rhos-list] Swift validation error In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130711114224.7444017e@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <20130716091530.71ecd868@lembas.zaitcev.lan> On Thu, 11 Jul 2013 23:54:43 +0000 "Chen, Hao" wrote: > Object PUT failed: http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/data.file 503 Service Unavailable > > Looks like still not pointing to the right location or the service is not running? It's impossible to tell without seeing the logs. > (2) In https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html, it says "The device_mountpoint is the directory under /srv/node/ that your device is mounted at". Should that be "/srv/node/my_directory" or just "my_directory"? It's "my_directory". It servers as a selector in a flat namespace of devices and used twice by the services: with /srv/node prepended it forms a path, and it's passed as an argument to rsync. -- Pete From lchristoph at arago.de Tue Jul 16 15:16:53 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 16 Jul 2013 15:16:53 +0000 Subject: [rhos-list] VLANs and metadata In-Reply-To: References: , <7F425980-0D52-408D-ACFF-EEE6B2A63358@redhat.com>, Message-ID: <3cba8096f75a4a29a1804505bc59d560@DB3PR07MB010.eurprd07.prod.outlook.com> Hello! I found a solution, but it's kinda kludgey. I created a Quantum network for 169.254.0.0/16 and a Quantum router to access it. Since the default router in my setup is a hardware box, that one can't do it. Then I attached a "host route" to my regular network, and SHAZAM! it works. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. 
DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________ Von: Lutz Christoph Gesendet: Dienstag, 16. Juli 2013 10:26 An: Rhys Oxenham Cc: rhos-list at redhat.com Betreff: AW: [rhos-list] VLANs and metadata Hello! Thanks for the link to your labs. I will have to take some time to read the other labs, but skimming through them, they look very useful. But I have to tell you that except for a missing metadata_ip (which does not seem to make a difference in my case, probably because it defaults to 127.0.0.1) I have all settings you list. So, alas, nothing has changed. The VM still does not get a reply from 169.254.169.254 because it does not exist, and nothing redirects there. Since what actually fails is an ARP request for that address, I doubt a redirect rule alone will do the job. I would need a gateway route to avoid the ARP *and* a redirect. I will change the scenario to a routed network (with the L3 agent) to see if that works OK, then see what I can learn for the VLAN case with the external router. I would be grateful for any further ideas. Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________ Von: Rhys Oxenham Gesendet: Montag, 15. Juli 2013 21:21 An: Lutz Christoph Cc: rhos-list at redhat.com Betreff: Re: [rhos-list] VLANs and metadata Hi Lutz, I recently wrote a lab on this exact setup, metadata with Open vSwitch in VLAN tenant networks. If you skip to Lab 11 in this unfinished guide you should be able to set it up. By the way, the port doesn't have to be 8700, I just chose this port to keep it separate from similar ports used by Nova. https://github.com/rdoxenham/openstack-training/blob/master/documentation/openstack-manual.md Let us know if you have any questions. Thanks Rhys. Sent from my mobile device On 15 Jul 2013, at 19:15, Lutz Christoph > wrote: Hello! I've set up a simple case (detailed in a previous mail) of a VLANed network. The router is in the switch. Of course it doesn't know anything about 169.254.169.254. But it gets the packets for that address: 19:48:18.399719 fa:16:3e:84:be:da > 00:00:5e:00:01:01, ethertype 802.1Q (0x8100), length 78: vlan 40, p 0, ethertype IPv4, 192.168.102.2.59526 > 169.254.169.254.http: Flags [S], seq 3099670947, win 14600, options [mss 1460,sackOK,TS val 121280 ecr 0,nop,wscale 6], length 0 If a router was involved, the iptables associated with that router would take care of this. But this is using a VLAN, so it only goes through OpenVSwitch, hitting (AFAIK) iptables between the two bridges. So how does one redirect 169.254.169.254:80 to the metadata agent in this situation? I found some advice by googling that involves creating a 169.254.0.0/16 network, putting the metadata stuff there. But it didn't go beyond that, and while I may be able to do the network, I have no idea what to do on the Quantum, excuse me, Neutron side. I guess the metadata agent is not needed, only the Nova metadata proxy. 
Or maybe just redirect the request there in the global iptables? Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 [http://www.arago.net/wp-content/uploads/2013/06/EmailSignatur1.png] -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Tue Jul 16 15:38:08 2013 From: dneary at redhat.com (Dave Neary) Date: Tue, 16 Jul 2013 17:38:08 +0200 Subject: [rhos-list] Issues with the OVS and RDO kernel In-Reply-To: <51E54186.6060505@redhat.com> References: <51dab365.829c420a.642f.0ea9@mx.google.com> <51DAF5BA.8000500@redhat.com> <51E54186.6060505@redhat.com> Message-ID: <51E568E0.3040008@redhat.com> Hi, On 07/16/2013 02:50 PM, Thomas Graf wrote: > Work is underway for OVS to support the in-tree tunnel implementation, > the kernel side has already been partially merged, see upstream netdev > and ovs-dev mailing lists for additional details. When someone says that they have not had any issues with GRE tunnels on alternate platforms, could that be accurate? It sound like for now upstream kernel & OVS hasn't done everything needed to support it. http://openstack.redhat.com/forum/discussion/comment/933#Comment_933 Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From tgraf at redhat.com Tue Jul 16 16:03:32 2013 From: tgraf at redhat.com (Thomas Graf) Date: Tue, 16 Jul 2013 18:03:32 +0200 Subject: [rhos-list] Issues with the OVS and RDO kernel In-Reply-To: <51E568E0.3040008@redhat.com> References: <51dab365.829c420a.642f.0ea9@mx.google.com> <51DAF5BA.8000500@redhat.com> <51E54186.6060505@redhat.com> <51E568E0.3040008@redhat.com> Message-ID: <51E56ED4.7030808@redhat.com> On 07/16/2013 05:38 PM, Dave Neary wrote: > Hi, > > On 07/16/2013 02:50 PM, Thomas Graf wrote: >> Work is underway for OVS to support the in-tree tunnel implementation, >> the kernel side has already been partially merged, see upstream netdev >> and ovs-dev mailing lists for additional details. > > When someone says that they have not had any issues with GRE tunnels on > alternate platforms, could that be accurate? I would assume that they have been using the out-of-tree GRE tunneling as found in the out-of-tree OVS kmod. > It sound like for now > upstream kernel & OVS hasn't done everything needed to support it. > > http://openstack.redhat.com/forum/discussion/comment/933#Comment_933 Correct, work is not finished yet. It is possible to get a working tunnel setup by using the latest git trees but it's still WIP and none of the code has been included in any official releases yet. 
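For reference, the tunnel ports under discussion are declared through ovs-vsctl. A minimal sketch, assuming a dedicated br-tun bridge and a placeholder peer address of 192.0.2.11 (both hypothetical names, not from this thread); per the discussion above, whether such a port actually passes traffic depends on which openvswitch kernel module is loaded:

# create a tunnel bridge and add a GRE port pointing at the peer node
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.11
# confirm the port and its options were recorded
ovs-vsctl show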
From Hao.Chen at NRCan-RNCan.gc.ca Tue Jul 16 22:28:23 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Tue, 16 Jul 2013 22:28:23 +0000 Subject: [rhos-list] Swift validation error In-Reply-To: <20130716091530.71ecd868@lembas.zaitcev.lan> References: <76CC67FD1C99DB4DB4D43FEF354AADB64700065B@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130708153406.05bc3c77@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB6470019D0@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130711114224.7444017e@lembas.zaitcev.lan> <76CC67FD1C99DB4DB4D43FEF354AADB647001B9D@S-BSC-MBX2.nrn.nrcan.gc.ca> <20130716091530.71ecd868@lembas.zaitcev.lan> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647010C7B@S-BSC-MBX2.nrn.nrcan.gc.ca> Hi Pete and Paul, "swift upload c2 data.file" error fixed, because I used the obsolete manual and did to have xattr enabled. Will switch to the following link. https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html Thanks for the help. Hao -----Original Message----- From: Pete Zaitcev [mailto:zaitcev at redhat.com] Sent: July 16, 2013 08:16 To: Chen, Hao Cc: rhos-list at redhat.com Subject: Re: [rhos-list] Swift validation error On Thu, 11 Jul 2013 23:54:43 +0000 "Chen, Hao" wrote: > Object PUT failed: > http://10.2.0.196:8080/v1/AUTH_35caeda3b4d84d1582e675c2f871a00c/c2/dat > a.file 503 Service Unavailable > > Looks like still not pointing to the right location or the service is not running? It's impossible to tell without seeing the logs. > (2) In https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Building_Object_Storage_Ring_Files.html, it says "The device_mountpoint is the directory under /srv/node/ that your device is mounted at". Should that be "/srv/node/my_directory" or just "my_directory"? It's "my_directory". It servers as a selector in a flat namespace of devices and used twice by the services: with /srv/node prepended it forms a path, and it's passed as an argument to rsync. -- Pete From pcherrier at nyiso.com Wed Jul 17 14:19:27 2013 From: pcherrier at nyiso.com (Cherrier, Paul T) Date: Wed, 17 Jul 2013 10:19:27 -0400 Subject: [rhos-list] Networking failure Message-ID: I am currently trying to implement a single node OpenStack environment using the PackStack automation scripts. So far I have been unsuccessful in ssh/pinging to my instances and I do not know why. I ran packstack with a customized answer file and these are the changes I made: # Cinder's volumes group size CONFIG_CINDER_VOLUMES_SIZE=70G # Private interface for Flat DHCP on the Nova compute servers CONFIG_NOVA_COMPUTE_PRIVIF=eth3 # Public interface on the Nova network server CONFIG_NOVA_NETWORK_PUBIF=eth1 # Private interface for Flat DHCP on the Nova network server CONFIG_NOVA_NETWORK_PRIVIF=eth3 # IP Range for Flat DHCP CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/24 # IP Range for Floating IP's CONFIG_NOVA_NETWORK_FLOATRANGE=xx.xx.xx.180/24 Please help, would greatly appreciate it. --Paul C. The information in this email is confidential and may be legally privileged against disclosure other than to the intended recipient. It is intended solely for the addressee. Access to this email by anyone else is unauthorized. If you are not the intended recipient, any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it, is prohibited and may be unlawful. Please immediately delete this message and inform the sender of this error. 
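Two checks that commonly explain unreachable instances in a nova-network FlatDHCP setup like the one above are the default security group rules and the floating IP association. A sketch of both, using a hypothetical instance name and a placeholder address from the floating range (neither taken from this report):

# allow ICMP and SSH into instances in the default security group
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# allocate a floating address and attach it to the instance
nova floating-ip-create
nova add-floating-ip test-instance xx.xx.xx.181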
-------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.vogel at heig-vd.ch Thu Jul 18 05:53:54 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Thu, 18 Jul 2013 05:53:54 +0000 Subject: [rhos-list] Multiple Flat Networking config Message-ID: Hi, I'm trying to make a quantum "Multiple Flat Network" configuration (the 2nd Use case in the Quantum doc). I have four NICs on my server (em1, em2, p1p1, p1p2). I configured the first (em1) with static IP and used it for the packstack deployment. Everthing worked fine. I want to use this NIC only for management purpose and Openstack services. After the packstack allinone install, br-int and br-ex were available. Then I created to new networks and subnets. The first only for internal communication, and the second for external communication. This external network/subnet matches the subnet connected on em2 NIC. Then I activate my second and third NIC (em2 abd p1p1) without any IP. I run following commands: sudo ovs-vsctl add-port br-ex em2 and sudo ovs-vsctl add-port br-int p1p1 because I wan't the br-int bridge for internal communication only and the br-ex bridge for communication with the outside world. I can start VMs with 2 NICs on the 2 networks I created and both got an IP address per DHCP. But my VM isn't reachable from the outside world. I add rules to the default group to allow PING and SSH. Is my configuration right? Did I make something wrong or do I need more specific configuration? Thanks for the answers. Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From beagles at redhat.com Thu Jul 18 13:32:18 2013 From: beagles at redhat.com (Brent Eagles) Date: Thu, 18 Jul 2013 11:02:18 -0230 Subject: [rhos-list] Multiple Flat Networking config In-Reply-To: References: Message-ID: <51E7EE62.2070008@redhat.com> Hi Nicolas, On 07/18/2013 03:23 AM, Vogel Nicolas wrote: > Hi, > > I'm trying to make a quantum "Multiple Flat Network" configuration (the 2nd > Use case in the Quantum doc). > > I have four NICs on my server (em1, em2, p1p1, p1p2). I configured the first > (em1) with static IP and used it for the packstack deployment. Everthing > worked fine. I want to use this NIC only for management purpose and Openstack > services. > > After the packstack allinone install, br-int and br-ex were available. Then I > created to new networks and subnets. The first only for internal > communication, and the second for external communication. This external > network/subnet matches the subnet connected on em2 NIC. > > Then I activate my second and third NIC (em2 abd p1p1) without any IP. I run > following commands: sudo ovs-vsctl add-port br-ex em2 and sudo ovs-vsctl > add-port br-int p1p1 because I wan't the br-int bridge for internal > communication only and the br-ex bridge for communication with the outside > world. > > I can start VMs with 2 NICs on the 2 networks I created and both got an IP > address per DHCP. But my VM isn't reachable from the outside world. I add > rules to the default group to allow PING and SSH. > > Is my configuration right? Did I make something wrong or do I need more > specific configuration? > > Thanks for the answers. > > Nicolas. While I cannot give you a single answer that will fix your configuration, I can help you debug it further and maybe resolve the issues yourself. Your VMs are able to communicate with each other over their private network, right? 
Configurations that don't require inter-host connectivity can use the "local" network type for the integration bridge and don't require a physical ethernet device to be added to the integration bridge. Since you are going down the route of having an actual external interface looks like you might heading for configuration a where you can add additional nodes in the future. Considering that, you should verify some details with respect to your network/bridge mappings in ovs_quantum_plugin.ini. It sounds like you missed a few steps because you would typically have something like: ovs-vsctl add-br br-p1p1 ovs-vsctl add-port br-p1p1 p1p1 instead of adding p1p1 directly to the integration bridge. Generally speaking, OpenStack takes care of br-int. After you've created br-p1p1, the ovs section in your ovs_quantum_plugin.ini file should have something like this: network_vlan_ranges = physnet1 bridge_mappings = physnet1:br-p1p1 The openvswitch agent ends up putting br-p1p1 on the br-int integration bridge for you, effectively connecting the integration bridge to your physical ethernet device! Take a look at http://docs.openstack.org/trunk/openstack-network/admin/content/demo_flat_installions.html for similar instructions. I know that is the topology you are going for but that part of the instructions is valid. Interestingly enough, a similar thing is going to happen for your 'external' network. You mention that your VMs have both interfaces initialized via DHCP. Does your second network (the one that is NOT the br-int one) and subnet have DHCP enabled, or did you create it as an externally routed network with the intent of allocating IPs like they were floating IP addresses. I'm guessing the former if the VMs have 2 network interfaces. I'll continue with that assumption, so forgive me if I am off base. br-ex is for the external network bridge, which is managed by the l3_agent, which implies routing. So adding an interface to br-ex won't get you what you want. It sounds really like you are trying to integrate a network that is reachable using a physical IF on the host with VIFs on the VMs. This is more interesting :) Honestly I haven't tried that myself... yet. However, it is similar to connecting multiple nodes together. Unfortunately, this is someone outside the realm of what the OpenStack documentation tends to address but that doesn't mean it is impossible. I'll give you some info that might help you solve this yourself! Disclaimer: I'm not saying you *should* do this, but your network topology is your business. As I've mentioned above, br-ex is for routing to public networks. If you are trying to connect VIFs from your VMs directly to host accessible network you are dealing with br-int. Basically you are going to want to setup your other interface (em2) just like p1p1 and associate your non-private network with "physnet2" or whatever you call it. You will need to study up on VLAN vs Flat etc, in order to configure ovs_quantum_plugin.ini properly. What is correct depends entirely on your network but if you understand the docs and understand your network, getting the config right shouldn't be too bad. Unfortunately, that is about as far as I can take you. 
I've already assumed a lot by guessing what you are trying to do with that second network and any further advice would be based on my imagination, not what you are actually trying to do ;) Cheers, Brent From Hao.Chen at NRCan-RNCan.gc.ca Thu Jul 18 18:15:00 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Thu, 18 Jul 2013 18:15:00 +0000 Subject: [rhos-list] Cinder validation Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB6470112ED@S-BSC-MBX2.nrn.nrcan.gc.ca> Greetings, When running "cinder create" and "cinder list", I did not see errors or complains. But in the table listed by "cinder list", "Status" showed "error". Does this mean something wrong in the cinder volume creation? Thanks in advance for answers. Hao [root at cloud1 ~(keystone_admin)]# cinder create 20 +---------------------+--------------------------------------+ | Property | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | created_at | 2013-07-18T17:48:36.395789 | | display_description | None | | display_name | None | | id | 26078eba-d9a0-4575-ac32-7862f4d6c550 | | metadata | {} | | size | 20 | | snapshot_id | None | | source_volid | None | | status | creating | | volume_type | None | +---------------------+--------------------------------------+ [root at cloud1 ~(keystone_admin)]# cinder list +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ | 26078eba-d9a0-4575-ac32-7862f4d6c550 | error | None | 20 | None | false | | +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From eharney at redhat.com Thu Jul 18 21:43:11 2013 From: eharney at redhat.com (Eric Harney) Date: Thu, 18 Jul 2013 17:43:11 -0400 Subject: [rhos-list] Cinder validation In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB6470112ED@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB6470112ED@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <51E8616F.4080105@redhat.com> On 07/18/2013 02:15 PM, Chen, Hao wrote: > Greetings, > > When running "cinder create" and "cinder list", I did not see errors or complains. But in the table listed by "cinder list", "Status" showed "error". Does this mean something wrong in the cinder volume creation? > > Thanks in advance for answers. 
> Hao > > [root at cloud1 ~(keystone_admin)]# cinder create 20 > +---------------------+--------------------------------------+ > | Property | Value | > +---------------------+--------------------------------------+ > | attachments | [] | > | availability_zone | nova | > | bootable | false | > | created_at | 2013-07-18T17:48:36.395789 | > | display_description | None | > | display_name | None | > | id | 26078eba-d9a0-4575-ac32-7862f4d6c550 | > | metadata | {} | > | size | 20 | > | snapshot_id | None | > | source_volid | None | > | status | creating | > | volume_type | None | > +---------------------+--------------------------------------+ > [root at cloud1 ~(keystone_admin)]# cinder list > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > | 26078eba-d9a0-4575-ac32-7862f4d6c550 | error | None | 20 | None | false | | > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > Hi, The cinder create call is asynchronous -- if the creation process starts, that command will not error. The volume Status will start in "creating" and then transition to "available" or "error" as the operation progresses. /var/log/cinder/volume.log should contain the reason for failure. Eric > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From Hao.Chen at NRCan-RNCan.gc.ca Fri Jul 19 15:17:29 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Fri, 19 Jul 2013 15:17:29 +0000 Subject: [rhos-list] Cinder validation In-Reply-To: <51E8616F.4080105@redhat.com> References: <76CC67FD1C99DB4DB4D43FEF354AADB6470112ED@S-BSC-MBX2.nrn.nrcan.gc.ca> <51E8616F.4080105@redhat.com> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB6470115EB@S-BSC-MBX2.nrn.nrcan.gc.ca> Thanks Eric for the information. The log file showed it was a permission problem. Fixed and working now. Thanks, Hao -----Original Message----- From: Eric Harney [mailto:eharney at redhat.com] Sent: July 18, 2013 14:43 To: Chen, Hao Cc: rhos-list at redhat.com Subject: Re: [rhos-list] Cinder validation On 07/18/2013 02:15 PM, Chen, Hao wrote: > Greetings, > > When running "cinder create" and "cinder list", I did not see errors or complains. But in the table listed by "cinder list", "Status" showed "error". Does this mean something wrong in the cinder volume creation? > > Thanks in advance for answers. 
> Hao > > [root at cloud1 ~(keystone_admin)]# cinder create 20 > +---------------------+--------------------------------------+ > | Property | Value | > +---------------------+--------------------------------------+ > | attachments | [] | > | availability_zone | nova | > | bootable | false | > | created_at | 2013-07-18T17:48:36.395789 | > | display_description | None | > | display_name | None | > | id | 26078eba-d9a0-4575-ac32-7862f4d6c550 | > | metadata | {} | > | size | 20 | > | snapshot_id | None | > | source_volid | None | > | status | creating | > | volume_type | None | > +---------------------+--------------------------------------+ > [root at cloud1 ~(keystone_admin)]# cinder list > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > | ID | Status | Display Name | Size | Volume Type | Bootable | Attached to | > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > | 26078eba-d9a0-4575-ac32-7862f4d6c550 | error | None | 20 | None | false | | > +--------------------------------------+--------+--------------+------+-------------+----------+-------------+ > Hi, The cinder create call is asynchronous -- if the creation process starts, that command will not error. The volume Status will start in "creating" and then transition to "available" or "error" as the operation progresses. /var/log/cinder/volume.log should contain the reason for failure. Eric > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From clunsfor at cisco.com Fri Jul 19 15:28:54 2013 From: clunsfor at cisco.com (Chris Lunsford (clunsfor)) Date: Fri, 19 Jul 2013 15:28:54 +0000 Subject: [rhos-list] Foreman Host Provisioning fails looking for vmlinuz Message-ID: <262E7491CD0C114CA4A2669812BB611D13522B1D@xmb-rcd-x08.cisco.com> Hi. I'm trying out the Foreman Technology Preview and am running into an issue with the host provisioning. The soon-to-be controller node boots and receives an IP address from foreman and pulls its pxelinux file. But it fails with this error: "Could not find kernel image: boot/RedHat-6.4-x86_64-vmlinuz". The file is present, but appears to be empty: [root at rhos-foreman boot]# pwd /var/lib/tftpboot/boot [root at rhos-foreman boot]# ls -al total 8 drwxr-xr-x. 2 foreman-proxy root 4096 Jul 19 09:49 . drwxr-xr-x. 4 root root 4096 Jul 18 16:18 .. -rw-r--r--. 1 root root 0 Jul 19 09:49 RedHat-6.4-x86_64-initrd.img -rw-r--r--. 1 root root 0 Jul 19 09:49 RedHat-6.4-x86_64-vmlinuz [root at rhos-foreman boot]# Was this an error in the configuration process, or am I supposed to supply these files myself? Thanks, Chris Lunsford -------------- next part -------------- An HTML attachment was scrubbed... URL: From nirlay at hotmail.com Fri Jul 19 19:47:22 2013 From: nirlay at hotmail.com (Nirlay Kundu) Date: Fri, 19 Jul 2013 15:47:22 -0400 Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE Message-ID: HiThe example image "Fedora 19" from "http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2" is getting killed after sometime when trying to create a new image using RDO. Any idea why ? I tried both from the GUI and cmd line. 
[root at openstack home]# export OS_USERNAME=admin [root at openstack home]# export OS_PASSWORD=xxxxxxxxxxxx [root at openstack home]# export OS_TENANT_NAME=admin[root at openstack home]# export OS_AUTH_URL=http://localhost:5000/v2.0/[root at openstack home]# glance image-create --name "Fedora 19 x86_64" --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 [root at openstack ~]# glance image-create --name "Fedora 19 x86_64" --disk-format qcow2 --container-format bare --is-public true --copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 +------------------+--------------------------------------+| Property | Value |+------------------+--------------------------------------+| checksum | None || container_format | bare || created_at | 2013-07-19T15:09:18 || deleted | False || deleted_at | None || disk_format | qcow2 || id | c39d3bce-e6fb-40ef-b113-22d6b8099d16 || is_public | True || min_disk | 0 || min_ram | 0 || name | Fedora 19 x86_64 || owner | e4066aec64fb4a958b4fdff0f99ca5d2 || protected | False || size | 0 || status | queued || updated_at | 2013-07-19T15:09:18 |+------------------+--------------------------------------+[root at openstack ~]# -------------- next part -------------- An HTML attachment was scrubbed... URL: From Hao.Chen at NRCan-RNCan.gc.ca Mon Jul 22 18:14:14 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Mon, 22 Jul 2013 18:14:14 +0000 Subject: [rhos-list] quantum network_vlan_ranges Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> Hi, I am working on "3. Setting the Plug-in" in 9.4. Configuring the Networking Service. If flat or vlan networking is chosen, my question is how to set the value of the network_vlan_ranges configuration key. Following the example "physnet1:1000:2999, physnet2:3000:3999", "physnet1/2" is the name of the physical network. Does the name of the physical network have to be an existing physical network name? I am using 10.2.0.0/24 and 10.3.0.0/24, how do I set these values: NAME:START:END? Thanks in advance, Hao -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Mon Jul 22 18:47:13 2013 From: rkukura at redhat.com (Robert Kukura) Date: Mon, 22 Jul 2013 14:47:13 -0400 Subject: [rhos-list] quantum network_vlan_ranges In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <51ED7E31.40702@redhat.com> On 07/22/2013 02:14 PM, Chen, Hao wrote: > Hi, > > I am working on ?3. Setting the Plug-in? in 9.4. Configuring the > Networking Service. > > If flat or vlan networking is chosen, my question is how to set the > value of the network_vlan_ranges configuration key. > Following the example ?physnet1:1000:2999, physnet2:3000:3999?, > ?physnet1/2? is the name of the physical network. Does the name of the > physical network have to be an existing physical network name? Hi Hao, The physical network names don't need to match anything existing outside quantum. 
They just need to match across the three places they are used within quantum: 1) network_vlan_ranges in the openvswitch or linuxbridge plugin configuration 2) bridge_mappings in the openvswitch agent configuration or physical_interface_mappings in the linuxbridge agent configuration 3) the provider:physical_network attribute of the network resource in the API Each distinct value names a distinct physical network. So if you are using VLANs, each is a different VLAN trunk - i.e. VLAN 1000 on physnet1 is a different isolated L2 network than VLAN 1000 on physnet2. They'd each typically correspond to a different network switch. On the compute and network nodes, each physical network is mapped to a different physical network interface and bridge. > I am using 10.2.0.0/24 and 10.3.0.0/24, how do I set these values: > NAME:START:END? The subnets are not directly related to the physical networks. You could use these subnets on the same or different physical networks (different VLANs of the same physical network most likely). The NAME is just an identifier matched between the three places physical_network names are used. The NAME must be listed in network_vlan_ranges (1) for it to be used in the L2 agent's mappings (2) or in the provider API (3). If you include :START:END after NAME, and tenant_network_type = vlan, then the specified set of VLANs on that physical_network are added to the pool of VLANs to be allocated as tenant networks. Hope this helps, -Bob > > Thanks in advance, > Hao > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From sgordon at redhat.com Mon Jul 22 18:48:08 2013 From: sgordon at redhat.com (Steve Gordon) Date: Mon, 22 Jul 2013 14:48:08 -0400 (EDT) Subject: [rhos-list] quantum network_vlan_ranges In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <1356207467.5020758.1374518888586.JavaMail.root@redhat.com> ----- Original Message ----- > From: "Hao Chen" > To: rhos-list at redhat.com > Sent: Monday, July 22, 2013 2:14:14 PM > Subject: [rhos-list] quantum network_vlan_ranges > > Hi, > > I am working on "3. Setting the Plug-in" in 9.4. Configuring the Networking > Service. > > If flat or vlan networking is chosen, my question is how to set the value of > the network_vlan_ranges configuration key. > Following the example "physnet1:1000:2999, physnet2:3000:3999", "physnet1/2" > is the name of the physical network. Does the name of the physical network > have to be an existing physical network name? I'm trying to improve the coverage of the related settings under Bug # 928038 [1], my understanding is that the physical network names are "defined" when they are set on the system running the quantum-server service using the network_vlan_ranges configuration key. Once a physical network has been defined on that system it can then be configured/used on each of the systems running the plug-in agents. The relevant configuration keys are bridge_mappings (Open vSwitch) and physical_interface_mappings (Linux Bridge). > I am using 10.2.0.0/24 and 10.3.0.0/24, how do I set these values: > NAME:START:END? The start and end values declare the range of VLAN IDs set aside for the traffic associated with the given physical network, so without knowing more about the VLAN setup of your local network it's impossible to guess. 
Thanks, Steve [1] https://bugzilla.redhat.com/show_bug.cgi?id=928038 From cwolfe at redhat.com Tue Jul 23 16:09:13 2013 From: cwolfe at redhat.com (Crag Wolfe) Date: Tue, 23 Jul 2013 09:09:13 -0700 Subject: [rhos-list] Foreman Host Provisioning fails looking for vmlinuz In-Reply-To: <262E7491CD0C114CA4A2669812BB611D13522B1D@xmb-rcd-x08.cisco.com> References: <262E7491CD0C114CA4A2669812BB611D13522B1D@xmb-rcd-x08.cisco.com> Message-ID: <51EEAAA9.1050606@redhat.com> On 07/19/2013 08:28 AM, Chris Lunsford (clunsfor) wrote: > Hi. > I'm trying out the Foreman Technology Preview and am running into an issue with the host provisioning. > > The soon-to-be controller node boots and receives an IP address from foreman and pulls its pxelinux file. But it fails with this error: > "Could not find kernel image: boot/RedHat-6.4-x86_64-vmlinuz". > > The file is present, but appears to be empty: > > [root at rhos-foreman boot]# pwd > /var/lib/tftpboot/boot > [root at rhos-foreman boot]# ls -al > total 8 > drwxr-xr-x. 2 foreman-proxy root 4096 Jul 19 09:49 . > drwxr-xr-x. 4 root root 4096 Jul 18 16:18 .. > -rw-r--r--. 1 root root 0 Jul 19 09:49 RedHat-6.4-x86_64-initrd.img > -rw-r--r--. 1 root root 0 Jul 19 09:49 RedHat-6.4-x86_64-vmlinuz > [root at rhos-foreman boot]# > > Was this an error in the configuration process, or am I supposed to supply these files myself? > > Thanks, > Chris Lunsford > Those files get written/pulled down after you specify your RHEL 64 installation media in the Foreman UI (More->Provisioning->Installation Media) -- you should not write them to the filesystem yourself. So, it looks like for whatever reason, it could not find those files in the right subdir (I think images/pxeboot/) of the http/nfs/ftp path you provided. I'd start looking there. --Crag From Hao.Chen at NRCan-RNCan.gc.ca Tue Jul 23 18:59:09 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Tue, 23 Jul 2013 18:59:09 +0000 Subject: [rhos-list] quantum network_vlan_ranges In-Reply-To: <51ED7E31.40702@redhat.com> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> Thanks all for the help. When configuring a Provider Network, the following error occurred. [root at cloud1 ~(keystone_admin)]# quantum router-interface-add 7f5823a5-5a41-4c00-affe-4062b54445f9 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc Bad router request: Router already has a port on subnet 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc In log file: ... 2013-07-22 17:06:25 ERROR [quantum.api.v2.resource] add_router_interface failed Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/quantum/api/v2/resource.py", line 82, in resource result = method(request=request, **args) File "/usr/lib/python2.6/site-packages/quantum/api/v2/base.py", line 147, in _handle_action body, **kwargs) File "/usr/lib/python2.6/site-packages/quantum/db/l3_db.py", line 363, in add_router_interface subnet['cidr']) File "/usr/lib/python2.6/site-packages/quantum/db/l3_db.py", line 302, in _check_for_dup_router_subnet raise q_exc.BadRequest(resource='router', msg=msg) BadRequest: Bad router request: Router already has a port on subnet 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc ... Tried to use "quantum router-interface-delete ..." then the response was "... has no interface on subnet ..." Other than this error, everything else looked fine. Any suggestions? 
Thanks, Hao -----Original Message----- From: Robert Kukura [mailto:rkukura at redhat.com] Sent: July 22, 2013 11:47 To: Chen, Hao Cc: rhos-list at redhat.com Subject: Re: [rhos-list] quantum network_vlan_ranges On 07/22/2013 02:14 PM, Chen, Hao wrote: > Hi, > > I am working on "3. Setting the Plug-in" in 9.4. Configuring the > Networking Service. > > If flat or vlan networking is chosen, my question is how to set the > value of the network_vlan_ranges configuration key. > Following the example "physnet1:1000:2999, physnet2:3000:3999", > "physnet1/2" is the name of the physical network. Does the name of the > physical network have to be an existing physical network name? Hi Hao, The physical network names don't need to match anything existing outside quantum. They just need to match across the three places they are used within quantum: 1) network_vlan_ranges in the openvswitch or linuxbridge plugin configuration 2) bridge_mappings in the openvswitch agent configuration or physical_interface_mappings in the linuxbridge agent configuration 3) the provider:physical_network attribute of the network resource in the API Each distinct value names a distinct physical network. So if you are using VLANs, each is a different VLAN trunk - i.e. VLAN 1000 on physnet1 is a different isolated L2 network than VLAN 1000 on physnet2. They'd each typically correspond to a different network switch. On the compute and network nodes, each physical network is mapped to a different physical network interface and bridge. > I am using 10.2.0.0/24 and 10.3.0.0/24, how do I set these values: > NAME:START:END? The subnets are not directly related to the physical networks. You could use these subnets on the same or different physical networks (different VLANs of the same physical network most likely). The NAME is just an identifier matched between the three places physical_network names are used. The NAME must be listed in network_vlan_ranges (1) for it to be used in the L2 agent's mappings (2) or in the provider API (3). If you include :START:END after NAME, and tenant_network_type = vlan, then the specified set of VLANs on that physical_network are added to the pool of VLANs to be allocated as tenant networks. Hope this helps, -Bob > > Thanks in advance, > Hao > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From rkukura at redhat.com Tue Jul 23 19:11:43 2013 From: rkukura at redhat.com (Robert Kukura) Date: Tue, 23 Jul 2013 15:11:43 -0400 Subject: [rhos-list] quantum network_vlan_ranges In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <51EED56F.7080700@redhat.com> On 07/23/2013 02:59 PM, Chen, Hao wrote: > Thanks all for the help. > > When configuring a Provider Network, the following error occurred. > > [root at cloud1 ~(keystone_admin)]# quantum router-interface-add 7f5823a5-5a41-4c00-affe-4062b54445f9 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc > Bad router request: Router already has a port on subnet 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc > > In log file: > ... 
> 2013-07-22 17:06:25 ERROR [quantum.api.v2.resource] add_router_interface failed > Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/api/v2/resource.py", line 82, in resource > result = method(request=request, **args) > File "/usr/lib/python2.6/site-packages/quantum/api/v2/base.py", line 147, in _handle_action > body, **kwargs) > File "/usr/lib/python2.6/site-packages/quantum/db/l3_db.py", line 363, in add_router_interface > subnet['cidr']) > File "/usr/lib/python2.6/site-packages/quantum/db/l3_db.py", line 302, in _check_for_dup_router_subnet > raise q_exc.BadRequest(resource='router', msg=msg) > BadRequest: Bad router request: Router already has a port on subnet 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc > ... > > Tried to use "quantum router-interface-delete ..." then the response was "... has no interface on subnet ..." Other than this error, everything else looked fine. Any suggestions? Is it possible that you've already added this subnet's network as a gateway with "quantum router-gateway-set ..."? The same network/subnet cannot be connected to a router as both a gateway and as an interface. -Bob > > Thanks, > Hao > > -----Original Message----- > From: Robert Kukura [mailto:rkukura at redhat.com] > Sent: July 22, 2013 11:47 > To: Chen, Hao > Cc: rhos-list at redhat.com > Subject: Re: [rhos-list] quantum network_vlan_ranges > > On 07/22/2013 02:14 PM, Chen, Hao wrote: >> Hi, >> >> I am working on "3. Setting the Plug-in" in 9.4. Configuring the >> Networking Service. >> >> If flat or vlan networking is chosen, my question is how to set the >> value of the network_vlan_ranges configuration key. >> Following the example "physnet1:1000:2999, physnet2:3000:3999", >> "physnet1/2" is the name of the physical network. Does the name of the >> physical network have to be an existing physical network name? > > Hi Hao, > > The physical network names don't need to match anything existing outside quantum. They just need to match across the three places they are used within quantum: > > 1) network_vlan_ranges in the openvswitch or linuxbridge plugin configuration > 2) bridge_mappings in the openvswitch agent configuration or physical_interface_mappings in the linuxbridge agent configuration > 3) the provider:physical_network attribute of the network resource in the API > > Each distinct value names a distinct physical network. So if you are using VLANs, each is a different VLAN trunk - i.e. VLAN 1000 on physnet1 is a different isolated L2 network than VLAN 1000 on physnet2. They'd each typically correspond to a different network switch. > > On the compute and network nodes, each physical network is mapped to a different physical network interface and bridge. > >> I am using 10.2.0.0/24 and 10.3.0.0/24, how do I set these values: >> NAME:START:END? > > The subnets are not directly related to the physical networks. You could use these subnets on the same or different physical networks (different VLANs of the same physical network most likely). > > The NAME is just an identifier matched between the three places physical_network names are used. The NAME must be listed in network_vlan_ranges (1) for it to be used in the L2 agent's mappings (2) or in the provider API (3). > > If you include :START:END after NAME, and tenant_network_type = vlan, then the specified set of VLANs on that physical_network are added to the pool of VLANs to be allocated as tenant networks. 
> > Hope this helps,
> >
> > -Bob
>
>> Thanks in advance,
>> Hao
>>
>> _______________________________________________
>> rhos-list mailing list
>> rhos-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rhos-list
>

From Hao.Chen at NRCan-RNCan.gc.ca Tue Jul 23 20:43:08 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Tue, 23 Jul 2013 20:43:08 +0000
Subject: [rhos-list] quantum network_vlan_ranges
In-Reply-To: <51EED56F.7080700@redhat.com>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EED56F.7080700@redhat.com>
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB6470126D1@S-BSC-MBX2.nrn.nrcan.gc.ca>

> [root at cloud1 ~(keystone_admin)]# quantum router-interface-add
> 7f5823a5-5a41-4c00-affe-4062b54445f9
> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
> Bad router request: Router already has a port on subnet
> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>
> Is it possible that you've already added this subnet's network as a gateway with "quantum router-gateway-set ..."? The same network/subnet cannot be connected to a router as both a gateway and as an interface.

Thanks Bob for the answer. Indeed, I used "quantum router-gateway-set ...". I followed the link below and tried to execute Step 6 "quantum router-interface-add ..." right after Step 5 "quantum router-gateway-set ...". I guess that is the problem.
https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Configuring_a_Provider_Network1.html

Hao

From rkukura at redhat.com Wed Jul 24 00:19:44 2013
From: rkukura at redhat.com (Robert Kukura)
Date: Tue, 23 Jul 2013 20:19:44 -0400
Subject: [rhos-list] quantum network_vlan_ranges
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB6470126D1@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EED56F.7080700@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB6470126D1@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <51EF1DA0.1090303@redhat.com>

On 07/23/2013 04:43 PM, Chen, Hao wrote:
>> [root at cloud1 ~(keystone_admin)]# quantum router-interface-add
>> 7f5823a5-5a41-4c00-affe-4062b54445f9
>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>> Bad router request: Router already has a port on subnet
>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>>
>
> Is it possible that you've already added this subnet's network as a gateway with "quantum router-gateway-set ..."? The same network/subnet cannot be connected to a router as both a gateway and as an interface.
>
> Thanks Bob for the answer. Indeed, I used "quantum router-gateway-set ...". I followed the link below and tried to execute Step 6 "quantum router-interface-add ..." right after Step 5 "quantum router-gateway-set ...". I guess that is the problem.
> https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/3/html/Installation_and_Configuration_Guide/Configuring_a_Provider_Network1.html
>

Hi Hao,

Glad that was it! I've filed bug https://bugzilla.redhat.com/show_bug.cgi?id=987711 against the documentation for this. Feel free to add yourself to the CC list and/or add any additional relevant information.
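For anyone else hitting this, the intended ordering is roughly the following sketch (the IDs are placeholders; the external network is the one attached with "quantum router-gateway-set ...", and "quantum router-interface-add ..." is used only for other, internal subnets):

quantum router-gateway-set <router-id> <external-net-id>
quantum router-interface-add <router-id> <internal-subnet-id>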
> Hao
>

Thanks,

-Bob

From mehbhatt at cisco.com Wed Jul 24 08:33:38 2013
From: mehbhatt at cisco.com (Mehul Bhatt (mehbhatt))
Date: Wed, 24 Jul 2013 08:33:38 +0000
Subject: [rhos-list] download of rhos preview
Message-ID:

Hi guys,

It has been more than 24 hours since I registered for the OpenStack evaluation (Grizzly based). All I got so far is:

"We're excited that you want to evaluate Red Hat OpenStack. You're all set to install the software.
NOTE: It could take up to 15 minutes before the download channels are available."

What does this mean? Should I expect an email that has a download link? Can somebody point me in the right direction here?

Thanks,

-Mehul.

From acathrow at redhat.com Wed Jul 24 10:25:16 2013
From: acathrow at redhat.com (Andrew Cathrow)
Date: Wed, 24 Jul 2013 06:25:16 -0400 (EDT)
Subject: [rhos-list] download of rhos preview
In-Reply-To:
References:
Message-ID: <26716153.300.1374661512639.JavaMail.acathrow@aic-desktop.cathrow.org>

Mehul,

Please can you send me (off list of course) the rhn user name you used to register and I'll chase.

thanks
Aic

----- Original Message -----
> From: "Mehul Bhatt (mehbhatt)"
> To: rhos-list at redhat.com
> Sent: Wednesday, July 24, 2013 4:33:38 AM
> Subject: [rhos-list] download of rhos preview

> Hi guys,
> It has been more than 24 hours since I registered for the OpenStack
> evaluation (Grizzly based). All I got so far is:
> "We're excited that you want to evaluate Red Hat OpenStack. You're
> all set to install the software.
> NOTE: It could take up to 15 minutes before the download channels are
> available."
> What does this mean? Should I expect an email that has a download link?
> Can somebody point me in the right direction here?
> Thanks,
> -Mehul.
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From griveros at redhat.com Wed Jul 24 11:34:15 2013
From: griveros at redhat.com (Gerry Riveros)
Date: Wed, 24 Jul 2013 07:34:15 -0400 (EDT)
Subject: [rhos-list] download of rhos preview
In-Reply-To: <26716153.300.1374661512639.JavaMail.acathrow@aic-desktop.cathrow.org>
References: <26716153.300.1374661512639.JavaMail.acathrow@aic-desktop.cathrow.org>
Message-ID: <1427585737.18995934.1374665655397.JavaMail.root@redhat.com>

Hello Mehul,

Andy Cathrow asked me to look into your problem. I'm looking at your account (mehbhatt) and it shows that your RHEL OpenStack Platform Preview entitlement is in your account and active. So, you should be able to go ahead with your preview. Try logging into your account via the customer portal (https://access.redhat.com/home) and see if you can see it in your account. Do you have the installation instructions?

Best,

-Gerry

----- Original Message -----
> Mehul,
>
> Please can you send me (off list of course) the rhn user name you used to register and I'll chase.
>
> thanks
> Aic
>
> From: "Mehul Bhatt (mehbhatt)"
> To: rhos-list at redhat.com
> Sent: Wednesday, July 24, 2013 4:33:38 AM
> Subject: [rhos-list] download of rhos preview
>
> Hi guys,
>
> It has been more than 24 hours since I registered for the OpenStack evaluation (Grizzly based). All I got so far is:
>
> "We're excited that you want to evaluate Red Hat OpenStack. You're all set to install the software.
> NOTE: It could take up to 15 minutes before the download channels are available."
>
> What does this mean? Should I expect an email that has a download link? Can somebody point me in the right direction here?
>
> Thanks,
>
> -Mehul.
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

--
Gerry Riveros
Product Marketing
Red Hat, Inc.
919.754.4350
griveros at redhat.com

From nvogel67 at hotmail.com Wed Jul 24 15:05:24 2013
From: nvogel67 at hotmail.com (Nicolas VOGEL)
Date: Wed, 24 Jul 2013 17:05:24 +0200
Subject: [rhos-list] floating IP not reachable
Message-ID:

Hi,

I just installed a new all-in-one controller without quantum. Everything works fine and now I want to use floating IPs like described here: http://openstack.redhat.com/Floating_IP_range. I want to use my second NIC (em2) for this purpose. For the installation, I used my first NIC (em1) and packstack automatically created a bridge (br100).

I deleted the default network and created a new one, which matches the subnet em2 is connected to. After that I modified the public_interface in the nova.conf to em2 and also the floating_range with the subnet I just created. I didn't modify the flat_interface and left the default value (lo).

I just enabled the em2 interface but didn't assign any IP address to it.
I added two rules to the default group to allow ping and SSH.

I can start VMs and they get an internal IP address (from 192.168.32.0/22). I can also associate a floating IP to each VM. But I'm unable to ping a floating IP.

If someone has any idea how to resolve the problem it would be very helpful.
And if someone has a configuration which runs correctly, I would be interested in how you configured your network interfaces and your nova.conf.

Thanks, Nicolas.

Here's an output from my nova.conf :
public_interface=em2
default_floating_pool=nova
novncproxy_port=6080
dhcp_domain=novalocal
libvirt_type=kvm
floating_range=10.192.76.0/25
fixed_range=192.168.32.0/22
auto_assign_floating_ip=False
novncproxy_base_url=http://10.192.75.190:6080/vnc_auto.html
flat_interface=lo
vnc_enabled=True
flat_network_bridge=br100

From roxenham at redhat.com Wed Jul 24 15:16:16 2013
From: roxenham at redhat.com (Rhys Oxenham)
Date: Wed, 24 Jul 2013 16:16:16 +0100
Subject: [rhos-list] floating IP not reachable
In-Reply-To:
References:
Message-ID:

Hi Nicolas,

When you've got the instance running and a floating-ip assigned, can you please pastebin the output of-

1) ip a
2) brctl show
3) nova list
4) nova-manage network-list
5) nova secgroup-list
6) nova secgroup-list-rules
7) iptables -L
8) iptables -L -t nat
9) iptables -S -t nat

Oh, and when you have more than one instance running, can you ping between the instances via 192.168.32.0/22?

Make sure to sanitise anything you need to in the pastes.

Many thanks!
Rhys

On 24 Jul 2013, at 16:05, Nicolas VOGEL wrote:

> Hi,
>
> I just installed a new all-in-one controller without quantum. Everything works fine and now I want to use floating IPs like described here: http://openstack.redhat.com/Floating_IP_range. I want to use my second NIC (em2) for this purpose. For the installation, I used my first NIC (em1) and packstack automatically created a bridge (br100).
>
> I deleted the default network and created a new one, which matches the subnet em2 is connected to. After that I modified the public_interface in the nova.conf to em2 and also the floating_range with the subnet I just created. I didn't modify the flat_interface and left the default value (lo).
>
> I just enabled the em2 interface but didn't assign any IP address to it.
> I added two rules to the default group to allow ping and SSH.
>
> I can start VMs and they get an internal IP address (from 192.168.32.0/22). I can also associate a floating IP to each VM. But I'm unable to ping a floating IP.
>
> If someone has any idea how to resolve the problem it would be very helpful.
> And if someone has a configuration which runs correctly, I would be interested in how you configured your network interfaces and your nova.conf.
>
> Thanks, Nicolas.
>
> Here's an output from my nova.conf :
> public_interface=em2
> default_floating_pool=nova
> novncproxy_port=6080
> dhcp_domain=novalocal
> libvirt_type=kvm
> floating_range=10.192.76.0/25
> fixed_range=192.168.32.0/22
> auto_assign_floating_ip=False
> novncproxy_base_url=http://10.192.75.190:6080/vnc_auto.html
> flat_interface=lo
> vnc_enabled=True
> flat_network_bridge=br100
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From nvogel67 at hotmail.com Wed Jul 24 15:43:28 2013
From: nvogel67 at hotmail.com (Nicolas VOGEL)
Date: Wed, 24 Jul 2013 17:43:28 +0200
Subject: [rhos-list] floating IP not reachable
In-Reply-To:
References:
Message-ID:

Hello Rhys,

Thanks for your answer.
I put all the outputs you asked for below.
The outputs were made with two VMs running and floating IPs associated (192.168.32.2/10.192.76.136 and 192.168.32.3/10.192.76.135, see nova list output).
I connected via ssh to the first VM and I could ping the second, so I think internal communication is OK.
I put the complete output from iptables commands because I don't know what you want to verify and I'm not very good with iptables.
Thanks for your help!
1) ip a 1: lo: mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet 169.254.169.254/32 scope link lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: em1: mtu 1500 qdisc mq state UP qlen 1000 link/ether 84:2b:2b:6c:fd:0f brd ff:ff:ff:ff:ff:ff inet 10.192.75.190/24 brd 10.192.75.255 scope global em1 inet6 fe80::862b:2bff:fe6c:fd0f/64 scope link valid_lft forever preferred_lft forever 3: em2: mtu 1500 qdisc mq state UP qlen 1000 link/ether 84:2b:2b:6c:fd:10 brd ff:ff:ff:ff:ff:ff inet 10.192.76.135/32 scope global em2 inet 10.192.76.136/32 scope global em2 inet6 fe80::862b:2bff:fe6c:fd10/64 scope link valid_lft forever preferred_lft forever 4: p1p1: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 00:1b:21:7c:b8:38 brd ff:ff:ff:ff:ff:ff 5: p1p2: mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 00:1b:21:7c:b8:39 brd ff:ff:ff:ff:ff:ff 6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 7: virbr0-nic: mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff 9: br100: mtu 1500 qdisc noqueue state UNKNOWN link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff inet 192.168.32.1/22 brd 192.168.35.255 scope global br100 inet6 fe80::3c6c:d7ff:fe0b:c6af/64 scope link valid_lft forever preferred_lft forever 10: vnet0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe04:d9a2/64 scope link valid_lft forever preferred_lft forever 11: vnet1: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether fe:16:3e:2f:a5:0e brd ff:ff:ff:ff:ff:ff inet6 fe80::fc16:3eff:fe2f:a50e/64 scope link valid_lft forever preferred_lft forever ================================================================== 2) brctl show bridge name bridge id STP enabled interfaces br100 8000.fe163e04d9a2 no vnet0 vnet1 virbr0 8000.525400d64fda yes virbr0-nic ================================================================== 3) nova list +--------------------------------------+---------+--------+-----------------------------------------+ | ID | Name | Status | Networks | +--------------------------------------+---------+--------+-----------------------------------------+ | 0dd1311a-f188-4570-af5d-dbf0fe62d50e | fed32-1 | ACTIVE | novanetwork=192.168.32.2, 10.192.76.136 | | 57960ee0-e2f2-4a08-8560-3bf39c489b78 | fed64-1 | ACTIVE | novanetwork=192.168.32.3, 10.192.76.135 | +--------------------------------------+---------+--------+-----------------------------------------+ ================================================================== 4) nova-manage network-list id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid 1 192.168.32.0/22 None 192.168.32.2 8.8.4.4 None None None e2e597a5-7606-4335-911a-d8cadcb840d6 =================================================================== 5) nova secgroup-list +---------+-------------+ | Name | Description | +---------+-------------+ | default | default | +---------+-------------+ ==================================================================== 6) nova secgroup-list-rules +-------------+-----------+---------+-----------+--------------+ | IP Protocol | From Port | To Port | IP Range | Source Group | +-------------+-----------+---------+-----------+--------------+ | icmp | -1 | -1 | 0.0.0.0/0 | | | tcp | 22 | 22 | 0.0.0.0/0 | | 
+-------------+-----------+---------+-----------+--------------+ ============================================================================ 7) iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination nova-network-INPUT all -- anywhere anywhere nova-compute-INPUT all -- anywhere anywhere nova-api-INPUT all -- anywhere anywhere ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 horizon incoming */ ACCEPT tcp -- anywhere anywhere tcp dpt:domain ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 nagios incoming */ ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT tcp -- anywhere anywhere multiport dports iscsi-target,8776 /* 001 cinder incoming */ ACCEPT tcp -- anywhere anywhere multiport dports 5666 /* 001 nrpe incoming */ ACCEPT tcp -- anywhere anywhere multiport dports armtechdaemon /* 001 glance incoming */ ACCEPT tcp -- anywhere anywhere multiport dports rsync /* 001 rsync incoming */ ACCEPT tcp -- anywhere anywhere multiport dports webcache /* 001 swift proxy incoming */ ACCEPT tcp -- anywhere anywhere multiport dports x11,6001,6002,rsync /* 001 swift storage incoming */ ACCEPT tcp -- anywhere anywhere multiport dports commplex-main,35357 /* 001 keystone incoming */ ACCEPT tcp -- anywhere anywhere multiport dports vnc-server:cvsup /* 001 nova compute incoming */ ACCEPT tcp -- anywhere anywhere multiport dports mysql /* 001 mysql incoming */ ACCEPT tcp -- anywhere anywhere multiport dports 6080 /* 001 novncproxy incoming */ ACCEPT tcp -- anywhere anywhere multiport dports 8773,8774,8775 /* 001 novaapi incoming */ ACCEPT tcp -- anywhere anywhere multiport dports amqp /* 001 qpid incoming */ ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED ACCEPT icmp -- anywhere anywhere ACCEPT all -- anywhere anywhere ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT) target prot opt source destination nova-filter-top all -- anywhere anywhere nova-network-FORWARD all -- anywhere anywhere nova-compute-FORWARD all -- anywhere anywhere nova-api-FORWARD all -- anywhere anywhere ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED ACCEPT all -- 192.168.122.0/24 anywhere ACCEPT all -- anywhere anywhere REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-port-unreachable REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) target prot opt source destination nova-filter-top all -- anywhere anywhere nova-network-OUTPUT all -- anywhere anywhere nova-compute-OUTPUT all -- anywhere anywhere nova-api-OUTPUT all -- anywhere anywhere Chain nova-api-FORWARD (1 references) target prot opt source destination Chain nova-api-INPUT (1 references) target prot opt source destination ACCEPT tcp -- anywhere 10.192.75.190 tcp dpt:8775 Chain nova-api-OUTPUT (1 references) target prot opt source destination Chain nova-api-local (1 references) target prot opt source destination Chain nova-compute-FORWARD (1 references) target prot opt source destination ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain nova-compute-INPUT (1 references) target prot opt source destination ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps Chain nova-compute-OUTPUT (1 references) target prot opt source 
destination Chain nova-compute-inst-2 (1 references) target prot opt source destination DROP all -- anywhere anywhere state INVALID ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED nova-compute-provider all -- anywhere anywhere ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc ACCEPT all -- 192.168.32.0/22 anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT icmp -- anywhere anywhere nova-compute-sg-fallback all -- anywhere anywhere Chain nova-compute-inst-3 (1 references) target prot opt source destination DROP all -- anywhere anywhere state INVALID ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED nova-compute-provider all -- anywhere anywhere ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc ACCEPT all -- 192.168.32.0/22 anywhere ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT icmp -- anywhere anywhere nova-compute-sg-fallback all -- anywhere anywhere Chain nova-compute-local (1 references) target prot opt source destination nova-compute-inst-2 all -- anywhere 192.168.32.2 nova-compute-inst-3 all -- anywhere 192.168.32.3 Chain nova-compute-provider (2 references) target prot opt source destination Chain nova-compute-sg-fallback (2 references) target prot opt source destination DROP all -- anywhere anywhere Chain nova-filter-top (2 references) target prot opt source destination nova-network-local all -- anywhere anywhere nova-compute-local all -- anywhere anywhere nova-api-local all -- anywhere anywhere Chain nova-network-FORWARD (1 references) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain nova-network-INPUT (1 references) target prot opt source destination ACCEPT udp -- anywhere anywhere udp dpt:bootps ACCEPT tcp -- anywhere anywhere tcp dpt:bootps ACCEPT udp -- anywhere anywhere udp dpt:domain ACCEPT tcp -- anywhere anywhere tcp dpt:domain Chain nova-network-OUTPUT (1 references) target prot opt source destination Chain nova-network-local (1 references) target prot opt source destination ================================================================================ 8) iptables -L -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination nova-network-PREROUTING all -- anywhere anywhere nova-compute-PREROUTING all -- anywhere anywhere nova-api-PREROUTING all -- anywhere anywhere Chain POSTROUTING (policy ACCEPT) target prot opt source destination nova-network-POSTROUTING all -- anywhere anywhere nova-compute-POSTROUTING all -- anywhere anywhere nova-api-POSTROUTING all -- anywhere anywhere MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24 nova-postrouting-bottom all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination nova-network-OUTPUT all -- anywhere anywhere nova-compute-OUTPUT all -- anywhere anywhere nova-api-OUTPUT all -- anywhere anywhere Chain nova-api-OUTPUT (1 references) target prot opt source destination Chain nova-api-POSTROUTING (1 references) target prot opt source destination Chain nova-api-PREROUTING (1 references) target prot opt source destination Chain nova-api-float-snat (1 references) target prot opt source destination Chain nova-api-snat (1 references) target prot opt source destination nova-api-float-snat all -- anywhere anywhere Chain nova-compute-OUTPUT (1 references) target prot opt source destination Chain nova-compute-POSTROUTING (1 references) 
target prot opt source destination Chain nova-compute-PREROUTING (1 references) target prot opt source destination Chain nova-compute-float-snat (1 references) target prot opt source destination Chain nova-compute-snat (1 references) target prot opt source destination nova-compute-float-snat all -- anywhere anywhere Chain nova-network-OUTPUT (1 references) target prot opt source destination DNAT all -- anywhere 10.192.76.135 to:192.168.32.3 DNAT all -- anywhere 10.192.76.136 to:192.168.32.2 Chain nova-network-POSTROUTING (1 references) target prot opt source destination ACCEPT all -- 192.168.32.0/22 10.192.75.190 ACCEPT all -- 192.168.32.0/22 192.168.32.0/22 ! ctstate DNAT SNAT all -- 192.168.32.3 anywhere ctstate DNAT to:10.192.76.135 SNAT all -- 192.168.32.2 anywhere ctstate DNAT to:10.192.76.136 Chain nova-network-PREROUTING (1 references) target prot opt source destination DNAT tcp -- anywhere 169.254.169.254 tcp dpt:http to:10.192.75.190:8775 DNAT all -- anywhere 10.192.76.135 to:192.168.32.3 DNAT all -- anywhere 10.192.76.136 to:192.168.32.2 Chain nova-network-float-snat (1 references) target prot opt source destination SNAT all -- 192.168.32.3 192.168.32.3 to:10.192.76.135 SNAT all -- 192.168.32.3 anywhere to:10.192.76.135 SNAT all -- 192.168.32.2 192.168.32.2 to:10.192.76.136 SNAT all -- 192.168.32.2 anywhere to:10.192.76.136 Chain nova-network-snat (1 references) target prot opt source destination nova-network-float-snat all -- anywhere anywhere SNAT all -- 192.168.32.0/22 anywhere to:10.192.75.190 Chain nova-postrouting-bottom (1 references) target prot opt source destination nova-network-snat all -- anywhere anywhere nova-compute-snat all -- anywhere anywhere nova-api-snat all -- anywhere anywhere =========================================================================== 9) iptables -S -t nat -P PREROUTING ACCEPT -P POSTROUTING ACCEPT -P OUTPUT ACCEPT -N nova-api-OUTPUT -N nova-api-POSTROUTING -N nova-api-PREROUTING -N nova-api-float-snat -N nova-api-snat -N nova-compute-OUTPUT -N nova-compute-POSTROUTING -N nova-compute-PREROUTING -N nova-compute-float-snat -N nova-compute-snat -N nova-network-OUTPUT -N nova-network-POSTROUTING -N nova-network-PREROUTING -N nova-network-float-snat -N nova-network-snat -N nova-postrouting-bottom -A PREROUTING -j nova-network-PREROUTING -A PREROUTING -j nova-compute-PREROUTING -A PREROUTING -j nova-api-PREROUTING -A POSTROUTING -j nova-network-POSTROUTING -A POSTROUTING -j nova-compute-POSTROUTING -A POSTROUTING -j nova-api-POSTROUTING -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535 -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535 -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE -A POSTROUTING -j nova-postrouting-bottom -A OUTPUT -j nova-network-OUTPUT -A OUTPUT -j nova-compute-OUTPUT -A OUTPUT -j nova-api-OUTPUT -A nova-api-snat -j nova-api-float-snat -A nova-compute-snat -j nova-compute-float-snat -A nova-network-OUTPUT -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 -A nova-network-OUTPUT -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 10.192.75.190/32 -j ACCEPT -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 192.168.32.0/22 -m conntrack ! 
--ctstate DNAT -j ACCEPT -A nova-network-POSTROUTING -s 192.168.32.3/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.135 -A nova-network-POSTROUTING -s 192.168.32.2/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.136 -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.192.75.190:8775 -A nova-network-PREROUTING -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 -A nova-network-PREROUTING -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 -A nova-network-float-snat -s 192.168.32.3/32 -d 192.168.32.3/32 -j SNAT --to-source 10.192.76.135 -A nova-network-float-snat -s 192.168.32.3/32 -o em2 -j SNAT --to-source 10.192.76.135 -A nova-network-float-snat -s 192.168.32.2/32 -d 192.168.32.2/32 -j SNAT --to-source 10.192.76.136 -A nova-network-float-snat -s 192.168.32.2/32 -o em2 -j SNAT --to-source 10.192.76.136 -A nova-network-snat -j nova-network-float-snat -A nova-network-snat -s 192.168.32.0/22 -o em2 -j SNAT --to-source 10.192.75.190 -A nova-postrouting-bottom -j nova-network-snat -A nova-postrouting-bottom -j nova-compute-snat -A nova-postrouting-bottom -j nova-api-snat ======================================================================================== 10)em1 config file DEVICE=em1 HWADDR=84:2B:2B:6C:FD:0F TYPE=Ethernet UUID=e65a3f54-594e-4b2a-bd63-b488ba0d7adb ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none IPADDR=10.192.75.190 PREFIX=24 GATEWAY=10.192.75.1 DNS1=10.192.48.100 DNS2=10.192.48.101 ================================================================================================== 11) em2 config file DEVICE=em2 HWADDR=84:2B:2B:6C:FD:10 TYPE=Ethernet UUID=ad6f5595-1df3-437d-b231-8b9e5db9c260 ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none ================================================================================================= ================================================================================================= -----Original Message----- From: Rhys Oxenham [mailto:roxenham at redhat.com] Sent: mercredi 24 juillet 2013 17:16 To: Nicolas VOGEL Cc: rhos-list at redhat.com Subject: Re: [rhos-list] floating IP not reachable Hi Nicolas, When you've got the instance running and a floating-ip assigned, can you please pastebin the output of- 1) ip a 2) brctl show 3) nova list 4) nova-manage network-list 5) nova secgroup-list 6) nova secgroup-list-rules 7) iptables -L 8) iptables -L -t nat 9) iptables -S -t nat Oh, and when you have more than one instance running, can you ping between the instances via 192.168.32.0/22? Make sure to sanitise anything you need to in the pastes. Many thanks! Rhys On 24 Jul 2013, at 16:05, Nicolas VOGEL wrote: > Hi, > > I just installed a new all-in-one controller without quantum. Everything works fine and now I wan't to use floating IPs like described here:http://openstack.redhat.com/Floating_IP_range. I want to use my second NIC (em2) for this purpose. For the installation, I use my first NIC (em1) and packstack automatically created a bridge (br100). > > I deleted the default network and created a new one, which is matching the subnet on which em2 is connected. After that I modified the public_interface in the nova.conf to em2 and also the floating_range with the subnet I just created. I didn't modify the flat_interface and let the default value (lo). > > I just enabled the em2 interface but didn't assign any IP address to it. > I added two rules to the default group to allow ping and SSH. 
>
> I can start VMs and they get an internal IP address (from 192.168.32.0/22). I can also associate a floating IP to each VM. But I'm unable to ping a floating IP.
>
> If someone has any idea how to resolve the problem it would be very helpful.
> And if someone has a configuration which runs correctly, I would be interested in how you configured your network interfaces and your nova.conf.
>
> Thanks, Nicolas.
>
> Here's an output from my nova.conf :
> public_interface=em2
> default_floating_pool=nova
> novncproxy_port=6080
> dhcp_domain=novalocal
> libvirt_type=kvm
> floating_range=10.192.76.0/25
> fixed_range=192.168.32.0/22
> auto_assign_floating_ip=False
> novncproxy_base_url=http://10.192.75.190:6080/vnc_auto.html
> flat_interface=lo
> vnc_enabled=True
> flat_network_bridge=br100
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From Hao.Chen at NRCan-RNCan.gc.ca Wed Jul 24 16:34:38 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Wed, 24 Jul 2013 16:34:38 +0000
Subject: [rhos-list] quantum network_vlan_ranges
In-Reply-To: <51EF1DA0.1090303@redhat.com>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EED56F.7080700@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB6470126D1@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EF1DA0.1090303@redhat.com>
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647012AC8@S-BSC-MBX2.nrn.nrcan.gc.ca>

Thanks Rob.

(1) When I checked the log files I found the following line coming out every second in /var/log/secure. Not sure what it means.
...
Jul 24 08:49:22 cloud1 sudo: quantum : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl --timeout=2 list-ports br-int
Jul 24 08:49:22 cloud1 sudo: quantum : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl --timeout=2 list-ports br-int
...

(2) "Start openvswitch" in 9.8.1.2 should be in 9.7 (Configuring the L3 Agent), as 9.7.3 requires this service to be running for ovs-vsctl.

Hao

-----Original Message-----
From: Robert Kukura [mailto:rkukura at redhat.com]
Sent: July 23, 2013 17:20
To: Chen, Hao
Cc: rhos-list at redhat.com
Subject: Re: [rhos-list] quantum network_vlan_ranges

On 07/23/2013 04:43 PM, Chen, Hao wrote:
>> [root at cloud1 ~(keystone_admin)]# quantum router-interface-add
>> 7f5823a5-5a41-4c00-affe-4062b54445f9
>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>> Bad router request: Router already has a port on subnet
>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>>
>> Is it possible that you've already added this subnet's network as a gateway with "quantum router-gateway-set ..."? The same network/subnet cannot be connected to a router as both a gateway and as an interface.
>>
>> Thanks Bob for the answer. Indeed, I used "quantum router-gateway-set ...". I followed the link below and tried to execute Step 6 "quantum router-interface-add ..." right after Step 5 "quantum router-gateway-set ...". I guess that is the problem.
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_P
>> review/3/html/Installation_and_Configuration_Guide/Configuring_a_Provi
>> der_Network1.html
>>
> Hao

Thanks,

-Bob

From rkukura at redhat.com Wed Jul 24 17:34:58 2013
From: rkukura at redhat.com (Robert Kukura)
Date: Wed, 24 Jul 2013 13:34:58 -0400
Subject: [rhos-list] quantum network_vlan_ranges
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012AC8@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012102@S-BSC-MBX2.nrn.nrcan.gc.ca> <51ED7E31.40702@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012662@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EED56F.7080700@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB6470126D1@S-BSC-MBX2.nrn.nrcan.gc.ca> <51EF1DA0.1090303@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012AC8@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <51F01042.4090401@redhat.com>

On 07/24/2013 12:34 PM, Chen, Hao wrote:
> Thanks Rob.
>
> (1) When I checked the log files I found the following line coming out every second in /var/log/secure. Not sure what it means.
> ...
> Jul 24 08:49:22 cloud1 sudo: quantum : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl --timeout=2 list-ports br-int
> Jul 24 08:49:22 cloud1 sudo: quantum : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf ovs-vsctl --timeout=2 list-ports br-int

This is normal. The openvswitch-agent is polling OVS periodically to determine if any new virtual NICs have attached to br-int.

> ...
>
> (2) "Start openvswitch" in 9.8.1.2 should be in 9.7 (Configuring the L3 Agent), as 9.7.3 requires this service to be running for ovs-vsctl.

You are right. Actually, section 9.8 on the L2 agent should be before the sections on the other agents. I've added this to the same doc BZ referenced below.

Note that if you are really using a provider external network instead of an external bridge, you do not need to follow 9.7.3.A (run "ovs-vsctl add-br br-ex" or set up /etc/sysconfig/network-scripts/ifcfg-br-ex). Also, with RHOS 3.0 and network namespaces enabled, you should not need to set router_id, regardless of whether you are using a provider external network or an external bridge. I've added this to the BZ as well.

Finally, I'd recommend using the latest RHOS 3.0 docs at https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack rather than the preview version.

-Bob

>
> Hao
>
> -----Original Message-----
> From: Robert Kukura [mailto:rkukura at redhat.com]
> Sent: July 23, 2013 17:20
> To: Chen, Hao
> Cc: rhos-list at redhat.com
> Subject: Re: [rhos-list] quantum network_vlan_ranges
>
> On 07/23/2013 04:43 PM, Chen, Hao wrote:
>>> [root at cloud1 ~(keystone_admin)]# quantum router-interface-add
>>> 7f5823a5-5a41-4c00-affe-4062b54445f9
>>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>>> Bad router request: Router already has a port on subnet
>>> 2a8743f4-fd25-4b9f-8c12-987e23a4cbdc
>>>
>>
>> Is it possible that you've already added this subnet's network as a gateway with "quantum router-gateway-set ..."? The same network/subnet cannot be connected to a router as both a gateway and as an interface.
>>
>> Thanks Bob for the answer. Indeed, I used "quantum router-gateway-set ...". I followed the link below and tried to execute Step 6 "quantum router-interface-add ..." right after Step 5 "quantum router-gateway-set ...". I guess that is the problem.
>> https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_P
>> review/3/html/Installation_and_Configuration_Guide/Configuring_a_Provi
>> der_Network1.html
>>
> Hi Hao,
>
> Glad that was it!
> I've filed bug https://bugzilla.redhat.com/show_bug.cgi?id=987711 against the documentation for this. Feel free to add yourself to the CC list and/or add any additional relevant information.
>
>> Hao
>
> Thanks,
>
> -Bob

From Hao.Chen at NRCan-RNCan.gc.ca Wed Jul 24 19:39:18 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Wed, 24 Jul 2013 19:39:18 +0000
Subject: [rhos-list] Installing the OpenStack Compute Service
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca>

I followed the latest RHOS 3.0 docs at https://access.redhat.com/site/documentation//en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Updating_the_Compute_Configuration.html

(1) All "/etc/nova/nova/conf" should be "/etc/nova/nova.conf"

(2) When testing http://10.2.0.196:6080/vnc_auto.html I received an error "Failed to connect to server (code: 1006)"

(3) Is v3 supported or should I use v2.0 instead in "DEFAULT quantum_admin_auth_url http://IP:35357/v3"?

(4) 10.3.6. Starting the Compute Services
When running "service messagebus restart" or "service messagebus stop" it either logged me out or killed the server, causing the system to reboot. The log info is as below.
...
Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no sender#012
Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads.
Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads.
Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START **********************************
Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from session: Connection is closed
Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from session: Connection is closed
Jul 24 12:23:31 cloud1 gdm-binary: ******************* START ********************************
Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() [0x418bb9]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: /usr/sbin/gdm-binary() [0x418d17]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 2: /lib64/libpthread.so.0() [0x33c2c0f500]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 3: /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() [0x405ee9]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() [0x33b6020e23]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() [0x388a812db6]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: /usr/lib64/libdbus-glib-1.so.2() [0x388a8130a0]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 12: /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: /usr/lib64/libdbus-glib-1.so.2() [0x388a809b45]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 14: /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: /lib64/libglib-2.0.so.0() [0x33b543c938]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 16: /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55]
Jul 24 12:23:31 cloud1 gdm-binary: Frame 17:
/usr/sbin/gdm-binary() [0x406949] Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] Jul 24 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() [0x405de9] Jul 24 12:23:31 cloud1 gdm-binary: ******************* END ********************************** Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16061) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16075) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16089) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16104) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 gdm[16053]: ******************* END ********************************** Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16132) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16146) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16160) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16174) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16188) terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, stopped Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process (11271) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty3) main process (11273) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process (11277) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty6) main process (11279) killed by TERM signal Jul 24 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution exception: not-found: Delete failed. No such queue: cinder-volume (qpid/broker/Broker.cpp:940) Jul 24 12:24:07 cloud1 proxy-server SIGTERM received Jul 24 12:24:07 cloud1 proxy-server Exited Jul 24 12:24:08 cloud1 qpidd[10788]: 2013-07-24 12:24:08 notice Shut down Jul 24 12:24:09 cloud1 abrtd: Got signal 15, exiting Jul 24 12:24:10 cloud1 tgtd: tgtd logger stopped, pid:10444 Jul 24 12:24:19 cloud1 acpid: exiting Jul 24 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 Jul 24 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w" Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. 
Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): audit_pid=0 old=11693 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.498:332235): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditctl_t:s0 res=1
Jul 24 12:24:20 cloud1 kernel: Kernel logging (proc) stopped.
Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] exiting on signal 15.

From sgordon at redhat.com Wed Jul 24 21:09:03 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 24 Jul 2013 17:09:03 -0400 (EDT)
Subject: [rhos-list] Installing the OpenStack Compute Service
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <2009256412.6358808.1374700143870.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Hao Chen"
> To: rhos-list at redhat.com
> Sent: Wednesday, July 24, 2013 3:39:18 PM
> Subject: [rhos-list] Installing the OpenStack Compute Service

Hi again,

> I followed the latest RHOS 3.0 docs at
> https://access.redhat.com/site/documentation//en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Updating_the_Compute_Configuration.html
>
> (1) All "/etc/nova/nova/conf" should be "/etc/nova/nova.conf"

Yes, this one was raised previously here and we intend to fix shortly: https://bugzilla.redhat.com/show_bug.cgi?id=980547

> (2) When testing http://10.2.0.196:6080/vnc_auto.html I received an error
> "Failed to connect to server (code: 1006)"
> (3) Is v3 supported or should I use v2.0 instead in "DEFAULT
> quantum_admin_auth_url http://IP:35357/v3"?

I suspect this should be v2.0 for now; let me know if changing it helps and I will file a bug. We will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else.

Thanks,

Steve

> (4) 10.3.6. Starting the Compute Services
> When running "service messagebus restart" or "service messagebus stop" it
> either logged me out or killed the server, causing the system to reboot. The
> log info is as below.
> ...
> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no sender#012
> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads.
> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads.
> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START > ********************************** > Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure > unregistering from session: Connection is closed > Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure > unregistering from session: Connection is closed > Jul 24 12:23:31 cloud1 gdm-binary: ******************* START > ******************************** > Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() [0x418bb9] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: /usr/sbin/gdm-binary() [0x418d17] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 2: /lib64/libpthread.so.0() > [0x33c2c0f500] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 3: > /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() [0x405ee9] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: > /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() > [0x33b6020e23] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: > /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: > /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() > [0x388a812db6] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: > /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: /usr/lib64/libdbus-glib-1.so.2() > [0x388a8130a0] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 12: > /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: /usr/lib64/libdbus-glib-1.so.2() > [0x388a809b45] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 14: > /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: /lib64/libglib-2.0.so.0() > [0x33b543c938] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 16: > /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() > [0x406949] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: > /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] > Jul 24 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() > [0x405de9] > Jul 24 12:23:31 cloud1 gdm-binary: ******************* END > ********************************** > Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16061) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16075) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16089) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16104) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 gdm[16053]: ******************* END > ********************************** > Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > 
Jul 24 12:23:31 cloud1 init: prefdm main process (16132) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16146) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16160) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16174) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning > Jul 24 12:23:31 cloud1 init: prefdm main process (16188) terminated with > status 1 > Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, stopped > Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process (11271) killed by > TERM signal > Jul 24 12:24:06 cloud1 init: tty (/dev/tty3) main process (11273) killed by > TERM signal > Jul 24 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by > TERM signal > Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process (11277) killed by > TERM signal > Jul 24 12:24:06 cloud1 init: tty (/dev/tty6) main process (11279) killed by > TERM signal > Jul 24 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution > exception: not-found: Delete failed. No such queue: cinder-volume > (qpid/broker/Broker.cpp:940) > Jul 24 12:24:07 cloud1 proxy-server SIGTERM received > Jul 24 12:24:07 cloud1 proxy-server Exited > Jul 24 12:24:08 cloud1 qpidd[10788]: 2013-07-24 12:24:08 notice Shut down > Jul 24 12:24:09 cloud1 abrtd: Got signal 15, exiting > Jul 24 12:24:10 cloud1 tgtd: tgtd logger stopped, pid:10444 > Jul 24 12:24:19 cloud1 acpid: exiting > Jul 24 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 > Jul 24 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with > "rpcbind -w" > Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. > Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): > audit_pid=0 old=11693 auid=4294967295 ses=4294967295 > subj=system_u:system_r:auditd_t:s0 res=1 > Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.498:332235): > audit_enabled=0 old=1 auid=4294967295 ses=4294967295 > subj=system_u:system_r:auditctl_t:s0 res=1 > Jul 24 12:24:20 cloud1 kernel: Kernel logging (proc) stopped. > Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" > swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] exiting on > signal 15. > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -- Steve Gordon, RHCE Documentation Lead, Red Hat OpenStack Engineering Content Services Red Hat Canada (Toronto, Ontario) From roxenham at redhat.com Wed Jul 24 21:48:50 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 24 Jul 2013 22:48:50 +0100 Subject: [rhos-list] floating IP not reachable In-Reply-To: References: Message-ID: Hi Nicolas, Thanks for sending that over, it looks good to me; the important NAT rules are in-place, e.g. 
-A nova-network-OUTPUT -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 -A nova-network-OUTPUT -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 (And associated SNAT) And then for the security groups- ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT icmp -- anywhere anywhere Your em2 interface is also listening on the correct IP addresses: inet 10.192.76.135/32 scope global em2 inet 10.192.76.136/32 scope global em2 So you're saying that you can directly access your instances by using the internal IP, i.e. the 192.168.32.0/22 network? But NOT via the floating IPs? I just need to understand what you cannot currently access; my concern is that there's no link between the local loopback device and your instances so I need to establish what works and what doesn't. Cheers Rhys On 24 Jul 2013, at 16:43, Nicolas VOGEL wrote: > Hello Rhys, > > Thanks for your answer. > I've put in all the outputs you asked for. > The outputs were made with two VMs running and floating IPs associated (192.168.32.2/10.192.76.136 and 192.168.32.3/10.192.76.135, see nova list output). > I connected via ssh to the first VM and I could ping the second, so I think internal communication is OK. > I've put in the complete output from the iptables commands because I don't know what you want to verify, and I'm not very good with iptables. > Thanks for your help! > > 1) ip a > 1: lo: mtu 16436 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > inet 169.254.169.254/32 scope link lo > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: em1: mtu 1500 qdisc mq state UP qlen 1000 > link/ether 84:2b:2b:6c:fd:0f brd ff:ff:ff:ff:ff:ff > inet 10.192.75.190/24 brd 10.192.75.255 scope global em1 > inet6 fe80::862b:2bff:fe6c:fd0f/64 scope link > valid_lft forever preferred_lft forever > 3: em2: mtu 1500 qdisc mq state UP qlen 1000 > link/ether 84:2b:2b:6c:fd:10 brd ff:ff:ff:ff:ff:ff > inet 10.192.76.135/32 scope global em2 > inet 10.192.76.136/32 scope global em2 > inet6 fe80::862b:2bff:fe6c:fd10/64 scope link > valid_lft forever preferred_lft forever > 4: p1p1: mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:1b:21:7c:b8:38 brd ff:ff:ff:ff:ff:ff > 5: p1p2: mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:1b:21:7c:b8:39 brd ff:ff:ff:ff:ff:ff > 6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN > link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff > inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 > 7: virbr0-nic: mtu 1500 qdisc noop state DOWN qlen 500 > link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff > 9: br100: mtu 1500 qdisc noqueue state UNKNOWN > link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff > inet 192.168.32.1/22 brd 192.168.35.255 scope global br100 > inet6 fe80::3c6c:d7ff:fe0b:c6af/64 scope link > valid_lft forever preferred_lft forever > 10: vnet0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 > link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff > inet6 fe80::fc16:3eff:fe04:d9a2/64 scope link > valid_lft forever preferred_lft forever > 11: vnet1: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 > link/ether fe:16:3e:2f:a5:0e brd ff:ff:ff:ff:ff:ff > inet6 fe80::fc16:3eff:fe2f:a50e/64 scope link > valid_lft forever preferred_lft forever > ================================================================== > > 2) brctl show > bridge name bridge id STP enabled interfaces > br100 8000.fe163e04d9a2 no vnet0 > vnet1 > virbr0 8000.525400d64fda yes virbr0-nic > 
================================================================== > > 3) nova list > +--------------------------------------+---------+--------+-----------------------------------------+ > | ID | Name | Status | Networks | > +--------------------------------------+---------+--------+-----------------------------------------+ > | 0dd1311a-f188-4570-af5d-dbf0fe62d50e | fed32-1 | ACTIVE | novanetwork=192.168.32.2, 10.192.76.136 | > | 57960ee0-e2f2-4a08-8560-3bf39c489b78 | fed64-1 | ACTIVE | novanetwork=192.168.32.3, 10.192.76.135 | > +--------------------------------------+---------+--------+-----------------------------------------+ > ================================================================== > > 4) nova-manage network-list > id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid > 1 192.168.32.0/22 None 192.168.32.2 8.8.4.4 None None None e2e597a5-7606-4335-911a-d8cadcb840d6 > =================================================================== > > 5) nova secgroup-list > +---------+-------------+ > | Name | Description | > +---------+-------------+ > | default | default | > +---------+-------------+ > ==================================================================== > > 6) nova secgroup-list-rules > +-------------+-----------+---------+-----------+--------------+ > | IP Protocol | From Port | To Port | IP Range | Source Group | > +-------------+-----------+---------+-----------+--------------+ > | icmp | -1 | -1 | 0.0.0.0/0 | | > | tcp | 22 | 22 | 0.0.0.0/0 | | > +-------------+-----------+---------+-----------+--------------+ > ============================================================================ > > 7) iptables -L > Chain INPUT (policy ACCEPT) > target prot opt source destination > nova-network-INPUT all -- anywhere anywhere > nova-compute-INPUT all -- anywhere anywhere > nova-api-INPUT all -- anywhere anywhere > ACCEPT udp -- anywhere anywhere udp dpt:domain > ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 horizon incoming */ > ACCEPT tcp -- anywhere anywhere tcp dpt:domain > ACCEPT udp -- anywhere anywhere udp dpt:bootps > ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 nagios incoming */ > ACCEPT tcp -- anywhere anywhere tcp dpt:bootps > ACCEPT tcp -- anywhere anywhere multiport dports iscsi-target,8776 /* 001 cinder incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports 5666 /* 001 nrpe incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports armtechdaemon /* 001 glance incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports rsync /* 001 rsync incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports webcache /* 001 swift proxy incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports x11,6001,6002,rsync /* 001 swift storage incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports commplex-main,35357 /* 001 keystone incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports vnc-server:cvsup /* 001 nova compute incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports mysql /* 001 mysql incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports 6080 /* 001 novncproxy incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports 8773,8774,8775 /* 001 novaapi incoming */ > ACCEPT tcp -- anywhere anywhere multiport dports amqp /* 001 qpid incoming */ > ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED > ACCEPT icmp -- anywhere anywhere > ACCEPT all -- anywhere anywhere > ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh > REJECT all -- anywhere anywhere reject-with 
icmp-host-prohibited > > Chain FORWARD (policy ACCEPT) > target prot opt source destination > nova-filter-top all -- anywhere anywhere > nova-network-FORWARD all -- anywhere anywhere > nova-compute-FORWARD all -- anywhere anywhere > nova-api-FORWARD all -- anywhere anywhere > ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED > ACCEPT all -- 192.168.122.0/24 anywhere > ACCEPT all -- anywhere anywhere > REJECT all -- anywhere anywhere reject-with icmp-port-unreachable > REJECT all -- anywhere anywhere reject-with icmp-port-unreachable > REJECT all -- anywhere anywhere reject-with icmp-host-prohibited > > Chain OUTPUT (policy ACCEPT) > target prot opt source destination > nova-filter-top all -- anywhere anywhere > nova-network-OUTPUT all -- anywhere anywhere > nova-compute-OUTPUT all -- anywhere anywhere > nova-api-OUTPUT all -- anywhere anywhere > > Chain nova-api-FORWARD (1 references) > target prot opt source destination > > Chain nova-api-INPUT (1 references) > target prot opt source destination > ACCEPT tcp -- anywhere 10.192.75.190 tcp dpt:8775 > > Chain nova-api-OUTPUT (1 references) > target prot opt source destination > > Chain nova-api-local (1 references) > target prot opt source destination > > Chain nova-compute-FORWARD (1 references) > target prot opt source destination > ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps > ACCEPT all -- anywhere anywhere > ACCEPT all -- anywhere anywhere > > Chain nova-compute-INPUT (1 references) > target prot opt source destination > ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps > > Chain nova-compute-OUTPUT (1 references) > target prot opt source destination > > Chain nova-compute-inst-2 (1 references) > target prot opt source destination > DROP all -- anywhere anywhere state INVALID > ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED > nova-compute-provider all -- anywhere anywhere > ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc > ACCEPT all -- 192.168.32.0/22 anywhere > ACCEPT tcp -- anywhere anywhere tcp dpt:ssh > ACCEPT icmp -- anywhere anywhere > nova-compute-sg-fallback all -- anywhere anywhere > > Chain nova-compute-inst-3 (1 references) > target prot opt source destination > DROP all -- anywhere anywhere state INVALID > ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED > nova-compute-provider all -- anywhere anywhere > ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc > ACCEPT all -- 192.168.32.0/22 anywhere > ACCEPT tcp -- anywhere anywhere tcp dpt:ssh > ACCEPT icmp -- anywhere anywhere > nova-compute-sg-fallback all -- anywhere anywhere > > Chain nova-compute-local (1 references) > target prot opt source destination > nova-compute-inst-2 all -- anywhere 192.168.32.2 > nova-compute-inst-3 all -- anywhere 192.168.32.3 > > Chain nova-compute-provider (2 references) > target prot opt source destination > > Chain nova-compute-sg-fallback (2 references) > target prot opt source destination > DROP all -- anywhere anywhere > > Chain nova-filter-top (2 references) > target prot opt source destination > nova-network-local all -- anywhere anywhere > nova-compute-local all -- anywhere anywhere > nova-api-local all -- anywhere anywhere > > Chain nova-network-FORWARD (1 references) > target prot opt source destination > ACCEPT all -- anywhere anywhere > ACCEPT all -- anywhere anywhere > > Chain nova-network-INPUT (1 references) > target prot opt source destination > ACCEPT udp -- anywhere anywhere udp dpt:bootps > ACCEPT tcp -- anywhere anywhere tcp 
dpt:bootps > ACCEPT udp -- anywhere anywhere udp dpt:domain > ACCEPT tcp -- anywhere anywhere tcp dpt:domain > > Chain nova-network-OUTPUT (1 references) > target prot opt source destination > > Chain nova-network-local (1 references) > target prot opt source destination > ================================================================================ > > 8) iptables -L -t nat > Chain PREROUTING (policy ACCEPT) > target prot opt source destination > nova-network-PREROUTING all -- anywhere anywhere > nova-compute-PREROUTING all -- anywhere anywhere > nova-api-PREROUTING all -- anywhere anywhere > > Chain POSTROUTING (policy ACCEPT) > target prot opt source destination > nova-network-POSTROUTING all -- anywhere anywhere > nova-compute-POSTROUTING all -- anywhere anywhere > nova-api-POSTROUTING all -- anywhere anywhere > MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 > MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535 > MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24 > nova-postrouting-bottom all -- anywhere anywhere > > Chain OUTPUT (policy ACCEPT) > target prot opt source destination > nova-network-OUTPUT all -- anywhere anywhere > nova-compute-OUTPUT all -- anywhere anywhere > nova-api-OUTPUT all -- anywhere anywhere > > Chain nova-api-OUTPUT (1 references) > target prot opt source destination > > Chain nova-api-POSTROUTING (1 references) > target prot opt source destination > > Chain nova-api-PREROUTING (1 references) > target prot opt source destination > > Chain nova-api-float-snat (1 references) > target prot opt source destination > > Chain nova-api-snat (1 references) > target prot opt source destination > nova-api-float-snat all -- anywhere anywhere > > Chain nova-compute-OUTPUT (1 references) > target prot opt source destination > > Chain nova-compute-POSTROUTING (1 references) > target prot opt source destination > > Chain nova-compute-PREROUTING (1 references) > target prot opt source destination > > Chain nova-compute-float-snat (1 references) > target prot opt source destination > > Chain nova-compute-snat (1 references) > target prot opt source destination > nova-compute-float-snat all -- anywhere anywhere > > Chain nova-network-OUTPUT (1 references) > target prot opt source destination > DNAT all -- anywhere 10.192.76.135 to:192.168.32.3 > DNAT all -- anywhere 10.192.76.136 to:192.168.32.2 > > Chain nova-network-POSTROUTING (1 references) > target prot opt source destination > ACCEPT all -- 192.168.32.0/22 10.192.75.190 > ACCEPT all -- 192.168.32.0/22 192.168.32.0/22 ! 
ctstate DNAT > SNAT all -- 192.168.32.3 anywhere ctstate DNAT to:10.192.76.135 > SNAT all -- 192.168.32.2 anywhere ctstate DNAT to:10.192.76.136 > > Chain nova-network-PREROUTING (1 references) > target prot opt source destination > DNAT tcp -- anywhere 169.254.169.254 tcp dpt:http to:10.192.75.190:8775 > DNAT all -- anywhere 10.192.76.135 to:192.168.32.3 > DNAT all -- anywhere 10.192.76.136 to:192.168.32.2 > > Chain nova-network-float-snat (1 references) > target prot opt source destination > SNAT all -- 192.168.32.3 192.168.32.3 to:10.192.76.135 > SNAT all -- 192.168.32.3 anywhere to:10.192.76.135 > SNAT all -- 192.168.32.2 192.168.32.2 to:10.192.76.136 > SNAT all -- 192.168.32.2 anywhere to:10.192.76.136 > > Chain nova-network-snat (1 references) > target prot opt source destination > nova-network-float-snat all -- anywhere anywhere > SNAT all -- 192.168.32.0/22 anywhere to:10.192.75.190 > > Chain nova-postrouting-bottom (1 references) > target prot opt source destination > nova-network-snat all -- anywhere anywhere > nova-compute-snat all -- anywhere anywhere > nova-api-snat all -- anywhere anywhere > =========================================================================== > > 9) iptables -S -t nat > -P PREROUTING ACCEPT > -P POSTROUTING ACCEPT > -P OUTPUT ACCEPT > -N nova-api-OUTPUT > -N nova-api-POSTROUTING > -N nova-api-PREROUTING > -N nova-api-float-snat > -N nova-api-snat > -N nova-compute-OUTPUT > -N nova-compute-POSTROUTING > -N nova-compute-PREROUTING > -N nova-compute-float-snat > -N nova-compute-snat > -N nova-network-OUTPUT > -N nova-network-POSTROUTING > -N nova-network-PREROUTING > -N nova-network-float-snat > -N nova-network-snat > -N nova-postrouting-bottom > -A PREROUTING -j nova-network-PREROUTING > -A PREROUTING -j nova-compute-PREROUTING > -A PREROUTING -j nova-api-PREROUTING > -A POSTROUTING -j nova-network-POSTROUTING > -A POSTROUTING -j nova-compute-POSTROUTING > -A POSTROUTING -j nova-api-POSTROUTING > -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535 > -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535 > -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE > -A POSTROUTING -j nova-postrouting-bottom > -A OUTPUT -j nova-network-OUTPUT > -A OUTPUT -j nova-compute-OUTPUT > -A OUTPUT -j nova-api-OUTPUT > -A nova-api-snat -j nova-api-float-snat > -A nova-compute-snat -j nova-compute-float-snat > -A nova-network-OUTPUT -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 > -A nova-network-OUTPUT -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 > -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 10.192.75.190/32 -j ACCEPT > -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 192.168.32.0/22 -m conntrack ! 
--ctstate DNAT -j ACCEPT > -A nova-network-POSTROUTING -s 192.168.32.3/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.135 > -A nova-network-POSTROUTING -s 192.168.32.2/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.136 > -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.192.75.190:8775 > -A nova-network-PREROUTING -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 > -A nova-network-PREROUTING -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 > -A nova-network-float-snat -s 192.168.32.3/32 -d 192.168.32.3/32 -j SNAT --to-source 10.192.76.135 > -A nova-network-float-snat -s 192.168.32.3/32 -o em2 -j SNAT --to-source 10.192.76.135 > -A nova-network-float-snat -s 192.168.32.2/32 -d 192.168.32.2/32 -j SNAT --to-source 10.192.76.136 > -A nova-network-float-snat -s 192.168.32.2/32 -o em2 -j SNAT --to-source 10.192.76.136 > -A nova-network-snat -j nova-network-float-snat > -A nova-network-snat -s 192.168.32.0/22 -o em2 -j SNAT --to-source 10.192.75.190 > -A nova-postrouting-bottom -j nova-network-snat > -A nova-postrouting-bottom -j nova-compute-snat > -A nova-postrouting-bottom -j nova-api-snat > ======================================================================================== > > 10) em1 config file > DEVICE=em1 > HWADDR=84:2B:2B:6C:FD:0F > TYPE=Ethernet > UUID=e65a3f54-594e-4b2a-bd63-b488ba0d7adb > ONBOOT=yes > NM_CONTROLLED=no > BOOTPROTO=none > IPADDR=10.192.75.190 > PREFIX=24 > GATEWAY=10.192.75.1 > DNS1=10.192.48.100 > DNS2=10.192.48.101 > ================================================================================================== > > 11) em2 config file > DEVICE=em2 > HWADDR=84:2B:2B:6C:FD:10 > TYPE=Ethernet > UUID=ad6f5595-1df3-437d-b231-8b9e5db9c260 > ONBOOT=yes > NM_CONTROLLED=no > BOOTPROTO=none > > ================================================================================================= > ================================================================================================= > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: Wednesday, July 24, 2013 17:16 > To: Nicolas VOGEL > Cc: rhos-list at redhat.com > Subject: Re: [rhos-list] floating IP not reachable > > Hi Nicolas, > > When you've got the instance running and a floating-ip assigned, can you please pastebin the output of- > > 1) ip a > 2) brctl show > 3) nova list > 4) nova-manage network-list > 5) nova secgroup-list > 6) nova secgroup-list-rules > 7) iptables -L > 8) iptables -L -t nat > 9) iptables -S -t nat > > Oh, and when you have more than one instance running, can you ping between the instances via 192.168.32.0/22? Make sure to sanitise anything you need to in the pastes. > > Many thanks! > Rhys > > > On 24 Jul 2013, at 16:05, Nicolas VOGEL wrote: > >> Hi, >> >> I just installed a new all-in-one controller without quantum. Everything works fine and now I want to use floating IPs as described here: http://openstack.redhat.com/Floating_IP_range. I want to use my second NIC (em2) for this purpose. For the installation, I used my first NIC (em1) and packstack automatically created a bridge (br100). >> >> I deleted the default network and created a new one, which matches the subnet to which em2 is connected. After that I modified the public_interface in the nova.conf to em2 and also the floating_range with the subnet I just created. I didn't modify the flat_interface and left it at the default value (lo). 
>> >> I just enabled the em2 interface but didn't assign any IP address to it. >> I added two rules to the default group to allow ping and SSH. >> >> I can start VMs and they get an internal IP address (from 192.168.32.0/22). I can also associate a floating IP with each VM. But I'm unable to ping a floating IP. >> >> If someone has any idea how to resolve the problem, it would be very helpful. >> And if someone has a configuration that runs correctly, I would be interested in how you configured your network interfaces and your nova.conf. >> >> Thanks, Nicolas. >> >> Here's an output from my nova.conf: >> public_interface=em2 >> default_floating_pool=nova >> novncproxy_port=6080 >> dhcp_domain=novalocal >> libvirt_type=kvm >> floating_range=10.192.76.0/25 >> fixed_range=192.168.32.0/22 >> auto_assign_floating_ip=False >> novncproxy_base_url=http://10.192.75.190:6080/vnc_auto.html >> flat_interface=lo >> vnc_enabled=True >> flat_network_bridge=br100 >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > From roxenham at redhat.com Wed Jul 24 21:53:28 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 24 Jul 2013 22:53:28 +0100 Subject: [rhos-list] Installing the OpenStack Compute Service In-Reply-To: <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca> <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> Message-ID: <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> On 24 Jul 2013, at 22:09, Steve Gordon wrote: > ----- Original Message ----- >> From: "Hao Chen" >> To: rhos-list at redhat.com >> Sent: Wednesday, July 24, 2013 3:39:18 PM >> Subject: [rhos-list] Installing the OpenStack Compute Service > > Hi again, > >> Followed the latest RHOS 3.0 docs at >> https://access.redhat.com/site/documentation//en-US/Red_Hat_OpenStack/3/html/Installation_and_Configuration_Guide/Updating_the_Compute_Configuration.html >> >> (1) All "/etc/nova/nova/conf" should be "/etc/nova/nova.conf" > > Yes, this one was raised previously here and we intend to fix shortly: > > https://bugzilla.redhat.com/show_bug.cgi?id=980547 > >> (2) When testing http://10.2.0.196:6080/vnc_auto.html received an error >> "Failed to connect to server (code: 1006)" Hi, Which browser are you testing with here? Can you verify that the iptables rule on your compute host exists for vnc? Cheers, Rhys >> (3) Is v3 supported or should use v2.0 instead in "DEFAULT >> quantum_admin_auth_url http://IP:35357/v3"? > > I suspect this should be v2.0 for now, let me know if changing it helps and I will file a bug. Will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else. > > Thanks, > > Steve > >> (4) 10.3.6. Starting the Compute Services >> When running "service messagebus restart" or "service messagebus stop" it >> either logged me out or killed the server causing the system to reboot. The >> log info is as below. >> ... >> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no sender#012 >> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads. >> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads. 
>> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START >> ********************************** >> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >> unregistering from session: Connection is closed >> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >> unregistering from session: Connection is closed >> Jul 24 12:23:31 cloud1 gdm-binary: ******************* START >> ******************************** >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() [0x418bb9] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: /usr/sbin/gdm-binary() [0x418d17] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 2: /lib64/libpthread.so.0() >> [0x33c2c0f500] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 3: >> /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() [0x405ee9] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: >> /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() >> [0x33b6020e23] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: >> /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: >> /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() >> [0x388a812db6] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: >> /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: /usr/lib64/libdbus-glib-1.so.2() >> [0x388a8130a0] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 12: >> /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: /usr/lib64/libdbus-glib-1.so.2() >> [0x388a809b45] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 14: >> /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: /lib64/libglib-2.0.so.0() >> [0x33b543c938] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 16: >> /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() >> [0x406949] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: >> /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() >> [0x405de9] >> Jul 24 12:23:31 cloud1 gdm-binary: ******************* END >> ********************************** >> Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16061) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16075) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16089) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16104) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* END >> ********************************** >> Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated with >> status 1 >> Jul 
24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16132) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16146) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16160) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16174) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm main process ended, respawning >> Jul 24 12:23:31 cloud1 init: prefdm main process (16188) terminated with >> status 1 >> Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, stopped >> Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process (11271) killed by >> TERM signal >> Jul 24 12:24:06 cloud1 init: tty (/dev/tty3) main process (11273) killed by >> TERM signal >> Jul 24 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by >> TERM signal >> Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process (11277) killed by >> TERM signal >> Jul 24 12:24:06 cloud1 init: tty (/dev/tty6) main process (11279) killed by >> TERM signal >> Jul 24 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution >> exception: not-found: Delete failed. No such queue: cinder-volume >> (qpid/broker/Broker.cpp:940) >> Jul 24 12:24:07 cloud1 proxy-server SIGTERM received >> Jul 24 12:24:07 cloud1 proxy-server Exited >> Jul 24 12:24:08 cloud1 qpidd[10788]: 2013-07-24 12:24:08 notice Shut down >> Jul 24 12:24:09 cloud1 abrtd: Got signal 15, exiting >> Jul 24 12:24:10 cloud1 tgtd: tgtd logger stopped, pid:10444 >> Jul 24 12:24:19 cloud1 acpid: exiting >> Jul 24 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 >> Jul 24 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with >> "rpcbind -w" >> Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. >> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): >> audit_pid=0 old=11693 auid=4294967295 ses=4294967295 >> subj=system_u:system_r:auditd_t:s0 res=1 >> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.498:332235): >> audit_enabled=0 old=1 auid=4294967295 ses=4294967295 >> subj=system_u:system_r:auditctl_t:s0 res=1 >> Jul 24 12:24:20 cloud1 kernel: Kernel logging (proc) stopped. >> Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" >> swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] exiting on >> signal 15. >> > > -- > Steve Gordon, RHCE > Documentation Lead, Red Hat OpenStack > Engineering Content Services > Red Hat Canada (Toronto, Ontario) > -- Rhys Oxenham Cloud Solution Architect, Red Hat UK e: roxenham at redhat.com m: +44 (0)7866 446625 From Hao.Chen at NRCan-RNCan.gc.ca Wed Jul 24 22:41:03 2013 From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao) Date: Wed, 24 Jul 2013 22:41:03 +0000 Subject: [rhos-list] Installing the OpenStack Compute Service In-Reply-To: <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca> <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647012C47@S-BSC-MBX2.nrn.nrcan.gc.ca> Hi Rhys, The browser is Firefox ESR 17.0.7. iptables has been turned off. Thanks all for the help. 
Hao >> (2) When testing http://10.2.0.196:6080/vnc_auto.html received an >> error "Failed to connect to server (code: 1006)" Hi, Which browser are you testing with here? Can you verify that the iptables rule on your compute host exists for vnc? Cheers, Rhys >> (3) Is v3 supported or should use v2.0 instead in "DEFAULT >> quantum_admin_auth_url http://IP:35357/v3"? > > I suspect this should be v2.0 for now, let me know if changing it helps and I will file a bug. Will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else. > > Thanks, > > Steve > >> (4) 10.3.6. Starting the Compute Services When running "service >> messagebus restart" or "service messagebus stop" it either logged me >> out or killed the server causing the system to reboot. The log info >> is as bellow. >> ... >> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no >> sender#012 Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads. >> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads. >> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START >> ********************************** >> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >> unregistering from session: Connection is closed Jul 24 12:23:31 >> cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from >> session: Connection is closed Jul 24 12:23:31 cloud1 gdm-binary: >> ******************* START >> ******************************** >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() >> [0x418bb9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: >> /usr/sbin/gdm-binary() [0x418d17] Jul 24 12:23:31 cloud1 gdm-binary: >> Frame 2: /lib64/libpthread.so.0() [0x33c2c0f500] Jul 24 12:23:31 >> cloud1 gdm-binary: Frame 3: >> /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] Jul 24 >> 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() >> [0x405ee9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: >> /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] Jul >> 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() >> [0x33b6020e23] Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: >> /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: >> /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] Jul 24 >> 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() >> [0x388a812db6] Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: >> /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: >> /usr/lib64/libdbus-glib-1.so.2() [0x388a8130a0] Jul 24 12:23:31 >> cloud1 gdm-binary: Frame 12: >> /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: >> /usr/lib64/libdbus-glib-1.so.2() [0x388a809b45] Jul 24 12:23:31 >> cloud1 gdm-binary: Frame 14: >> /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] >> Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: >> /lib64/libglib-2.0.so.0() [0x33b543c938] Jul 24 12:23:31 cloud1 >> gdm-binary: Frame 16: >> /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] Jul 24 >> 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() >> [0x406949] Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: >> /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] Jul 24 >> 12:23:31 cloud1 gdm-binary: Frame 19: 
/usr/sbin/gdm-binary() >> [0x405de9] Jul 24 12:23:31 cloud1 gdm-binary: ******************* END >> ********************************** >> Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated >> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16061) >> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >> process (16075) terminated with status 1 Jul 24 12:23:31 cloud1 init: >> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >> prefdm main process (16089) terminated with status 1 Jul 24 12:23:31 >> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >> cloud1 init: prefdm main process (16104) terminated with status 1 Jul >> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >> 12:23:31 cloud1 gdm[16053]: ******************* END >> ********************************** >> Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated >> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16132) >> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >> process (16146) terminated with status 1 Jul 24 12:23:31 cloud1 init: >> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >> prefdm main process (16160) terminated with status 1 Jul 24 12:23:31 >> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >> cloud1 init: prefdm main process (16174) terminated with status 1 Jul >> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >> 12:23:31 cloud1 init: prefdm main process (16188) terminated with >> status 1 Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, >> stopped Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process >> (11271) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >> (/dev/tty3) main process (11273) killed by TERM signal Jul 24 >> 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by >> TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process >> (11277) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >> (/dev/tty6) main process (11279) killed by TERM signal Jul 24 >> 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution >> exception: not-found: Delete failed. No such queue: cinder-volume >> (qpid/broker/Broker.cpp:940) >> Jul 24 12:24:07 cloud1 proxy-server SIGTERM received Jul 24 12:24:07 >> cloud1 proxy-server Exited Jul 24 12:24:08 cloud1 qpidd[10788]: >> 2013-07-24 12:24:08 notice Shut down Jul 24 12:24:09 cloud1 abrtd: >> Got signal 15, exiting Jul 24 12:24:10 cloud1 tgtd: tgtd logger >> stopped, pid:10444 Jul 24 12:24:19 cloud1 acpid: exiting Jul 24 >> 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 Jul 24 >> 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with >> "rpcbind -w" >> Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. 
>> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): >> audit_pid=0 old=11693 auid=4294967295 ses=4294967295 >> subj=system_u:system_r:auditd_t:s0 res=1 Jul 24 12:24:20 cloud1 >> kernel: type=1305 audit(1374693860.498:332235): >> audit_enabled=0 old=1 auid=4294967295 ses=4294967295 >> subj=system_u:system_r:auditctl_t:s0 res=1 Jul 24 12:24:20 cloud1 >> kernel: Kernel logging (proc) stopped. >> Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" >> swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] >> exiting on signal 15. >> > > -- > Steve Gordon, RHCE > Documentation Lead, Red Hat OpenStack > Engineering Content Services > Red Hat Canada (Toronto, Ontario) > -- Rhys Oxenham Cloud Solution Architect, Red Hat UK e: roxenham at redhat.com m: +44 (0)7866 446625 From roxenham at redhat.com Wed Jul 24 22:44:42 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 24 Jul 2013 23:44:42 +0100 Subject: [rhos-list] Installing the OpenStack Compute Service In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647012C47@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca> <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012C47@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <72D51425-A8AA-45E2-9DEF-A03922F24F8E@redhat.com> On 24 Jul 2013, at 23:41, "Chen, Hao" wrote: > Hi Rhys, > > The browser is Firefox ESR 17.0.7. iptables has been turned off. Hmm, usually Firefox works well for me with the vnc proxy service. I assume you're not using tenant networks with your setup if you've disabled iptables? On your compute host when an instance is running, can you confirm that the VNC server is listening? # netstat -tunpl | grep 59 Many thanks Rhys > > Thanks all for the help. > Hao > >>> (2) When testing http://10.2.0.196:6080/vnc_auto.html received an >>> error "Failed to connect to server (code: 1006)" > > Hi, > > Which browser are you testing with here? Can you verify that the iptables rule on your compute host exists for vnc? > > Cheers, > Rhys > > > >>> (3) Is v3 supported or should use v2.0 instead in "DEFAULT >>> quantum_admin_auth_url http://IP:35357/v3"? >> >> I suspect this should be v2.0 for now, let me know if changing it helps and I will file a bug. Will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else. >> >> Thanks, >> >> Steve >> >>> (4) 10.3.6. Starting the Compute Services When running "service >>> messagebus restart" or "service messagebus stop" it either logged me >>> out or killed the server causing the system to reboot. The log info >>> is as bellow. >>> ... >>> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no >>> sender#012 Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads. >>> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads. 
>>> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START >>> ********************************** >>> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >>> unregistering from session: Connection is closed Jul 24 12:23:31 >>> cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from >>> session: Connection is closed Jul 24 12:23:31 cloud1 gdm-binary: >>> ******************* START >>> ******************************** >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() >>> [0x418bb9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: >>> /usr/sbin/gdm-binary() [0x418d17] Jul 24 12:23:31 cloud1 gdm-binary: >>> Frame 2: /lib64/libpthread.so.0() [0x33c2c0f500] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 3: >>> /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() >>> [0x405ee9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: >>> /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] Jul >>> 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() >>> [0x33b6020e23] Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: >>> /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: >>> /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() >>> [0x388a812db6] Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: >>> /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: >>> /usr/lib64/libdbus-glib-1.so.2() [0x388a8130a0] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 12: >>> /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: >>> /usr/lib64/libdbus-glib-1.so.2() [0x388a809b45] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 14: >>> /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: >>> /lib64/libglib-2.0.so.0() [0x33b543c938] Jul 24 12:23:31 cloud1 >>> gdm-binary: Frame 16: >>> /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() >>> [0x406949] Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: >>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() >>> [0x405de9] Jul 24 12:23:31 cloud1 gdm-binary: ******************* END >>> ********************************** >>> Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated >>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16061) >>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>> process (16075) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>> prefdm main process (16089) terminated with status 1 Jul 24 12:23:31 >>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>> cloud1 init: prefdm main process (16104) terminated with status 1 Jul >>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>> 12:23:31 cloud1 gdm[16053]: ******************* END >>> ********************************** >>> Jul 24 12:23:31 cloud1 init: prefdm main process (16118) 
terminated >>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16132) >>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>> process (16146) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>> prefdm main process (16160) terminated with status 1 Jul 24 12:23:31 >>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>> cloud1 init: prefdm main process (16174) terminated with status 1 Jul >>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>> 12:23:31 cloud1 init: prefdm main process (16188) terminated with >>> status 1 Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, >>> stopped Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process >>> (11271) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>> (/dev/tty3) main process (11273) killed by TERM signal Jul 24 >>> 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by >>> TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process >>> (11277) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>> (/dev/tty6) main process (11279) killed by TERM signal Jul 24 >>> 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution >>> exception: not-found: Delete failed. No such queue: cinder-volume >>> (qpid/broker/Broker.cpp:940) >>> Jul 24 12:24:07 cloud1 proxy-server SIGTERM received Jul 24 12:24:07 >>> cloud1 proxy-server Exited Jul 24 12:24:08 cloud1 qpidd[10788]: >>> 2013-07-24 12:24:08 notice Shut down Jul 24 12:24:09 cloud1 abrtd: >>> Got signal 15, exiting Jul 24 12:24:10 cloud1 tgtd: tgtd logger >>> stopped, pid:10444 Jul 24 12:24:19 cloud1 acpid: exiting Jul 24 >>> 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 Jul 24 >>> 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with >>> "rpcbind -w" >>> Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. >>> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): >>> audit_pid=0 old=11693 auid=4294967295 ses=4294967295 >>> subj=system_u:system_r:auditd_t:s0 res=1 Jul 24 12:24:20 cloud1 >>> kernel: type=1305 audit(1374693860.498:332235): >>> audit_enabled=0 old=1 auid=4294967295 ses=4294967295 >>> subj=system_u:system_r:auditctl_t:s0 res=1 Jul 24 12:24:20 cloud1 >>> kernel: Kernel logging (proc) stopped. >>> Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" >>> swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] >>> exiting on signal 15. >>> >> >> -- >> Steve Gordon, RHCE >> Documentation Lead, Red Hat OpenStack >> Engineering Content Services >> Red Hat Canada (Toronto, Ontario) >> > > -- > > Rhys Oxenham > Cloud Solution Architect, Red Hat UK > e: roxenham at redhat.com > m: +44 (0)7866 446625 From nicolas.vogel at heig-vd.ch Thu Jul 25 06:37:01 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Thu, 25 Jul 2013 06:37:01 +0000 Subject: [rhos-list] floating IP not reachable In-Reply-To: References: Message-ID: <5a0de70c731d4716a6a0f6e1c7d7d19e@EINTMBXC.einet.ad.eivd.ch> Hi, Yes that's right, I can ping and connect via SSH to my VMs from my controller using the private IP 192.168.32.2. 
My controller's name is IICT-SV1259 and my VMs' names are fed32-1 and fed64-1. Here's the output: [admin at IICT-SV1259 ~(keystone_admin)]$ ping 192.168.32.2 PING 192.168.32.2 (192.168.32.2) 56(84) bytes of data. 64 bytes from 192.168.32.2: icmp_seq=1 ttl=64 time=0.501 ms 64 bytes from 192.168.32.2: icmp_seq=2 ttl=64 time=0.334 ms 64 bytes from 192.168.32.2: icmp_seq=3 ttl=64 time=0.296 ms ^C --- 192.168.32.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2979ms rtt min/avg/max/mdev = 0.296/0.377/0.501/0.089 ms [admin at IICT-SV1259 ~(keystone_admin)]$ ping 192.168.32.3 PING 192.168.32.3 (192.168.32.3) 56(84) bytes of data. 64 bytes from 192.168.32.3: icmp_seq=1 ttl=64 time=0.407 ms 64 bytes from 192.168.32.3: icmp_seq=2 ttl=64 time=0.219 ms 64 bytes from 192.168.32.3: icmp_seq=3 ttl=64 time=0.207 ms 64 bytes from 192.168.32.3: icmp_seq=4 ttl=64 time=0.349 ms ^C --- 192.168.32.3 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3292ms rtt min/avg/max/mdev = 0.207/0.295/0.407/0.086 ms [admin at IICT-SV1259 ~(keystone_admin)]$ sudo ssh -i grizzli_nova-network.pem fedora at 192.168.32.2 [sudo] password for admin: Last login: Wed Jul 24 15:31:43 2013 from 192.168.32.1 [fedora at fed32-1 ~]$ [fedora at fed32-1 ~]$ ping 192.168.32.3 PING 192.168.32.3 (192.168.32.3) 56(84) bytes of data. 64 bytes from 192.168.32.3: icmp_seq=1 ttl=64 time=0.291 ms 64 bytes from 192.168.32.3: icmp_seq=2 ttl=64 time=0.581 ms 64 bytes from 192.168.32.3: icmp_seq=3 ttl=64 time=0.614 ms 64 bytes from 192.168.32.3: icmp_seq=4 ttl=64 time=0.504 ms ^C --- 192.168.32.3 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3001ms rtt min/avg/max/mdev = 0.291/0.497/0.614/0.127 ms [fedora at fed32-1 ~]$ [fedora at fed32-1 ~]$ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. ^C --- 8.8.8.8 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3000ms [fedora at fed32-1 ~]$ [fedora at fed32-1 ~]$ ping 10.192.75.1 PING 10.192.75.1 (10.192.75.1) 56(84) bytes of data. ^C --- 10.192.75.1 ping statistics --- 9 packets transmitted, 0 received, 100% packet loss, time 8000ms [fedora at fed32-1 ~]$ [fedora at fed32-1 ~]$ ping 10.192.76.1 PING 10.192.76.1 (10.192.76.1) 56(84) bytes of data. As you can see I'm connected to the fed32-1 VM but I can only ping my private network IPs (192.168.32.xx). There is no way to reach the external world. 10.192.75.0/24 is my management network (also used by all the OpenStack services) and 10.192.76.0/24 is the network for my floating IPs. 
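(As an aside, a quick way to narrow down where a floating-IP path breaks is to watch the NAT counters and the public interface while pinging the floating IP from a host outside the cloud. This is only a rough sketch, reusing the em2 interface and the nova-network chain names from the outputs above; the exact chain names depend on the deployment:

# sysctl net.ipv4.ip_forward
# iptables -t nat -vnL nova-network-PREROUTING
# tcpdump -n -i em2 icmp or arp

The sysctl must report 1, otherwise the host will not forward the DNATed traffic at all. The -v flag makes iptables print per-rule packet counters, which should climb with each ping if the DNAT rules are being hit, and tcpdump shows whether the echo requests, and the ARP queries for 10.192.76.135/.136, ever reach em2. If nothing arrives on em2 the problem is upstream routing or ARP for the 10.192.76.0/25 range; if packets arrive but the counters never move, the traffic is bypassing the NAT rules.)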
Here's the output from "route" and "ifconfig" commands on my VM: [fedora at fed32-1 ~]$ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.32.1 0.0.0.0 UG 0 0 0 eth0 192.168.32.0 * 255.255.252.0 U 0 0 0 eth0 [fedora at fed32-1 ~]$ ifconfig eth0: flags=4163 mtu 1500 inet 192.168.32.2 netmask 255.255.252.0 broadcast 192.168.35.255 inet6 fe80::f816:3eff:fe04:d9a2 prefixlen 64 scopeid 0x20 ether fa:16:3e:04:d9:a2 txqueuelen 1000 (Ethernet) RX packets 4319 bytes 639628 (624.6 KiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 5669 bytes 668617 (652.9 KiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73 mtu 65536 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0x10 loop txqueuelen 0 (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 And here's the same output on my controller: [admin at IICT-SV1259 ~(keystone_admin)]$ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.192.75.0 * 255.255.255.0 U 0 0 0 em1 192.168.122.0 * 255.255.255.0 U 0 0 0 virbr0 192.168.32.0 * 255.255.252.0 U 0 0 0 br100 link-local * 255.255.0.0 U 1002 0 0 em1 default 10.192.75.1 0.0.0.0 UG 0 0 0 em1 [admin at IICT-SV1259 ~(keystone_admin)]$ ifconfig br100 Link encap:Ethernet HWaddr FE:16:3E:04:D9:A2 inet addr:192.168.32.1 Bcast:192.168.35.255 Mask:255.255.252.0 inet6 addr: fe80::3c6c:d7ff:fe0b:c6af/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5930 errors:0 dropped:0 overruns:0 frame:0 TX packets:6039 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:980212 (957.2 KiB) TX bytes:1083240 (1.0 MiB) em1 Link encap:Ethernet HWaddr 84:2B:2B:6C:FD:0F inet addr:10.192.75.190 Bcast:10.192.75.255 Mask:255.255.255.0 inet6 addr: fe80::862b:2bff:fe6c:fd0f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:168093 errors:0 dropped:0 overruns:0 frame:0 TX packets:18570 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:17214443 (16.4 MiB) TX bytes:2952682 (2.8 MiB) em2 Link encap:Ethernet HWaddr 84:2B:2B:6C:FD:10 inet addr:10.192.76.135 Bcast:0.0.0.0 Mask:255.255.255.255 inet6 addr: fe80::862b:2bff:fe6c:fd10/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:47875 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3960287 (3.7 MiB) TX bytes:492 (492.0 b) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:1989973 errors:0 dropped:0 overruns:0 frame:0 TX packets:1989973 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1091699634 (1.0 GiB) TX bytes:1091699634 (1.0 GiB) virbr0 Link encap:Ethernet HWaddr 52:54:00:D6:4F:DA inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) vnet0 Link encap:Ethernet HWaddr FE:16:3E:04:D9:A2 inet6 addr: fe80::fc16:3eff:fe04:d9a2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5720 errors:0 dropped:0 overruns:0 frame:0 TX packets:4361 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 
txqueuelen:500 RX bytes:675343 (659.5 KiB) TX bytes:643288 (628.2 KiB) vnet1 Link encap:Ethernet HWaddr FE:16:3E:2F:A5:0E inet6 addr: fe80::fc16:3eff:fe2f:a50e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:5163 errors:0 dropped:0 overruns:0 frame:0 TX packets:3624 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:596811 (582.8 KiB) TX bytes:570388 (557.0 KiB) I hope that can help. What about my nova.conf file? Is everything all right with it? Should I modify something with the lo interface? Thanks, Nicolas. -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Rhys Oxenham Sent: Wednesday, July 24, 2013 23:49 To: Nicolas VOGEL Cc: rhos-list at redhat.com Subject: Re: [rhos-list] floating IP not reachable Hi Nicolas, Thanks for sending that over, it looks good to me; the important NAT rules are in-place, e.g. -A nova-network-OUTPUT -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3 -A nova-network-OUTPUT -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2 (And associated SNAT) And then for the security groups- ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT icmp -- anywhere anywhere Your em2 interface is also listening on the correct IP addresses: inet 10.192.76.135/32 scope global em2 inet 10.192.76.136/32 scope global em2 So you're saying that you can directly access your instances by using the internal IP, i.e. the 192.168.32.0/22 network? But NOT via the floating IPs? I just need to understand what you cannot currently access; my concern is that there's no link between the local loopback device and your instances so I need to establish what works and what doesn't. Cheers Rhys On 24 Jul 2013, at 16:43, Nicolas VOGEL wrote: > Hello Rhys, > > Thanks for your answer. > I've put in all the outputs you asked for. > The outputs were made with two VMs running and floating IPs associated (192.168.32.2/10.192.76.136 and 192.168.32.3/10.192.76.135, see nova list output). > I connected via ssh to the first VM and I could ping the second, so I think internal communication is OK. > I've put in the complete output from the iptables commands because I don't know what you want to verify, and I'm not very good with iptables. > Thanks for your help! 
> > 1) ip a > 1: lo: mtu 16436 qdisc noqueue state UNKNOWN > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > inet 169.254.169.254/32 scope link lo > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: em1: mtu 1500 qdisc mq state UP qlen 1000 > link/ether 84:2b:2b:6c:fd:0f brd ff:ff:ff:ff:ff:ff > inet 10.192.75.190/24 brd 10.192.75.255 scope global em1 > inet6 fe80::862b:2bff:fe6c:fd0f/64 scope link > valid_lft forever preferred_lft forever > 3: em2: mtu 1500 qdisc mq state UP qlen 1000 > link/ether 84:2b:2b:6c:fd:10 brd ff:ff:ff:ff:ff:ff > inet 10.192.76.135/32 scope global em2 > inet 10.192.76.136/32 scope global em2 > inet6 fe80::862b:2bff:fe6c:fd10/64 scope link > valid_lft forever preferred_lft forever > 4: p1p1: mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:1b:21:7c:b8:38 brd ff:ff:ff:ff:ff:ff > 5: p1p2: mtu 1500 qdisc noop state DOWN qlen 1000 > link/ether 00:1b:21:7c:b8:39 brd ff:ff:ff:ff:ff:ff > 6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN > link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff > inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 > 7: virbr0-nic: mtu 1500 qdisc noop state DOWN qlen 500 > link/ether 52:54:00:d6:4f:da brd ff:ff:ff:ff:ff:ff > 9: br100: mtu 1500 qdisc noqueue state UNKNOWN > link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff > inet 192.168.32.1/22 brd 192.168.35.255 scope global br100 > inet6 fe80::3c6c:d7ff:fe0b:c6af/64 scope link > valid_lft forever preferred_lft forever > 10: vnet0: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 > link/ether fe:16:3e:04:d9:a2 brd ff:ff:ff:ff:ff:ff > inet6 fe80::fc16:3eff:fe04:d9a2/64 scope link > valid_lft forever preferred_lft forever > 11: vnet1: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 > link/ether fe:16:3e:2f:a5:0e brd ff:ff:ff:ff:ff:ff > inet6 fe80::fc16:3eff:fe2f:a50e/64 scope link > valid_lft forever preferred_lft forever > ================================================================== > > 2) brctl show > bridge name bridge id STP enabled interfaces > br100 8000.fe163e04d9a2 no vnet0 > vnet1 > virbr0 8000.525400d64fda yes virbr0-nic > ================================================================== > > 3) nova list > +--------------------------------------+---------+--------+-----------------------------------------+ > | ID | Name | Status | Networks | > +--------------------------------------+---------+--------+-----------------------------------------+ > | 0dd1311a-f188-4570-af5d-dbf0fe62d50e | fed32-1 | ACTIVE | > | novanetwork=192.168.32.2, 10.192.76.136 | > | 57960ee0-e2f2-4a08-8560-3bf39c489b78 | fed64-1 | ACTIVE | > | novanetwork=192.168.32.3, 10.192.76.135 | > +--------------------------------------+---------+--------+-----------------------------------------+ > ================================================================== > > 4) nova-manage network-list > id IPv4 IPv6 start address DNS1 DNS2 VlanID project uuid > 1 192.168.32.0/22 None 192.168.32.2 8.8.4.4 None None None e2e597a5-7606-4335-911a-d8cadcb840d6 > =================================================================== > > 5) nova secgroup-list > +---------+-------------+ > | Name | Description | > +---------+-------------+ > | default | default | > +---------+-------------+ > ==================================================================== > > 6) nova secgroup-list-rules > +-------------+-----------+---------+-----------+--------------+ > | IP Protocol | From Port | To Port | IP Range | Source Group | > 
> +-------------+-----------+---------+-----------+--------------+
> | icmp        | -1        | -1      | 0.0.0.0/0 |              |
> | tcp         | 22        | 22      | 0.0.0.0/0 |              |
> +-------------+-----------+---------+-----------+--------------+
> ============================================================
>
> 7) iptables -L
> Chain INPUT (policy ACCEPT)
> target prot opt source destination
> nova-network-INPUT all -- anywhere anywhere
> nova-compute-INPUT all -- anywhere anywhere
> nova-api-INPUT all -- anywhere anywhere
> ACCEPT udp -- anywhere anywhere udp dpt:domain
> ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 horizon incoming */
> ACCEPT tcp -- anywhere anywhere tcp dpt:domain
> ACCEPT udp -- anywhere anywhere udp dpt:bootps
> ACCEPT tcp -- anywhere anywhere multiport dports http /* 001 nagios incoming */
> ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
> ACCEPT tcp -- anywhere anywhere multiport dports iscsi-target,8776 /* 001 cinder incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports 5666 /* 001 nrpe incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports armtechdaemon /* 001 glance incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports rsync /* 001 rsync incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports webcache /* 001 swift proxy incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports x11,6001,6002,rsync /* 001 swift storage incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports commplex-main,35357 /* 001 keystone incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports vnc-server:cvsup /* 001 nova compute incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports mysql /* 001 mysql incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports 6080 /* 001 novncproxy incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports 8773,8774,8775 /* 001 novaapi incoming */
> ACCEPT tcp -- anywhere anywhere multiport dports amqp /* 001 qpid incoming */
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> ACCEPT icmp -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
> ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain FORWARD (policy ACCEPT)
> target prot opt source destination
> nova-filter-top all -- anywhere anywhere
> nova-network-FORWARD all -- anywhere anywhere
> nova-compute-FORWARD all -- anywhere anywhere
> nova-api-FORWARD all -- anywhere anywhere
> ACCEPT all -- anywhere 192.168.122.0/24 state RELATED,ESTABLISHED
> ACCEPT all -- 192.168.122.0/24 anywhere
> ACCEPT all -- anywhere anywhere
> REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
> REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
> REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
> nova-filter-top all -- anywhere anywhere
> nova-network-OUTPUT all -- anywhere anywhere
> nova-compute-OUTPUT all -- anywhere anywhere
> nova-api-OUTPUT all -- anywhere anywhere
>
> Chain nova-api-FORWARD (1 references)
> target prot opt source destination
>
> Chain nova-api-INPUT (1 references)
> target prot opt source destination
> ACCEPT tcp -- anywhere 10.192.75.190 tcp dpt:8775
>
> Chain nova-api-OUTPUT (1 references)
> target prot opt source destination
>
> Chain nova-api-local (1 references)
> target prot opt source destination
>
> Chain nova-compute-FORWARD (1 references)
> target prot opt source destination
> ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps
> ACCEPT all -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
>
> Chain nova-compute-INPUT (1 references)
> target prot opt source destination
> ACCEPT udp -- default 255.255.255.255 udp spt:bootpc dpt:bootps
>
> Chain nova-compute-OUTPUT (1 references)
> target prot opt source destination
>
> Chain nova-compute-inst-2 (1 references)
> target prot opt source destination
> DROP all -- anywhere anywhere state INVALID
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> nova-compute-provider all -- anywhere anywhere
> ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc
> ACCEPT all -- 192.168.32.0/22 anywhere
> ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
> ACCEPT icmp -- anywhere anywhere
> nova-compute-sg-fallback all -- anywhere anywhere
>
> Chain nova-compute-inst-3 (1 references)
> target prot opt source destination
> DROP all -- anywhere anywhere state INVALID
> ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
> nova-compute-provider all -- anywhere anywhere
> ACCEPT udp -- 192.168.32.1 anywhere udp spt:bootps dpt:bootpc
> ACCEPT all -- 192.168.32.0/22 anywhere
> ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
> ACCEPT icmp -- anywhere anywhere
> nova-compute-sg-fallback all -- anywhere anywhere
>
> Chain nova-compute-local (1 references)
> target prot opt source destination
> nova-compute-inst-2 all -- anywhere 192.168.32.2
> nova-compute-inst-3 all -- anywhere 192.168.32.3
>
> Chain nova-compute-provider (2 references)
> target prot opt source destination
>
> Chain nova-compute-sg-fallback (2 references)
> target prot opt source destination
> DROP all -- anywhere anywhere
>
> Chain nova-filter-top (2 references)
> target prot opt source destination
> nova-network-local all -- anywhere anywhere
> nova-compute-local all -- anywhere anywhere
> nova-api-local all -- anywhere anywhere
>
> Chain nova-network-FORWARD (1 references)
> target prot opt source destination
> ACCEPT all -- anywhere anywhere
> ACCEPT all -- anywhere anywhere
>
> Chain nova-network-INPUT (1 references)
> target prot opt source destination
> ACCEPT udp -- anywhere anywhere udp dpt:bootps
> ACCEPT tcp -- anywhere anywhere tcp dpt:bootps
> ACCEPT udp -- anywhere anywhere udp dpt:domain
> ACCEPT tcp -- anywhere anywhere tcp dpt:domain
>
> Chain nova-network-OUTPUT (1 references)
> target prot opt source destination
>
> Chain nova-network-local (1 references)
> target prot opt source destination
> ================================================================
>
> 8) iptables -L -t nat
> Chain PREROUTING (policy ACCEPT)
> target prot opt source destination
> nova-network-PREROUTING all -- anywhere anywhere
> nova-compute-PREROUTING all -- anywhere anywhere
> nova-api-PREROUTING all -- anywhere anywhere
>
> Chain POSTROUTING (policy ACCEPT)
> target prot opt source destination
> nova-network-POSTROUTING all -- anywhere anywhere
> nova-compute-POSTROUTING all -- anywhere anywhere
> nova-api-POSTROUTING all -- anywhere anywhere
> MASQUERADE tcp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
> MASQUERADE udp -- 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
> MASQUERADE all -- 192.168.122.0/24 !192.168.122.0/24
> nova-postrouting-bottom all -- anywhere anywhere
>
> Chain OUTPUT (policy ACCEPT)
> target prot opt source destination
> nova-network-OUTPUT all -- anywhere anywhere
> nova-compute-OUTPUT all -- anywhere anywhere
> nova-api-OUTPUT all -- anywhere anywhere
>
> Chain nova-api-OUTPUT (1 references)
> target prot opt source destination
>
> Chain nova-api-POSTROUTING (1 references)
> target prot opt source destination
>
> Chain nova-api-PREROUTING (1 references)
> target prot opt source destination
>
> Chain nova-api-float-snat (1 references)
> target prot opt source destination
>
> Chain nova-api-snat (1 references)
> target prot opt source destination
> nova-api-float-snat all -- anywhere anywhere
>
> Chain nova-compute-OUTPUT (1 references)
> target prot opt source destination
>
> Chain nova-compute-POSTROUTING (1 references)
> target prot opt source destination
>
> Chain nova-compute-PREROUTING (1 references)
> target prot opt source destination
>
> Chain nova-compute-float-snat (1 references)
> target prot opt source destination
>
> Chain nova-compute-snat (1 references)
> target prot opt source destination
> nova-compute-float-snat all -- anywhere anywhere
>
> Chain nova-network-OUTPUT (1 references)
> target prot opt source destination
> DNAT all -- anywhere 10.192.76.135 to:192.168.32.3
> DNAT all -- anywhere 10.192.76.136 to:192.168.32.2
>
> Chain nova-network-POSTROUTING (1 references)
> target prot opt source destination
> ACCEPT all -- 192.168.32.0/22 10.192.75.190
> ACCEPT all -- 192.168.32.0/22 192.168.32.0/22 ! ctstate DNAT
> SNAT all -- 192.168.32.3 anywhere ctstate DNAT to:10.192.76.135
> SNAT all -- 192.168.32.2 anywhere ctstate DNAT to:10.192.76.136
>
> Chain nova-network-PREROUTING (1 references)
> target prot opt source destination
> DNAT tcp -- anywhere 169.254.169.254 tcp dpt:http to:10.192.75.190:8775
> DNAT all -- anywhere 10.192.76.135 to:192.168.32.3
> DNAT all -- anywhere 10.192.76.136 to:192.168.32.2
>
> Chain nova-network-float-snat (1 references)
> target prot opt source destination
> SNAT all -- 192.168.32.3 192.168.32.3 to:10.192.76.135
> SNAT all -- 192.168.32.3 anywhere to:10.192.76.135
> SNAT all -- 192.168.32.2 192.168.32.2 to:10.192.76.136
> SNAT all -- 192.168.32.2 anywhere to:10.192.76.136
>
> Chain nova-network-snat (1 references)
> target prot opt source destination
> nova-network-float-snat all -- anywhere anywhere
> SNAT all -- 192.168.32.0/22 anywhere to:10.192.75.190
>
> Chain nova-postrouting-bottom (1 references)
> target prot opt source destination
> nova-network-snat all -- anywhere anywhere
> nova-compute-snat all -- anywhere anywhere
> nova-api-snat all -- anywhere anywhere
> ===========================================================
>
> 9) iptables -S -t nat
> -P PREROUTING ACCEPT
> -P POSTROUTING ACCEPT
> -P OUTPUT ACCEPT
> -N nova-api-OUTPUT
> -N nova-api-POSTROUTING
> -N nova-api-PREROUTING
> -N nova-api-float-snat
> -N nova-api-snat
> -N nova-compute-OUTPUT
> -N nova-compute-POSTROUTING
> -N nova-compute-PREROUTING
> -N nova-compute-float-snat
> -N nova-compute-snat
> -N nova-network-OUTPUT
> -N nova-network-POSTROUTING
> -N nova-network-PREROUTING
> -N nova-network-float-snat
> -N nova-network-snat
> -N nova-postrouting-bottom
> -A PREROUTING -j nova-network-PREROUTING
> -A PREROUTING -j nova-compute-PREROUTING
> -A PREROUTING -j nova-api-PREROUTING
> -A POSTROUTING -j nova-network-POSTROUTING
> -A POSTROUTING -j nova-compute-POSTROUTING
> -A POSTROUTING -j nova-api-POSTROUTING
> -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
> -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
> -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
> -A POSTROUTING -j nova-postrouting-bottom
> -A OUTPUT -j nova-network-OUTPUT
> -A OUTPUT -j nova-compute-OUTPUT
> -A OUTPUT -j nova-api-OUTPUT
> -A nova-api-snat -j nova-api-float-snat
> -A nova-compute-snat -j nova-compute-float-snat
> -A nova-network-OUTPUT -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3
> -A nova-network-OUTPUT -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2
> -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 10.192.75.190/32 -j ACCEPT
> -A nova-network-POSTROUTING -s 192.168.32.0/22 -d 192.168.32.0/22 -m conntrack ! --ctstate DNAT -j ACCEPT
> -A nova-network-POSTROUTING -s 192.168.32.3/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.135
> -A nova-network-POSTROUTING -s 192.168.32.2/32 -m conntrack --ctstate DNAT -j SNAT --to-source 10.192.76.136
> -A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.192.75.190:8775
> -A nova-network-PREROUTING -d 10.192.76.135/32 -j DNAT --to-destination 192.168.32.3
> -A nova-network-PREROUTING -d 10.192.76.136/32 -j DNAT --to-destination 192.168.32.2
> -A nova-network-float-snat -s 192.168.32.3/32 -d 192.168.32.3/32 -j SNAT --to-source 10.192.76.135
> -A nova-network-float-snat -s 192.168.32.3/32 -o em2 -j SNAT --to-source 10.192.76.135
> -A nova-network-float-snat -s 192.168.32.2/32 -d 192.168.32.2/32 -j SNAT --to-source 10.192.76.136
> -A nova-network-float-snat -s 192.168.32.2/32 -o em2 -j SNAT --to-source 10.192.76.136
> -A nova-network-snat -j nova-network-float-snat
> -A nova-network-snat -s 192.168.32.0/22 -o em2 -j SNAT --to-source 10.192.75.190
> -A nova-postrouting-bottom -j nova-network-snat
> -A nova-postrouting-bottom -j nova-compute-snat
> -A nova-postrouting-bottom -j nova-api-snat
> ========================================================================
>
> 10) em1 config file
> DEVICE=em1
> HWADDR=84:2B:2B:6C:FD:0F
> TYPE=Ethernet
> UUID=e65a3f54-594e-4b2a-bd63-b488ba0d7adb
> ONBOOT=yes
> NM_CONTROLLED=no
> BOOTPROTO=none
> IPADDR=10.192.75.190
> PREFIX=24
> GATEWAY=10.192.75.1
> DNS1=10.192.48.100
> DNS2=10.192.48.101
> ==================================================================
>
> 11) em2 config file
> DEVICE=em2
> HWADDR=84:2B:2B:6C:FD:10
> TYPE=Ethernet
> UUID=ad6f5595-1df3-437d-b231-8b9e5db9c260
> ONBOOT=yes
> NM_CONTROLLED=no
> BOOTPROTO=none
> ==================================================================
>
> -----Original Message-----
> From: Rhys Oxenham [mailto:roxenham at redhat.com]
> Sent: Wednesday, 24 July 2013 17:16
> To: Nicolas VOGEL
> Cc: rhos-list at redhat.com
> Subject: Re: [rhos-list] floating IP not reachable
>
> Hi Nicolas,
>
> When you've got the instance running and a floating-ip assigned, can you please pastebin the output of-
>
> 1) ip a
> 2) brctl show
> 3) nova list
> 4) nova-manage network-list
> 5) nova secgroup-list
> 6) nova secgroup-list-rules
> 7) iptables -L
> 8) iptables -L -t nat
> 9) iptables -S -t nat
>
> Oh, and when you have more than one instance running, can you ping between the instances via 192.168.32.0/22? Make sure to sanitise anything you need to in the pastes.
>
> Many thanks!
> Rhys
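> P.S. If the rules all check out, the quickest way to localise a drop is usually to watch both sides of the NAT while pinging a floating IP from outside, e.g. (run on the controller):
>
> # tcpdump -n -i em2 icmp
> # tcpdump -n -i br100 icmp
>
> If echo requests show up on em2 but never on br100, the problem is in the DNAT/forwarding path rather than in the instance.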
> On 24 Jul 2013, at 16:05, Nicolas VOGEL wrote:
>
>> Hi,
>>
>> I just installed a new all-in-one controller without quantum. Everything works fine and now I want to use floating IPs as described here: http://openstack.redhat.com/Floating_IP_range. I want to use my second NIC (em2) for this purpose. For the installation I used my first NIC (em1), and packstack automatically created a bridge (br100).
>>
>> I deleted the default network and created a new one, which matches the subnet that em2 is connected to. After that I modified the public_interface in nova.conf to em2 and also the floating_range with the subnet I just created. I didn't modify the flat_interface and left it at the default value (lo).
>>
>> I just enabled the em2 interface but didn't assign any IP address to it.
>> I added two rules to the default group to allow ping and SSH.
>>
>> I can start VMs and they get an internal IP address (from 192.168.32.0/22). I can also associate a floating IP with each VM. But I'm unable to ping a floating IP.
>>
>> If someone has any idea how to resolve the problem it would be very helpful.
>> And if someone has a configuration that runs correctly, I would be interested in how you configured your network interfaces and your nova.conf.
>>
>> Thanks, Nicolas.
>>
>> Here's an output from my nova.conf:
>> public_interface=em2
>> default_floating_pool=nova
>> novncproxy_port=6080
>> dhcp_domain=novalocal
>> libvirt_type=kvm
>> floating_range=10.192.76.0/25
>> fixed_range=192.168.32.0/22
>> auto_assign_floating_ip=False
>> novncproxy_base_url=http://10.192.75.190:6080/vnc_auto.html
>> flat_interface=lo
>> vnc_enabled=True
>> flat_network_bridge=br100
>>
>>
>> _______________________________________________
>> rhos-list mailing list
>> rhos-list at redhat.com
>> https://www.redhat.com/mailman/listinfo/rhos-list

_______________________________________________
rhos-list mailing list
rhos-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhos-list

From Hao.Chen at NRCan-RNCan.gc.ca Thu Jul 25 17:53:07 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Thu, 25 Jul 2013 17:53:07 +0000
Subject: [rhos-list] Installing the OpenStack Compute Service
In-Reply-To: <72D51425-A8AA-45E2-9DEF-A03922F24F8E@redhat.com>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca> <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012C47@S-BSC-MBX2.nrn.nrcan.gc.ca> <72D51425-A8AA-45E2-9DEF-A03922F24F8E@redhat.com>
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB64701465F@S-BSC-MBX2.nrn.nrcan.gc.ca>

Hi Rhys,

# netstat -tunpl | grep 59 returns nothing. Have I missed something here? A VNC server?

Thanks,
Hao

-----Original Message-----
From: Rhys Oxenham [mailto:roxenham at redhat.com]
Sent: July 24, 2013 15:45
To: Chen, Hao
Cc: rhos-list at redhat.com
Subject: Re: [rhos-list] Installing the OpenStack Compute Service

On 24 Jul 2013, at 23:41, "Chen, Hao" wrote:

> Hi Rhys,
>
> The browser is Firefox ESR 17.0.7. iptables has been turned off.

Hmm, usually Firefox works well for me with the vnc proxy service. I assume you're not using tenant networks with your setup if you've disabled iptables?

On your compute host when an instance is running, can you confirm that the VNC server is listening?

# netstat -tunpl | grep 59

Many thanks
Rhys

> > Thanks all for the help.
> > Hao
>
> >>> (2) When testing http://10.2.0.196:6080/vnc_auto.html received an
> >>> error "Failed to connect to server (code: 1006)"
>
> > Hi,
> >
> > Which browser are you testing with here? Can you verify that the iptables rule on your compute host exists for vnc?
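> >
> > (The packstack rule to look for is the one tagged "001 nova compute incoming"; it opens the 5900:5999 console range. Something along these lines on the compute host should show both the rule and any listener - an untested sketch, so adjust the grep if your rules print service names such as vnc-server:cvsup rather than numbers:
> >
> > # iptables -nL | grep 59
> > # netstat -tunpl | grep ':59'
> >
> > If netstat shows nothing on 59xx with an instance running, the guest was probably started without VNC enabled.)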
> > Cheers, > Rhys > > > >>> (3) Is v3 supported or should use v2.0 instead in "DEFAULT >>> quantum_admin_auth_url http://IP:35357/v3"? >> >> I suspect this should be v2.0 for now, let me know if changing it helps and I will file a bug. Will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else. >> >> Thanks, >> >> Steve >> >>> (4) 10.3.6. Starting the Compute Services When running "service >>> messagebus restart" or "service messagebus stop" it either logged me >>> out or killed the server causing the system to reboot. The log info >>> is as bellow. >>> ... >>> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no >>> sender#012 Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads. >>> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads. >>> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START >>> ********************************** >>> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >>> unregistering from session: Connection is closed Jul 24 12:23:31 >>> cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from >>> session: Connection is closed Jul 24 12:23:31 cloud1 gdm-binary: >>> ******************* START >>> ******************************** >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() >>> [0x418bb9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: >>> /usr/sbin/gdm-binary() [0x418d17] Jul 24 12:23:31 cloud1 gdm-binary: >>> Frame 2: /lib64/libpthread.so.0() [0x33c2c0f500] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 3: >>> /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() >>> [0x405ee9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: >>> /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] Jul >>> 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() >>> [0x33b6020e23] Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: >>> /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: >>> /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() >>> [0x388a812db6] Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: >>> /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: >>> /usr/lib64/libdbus-glib-1.so.2() [0x388a8130a0] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 12: >>> /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: >>> /usr/lib64/libdbus-glib-1.so.2() [0x388a809b45] Jul 24 12:23:31 >>> cloud1 gdm-binary: Frame 14: >>> /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] >>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: >>> /lib64/libglib-2.0.so.0() [0x33b543c938] Jul 24 12:23:31 cloud1 >>> gdm-binary: Frame 16: >>> /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() >>> [0x406949] Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: >>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] Jul 24 >>> 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() >>> [0x405de9] Jul 24 12:23:31 cloud1 gdm-binary: ******************* END >>> ********************************** >>> Jul 24 12:23:31 cloud1 init: prefdm main 
process (11261) terminated >>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16061) >>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>> process (16075) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>> prefdm main process (16089) terminated with status 1 Jul 24 12:23:31 >>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>> cloud1 init: prefdm main process (16104) terminated with status 1 Jul >>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>> 12:23:31 cloud1 gdm[16053]: ******************* END >>> ********************************** >>> Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated >>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16132) >>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>> process (16146) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>> prefdm main process (16160) terminated with status 1 Jul 24 12:23:31 >>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>> cloud1 init: prefdm main process (16174) terminated with status 1 Jul >>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>> 12:23:31 cloud1 init: prefdm main process (16188) terminated with >>> status 1 Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, >>> stopped Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process >>> (11271) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>> (/dev/tty3) main process (11273) killed by TERM signal Jul 24 >>> 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by >>> TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process >>> (11277) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>> (/dev/tty6) main process (11279) killed by TERM signal Jul 24 >>> 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution >>> exception: not-found: Delete failed. No such queue: cinder-volume >>> (qpid/broker/Broker.cpp:940) >>> Jul 24 12:24:07 cloud1 proxy-server SIGTERM received Jul 24 12:24:07 >>> cloud1 proxy-server Exited Jul 24 12:24:08 cloud1 qpidd[10788]: >>> 2013-07-24 12:24:08 notice Shut down Jul 24 12:24:09 cloud1 abrtd: >>> Got signal 15, exiting Jul 24 12:24:10 cloud1 tgtd: tgtd logger >>> stopped, pid:10444 Jul 24 12:24:19 cloud1 acpid: exiting Jul 24 >>> 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 Jul 24 >>> 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with >>> "rpcbind -w" >>> Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. >>> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): >>> audit_pid=0 old=11693 auid=4294967295 ses=4294967295 >>> subj=system_u:system_r:auditd_t:s0 res=1 Jul 24 12:24:20 cloud1 >>> kernel: type=1305 audit(1374693860.498:332235): >>> audit_enabled=0 old=1 auid=4294967295 ses=4294967295 >>> subj=system_u:system_r:auditctl_t:s0 res=1 Jul 24 12:24:20 cloud1 >>> kernel: Kernel logging (proc) stopped. 
>>> Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" >>> swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] >>> exiting on signal 15. >>> >> >> -- >> Steve Gordon, RHCE >> Documentation Lead, Red Hat OpenStack >> Engineering Content Services >> Red Hat Canada (Toronto, Ontario) >> > > -- > > Rhys Oxenham > Cloud Solution Architect, Red Hat UK > e: roxenham at redhat.com > m: +44 (0)7866 446625 From roxenham at redhat.com Thu Jul 25 20:12:07 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Thu, 25 Jul 2013 16:12:07 -0400 (EDT) Subject: [rhos-list] Installing the OpenStack Compute Service In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB64701465F@S-BSC-MBX2.nrn.nrcan.gc.ca> References: <76CC67FD1C99DB4DB4D43FEF354AADB647012B64@S-BSC-MBX2.nrn.nrcan.gc.ca> <2009256412.6358808.1374700143870.JavaMail.root@redhat.com> <77530665-7ADE-4BCA-B2C3-367D6A7E2898@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB647012C47@S-BSC-MBX2.nrn.nrcan.gc.ca> <72D51425-A8AA-45E2-9DEF-A03922F24F8E@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB64701465F@S-BSC-MBX2.nrn.nrcan.gc.ca> Message-ID: <7D1D94F9-743E-4613-90DF-3BD613B7C1A3@redhat.com> Hi Hao, Quite possibly, do you have vnc_enabled and vncserver_listen in your nova.conf? You can also check the libvirt definition for your instance in /var/lib/nova/instances Thanks Rhys. Sent from my mobile device On 25 Jul 2013, at 18:53, "Chen, Hao" wrote: > Hi Rhys, > # netstat -tunpl | grep 59 returns nothing. Have I missed something here? a VNC server? > Thanks, > Hao > > > -----Original Message----- > From: Rhys Oxenham [mailto:roxenham at redhat.com] > Sent: July 24, 2013 15:45 > To: Chen, Hao > Cc: rhos-list at redhat.com > Subject: Re: [rhos-list] Installing the OpenStack Compute Service > > > On 24 Jul 2013, at 23:41, "Chen, Hao" wrote: > >> Hi Rhys, >> >> The browser is Firefox ESR 17.0.7. iptables has been turned off. > > Hmm, usually Firefox works well for me with the vnc proxy service. I assume you're not using tenant networks with your setup if you've disabled iptables? > > On your compute host when an instance is running, can you confirm that the VNC server is listening? > > # netstat -tunpl | grep 59 > > Many thanks > Rhys > > >> >> Thanks all for the help. >> Hao >> >>>> (2) When testing http://10.2.0.196:6080/vnc_auto.html received an >>>> error "Failed to connect to server (code: 1006)" >> >> Hi, >> >> Which browser are you testing with here? Can you verify that the iptables rule on your compute host exists for vnc? >> >> Cheers, >> Rhys >> >> >> >>>> (3) Is v3 supported or should use v2.0 instead in "DEFAULT >>>> quantum_admin_auth_url http://IP:35357/v3"? >>> >>> I suspect this should be v2.0 for now, let me know if changing it helps and I will file a bug. Will have to wait and see what others have to say about (2) and (4) as it's not immediately clear to me whether they are issues with the instructions followed or something else. >>> >>> Thanks, >>> >>> Steve >>> >>>> (4) 10.3.6. Starting the Compute Services When running "service >>>> messagebus restart" or "service messagebus stop" it either logged me >>>> out or killed the server causing the system to reboot. The log info >>>> is as bellow. >>>> ... >>>> Jul 24 12:23:31 cloud1 console-kit-daemon[15269]: WARNING: no >>>> sender#012 Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoting known real-time threads. >>>> Jul 24 12:23:31 cloud1 rtkit-daemon[15439]: Demoted 0 threads. 
>>>> Jul 24 12:23:31 cloud1 gdm[16053]: ******************* START >>>> ********************************** >>>> Jul 24 12:23:31 cloud1 gnome-keyring-daemon[15657]: dbus failure >>>> unregistering from session: Connection is closed Jul 24 12:23:31 >>>> cloud1 gnome-keyring-daemon[15657]: dbus failure unregistering from >>>> session: Connection is closed Jul 24 12:23:31 cloud1 gdm-binary: >>>> ******************* START >>>> ******************************** >>>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 0: /usr/sbin/gdm-binary() >>>> [0x418bb9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 1: >>>> /usr/sbin/gdm-binary() [0x418d17] Jul 24 12:23:31 cloud1 gdm-binary: >>>> Frame 2: /lib64/libpthread.so.0() [0x33c2c0f500] Jul 24 12:23:31 >>>> cloud1 gdm-binary: Frame 3: >>>> /lib64/libgobject-2.0.so.0(g_object_unref+0x1a) [0x33b600d98a] Jul 24 >>>> 12:23:31 cloud1 gdm-binary: Frame 4: /usr/sbin/gdm-binary() >>>> [0x405ee9] Jul 24 12:23:31 cloud1 gdm-binary: Frame 5: >>>> /lib64/libgobject-2.0.so.0(g_closure_invoke+0x15e) [0x33b600bb3e] Jul >>>> 24 12:23:31 cloud1 gdm-binary: Frame 6: /lib64/libgobject-2.0.so.0() >>>> [0x33b6020e23] Jul 24 12:23:31 cloud1 gdm-binary: Frame 7: >>>> /lib64/libgobject-2.0.so.0(g_signal_emit_valist+0x7ef) [0x33b60220af] >>>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 8: >>>> /lib64/libgobject-2.0.so.0(g_signal_emit+0x83) [0x33b60225f3] Jul 24 >>>> 12:23:31 cloud1 gdm-binary: Frame 9: /usr/lib64/libdbus-glib-1.so.2() >>>> [0x388a812db6] Jul 24 12:23:31 cloud1 gdm-binary: Frame 10: >>>> /lib64/libgobject-2.0.so.0(g_object_run_dispose+0x60) [0x33b600dee0] >>>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 11: >>>> /usr/lib64/libdbus-glib-1.so.2() [0x388a8130a0] Jul 24 12:23:31 >>>> cloud1 gdm-binary: Frame 12: >>>> /lib64/libdbus-1.so.3(dbus_connection_dispatch+0x336) [0x33c7c10b06] >>>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 13: >>>> /usr/lib64/libdbus-glib-1.so.2() [0x388a809b45] Jul 24 12:23:31 >>>> cloud1 gdm-binary: Frame 14: >>>> /lib64/libglib-2.0.so.0(g_main_context_dispatch+0x22e) [0x33b5438f0e] >>>> Jul 24 12:23:31 cloud1 gdm-binary: Frame 15: >>>> /lib64/libglib-2.0.so.0() [0x33b543c938] Jul 24 12:23:31 cloud1 >>>> gdm-binary: Frame 16: >>>> /lib64/libglib-2.0.so.0(g_main_loop_run+0x195) [0x33b543cd55] Jul 24 >>>> 12:23:31 cloud1 gdm-binary: Frame 17: /usr/sbin/gdm-binary() >>>> [0x406949] Jul 24 12:23:31 cloud1 gdm-binary: Frame 18: >>>> /lib64/libc.so.6(__libc_start_main+0xfd) [0x33b3c1ecdd] Jul 24 >>>> 12:23:31 cloud1 gdm-binary: Frame 19: /usr/sbin/gdm-binary() >>>> [0x405de9] Jul 24 12:23:31 cloud1 gdm-binary: ******************* END >>>> ********************************** >>>> Jul 24 12:23:31 cloud1 init: prefdm main process (11261) terminated >>>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16061) >>>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>>> process (16075) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>>> prefdm main process (16089) terminated with status 1 Jul 24 12:23:31 >>>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>>> cloud1 init: prefdm main process (16104) terminated with status 1 Jul >>>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>>> 12:23:31 cloud1 gdm[16053]: ******************* END >>>> ********************************** >>>> 
Jul 24 12:23:31 cloud1 init: prefdm main process (16118) terminated >>>> with status 1 Jul 24 12:23:31 cloud1 init: prefdm main process ended, >>>> respawning Jul 24 12:23:31 cloud1 init: prefdm main process (16132) >>>> terminated with status 1 Jul 24 12:23:31 cloud1 init: prefdm main >>>> process ended, respawning Jul 24 12:23:31 cloud1 init: prefdm main >>>> process (16146) terminated with status 1 Jul 24 12:23:31 cloud1 init: >>>> prefdm main process ended, respawning Jul 24 12:23:31 cloud1 init: >>>> prefdm main process (16160) terminated with status 1 Jul 24 12:23:31 >>>> cloud1 init: prefdm main process ended, respawning Jul 24 12:23:31 >>>> cloud1 init: prefdm main process (16174) terminated with status 1 Jul >>>> 24 12:23:31 cloud1 init: prefdm main process ended, respawning Jul 24 >>>> 12:23:31 cloud1 init: prefdm main process (16188) terminated with >>>> status 1 Jul 24 12:23:31 cloud1 init: prefdm respawning too fast, >>>> stopped Jul 24 12:24:06 cloud1 init: tty (/dev/tty2) main process >>>> (11271) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>>> (/dev/tty3) main process (11273) killed by TERM signal Jul 24 >>>> 12:24:06 cloud1 init: tty (/dev/tty4) main process (11275) killed by >>>> TERM signal Jul 24 12:24:06 cloud1 init: tty (/dev/tty5) main process >>>> (11277) killed by TERM signal Jul 24 12:24:06 cloud1 init: tty >>>> (/dev/tty6) main process (11279) killed by TERM signal Jul 24 >>>> 12:24:07 cloud1 qpidd[10788]: 2013-07-24 12:24:07 error Execution >>>> exception: not-found: Delete failed. No such queue: cinder-volume >>>> (qpid/broker/Broker.cpp:940) >>>> Jul 24 12:24:07 cloud1 proxy-server SIGTERM received Jul 24 12:24:07 >>>> cloud1 proxy-server Exited Jul 24 12:24:08 cloud1 qpidd[10788]: >>>> 2013-07-24 12:24:08 notice Shut down Jul 24 12:24:09 cloud1 abrtd: >>>> Got signal 15, exiting Jul 24 12:24:10 cloud1 tgtd: tgtd logger >>>> stopped, pid:10444 Jul 24 12:24:19 cloud1 acpid: exiting Jul 24 >>>> 12:24:20 cloud1 ntpd[10497]: ntpd exiting on signal 15 Jul 24 >>>> 12:24:20 cloud1 rpcbind: rpcbind terminating on signal. Restart with >>>> "rpcbind -w" >>>> Jul 24 12:24:20 cloud1 auditd[11693]: The audit daemon is exiting. >>>> Jul 24 12:24:20 cloud1 kernel: type=1305 audit(1374693860.398:332234): >>>> audit_pid=0 old=11693 auid=4294967295 ses=4294967295 >>>> subj=system_u:system_r:auditd_t:s0 res=1 Jul 24 12:24:20 cloud1 >>>> kernel: type=1305 audit(1374693860.498:332235): >>>> audit_enabled=0 old=1 auid=4294967295 ses=4294967295 >>>> subj=system_u:system_r:auditctl_t:s0 res=1 Jul 24 12:24:20 cloud1 >>>> kernel: Kernel logging (proc) stopped. >>>> Jul 24 12:24:20 cloud1 rsyslogd: [origin software="rsyslogd" >>>> swVersion="5.8.10" x-pid="2522" x-info="http://www.rsyslog.com"] >>>> exiting on signal 15. >>> >>> -- >>> Steve Gordon, RHCE >>> Documentation Lead, Red Hat OpenStack >>> Engineering Content Services >>> Red Hat Canada (Toronto, Ontario) >> >> -- >> >> Rhys Oxenham >> Cloud Solution Architect, Red Hat UK >> e: roxenham at redhat.com >> m: +44 (0)7866 446625 > From pmyers at redhat.com Tue Jul 30 18:40:49 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 30 Jul 2013 14:40:49 -0400 Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE In-Reply-To: References: , <51F7FE1E.3080901@redhat.com> Message-ID: <51F808B1.3070306@redhat.com> moving this thread back to rhos-list so other folks can chime in On 07/30/2013 02:35 PM, Nirlay Kundu wrote: > Thanks for your reply, Peter. > I am not getting anything beyond what I pasted. 
I am not getting image
> to download, I think. It is getting queued, and then getting Killed. So,
> there is no question of booting the image.

Ok, so the glance import is just hanging.

> The nova logs are irrelevant, and it is just updating the
> resource_tracker with free mem, disk, vcpu etc.
> Please give me suggestion what log files I should include.

Well, glance logs probably, since it's a glance import that seems to be hung/failing.

One question is, why are you running glance image-create with the same image name/url two times? Or was that just a duplicate copy/paste?

Can you wget the image from that URL? If so, what speed are you seeing downloading the image? Perhaps it is just taking a really long time (which could make things look like it is hung)

>> > ------------------------------------------------------------------------
>> > From: nirlay at hotmail.com
>> > To: rhos-list at redhat.com
>> > Subject: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>> > Date: Fri, 19 Jul 2013 15:47:22 -0400
>> >
>> > Hi
>> >
>> > The example image "Fedora 19" from
>> > "http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2" is getting
>> > killed after sometime when trying to create a new image using RDO. Any
>> > idea why ? I tried both from the GUI and cmd line.
>> >
>> > [root at openstack home]# export OS_USERNAME=admin
>> > [root at openstack home]# export OS_PASSWORD=xxxxxxxxxxxx
>> > [root at openstack home]# export OS_TENANT_NAME=admin
>> > [root at openstack home]# export OS_AUTH_URL=http://localhost:5000/v2.0/
>> > [root at openstack home]# glance image-create --name "Fedora 19 x86_64"
>> > --disk-format qcow2 --container-format bare --is-public true --copy-from
>> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
>> > [root at openstack ~]# glance image-create --name "Fedora 19 x86_64"
>> > --disk-format qcow2 --container-format bare --is-public true --copy-from
>> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
>> >
>> > +------------------+--------------------------------------+
>> > | Property         | Value                                |
>> > +------------------+--------------------------------------+
>> > | checksum         | None                                 |
>> > | container_format | bare                                 |
>> > | created_at       | 2013-07-19T15:09:18                  |
>> > | deleted          | False                                |
>> > | deleted_at       | None                                 |
>> > | disk_format      | qcow2                                |
>> > | id               | c39d3bce-e6fb-40ef-b113-22d6b8099d16 |
>> > | is_public        | True                                 |
>> > | min_disk         | 0                                    |
>> > | min_ram          | 0                                    |
>> > | name             | Fedora 19 x86_64                     |
>> > | owner            | e4066aec64fb4a958b4fdff0f99ca5d2     |
>> > | protected        | False                                |
>> > | size             | 0                                    |
>> > | status           | queued                               |
>> > | updated_at       | 2013-07-19T15:09:18                  |
>> > +------------------+--------------------------------------+
>> > [root at openstack ~]#

From nirlay at hotmail.com Tue Jul 30 18:59:15 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Tue, 30 Jul 2013 14:59:15 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: <51F808B1.3070306@redhat.com>
References: , <51F7FE1E.3080901@redhat.com> , <51F808B1.3070306@redhat.com>
Message-ID:

About logs: /var/log/glance, no files are updating.
Yes, glance import is hung. I have tried this from the GUI, as well as the cmd line - same response.
I am not running the glance image-create twice; when I type up the command and hit enter, I get this response repeated on its own - I do not enter this.
Yes, I can get the image from the url manually. I have downloaded that and uploaded the image as a local image file (from the GUI) - no issues.
When I do this download from glance image-create, it is queued up and then getting Killed (after a minute). It is not hung.

When I am pasting the url http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 in the web browser to do a manual download, it fails a few times, unless it finds a good mirror site which it can talk to.
When I am doing wget from the cmd line, I am getting Failed: Connection Timed out, No route to host 8 times, before it gives up. So, this may be the issue. Can I specify a particular mirror site to get the image from?

Thanks
Nirlay

> Date: Tue, 30 Jul 2013 14:40:49 -0400
> From: pmyers at redhat.com
> To: nirlay at hotmail.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
> Subject: Re: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>
> moving this thread back to rhos-list so other folks can chime in
>
> On 07/30/2013 02:35 PM, Nirlay Kundu wrote:
> > Thanks for your reply, Peter.
> > I am not getting anything beyond what I pasted. I am not getting image
> > to download, I think. It is getting queued, and then getting Killed. So,
> > there is no question of booting the image.
>
> Ok, so the glance import is just hanging.
>
> > The nova logs are irrelevant, and it is just updating the
> > resource_tracker with free mem, disk, vcpu etc.
> > Please give me suggestion what log files I should include.
>
> Well, glance logs probably since it's a glance import that seems to be
> hung/failing
>
> One question is, why are you running glance image-create with the same
> image name/url two times? Or was that just a duplicate copy/paste?
>
> Can you wget the image from that URL? If so, what speed are you seeing
> downloading the image? Perhaps it is just taking a really long time
> (which could make things look like it is hung)
>
> >> > ------------------------------------------------------------------------
> >> > From: nirlay at hotmail.com
> >> > To: rhos-list at redhat.com
> >> > Subject: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
> >> > Date: Fri, 19 Jul 2013 15:47:22 -0400
> >> >
> >> > Hi
> >> >
> >> > The example image "Fedora 19" from
> >> > "http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2" is getting
> >> > killed after sometime when trying to create a new image using RDO. Any
> >> > idea why ? I tried both from the GUI and cmd line.
> >> > > >> > > >> > [root at openstack home]# export OS_USERNAME=admin > >> > > >> > [root at openstack home]# export OS_PASSWORD=xxxxxxxxxxxx > >> > > >> > [root at openstack home]# export OS_TENANT_NAME=admin > >> > > >> > [root at openstack home]# export OS_AUTH_URL=http://localhost:5000/v2.0/ > >> > > >> > [root at openstack home]# glance image-create --name "Fedora 19 x86_64" > >> > --disk-format qcow2 --container-format bare --is-public true --copy-from > >> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 > >> > > >> > [root at openstack ~]# glance image-create --name "Fedora 19 x86_64" > >> > --disk-format qcow2 --container-format bare --is-public true --copy-from > >> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 > >> > > >> > +------------------+--------------------------------------+ > >> > > >> > | Property | Value | > >> > > >> > +------------------+--------------------------------------+ > >> > > >> > | checksum | None | > >> > > >> > | container_format | bare | > >> > > >> > | created_at | 2013-07-19T15:09:18 | > >> > > >> > | deleted | False | > >> > > >> > | deleted_at | None | > >> > > >> > | disk_format | qcow2 | > >> > > >> > | id | c39d3bce-e6fb-40ef-b113-22d6b8099d16 | > >> > > >> > | is_public | True | > >> > > >> > | min_disk | 0 | > >> > > >> > | min_ram | 0 | > >> > > >> > | name | Fedora 19 x86_64 | > >> > > >> > | owner | e4066aec64fb4a958b4fdff0f99ca5d2 | > >> > > >> > | protected | False | > >> > > >> > | size | 0 | > >> > > >> > | status | queued | > >> > > >> > | updated_at | 2013-07-19T15:09:18 | > >> > > >> > +------------------+--------------------------------------+ > >> > > >> > [root at openstack ~]# > >> > > >> > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jbresnah at redhat.com Tue Jul 30 19:04:39 2013 From: jbresnah at redhat.com (John Bresnahan) Date: Tue, 30 Jul 2013 09:04:39 -1000 Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE In-Reply-To: References: , <51F7FE1E.3080901@redhat.com> , <51F808B1.3070306@redhat.com> Message-ID: <51F80E47.80702@redhat.com> On 07/30/2013 08:59 AM, Nirlay Kundu wrote: > About logs : /var/log/glance , no files are updating. > Yes, glance import is hung. I have tried this from the GUI, as well as > cmd line - same response. > I am not running the glance image-create twice, when I type up the > command and hit enter, I get this response repeated on its own - I do > not enter this. > Yes, I can get the image from the url manually. I have downloaded that > and uploaded the image as a local image file ( from the GUI) - no issues. > When I do this download from glance image-create, it is queued up and > then getting Killed ( after a minute). It is not hung. > > When I am pasting the url > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 in the web > browser to do a manual download, it fails a few times, unless it finds a > good mirror site, which it can talk to. > When I am doing wget from the cmd line, I am getting Failed : Connection > Timed out, No route to host 8 times, before it gives up. So, this may be > the issue. Can I specify a particular mirror site to get the image from ? It sounds like what happens is that Glance attempts to download the file but fails due to external http errors. After it is 'killed' what does glance image-show return to you? 
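Also, a detail that often bites people with --copy-from: the HTTP fetch is done by the glance-api daemon, not by your shell, so an http_proxy exported in your terminal does not help it. If a proxy sits between you and the mirror, the variable has to be in the daemon's environment. A rough sketch for RHEL/RDO, assuming your init script sources /etc/sysconfig/openstack-glance-api (that path and the proxy host:port below are placeholders, check your own setup):

# cat >> /etc/sysconfig/openstack-glance-api <<'EOF'
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
EOF
# service openstack-glance-api restart

Then retry the image-create and watch /var/log/glance/api.log while it runs.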
You should be able to specify a specific mirror simply by using the mirror's URL, e.g.:
http://mirror.solarvps.com/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2

From nirlay at hotmail.com Tue Jul 30 19:39:00 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Tue, 30 Jul 2013 15:39:00 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To:
References: , , <51F7FE1E.3080901@redhat.com>, , <51F808B1.3070306@redhat.com>,
Message-ID:

Hi

Have some updates. I set the http_proxy and tried wget from the cmd line. It downloads.
Then, I am retrying the glance image-create; it gives me auth failed: Unable to communicate with identity service.
Then, I restarted keystone and tried keystone user-list; it says: auth failed. Unable to communicate with identity service.
What should I try?

Thanks
Nirlay

From: nirlay at hotmail.com
To: pmyers at redhat.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
Subject: RE: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
Date: Tue, 30 Jul 2013 14:59:15 -0400

About logs: /var/log/glance, no files are updating.
Yes, glance import is hung. I have tried this from the GUI, as well as the cmd line - same response.
I am not running the glance image-create twice; when I type up the command and hit enter, I get this response repeated on its own - I do not enter this.
Yes, I can get the image from the url manually. I have downloaded that and uploaded the image as a local image file (from the GUI) - no issues.
When I do this download from glance image-create, it is queued up and then getting Killed (after a minute). It is not hung.

When I am pasting the url http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 in the web browser to do a manual download, it fails a few times, unless it finds a good mirror site which it can talk to.
When I am doing wget from the cmd line, I am getting Failed: Connection Timed out, No route to host 8 times, before it gives up. So, this may be the issue. Can I specify a particular mirror site to get the image from?

Thanks
Nirlay

> Date: Tue, 30 Jul 2013 14:40:49 -0400
> From: pmyers at redhat.com
> To: nirlay at hotmail.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
> Subject: Re: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>
> moving this thread back to rhos-list so other folks can chime in
>
> On 07/30/2013 02:35 PM, Nirlay Kundu wrote:
> > Thanks for your reply, Peter.
> > I am not getting anything beyond what I pasted. I am not getting image
> > to download, I think. It is getting queued, and then getting Killed. So,
> > there is no question of booting the image.
>
> Ok, so the glance import is just hanging.
>
> > The nova logs are irrelevant, and it is just updating the
> > resource_tracker with free mem, disk, vcpu etc.
> > Please give me suggestion what log files I should include.
>
> Well, glance logs probably since it's a glance import that seems to be
> hung/failing
>
> One question is, why are you running glance image-create with the same
> image name/url two times? Or was that just a duplicate copy/paste?
>
> Can you wget the image from that URL? If so, what speed are you seeing
> downloading the image? Perhaps it is just taking a really long time
> (which could make things look like it is hung)
>
> >> > ------------------------------------------------------------------------
> >> > From: nirlay at hotmail.com
> >> > To: rhos-list at redhat.com
> >> > Subject: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
> >> > Date: Fri, 19 Jul 2013 15:47:22 -0400
> >> >
> >> > Hi
> >> >
> >> > The example image "Fedora 19" from
> >> > "http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2" is getting
> >> > killed after sometime when trying to create a new image using RDO. Any
> >> > idea why ? I tried both from the GUI and cmd line.
> >> >
> >> > [root at openstack home]# export OS_USERNAME=admin
> >> > [root at openstack home]# export OS_PASSWORD=xxxxxxxxxxxx
> >> > [root at openstack home]# export OS_TENANT_NAME=admin
> >> > [root at openstack home]# export OS_AUTH_URL=http://localhost:5000/v2.0/
> >> > [root at openstack home]# glance image-create --name "Fedora 19 x86_64"
> >> > --disk-format qcow2 --container-format bare --is-public true --copy-from
> >> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
> >> > [root at openstack ~]# glance image-create --name "Fedora 19 x86_64"
> >> > --disk-format qcow2 --container-format bare --is-public true --copy-from
> >> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
> >> >
> >> > +------------------+--------------------------------------+
> >> > | Property         | Value                                |
> >> > +------------------+--------------------------------------+
> >> > | checksum         | None                                 |
> >> > | container_format | bare                                 |
> >> > | created_at       | 2013-07-19T15:09:18                  |
> >> > | deleted          | False                                |
> >> > | deleted_at       | None                                 |
> >> > | disk_format      | qcow2                                |
> >> > | id               | c39d3bce-e6fb-40ef-b113-22d6b8099d16 |
> >> > | is_public        | True                                 |
> >> > | min_disk         | 0                                    |
> >> > | min_ram          | 0                                    |
> >> > | name             | Fedora 19 x86_64                     |
> >> > | owner            | e4066aec64fb4a958b4fdff0f99ca5d2     |
> >> > | protected        | False                                |
> >> > | size             | 0                                    |
> >> > | status           | queued                               |
> >> > | updated_at       | 2013-07-19T15:09:18                  |
> >> > +------------------+--------------------------------------+
> >> > [root at openstack ~]#

From Hao.Chen at NRCan-RNCan.gc.ca Tue Jul 30 23:25:01 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Tue, 30 Jul 2013 23:25:01 +0000
Subject: [rhos-list] Validating the installation
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB647015BE8@S-BSC-MBX2.nrn.nrcan.gc.ca>

Greetings,

(1) After creating an instance with rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2, a KVM Guest Image downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952, I was asked for the Login ID and Password for the console access. Does anyone know the Login info?

(2) I am having trouble with the Router Interfaces. The internal interface is working ("192.168.1.1 ACTIVE Internal Interface"), but the status of the external interface always shows Down ("10.2.0.193 DOWN External Gateway"). Very grateful for any suggestions.

Thanks,
Hao
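For reference, the status above is what the Horizon "Router Interfaces" table shows. I assume the CLI equivalent, if that output is more useful to anyone, would be something like the following (quantum client commands, untested on my side):

# quantum router-port-list <router-name>
# quantum port-show <port-id>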
From pmyers at redhat.com Wed Jul 31 00:36:18 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 30 Jul 2013 20:36:18 -0400
Subject: [rhos-list] Validating the installation
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647015BE8@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647015BE8@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <51F85C02.7090706@redhat.com>

On 07/30/2013 07:25 PM, Chen, Hao wrote:
> Greetings,
>
> (1) After creating an instance with
> rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2, a KVM Guest Image
> downloaded from
> https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952,
> I was asked for the Login ID and Password for the console access. Does
> anyone know the Login info?

Joey, what is the default root password on the RHEL qcow2 images?

> (2) I am having trouble with the Router Interfaces. The internal
> interface is working ("192.168.1.1 ACTIVE Internal Interface"), but the
> status of the external interface always shows Down ("10.2.0.193 DOWN
> External Gateway"). Very grateful for any suggestions.

I'll have to defer to networking folks on this one though

From roxenham at redhat.com Wed Jul 31 00:39:59 2013
From: roxenham at redhat.com (Rhys Oxenham)
Date: Wed, 31 Jul 2013 01:39:59 +0100
Subject: [rhos-list] Validating the installation
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB647015BE8@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB647015BE8@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <471B1A3D-CD02-46DA-8E4C-5D1E0850AEA3@redhat.com>

Hi Hao,

On 31 Jul 2013, at 00:25, "Chen, Hao" wrote:

> Greetings,
>
> (1) After creating an instance with rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2, a KVM Guest Image downloaded from https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952, I was asked for the Login ID and Password for the console access. Does anyone know the Login info?

Please see: http://rhn.redhat.com/errata/RHSA-2013-0849.html (https://bugzilla.redhat.com/show_bug.cgi?id=964299)

As far as I'm aware the image expects you to inject an SSH key using the metadata service in OpenStack and the default root password is locked. Once connected into the instance via the SSH key it would be possible to reset the root password there. But hopefully others will clarify the situation.

If this is not an option for you, you may want to take the image and use libguestfs/guestfish to make modifications to the image before uploading it into Glance. For example, set the root password to something specific to your requirements, but please note that the image will likely disable password logins via sshd, so this too will have to be changed.

If this is ONLY for testing and will not be used in production then I'd just reset the root password to blank, as it's quick and easy to get the image up and running; below is just an example...

# virt-edit -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/ssh/sshd_config -e 's/^PasswordAuthentication.*/PasswordAuthentication yes/'
# virt-edit -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/ssh/sshd_config -e 's/^PermitRootLogin.*/PermitRootLogin yes/'
# virt-edit -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/ssh/sshd_config -e 's/^PermitEmptyPasswords.*/PermitEmptyPasswords yes/'
# virt-edit -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/shadow -e 's/^root:.*?:/root::/'
# glance image-create ...

I've not tested the above, it may require further steps for it to work as expected.
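If you want to sanity-check the edits before uploading, virt-cat can read the files straight back out of the image, e.g.:

# virt-cat -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/shadow | head -n1
# virt-cat -a /path/to/rhel-server-x86_64-kvm-6.4_20130130.0-4.qcow2 /etc/ssh/sshd_config | grep -E '^(PasswordAuthentication|PermitRootLogin|PermitEmptyPasswords)'

The first line should come back as root:: with an empty password field.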
I'll try this out in the morning.

> (2) I am having trouble with the Router Interfaces. The internal interface is working ("192.168.1.1 ACTIVE Internal Interface"), but the status of the external interface always shows Down ("10.2.0.193 DOWN External Gateway"). Very grateful for any suggestions.

I too have this; my internal interface (as in the internal port on the router) is shown as "UP" yet the external gateway port is "DOWN". I don't actually have any problems though...

[root at openstack-controller ~(keystone_admin)]$ quantum port-show 11f2d170-baec-461f-bc30-b1f880132a03
+----------------------+---------------------------------------------------------------------------------------+
| Field                | Value                                                                                 |
+----------------------+---------------------------------------------------------------------------------------+
| admin_state_up       | True                                                                                  |
| binding:capabilities | {"port_filter": true}                                                                 |
| binding:vif_type     | ovs                                                                                   |
| device_id            | 6bea3ee4-47d6-4a3e-a9da-c82fed18baa0                                                  |
| device_owner         | network:router_gateway                                                                |
| fixed_ips            | {"subnet_id": "89ee4bc1-073e-4ccd-a108-6c839dad011d", "ip_address": "192.168.122.10"} |
| id                   | 11f2d170-baec-461f-bc30-b1f880132a03                                                  |
| mac_address          | fa:16:3e:cd:2b:20                                                                     |
| name                 |                                                                                       |
| network_id           | 7382ead9-faba-405a-a78f-404c236c9334                                                  |
| security_groups      |                                                                                       |
| status               | DOWN                                                                                  |
| tenant_id            |                                                                                       |
+----------------------+---------------------------------------------------------------------------------------+

Yet the L3 agent works perfectly for me?

[root at openstack-controller ~(keystone_admin)]$ ip netns exec qrouter-6bea3ee4-47d6-4a3e-a9da-c82fed18baa0 ssh cirros at 30.0.0.4
cirros at 30.0.0.4's password:
$ ping 8.8.8.8 -c 3
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=127 time=29.141 ms
64 bytes from 8.8.8.8: seq=1 ttl=127 time=32.105 ms
64 bytes from 8.8.8.8: seq=2 ttl=127 time=27.258 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 27.258/29.501/32.105 ms

Do things work for you like the above, or are you seeing problems?

Cheers,
Rhys

> Thanks,
> Hao
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From nicolas.vogel at heig-vd.ch Wed Jul 31 07:41:25 2013
From: nicolas.vogel at heig-vd.ch (Vogel Nicolas)
Date: Wed, 31 Jul 2013 07:41:25 +0000
Subject: [rhos-list] adding a compute node: br100 configuration
Message-ID: <30a46e2acb344ec4ac029bcde1e26a32@EINTMBXC.einet.ad.eivd.ch>

Hi,

After adding a compute node (using a modified packstack answer file, as described in the Quick Start Guide), I see that the br100 on the compute node has no IP address configured. I connected the two flat interfaces of the controller and the compute node together using a little switch. But the VMs started on my compute node aren't reachable from the controller, and the compute node has no entry for the private network (192.168.32.0/22) in its routing table.

What must be configured to get a fully multi-node configuration running? Do I have to give br100 a fixed IP in the 192.168.32.0/22 range? Do I have to add a route to the routing table?

Thanks, Nicolas.
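P.S. For completeness, the nova.conf network settings on the compute node are still whatever packstack wrote; from memory they look roughly like the following (paraphrased - the interface name here may not match my actual hardware):

flat_interface=eth1
flat_network_bridge=br100
fixed_range=192.168.32.0/22

I haven't changed anything on the compute node by hand yet.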
Thanks,
Nicolas.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nirlay at hotmail.com  Wed Jul 31 15:06:51 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Wed, 31 Jul 2013 11:06:51 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: 
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com>
Message-ID: 

I got back to the state where I was. I fixed the auth problem by exporting some variables again.
I tried specifying a particular mirror - still no go.
After the image gets "Killed", I do a glance image-show and I get a dump which shows status Killed. Any other logs I can look at? Is there any proxy that is preventing the http download? wget works, though.

thanks

________________________________
From: nirlay at hotmail.com
To: pmyers at redhat.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
Subject: RE: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
Date: Tue, 30 Jul 2013 15:39:00 -0400

Hi,

Some updates: I set http_proxy and tried wget from the command line. It downloads.

Then, when I retry the glance image-create, it gives me an auth failure: Unable to communicate with identity service.

Then I restarted keystone and tried keystone user-list; it also says: auth failed. Unable to communicate with identity service.

What should I try?

thanks
nirlay

________________________________
From: nirlay at hotmail.com
To: pmyers at redhat.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
Subject: RE: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
Date: Tue, 30 Jul 2013 14:59:15 -0400

About logs: in /var/log/glance, no files are updating.
Yes, the glance import is hung. I have tried this from the GUI as well as the command line - same response.
I am not running glance image-create twice; when I type the command and hit enter, the response is repeated on its own - I do not enter it twice.
Yes, I can get the image from the URL manually. I have downloaded it and uploaded the image as a local image file (from the GUI) - no issues.
When I do the download via glance image-create, it is queued up and then gets Killed (after a minute). It is not hung.

When I paste the URL http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 into the web browser to do a manual download, it fails a few times unless it finds a good mirror site that it can talk to.
When I do wget from the command line, I get "Failed: Connection Timed out, No route to host" 8 times before it gives up. So this may be the issue.
Can I specify a particular mirror site to get the image from?

Thanks
Nirlay

> Date: Tue, 30 Jul 2013 14:40:49 -0400
> From: pmyers at redhat.com
> To: nirlay at hotmail.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
> Subject: Re: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>
> moving this thread back to rhos-list so other folks can chime in
>
> On 07/30/2013 02:35 PM, Nirlay Kundu wrote:
> > Thanks for your reply, Peter.
> > I am not getting anything beyond what I pasted. I am not getting the image
> > to download, I think. It is getting queued, and then getting Killed. So,
> > there is no question of booting the image.
>
> Ok, so the glance import is just hanging.
>
> > The nova logs are irrelevant, and it is just updating the
> > resource_tracker with free mem, disk, vcpu etc.
> > Please give me a suggestion what log files I should include.
>
> Well, glance logs probably, since it's a glance import that seems to be
> hung/failing
>
> One question is, why are you running glance image-create with the same
> image name/url two times? Or was that just a duplicate copy/paste?
>
> Can you wget the image from that URL? If so, what speed are you seeing
> downloading the image? Perhaps it is just taking a really long time
> (which could make things look like it is hung)
>
> > ------------------------------------------------------------------------
> > From: nirlay at hotmail.com
> > To: rhos-list at redhat.com
> > Subject: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
> > Date: Fri, 19 Jul 2013 15:47:22 -0400
> >
> > Hi
> >
> > The example image "Fedora 19" from
> > "http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2" is getting
> > killed after some time when trying to create a new image using RDO. Any
> > idea why? I tried both from the GUI and cmd line.
> >
> > [root@openstack home]# export OS_USERNAME=admin
> > [root@openstack home]# export OS_PASSWORD=xxxxxxxxxxxx
> > [root@openstack home]# export OS_TENANT_NAME=admin
> > [root@openstack home]# export OS_AUTH_URL=http://localhost:5000/v2.0/
> > [root@openstack home]# glance image-create --name "Fedora 19 x86_64"
> > --disk-format qcow2 --container-format bare --is-public true --copy-from
> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
> >
> > [root@openstack ~]# glance image-create --name "Fedora 19 x86_64"
> > --disk-format qcow2 --container-format bare --is-public true --copy-from
> > http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
> > +------------------+--------------------------------------+
> > | Property         | Value                                |
> > +------------------+--------------------------------------+
> > | checksum         | None                                 |
> > | container_format | bare                                 |
> > | created_at       | 2013-07-19T15:09:18                  |
> > | deleted          | False                                |
> > | deleted_at       | None                                 |
> > | disk_format      | qcow2                                |
> > | id               | c39d3bce-e6fb-40ef-b113-22d6b8099d16 |
> > | is_public        | True                                 |
> > | min_disk         | 0                                    |
> > | min_ram          | 0                                    |
> > | name             | Fedora 19 x86_64                     |
> > | owner            | e4066aec64fb4a958b4fdff0f99ca5d2     |
> > | protected        | False                                |
> > | size             | 0                                    |
> > | status           | queued                               |
> > | updated_at       | 2013-07-19T15:09:18                  |
> > +------------------+--------------------------------------+
> > [root@openstack ~]#

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pbrady at redhat.com  Wed Jul 31 15:22:37 2013
From: pbrady at redhat.com (Pádraig Brady)
Date: Wed, 31 Jul 2013 16:22:37 +0100
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: 
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com>
Message-ID: <51F92BBD.5080502@redhat.com>

On 07/31/2013 04:06 PM, Nirlay Kundu wrote:
> I got back to the state where I was. I fixed the auth problem by exporting some variables again.
> I tried specifying a particular mirror - still no go.
> After the image gets "Killed", I do a glance image-show and I get a dump which shows status Killed. Any other logs I can look at? Is there any proxy that is preventing the http download? wget works, though.

I suspect http_proxy and/or https_proxy is not set for the glance service.
Since you've already downloaded the image using wget, you can
pass it directly to glance like:

glance image-create --name 'Fedora 19 x86_64' --disk-format qcow2 \
  --container-format bare --is-public true < fedora-19.x86_64.qcow2
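Note that for --copy-from the download is performed by the glance-api
service itself, so http_proxy would have to be present in that service's
environment, not just in your shell.

One caveat when you do export http_proxy in your shell: the keystone and
glance clients will then try to route requests to your local endpoints
through the proxy as well, which would explain the "Unable to communicate
with identity service" errors you hit after setting it. Something like the
following should avoid that (a sketch; the proxy URL is a placeholder and
the no_proxy list should match your actual endpoint hosts):

export http_proxy=http://proxy.example.com:3128/
export no_proxy=localhost,127.0.0.1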
thanks,
Pádraig.

From flavio at redhat.com  Wed Jul 31 15:29:32 2013
From: flavio at redhat.com (Flavio Percoco)
Date: Wed, 31 Jul 2013 17:29:32 +0200
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: 
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com>
Message-ID: <20130731152932.GN14047@redhat.com>

On 31/07/13 11:06 -0400, Nirlay Kundu wrote:
>I got back to the state where I was. I fixed the auth problem by exporting some variables again.
>I tried specifying a particular mirror - still no go.
>After the image gets "Killed", I do a glance image-show and I get a dump which shows status Killed. Any other logs I can look at? Is there any proxy that is preventing the http download? wget works, though.

Hi,

I'm afraid you might be hitting this bug[0]. What happens is that the
URL you're using for the base image does not send the Content-Length
header along with the answer.

I also checked that the URL you're using actually redirects to another URL.
What happens if you use this one[1]?
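A quick way to check both things from the command line is to ask for the
headers only (standard curl flags: -s silences progress, -I requests
headers only, -L follows redirects; the grep just trims the output):

curl -sIL http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2 | grep -iE '^(HTTP|Location|Content-Length)'

If Content-Length never shows up in the final response, you're hitting the
bug in [0].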
Cheers,
FF

[0] https://bugzilla.redhat.com/show_bug.cgi?id=974119
[1] http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2

> [...]
-- 
@flaper87
Flavio Percoco

From nirlay at hotmail.com  Wed Jul 31 15:52:07 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Wed, 31 Jul 2013 11:52:07 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: <51F92BBD.5080502@redhat.com>
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com> <51F92BBD.5080502@redhat.com>
Message-ID: 

Thanks Pádraig for your input. You are correct: http_proxy was not set. I have now set the http proxy from the command line to the same proxy that I have in the web browser. When I ran the glance image-create command again, it gave me an auth failure: unable to communicate with Identity service. I then ran keystone user-list and got the same "unable to communicate with identity service" message.

To get back to my original state, I had to open a fresh terminal and re-export OS_SERVICE_TOKEN, OS_SERVICE_ENDPOINT, OS_USERNAME, OS_PASSWORD, OS_AUTH_URL and OS_TENANT_NAME. So there must be something happening when I set the http proxy by doing export http_proxy=.

Yes, I can download the image locally and redirect it as you said - which is a workaround. So although I am not stuck, I want to understand whether I am doing anything wrong or whether it is a bug.

Thanks
Nirlay

> Date: Wed, 31 Jul 2013 16:22:37 +0100
> From: pbrady at redhat.com
> To: nirlay at hotmail.com
> CC: pmyers at redhat.com; acathrow at redhat.com; rhos-list at redhat.com; fpercoco at redhat.com; abaron at redhat.com
> Subject: Re: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>
> On 07/31/2013 04:06 PM, Nirlay Kundu wrote:
> [...]
>
> I suspect http_proxy and/or https_proxy is not set for the glance service.
> Since you've already downloaded the image using wget, you can
> pass it directly to glance like:
>
> glance image-create --name 'Fedora 19 x86_64' --disk-format qcow2 \
>   --container-format bare --is-public true < fedora-19.x86_64.qcow2
>
> thanks,
> Pádraig.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nirlay at hotmail.com  Wed Jul 31 15:58:56 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Wed, 31 Jul 2013 11:58:56 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: <20130731152932.GN14047@redhat.com>
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com> <20130731152932.GN14047@redhat.com>
Message-ID: 

Should I use [1] after setting the http_proxy?
I tried [1] when http_proxy was not set, and I get a "Killed" message when I do glance image-show.

thanks

> Date: Wed, 31 Jul 2013 17:29:32 +0200
> From: flavio at redhat.com
> To: nirlay at hotmail.com
> Subject: Re: EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
>
> I'm afraid you might be hitting this bug[0]. What happens is that the
> URL you're using for the base image does not send the Content-Length
> header along with the answer.
>
> I also checked that the URL you're using actually redirects to another URL.
> What happens if you use this one[1]?
>
> [0] https://bugzilla.redhat.com/show_bug.cgi?id=974119
> [1] http://download.fedoraproject.org/pub/fedora/linux/releases/19/Images/x86_64/Fedora-x86_64-19-20130627-sda.qcow2
>
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nirlay at hotmail.com  Wed Jul 31 16:06:39 2013
From: nirlay at hotmail.com (Nirlay Kundu)
Date: Wed, 31 Jul 2013 12:06:39 -0400
Subject: [rhos-list] EXAMPLE IMAGE GETTING KILLED WHEN TRYING TO CREATE
In-Reply-To: 
References: <51F7FE1E.3080901@redhat.com> <51F808B1.3070306@redhat.com> <20130731152932.GN14047@redhat.com>
Message-ID: 

I tried wget when http_proxy was not set. It failed. Then I set http_proxy, and wget was successful.
With http_proxy set, I again tried glance image-create with [1]. It says "Authorization Failed: Unable to communicate with identity service".
To get back to where I was, I have to open up a fresh terminal, set all the variables again, etc.

> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 