From berrange at redhat.com Mon Jun 3 09:30:52 2013 From: berrange at redhat.com (Daniel P. Berrange) Date: Mon, 3 Jun 2013 10:30:52 +0100 Subject: [rhos-list] Libvirt Error (warning). In-Reply-To: <20130523230915.GW1997@redhat.com> References: <519E66CD.4090304@redhat.com> <20130523205043.GU1997@redhat.com> <20130523230915.GW1997@redhat.com> Message-ID: <20130603093052.GD2437@redhat.com> On Thu, May 23, 2013 at 07:09:15PM -0400, Dave Allan wrote: > On Thu, May 23, 2013 at 04:50:43PM -0400, Dave Allan wrote: > > On Thu, May 23, 2013 at 02:58:21PM -0400, Perry Myers wrote: > > > On 05/23/2013 02:28 PM, Minton, Rich wrote: > > > > Does anyone know what this means and how to fix it? If it needs to be > > > > fixed? These are from "libvirtd.log" > > > > > > > > > > > > > > > > warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' > > > > uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges This warning, however, is serious. Normally QEMU processes are run unprivileged as a qemu:qemu user / group pair. This warning message indicates that you have modified /etc/libvirt/qemu.conf to run with elevated privileges. Since this is known to be an insecure configuration, we taint the VM. > > > Dan or Dave, can you shed light on this? > > > > > > Perry > > > > A quick look at the code suggests it should be harmless. Laine, can > > you give a deeper answer on what causes it? > > Ok, confirmed harmless, and Stefan Berger posted patches to remove > those messages: > > https://www.redhat.com/archives/libvir-list/2013-April/msg00953.html > Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :| From rich.minton at lmco.com Mon Jun 3 17:38:22 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Mon, 3 Jun 2013 17:38:22 +0000 Subject: [rhos-list] EXTERNAL: Re: ovs-vswitchd config In-Reply-To: <51A5A1BD.70803@redhat.com> References: <51A50455.9010009@redhat.com> <51A5A1BD.70803@redhat.com> Message-ID: What do I do if it's not persistent across reboots? I have to add the port after every reboot or "service network restart". Rick -----Original Message----- From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Wednesday, May 29, 2013 2:36 AM To: Perry Myers Cc: Minton, Rich; Terry Wilson; Ryan O'Hara; Robert Kukura; rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] ovs-vswitchd config On 05/28/2013 10:24 PM, Perry Myers wrote: > On 05/28/2013 03:16 PM, Minton, Rich wrote: >> Is there a way to set this permanently so I don't have to run it each >> time my server reboots? >> >> >> >> "ovs-vsctl add-port br-eth1 eth1" > I think we discussed having packstack precreate the bridges as part of > install, but I'm not sure how the packstack/quantum folks plan to make > that persistent (adding folks to cc list to answer) This operation only needs to be done once and it should be persistent. 
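A note on persistence, offered as a sketch rather than a confirmed fix: ovs-vsctl writes to the Open vSwitch database, so a port added once normally survives reboots on its own. If "service network restart" keeps removing the port, a common workaround on RHEL is to declare the bridge and port in the network initscripts, assuming the openvswitch package's ifup/ifdown integration is installed:

# /etc/sysconfig/network-scripts/ifcfg-br-eth1
DEVICE=br-eth1
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-eth1
ONBOOT=yes
BOOTPROTO=none

With these files in place, ifup recreates the bridge and re-plugs eth1 on every boot and network restart, so "ovs-vsctl add-port br-eth1 eth1" no longer needs to be run by hand.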
Thanks Gary > > Perry > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From smanoo76 at gmail.com Tue Jun 4 06:28:20 2013 From: smanoo76 at gmail.com (S Manoo) Date: Mon, 3 Jun 2013 23:28:20 -0700 Subject: [rhos-list] Problems with quantum and dhcp-agent Message-ID: I'm trying to get a fairly simple all-in-one setup going with RHOS 3.0 preview, and am following the Packstack-to-quantum wiki instructions. I've a simple flat (local) private network within the host, with only the dhcp agent (and no L3 agent). When I create the network and associate it with an instance, the appropriate port and fixed ip address does seem to get created for the node via dhcp. However, when the instance boots up, it does not get any reply to the dhcp requests. Can someone help me figure out why the dhcp requests are not being answered? More info below, thanks for any help! *The DHCP info created by dhcp-agent:* [root at grizzly ~(keystone_admin)]# cat /var/lib/quantum/dhcp/abc962db-2dd1-4229-9bc0-c86b047bcc3a/host fa:16:3e:c7:c6:82,192-168-100-2.openstacklocal,192.168.100.2 fa:16:3e:17:29:04,192-168-100-3.openstacklocal,192.168.100.3 fa:16:3e:16:02:a7,192-168-100-4.openstacklocal,192.168.100.4 *ip addr* [root at grizzly ~(keystone_admin)]# ip addr 1: lo: mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: mtu 1500 qdisc mq state UP qlen 1000 link/ether 00:10:18:78:6c:20 brd ff:ff:ff:ff:ff:ff inet 10.0.0.19/24 brd 10.0.0.255 scope global eth0 inet6 fe80::210:18ff:fe78:6c20/64 scope link valid_lft forever preferred_lft forever 3: eth1: mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 00:10:18:78:6c:22 brd ff:ff:ff:ff:ff:ff 4: eth2: mtu 1500 qdisc mq state DOWN qlen 1000 link/ether d8:d3:85:5b:0e:0e brd ff:ff:ff:ff:ff:ff 5: eth3: mtu 1500 qdisc mq state UP qlen 1000 link/ether d8:d3:85:5b:0e:0f brd ff:ff:ff:ff:ff:ff inet6 fe80::dad3:85ff:fe5b:e0f/64 scope link valid_lft forever preferred_lft forever 6: br-int: mtu 1500 qdisc noop state DOWN link/ether 3a:97:75:6d:22:47 brd ff:ff:ff:ff:ff:ff 7: virbr0: mtu 1500 qdisc noqueue state UNKNOWN link/ether 52:54:00:89:36:43 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 8: virbr0-nic: mtu 1500 qdisc noop state DOWN qlen 500 link/ether 52:54:00:89:36:43 brd ff:ff:ff:ff:ff:ff 10: ns-7802be13-79: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether fa:16:3e:c7:c6:82 brd ff:ff:ff:ff:ff:ff inet 192.168.100.2/24 brd 192.168.100.255 scope global ns-7802be13-79 inet6 fe80::f816:3eff:fec7:c682/64 scope link valid_lft forever preferred_lft forever 11: tap7802be13-79: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether ae:b8:f6:40:7c:da brd ff:ff:ff:ff:ff:ff inet6 fe80::acb8:f6ff:fe40:7cda/64 scope link valid_lft forever preferred_lft forever *ovs-vsctl show:* [root at grizzly ~(keystone_admin)]# ovs-vsctl show 703b6230-652a-4ae3-8a83-eb1cad3c1581 Bridge br-int Port br-int Interface br-int type: internal Port "tap7802be13-79" tag: 1 Interface "tap7802be13-79" ovs_version: "1.9.0" * * *quantum:* [root at grizzly ~(keystone_admin)]# quantum net-list +--------------------------------------+---------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+---------+-------------------------------------------------------+ | 
abc962db-2dd1-4229-9bc0-c86b047bcc3a | private | 65e0ff39-4ec6-4c09-af0c-876e898685a7 192.168.100.0/24 | +--------------------------------------+---------+-------------------------------------------------------+ [root at grizzly ~(keystone_admin)]# quantum port-list +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | id | name | mac_address | fixed_ips | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ | 7802be13-79a3-4fb6-a0ce-033fb40b819a | | fa:16:3e:c7:c6:82 | {"subnet_id": "65e0ff39-4ec6-4c09-af0c-876e898685a7", "ip_address": "192.168.100.2"} | | 9115064a-86cf-441a-900e-c153edd5a0d3 | | fa:16:3e:16:02:a7 | {"subnet_id": "65e0ff39-4ec6-4c09-af0c-876e898685a7", "ip_address": "192.168.100.4"} | | fd5389d5-ec76-4525-8594-f1dd80a470b5 | | fa:16:3e:17:29:04 | {"subnet_id": "65e0ff39-4ec6-4c09-af0c-876e898685a7", "ip_address": "192.168.100.3"} | +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+ *Tcpdump*: [root at grizzly ~(keystone_admin)]# tcpdump -i ns-7802be13-79 -vv -n tcpdump: listening on ns-7802be13-79, link-type EN10MB (Ethernet), capture size 65535 bytes 23:02:19.753579 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:16:02:a7, length 300, xid 0x2a1ed44d, Flags [none] (0x0000) Client-Ethernet-Address fa:16:3e:16:02:a7 Vendor-rfc1048 Extensions Magic Cookie 0x63825363 DHCP-Message Option 53, length 1: Discover Parameter-Request Option 55, length 13: Subnet-Mask, BR, Time-Zone, Classless-Static-Route Domain-Name, Domain-Name-Server, Hostname, YD YS, NTP, MTU, Option 119 Default-Gateway 23:02:24.756291 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:16:02:a7, length 300, xid 0x2a1ed44d, secs 5, Flags [none] (0x0000) Client-Ethernet-Address fa:16:3e:16:02:a7 Vendor-rfc1048 Extensions Magic Cookie 0x63825363 DHCP-Message Option 53, length 1: Discover Parameter-Request Option 55, length 13: Subnet-Mask, BR, Time-Zone, Classless-Static-Route Domain-Name, Domain-Name-Server, Hostname, YD YS, NTP, MTU, Option 119 Default-Gateway 23:02:30.757391 IP (tos 0x10, ttl 128, id 0, offset 0, flags [none], proto UDP (17), length 328) 0.0.0.0.bootpc > 255.255.255.255.bootps: [udp sum ok] BOOTP/DHCP, Request from fa:16:3e:16:02:a7, length 300, xid 0x2a1ed44d, secs 11, Flags [none] (0x0000) Client-Ethernet-Address fa:16:3e:16:02:a7 Vendor-rfc1048 Extensions Magic Cookie 0x63825363 DHCP-Message Option 53, length 1: Discover Parameter-Request Option 55, length 13: Subnet-Mask, BR, Time-Zone, Classless-Static-Route Domain-Name, Domain-Name-Server, Hostname, YD YS, NTP, MTU, Option 119 Default-Gateway *dhcp-agent.ini:* [root at grizzly (keystone_admin)]# cat /etc/quantum/dhcp_agent.ini [DEFAULT] auth_url = http://10.0.0.19:35357/v2.0/ admin_username = admin admin_password = pass admin_tenant_name = admin interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver *dhcp-agent.log:* [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log 2013-06-03 22:27:09 INFO [quantum.common.config] Logging enabled! 
2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on 10.0.0.19:5672 2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on 10.0.0.19:5672 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP agent started 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed reporting state! Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 700, in _report_state self.agent_state) File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state topic=self.topic) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call return rpc.call(context, self._get_topic(topic), msg, timeout) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call return _get_impl().call(CONF, context, topic, msg, timeout) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call rpc_amqp.get_connection_pool(conf, Connection)) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 613, in call rv = list(rv) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 555, in __iter__ self.done() File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ self.gen.next() File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 552, in __iter__ self._iterator.next() File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 436, in iterconsume yield self.ensure(_error_callback, _consume) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 380, in ensure error_callback(e) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 421, in _error_callback raise rpc_common.Timeout() Timeout: Timeout while waiting on RPC response. 2013-06-03 22:28:10 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.133099 sec 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] Synchronizing state [root at grizzly ~(keystone_admin)]# -------------- next part -------------- An HTML attachment was scrubbed... URL: 
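Not part of the original thread: a few generic checks that can help narrow down a case like the one above, where DHCP Discovers reach the dnsmasq-side interface but no Offer comes back. The interface names and network id are taken from the output above and should be adjusted to your own environment:

# Is a dnsmasq process serving this quantum network at all?
ps -ef | grep dnsmasq | grep abc962db-2dd1-4229-9bc0-c86b047bcc3a

# Is anything bound to the bootps port on the host?
netstat -lnpu | grep ':67'

# Is a host firewall rule dropping DHCP traffic before it reaches dnsmasq?
iptables -nvL | grep 'dpt:67'

# The capture above already shows Discovers arriving on ns-7802be13-79, the
# interface dnsmasq should answer on, so if dnsmasq is running and port 67 is
# open, watch both ends of the pair to see where any Offer gets lost.
tcpdump -n -i tap7802be13-79 'udp port 67 or udp port 68'

If no dnsmasq is running for the network, restarting the dhcp agent (service quantum-dhcp-agent restart) once qpid is reachable usually respawns it with a current host file.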
From nicolas.vogel at heig-vd.ch Tue Jun 4 13:59:06 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Tue, 4 Jun 2013 13:59:06 +0000 Subject: [rhos-list] Glance problem "not enough disk space on the image storage media Message-ID: Hi, I'm trying to add new images to glance but got the following error: 413 Request Entity Too Large Image storage media is full: There is not enough disk space on the image storage media. For the moment I only have two registered images in Glance (displayed with glance image-list), but I tried a lot of different images to find the best configuration for each. I found this discussion (https://lists.launchpad.net/openstack/msg10811.html) but that didn't help me. Where can I found and delete all the old and wrong images that are even stored on my disk? Are these images all stored on the controller or are they also stored on my compute nodes? Thanks, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smanoo76 at gmail.com Tue Jun 4 20:09:21 2013 From: smanoo76 at gmail.com (S Manoo) Date: Tue, 4 Jun 2013 13:09:21 -0700 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: References: Message-ID: Looking into this further, I'm observing the same error message relating to timeouts talking to qpid in dhcp-agent.log after every restart, perhaps this is why I'm unable to get any dhcp responses to instances? Any suggestions on what's causing this and where I might look to troubleshoot this further? */var/log/quantum/dhcp-agent.log:* 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! 
2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on localhost:5672 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on localhost:5672 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent started 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed reporting state! Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line 700, in _report_state self.agent_state) File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, in report_state topic=self.topic) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", line 80, in call return rpc.call(context, self._get_topic(topic), msg, timeout) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", line 140, in call return _get_impl().call(CONF, context, topic, msg, timeout) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 611, in call rpc_amqp.get_connection_pool(conf, Connection)) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 613, in call rv = list(rv) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 555, in __iter__ self.done() File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ self.gen.next() File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", line 552, in __iter__ self._iterator.next() File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 436, in iterconsume yield self.ensure(_error_callback, _consume) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 380, in ensure error_callback(e) File "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", line 421, in _error_callback raise rpc_common.Timeout() Timeout: Timeout while waiting on RPC response. 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] task run outlasted interval by 56.108887 sec 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] Synchronizing state On Mon, Jun 3, 2013 at 11:28 PM, S Manoo wrote: > > > *dhcp-agent.log:* > [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log > 2013-06-03 22:27:09 INFO [quantum.common.config] Logging enabled! > 2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on 10.0.0.19:5672 > 2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on 10.0.0.19:5672 > 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP agent started > 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed reporting > state! 
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > line 700, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, > in report_state > topic=self.topic) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 613, in call > rv = list(rv) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 555, in __iter__ > self.done() > File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > self.gen.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 552, in __iter__ > self._iterator.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 436, in iterconsume > yield self.ensure(_error_callback, _consume) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 380, in ensure > error_callback(e) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 421, in _error_callback > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-06-03 22:28:10 WARNING [quantum.openstack.common.loopingcall] task > run outlasted interval by 56.133099 sec > 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] Synchronizing state > [root at grizzly ~(keystone_admin)]# > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdake at redhat.com Tue Jun 4 22:17:39 2013 From: sdake at redhat.com (Steven Dake) Date: Tue, 04 Jun 2013 15:17:39 -0700 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: References: Message-ID: <51AE6783.5040602@redhat.com> On 06/04/2013 01:09 PM, S Manoo wrote: > Looking into this further, I'm observing the same error message > relating to timeouts talking to qpid in dhcp-agent.log after every > restart, perhaps this is why I'm unable to get any dhcp responses to > instances? Any suggestions on what's causing this and where I might > look to troubleshoot this further? > S Manoo, We may have just fixed a bug related to this problem which is not fixed in the preview. Please try the workaround in this bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=970453 Regards -steve > */var/log/quantum/dhcp-agent.log:* > 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! > 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on localhost:5672 > 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on localhost:5672 > 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent started > 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed > reporting state! 
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > line 700, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line > 66, in report_state > topic=self.topic) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 613, in call > rv = list(rv) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 555, in __iter__ > self.done() > File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > self.gen.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 552, in __iter__ > self._iterator.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 436, in iterconsume > yield self.ensure(_error_callback, _consume) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 380, in ensure > error_callback(e) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 421, in _error_callback > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] > task run outlasted interval by 56.108887 sec > 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] Synchronizing > state > > > > > On Mon, Jun 3, 2013 at 11:28 PM, S Manoo > wrote: > > > > *dhcp-agent.log:* > [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log > 2013-06-03 22:27:09 INFO [quantum.common.config] Logging enabled! > 2013-06-03 22:27:09 INFO > [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server > on 10.0.0.19:5672 > 2013-06-03 22:27:09 INFO > [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server > on 10.0.0.19:5672 > 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP agent > started > 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed > reporting state! 
> Traceback (most recent call last): > File > "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > line 700, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", > line 66, in report_state > topic=self.topic) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 613, in call > rv = list(rv) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 555, in __iter__ > self.done() > File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > self.gen.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 552, in __iter__ > self._iterator.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 436, in iterconsume > yield self.ensure(_error_callback, _consume) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 380, in ensure > error_callback(e) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 421, in _error_callback > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-06-03 22:28:10 WARNING > [quantum.openstack.common.loopingcall] task run outlasted interval > by 56.133099 sec > 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] > Synchronizing state > [root at grizzly ~(keystone_admin)]# > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Wed Jun 5 07:14:58 2013 From: gkotton at redhat.com (Gary Kotton) Date: Wed, 05 Jun 2013 10:14:58 +0300 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: <51AE6783.5040602@redhat.com> References: <51AE6783.5040602@redhat.com> Message-ID: <51AEE572.70409@redhat.com> On 06/05/2013 01:17 AM, Steven Dake wrote: > On 06/04/2013 01:09 PM, S Manoo wrote: >> Looking into this further, I'm observing the same error message >> relating to timeouts talking to qpid in dhcp-agent.log after every >> restart, perhaps this is why I'm unable to get any dhcp responses to >> instances? Any suggestions on what's causing this and where I might >> look to troubleshoot this further? When one restarts a host each process needs to register with the message broker. If you are running all of the services on the same host then they will only be able to connect when the qpid service is up and running. This usually takes a few seconds after reboot. If a service does not receive an answer from the qpid service then it will wait and retry again. This is why you see the timeouts. The wait is incremental. I have seen that all service are usually able to connect within a minute of booting a host (we should try and reduce this time). Please note that the quantum cli has an option: quantum agent-list. 
This provides the list of agents, their status and hosts that they are running on. If you spin up an instance after the dhcp agent is up and running do you see the problem? >> > S Manoo, > > We may have just fixed a bug related to this problem which is not > fixed in the preview. Please try the workaround in this bugzilla: > > https://bugzilla.redhat.com/show_bug.cgi?id=970453 This fix is good for an all in one setup but will not help if the DHCP agent is running on another host. In Quantum we have the notion of a network node. Please look at https://docs.google.com/drawings/d/167gegaoTBZpd318b2JTgF_Qi9YdkIX8pcQ6YBJLUtGY/edit?usp=sharing If the message broker goes down (say for example host reboot or network problems) then the dhcp agent will try and reconnect. > > Regards > -steve > > >> */var/log/quantum/dhcp-agent.log:* >> 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! >> 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] >> Connected to AMQP server on localhost:5672 >> 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] >> Connected to AMQP server on localhost:5672 >> 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent >> started >> 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed >> reporting state! >> Traceback (most recent call last): >> File >> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >> 700, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line >> 66, in report_state >> topic=self.topic) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >> line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >> line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 613, in call >> rv = list(rv) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 555, in __iter__ >> self.done() >> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >> self.gen.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 552, in __iter__ >> self._iterator.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 436, in iterconsume >> yield self.ensure(_error_callback, _consume) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 380, in ensure >> error_callback(e) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 421, in _error_callback >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] >> task run outlasted interval by 56.108887 sec >> 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] Synchronizing >> state >> >> >> >> >> On Mon, Jun 3, 2013 at 11:28 PM, S Manoo > > wrote: >> >> >> >> *dhcp-agent.log:* >> [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log >> 2013-06-03 22:27:09 INFO [quantum.common.config] Logging enabled! 
>> 2013-06-03 22:27:09 INFO >> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >> on 10.0.0.19:5672 >> 2013-06-03 22:27:09 INFO >> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >> on 10.0.0.19:5672 >> 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP >> agent started >> 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed >> reporting state! >> Traceback (most recent call last): >> File >> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >> line 700, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", >> line 66, in report_state >> topic=self.topic) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >> line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >> line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 613, in call >> rv = list(rv) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 555, in __iter__ >> self.done() >> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >> self.gen.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 552, in __iter__ >> self._iterator.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 436, in iterconsume >> yield self.ensure(_error_callback, _consume) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 380, in ensure >> error_callback(e) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 421, in _error_callback >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-06-03 22:28:10 WARNING >> [quantum.openstack.common.loopingcall] task run outlasted >> interval by 56.133099 sec >> 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] >> Synchronizing state >> [root at grizzly ~(keystone_admin)]# >> >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdake at redhat.com Wed Jun 5 20:14:13 2013 From: sdake at redhat.com (Steven Dake) Date: Wed, 05 Jun 2013 13:14:13 -0700 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: <51AEE572.70409@redhat.com> References: <51AE6783.5040602@redhat.com> <51AEE572.70409@redhat.com> Message-ID: <51AF9C15.6050507@redhat.com> On 06/05/2013 12:14 AM, Gary Kotton wrote: > On 06/05/2013 01:17 AM, Steven Dake wrote: >> On 06/04/2013 01:09 PM, S Manoo wrote: >>> Looking into this further, I'm observing the same error message >>> relating to timeouts talking to qpid in dhcp-agent.log after every >>> restart, perhaps this is why I'm unable to get any dhcp responses to >>> instances? 
Any suggestions on what's causing this and where I might >>> look to troubleshoot this further? > > When one restarts a host each process needs to register with the > message broker. If you are running all of the services on the same > host then they will only be able to connect when the qpid service is > up and running. This usually takes a few seconds after reboot. If a > service does not receive an answer from the qpid service then it will > wait and retry again. This is why you see the timeouts. The wait is > incremental. I have seen that all service are usually able to connect > within a minute of booting a host (we should try and reduce this time). > > Please note that the quantum cli has an option: quantum agent-list. > This provides the list of agents, their status and hosts that they are > running on. > > If you spin up an instance after the dhcp agent is up and running do > you see the problem? > >>> >> S Manoo, >> >> We may have just fixed a bug related to this problem which is not >> fixed in the preview. Please try the workaround in this bugzilla: >> >> https://bugzilla.redhat.com/show_bug.cgi?id=970453 > > This fix is good for an all in one setup but will not help if the DHCP > agent is running on another host. In Quantum we have the notion of a > network node. Please look at > https://docs.google.com/drawings/d/167gegaoTBZpd318b2JTgF_Qi9YdkIX8pcQ6YBJLUtGY/edit?usp=sharing > > If the message broker goes down (say for example host reboot or > network problems) then the dhcp agent will try and reconnect. > Gary, I have found dhcp agent stops responding permanently in this condition on a all in one setup. Perhaps the same is true for multinode (ie the retry logic doesn't work as expected). I don't have multiple nodes to test, but might be worth double-checking if you do. Regards -steve >> >> Regards >> -steve >> >> >>> */var/log/quantum/dhcp-agent.log:* >>> 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! >>> 2013-06-04 12:50:44 INFO >>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on >>> localhost:5672 >>> 2013-06-04 12:50:44 INFO >>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server on >>> localhost:5672 >>> 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent >>> started >>> 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed >>> reporting state! 
>>> Traceback (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>> 700, in _report_state >>> self.agent_state) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line >>> 66, in report_state >>> topic=self.topic) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>> line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>> line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 613, in call >>> rv = list(rv) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 555, in __iter__ >>> self.done() >>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>> self.gen.next() >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 552, in __iter__ >>> self._iterator.next() >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 436, in iterconsume >>> yield self.ensure(_error_callback, _consume) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 380, in ensure >>> error_callback(e) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 421, in _error_callback >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. >>> 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] >>> task run outlasted interval by 56.108887 sec >>> 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] >>> Synchronizing state >>> >>> >>> >>> >>> On Mon, Jun 3, 2013 at 11:28 PM, S Manoo >> > wrote: >>> >>> >>> >>> *dhcp-agent.log:* >>> [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log >>> 2013-06-03 22:27:09 INFO [quantum.common.config] Logging >>> enabled! >>> 2013-06-03 22:27:09 INFO >>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>> server on 10.0.0.19:5672 >>> 2013-06-03 22:27:09 INFO >>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>> server on 10.0.0.19:5672 >>> 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP >>> agent started >>> 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed >>> reporting state! 
>>> Traceback (most recent call last): >>> File >>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >>> line 700, in _report_state >>> self.agent_state) >>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", >>> line 66, in report_state >>> topic=self.topic) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>> line 80, in call >>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>> line 140, in call >>> return _get_impl().call(CONF, context, topic, msg, timeout) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 611, in call >>> rpc_amqp.get_connection_pool(conf, Connection)) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 613, in call >>> rv = list(rv) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 555, in __iter__ >>> self.done() >>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>> self.gen.next() >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>> line 552, in __iter__ >>> self._iterator.next() >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 436, in iterconsume >>> yield self.ensure(_error_callback, _consume) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 380, in ensure >>> error_callback(e) >>> File >>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>> line 421, in _error_callback >>> raise rpc_common.Timeout() >>> Timeout: Timeout while waiting on RPC response. >>> 2013-06-03 22:28:10 WARNING >>> [quantum.openstack.common.loopingcall] task run outlasted >>> interval by 56.133099 sec >>> 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] >>> Synchronizing state >>> [root at grizzly ~(keystone_admin)]# >>> >>> >>> >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Thu Jun 6 08:29:45 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 06 Jun 2013 11:29:45 +0300 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: <51AF9C15.6050507@redhat.com> References: <51AE6783.5040602@redhat.com> <51AEE572.70409@redhat.com> <51AF9C15.6050507@redhat.com> Message-ID: <51B04879.7020404@redhat.com> On 06/05/2013 11:14 PM, Steven Dake wrote: > On 06/05/2013 12:14 AM, Gary Kotton wrote: >> On 06/05/2013 01:17 AM, Steven Dake wrote: >>> On 06/04/2013 01:09 PM, S Manoo wrote: >>>> Looking into this further, I'm observing the same error message >>>> relating to timeouts talking to qpid in dhcp-agent.log after every >>>> restart, perhaps this is why I'm unable to get any dhcp responses >>>> to instances? Any suggestions on what's causing this and where I >>>> might look to troubleshoot this further? >> >> When one restarts a host each process needs to register with the >> message broker. 
If you are running all of the services on the same >> host then they will only be able to connect when the qpid service is >> up and running. This usually takes a few seconds after reboot. If a >> service does not receive an answer from the qpid service then it >> will wait and retry again. This is why you see the timeouts. The >> wait is incremental. I have seen that all service are usually able >> to connect within a minute of booting a host (we should try and >> reduce this time). >> >> Please note that the quantum cli has an option: quantum agent-list. >> This provides the list of agents, their status and hosts that they >> are running on. >> >> If you spin up an instance after the dhcp agent is up and running do >> you see the problem? >> >>>> >>> S Manoo, >>> >>> We may have just fixed a bug related to this problem which is not >>> fixed in the preview. Please try the workaround in this bugzilla: >>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=970453 >> >> This fix is good for an all in one setup but will not help if the >> DHCP agent is running on another host. In Quantum we have the notion >> of a network node. Please look at >> https://docs.google.com/drawings/d/167gegaoTBZpd318b2JTgF_Qi9YdkIX8pcQ6YBJLUtGY/edit?usp=sharing >> >> If the message broker goes down (say for example host reboot or >> network problems) then the dhcp agent will try and reconnect. >> > Gary, > > I have found dhcp agent stops responding permanently in this condition > on a all in one setup. Perhaps the same is true for multinode (ie the > retry logic doesn't work as expected). I don't have multiple nodes to > test, but might be worth double-checking if you do. I have done the following check (on an all in one setup): 1. reboot host 2. stop quantum service 3. check that dhcp agent has a timeout with the quantum service 4. restart quantum service I see a number of issues which I am going to investigate: 1. The agent is up: [root at dhcp-4-126 ~(keystone_admin)]# quantum agent-list +--------------------------------------+--------------------+---------------------------+-------+----------------+ | id | agent_type | host | alive | admin_state_up | +--------------------------------------+--------------------+---------------------------+-------+----------------+ | 11e35126-6c07-4a2f-b681-399cdbc8210d | L3 agent | dhcp-4-126.tlv.redhat.com | :-) | True | | 5e75d5d9-edb0-462e-850b-013ad7a518f4 | DHCP agent | dhcp-4-126.tlv.redhat.com | :-) | True | | af51a50f-f45e-4736-8517-ed3cda759b3c | Open vSwitch agent | dhcp-4-126.tlv.redhat.com | :-) | True | | d7588cb1-b287-4b4c-a8a8-539d4c5129b2 | Open vSwitch agent | dhcp-4-227.tlv.redhat.com | :-) | True | +--------------------------------------+--------------------+---------------------------+-------+----------------+ [root at dhcp-4-126 ~(keystone_admin)]# This means the agent successfully sent a message to the plugin. 2. In the DHCP log there are timeouts with the qpid service and no notification of a resync (which used to happen in Folsom). I am on it and will post on any progress. A few days ago I had a problem with nova compute which seemed similar to this. I hope that it is not an issue with qpid and solely related to the dhcp agent (which is easier for me to address). > > Regards > -steve > >>> >>> Regards >>> -steve >>> >>> >>>> */var/log/quantum/dhcp-agent.log:* >>>> 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled!
>>>> 2013-06-04 12:50:44 INFO >>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >>>> on localhost:5672 >>>> 2013-06-04 12:50:44 INFO >>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >>>> on localhost:5672 >>>> 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent >>>> started >>>> 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed >>>> reporting state! >>>> Traceback (most recent call last): >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >>>> line 700, in _report_state >>>> self.agent_state) >>>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", >>>> line 66, in report_state >>>> topic=self.topic) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>>> line 80, in call >>>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>>> line 140, in call >>>> return _get_impl().call(CONF, context, topic, msg, timeout) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 611, in call >>>> rpc_amqp.get_connection_pool(conf, Connection)) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 613, in call >>>> rv = list(rv) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 555, in __iter__ >>>> self.done() >>>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>>> self.gen.next() >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 552, in __iter__ >>>> self._iterator.next() >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 436, in iterconsume >>>> yield self.ensure(_error_callback, _consume) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 380, in ensure >>>> error_callback(e) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 421, in _error_callback >>>> raise rpc_common.Timeout() >>>> Timeout: Timeout while waiting on RPC response. >>>> 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] >>>> task run outlasted interval by 56.108887 sec >>>> 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] >>>> Synchronizing state >>>> >>>> >>>> >>>> >>>> On Mon, Jun 3, 2013 at 11:28 PM, S Manoo >>> > wrote: >>>> >>>> >>>> >>>> *dhcp-agent.log:* >>>> [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log >>>> 2013-06-03 22:27:09 INFO [quantum.common.config] Logging >>>> enabled! >>>> 2013-06-03 22:27:09 INFO >>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>>> server on 10.0.0.19:5672 >>>> 2013-06-03 22:27:09 INFO >>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>>> server on 10.0.0.19:5672 >>>> 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP >>>> agent started >>>> 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed >>>> reporting state! 
>>>> Traceback (most recent call last): >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >>>> line 700, in _report_state >>>> self.agent_state) >>>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", >>>> line 66, in report_state >>>> topic=self.topic) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>>> line 80, in call >>>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>>> line 140, in call >>>> return _get_impl().call(CONF, context, topic, msg, timeout) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 611, in call >>>> rpc_amqp.get_connection_pool(conf, Connection)) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 613, in call >>>> rv = list(rv) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 555, in __iter__ >>>> self.done() >>>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>>> self.gen.next() >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>> line 552, in __iter__ >>>> self._iterator.next() >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 436, in iterconsume >>>> yield self.ensure(_error_callback, _consume) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 380, in ensure >>>> error_callback(e) >>>> File >>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>> line 421, in _error_callback >>>> raise rpc_common.Timeout() >>>> Timeout: Timeout while waiting on RPC response. >>>> 2013-06-03 22:28:10 WARNING >>>> [quantum.openstack.common.loopingcall] task run outlasted >>>> interval by 56.133099 sec >>>> 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] >>>> Synchronizing state >>>> [root at grizzly ~(keystone_admin)]# >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> rhos-list mailing list >>>> rhos-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rhos-list >>> >>> >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Thu Jun 6 10:18:02 2013 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 06 Jun 2013 12:18:02 +0200 Subject: [rhos-list] Glance problem "not enough disk space on the image storage media" In-Reply-To: References: Message-ID: <51B061DA.7060304@redhat.com> On 06/04/2013 03:59 PM, Vogel Nicolas wrote: > Where can I find and delete all the old and wrong images that are still > stored on my disk? 
The path is configured in glance-api.conf: filesystem_store_datadir = /var/lib/glance/images/ -- Giulio Fidente GPG KEY: 08D733BA | IRC: giulivo From gkotton at redhat.com Thu Jun 6 11:21:29 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 06 Jun 2013 14:21:29 +0300 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: <51B04879.7020404@redhat.com> References: <51AE6783.5040602@redhat.com> <51AEE572.70409@redhat.com> <51AF9C15.6050507@redhat.com> <51B04879.7020404@redhat.com> Message-ID: <51B070B9.6060900@redhat.com> On 06/06/2013 11:29 AM, Gary Kotton wrote: > On 06/05/2013 11:14 PM, Steven Dake wrote: >> On 06/05/2013 12:14 AM, Gary Kotton wrote: >>> On 06/05/2013 01:17 AM, Steven Dake wrote: >>>> On 06/04/2013 01:09 PM, S Manoo wrote: >>>>> Looking into this further, I'm observing the same error message >>>>> relating to timeouts talking to qpid in dhcp-agent.log after every >>>>> restart, perhaps this is why I'm unable to get any dhcp responses >>>>> to instances? Any suggestions on what's causing this and where I >>>>> might look to troubleshoot this further? >>> >>> When one restarts a host each process needs to register with the >>> message broker. If you are running all of the services on the same >>> host then they will only be able to connect when the qpid service is >>> up and running. This usually takes a few seconds after reboot. If a >>> service does not receive an answer from the qpid service then it >>> will wait and retry again. This is why you see the timeouts. The >>> wait is incremental. I have seen that all services are usually able >>> to connect within a minute of booting a host (we should try and >>> reduce this time). >>> >>> Please note that the quantum cli has an option: quantum agent-list. >>> This provides the list of agents, their status and the hosts that they >>> are running on. >>> >>> If you spin up an instance after the dhcp agent is up and running do >>> you see the problem? >>> >>>>> >>>> S Manoo, >>>> >>>> We may have just fixed a bug related to this problem which is not >>>> fixed in the preview. Please try the workaround in this bugzilla: >>>> >>>> https://bugzilla.redhat.com/show_bug.cgi?id=970453 >>> >>> This fix is good for an all-in-one setup but will not help if the >>> DHCP agent is running on another host. In Quantum we have the notion >>> of a network node. Please look at >>> https://docs.google.com/drawings/d/167gegaoTBZpd318b2JTgF_Qi9YdkIX8pcQ6YBJLUtGY/edit?usp=sharing >>> >>> If the message broker goes down (say, for example, a host reboot or >>> network problems) then the dhcp agent will try and reconnect. >>> >> Gary, >> >> I have found the dhcp agent stops responding permanently in this >> condition on an all-in-one setup. Perhaps the same is true for >> multinode (i.e. the retry logic doesn't work as expected). I don't >> have multiple nodes to test, but it might be worth double-checking if >> you do. > > I have done the following check (on an all-in-one setup): > 1. reboot host > 2. stop quantum service > 3. check that the dhcp agent has a timeout with the quantum service > 4. restart quantum service > > I see a number of issues which I am going to investigate: > > 1. 
The agent is up: > [root at dhcp-4-126 ~(keystone_admin)]# quantum agent-list > +--------------------------------------+--------------------+---------------------------+-------+----------------+ > | id | agent_type | > host | alive | admin_state_up | > +--------------------------------------+--------------------+---------------------------+-------+----------------+ > | 11e35126-6c07-4a2f-b681-399cdbc8210d | L3 agent | > dhcp-4-126.tlv.redhat.com | :-) | True | > | 5e75d5d9-edb0-462e-850b-013ad7a518f4 | DHCP agent | > dhcp-4-126.tlv.redhat.com | :-) | True | > | af51a50f-f45e-4736-8517-ed3cda759b3c | Open vSwitch agent | > dhcp-4-126.tlv.redhat.com | :-) | True | > | d7588cb1-b287-4b4c-a8a8-539d4c5129b2 | Open vSwitch agent | > dhcp-4-227.tlv.redhat.com | :-) | True | > +--------------------------------------+--------------------+---------------------------+-------+----------------+ > [root at dhcp-4-126 ~(keystone_admin)]# > This means that the agent successfully sent a message to the plugin. > > 2. In the DHCP log there are timeouts with the qpid service and no > notification of a resync (which used to happen in Folsom). > > I am on it and will post on any progress. > > A few days ago I had a problem with nova compute which seemed similar to > this. I hope that it is not an issue with qpid and is solely related to > the dhcp agent (which is easier for me to address) I have found the problem and am pushing patches. Thanks Gary > >> >> Regards >> -steve >> >>>> >>>> Regards >>>> -steve >>>> >>>> >>>>> */var/log/quantum/dhcp-agent.log:* >>>>> 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! >>>>> 2013-06-04 12:50:44 INFO >>>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >>>>> on localhost:5672 >>>>> 2013-06-04 12:50:44 INFO >>>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP server >>>>> on localhost:5672 >>>>> 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent >>>>> started >>>>> 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed >>>>> reporting state! 
>>>>> Traceback (most recent call last): >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >>>>> line 700, in _report_state >>>>> self.agent_state) >>>>> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", >>>>> line 66, in report_state >>>>> topic=self.topic) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>>>> line 80, in call >>>>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>>>> line 140, in call >>>>> return _get_impl().call(CONF, context, topic, msg, timeout) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 611, in call >>>>> rpc_amqp.get_connection_pool(conf, Connection)) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 613, in call >>>>> rv = list(rv) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 555, in __iter__ >>>>> self.done() >>>>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>>>> self.gen.next() >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 552, in __iter__ >>>>> self._iterator.next() >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 436, in iterconsume >>>>> yield self.ensure(_error_callback, _consume) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 380, in ensure >>>>> error_callback(e) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 421, in _error_callback >>>>> raise rpc_common.Timeout() >>>>> Timeout: Timeout while waiting on RPC response. >>>>> 2013-06-04 12:51:44 WARNING >>>>> [quantum.openstack.common.loopingcall] task run outlasted interval >>>>> by 56.108887 sec >>>>> 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] >>>>> Synchronizing state >>>>> >>>>> >>>>> >>>>> >>>>> On Mon, Jun 3, 2013 at 11:28 PM, S Manoo >>>> > wrote: >>>>> >>>>> >>>>> >>>>> *dhcp-agent.log:* >>>>> [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log >>>>> 2013-06-03 22:27:09 INFO [quantum.common.config] Logging >>>>> enabled! >>>>> 2013-06-03 22:27:09 INFO >>>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>>>> server on 10.0.0.19:5672 >>>>> 2013-06-03 22:27:09 INFO >>>>> [quantum.openstack.common.rpc.impl_qpid] Connected to AMQP >>>>> server on 10.0.0.19:5672 >>>>> 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP >>>>> agent started >>>>> 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed >>>>> reporting state! 
>>>>> Traceback (most recent call last): >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", line >>>>> 700, in _report_state >>>>> self.agent_state) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line >>>>> 66, in report_state >>>>> topic=self.topic) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >>>>> line 80, in call >>>>> return rpc.call(context, self._get_topic(topic), msg, timeout) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >>>>> line 140, in call >>>>> return _get_impl().call(CONF, context, topic, msg, timeout) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 611, in call >>>>> rpc_amqp.get_connection_pool(conf, Connection)) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 613, in call >>>>> rv = list(rv) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 555, in __iter__ >>>>> self.done() >>>>> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >>>>> self.gen.next() >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >>>>> line 552, in __iter__ >>>>> self._iterator.next() >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 436, in iterconsume >>>>> yield self.ensure(_error_callback, _consume) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 380, in ensure >>>>> error_callback(e) >>>>> File >>>>> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >>>>> line 421, in _error_callback >>>>> raise rpc_common.Timeout() >>>>> Timeout: Timeout while waiting on RPC response. >>>>> 2013-06-03 22:28:10 WARNING >>>>> [quantum.openstack.common.loopingcall] task run outlasted >>>>> interval by 56.133099 sec >>>>> 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] >>>>> Synchronizing state >>>>> [root at grizzly ~(keystone_admin)]# >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> rhos-list mailing list >>>>> rhos-list at redhat.com >>>>> https://www.redhat.com/mailman/listinfo/rhos-list >>>> >>>> >>>> >>>> _______________________________________________ >>>> rhos-list mailing list >>>> rhos-list at redhat.com >>>> https://www.redhat.com/mailman/listinfo/rhos-list >>> >> > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From ar4unix at gmail.com Thu Jun 6 15:16:39 2013 From: ar4unix at gmail.com (Angel Rosario) Date: Thu, 6 Jun 2013 11:16:39 -0400 Subject: [rhos-list] Horizon issues with packstack Message-ID: Hello All: I am experiencing a redirect issue in horizon. When I attempt to login I get the 'Something went wrong!' page, I then check the apache logs, and horizon logs respectively and the only errors I get are the following: incomplete redirection target of '/dashboard/' for URI '/' modified to ' http://nova/dashboard/' Any and all assistance is greatly appreciated. Thanks, Angel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prmarino1 at gmail.com Thu Jun 6 22:14:01 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 6 Jun 2013 18:14:01 -0400 Subject: [rhos-list] I found a problem in EPEL Message-ID: lzop-1.02-0.9.rc1.el6.x86_64 exists in all EL6 variants and there is another one from a different compile in EPEL. This kills builds of RDO servers via Spacewalk. Note that if it affects Spacewalk then this also affects Red Hat Network Satellite as well. I'll open a ticket with EPEL later, but in case anyone has been having difficulties with this, deleting it from your local copy of EPEL should work around it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smanoo76 at gmail.com Fri Jun 7 03:45:42 2013 From: smanoo76 at gmail.com (S Manoo) Date: Thu, 6 Jun 2013 20:45:42 -0700 Subject: [rhos-list] Problems with quantum and dhcp-agent In-Reply-To: <51AE6783.5040602@redhat.com> References: <51AE6783.5040602@redhat.com> Message-ID: This workaround did not help unfortunately. I'll wait for the other patches mentioned in this thread, and will continue to investigate. On Tue, Jun 4, 2013 at 3:17 PM, Steven Dake wrote: > On 06/04/2013 01:09 PM, S Manoo wrote: > > Looking into this further, I'm observing the same error message relating > to timeouts talking to qpid in dhcp-agent.log after every restart, perhaps > this is why I'm unable to get any dhcp responses to instances? Any > suggestions on what's causing this and where I might look to troubleshoot > this further? > > S Manoo, > > We may have just fixed a bug related to this problem which is not fixed in > the preview. Please try the workaround in this bugzilla: > > https://bugzilla.redhat.com/show_bug.cgi?id=970453 > > Regards > -steve > > > */var/log/quantum/dhcp-agent.log:* > 2013-06-04 12:50:44 INFO [quantum.common.config] Logging enabled! > 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on localhost:5672 > 2013-06-04 12:50:44 INFO [quantum.openstack.common.rpc.impl_qpid] > Connected to AMQP server on localhost:5672 > 2013-06-04 12:50:44 INFO [quantum.agent.dhcp_agent] DHCP agent started > 2013-06-04 12:51:44 ERROR [quantum.agent.dhcp_agent] Failed reporting > state! 
> Traceback (most recent call last): > File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", > line 700, in _report_state > self.agent_state) > File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, > in report_state > topic=self.topic) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", > line 80, in call > return rpc.call(context, self._get_topic(topic), msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", > line 140, in call > return _get_impl().call(CONF, context, topic, msg, timeout) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 611, in call > rpc_amqp.get_connection_pool(conf, Connection)) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 613, in call > rv = list(rv) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 555, in __iter__ > self.done() > File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > self.gen.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", > line 552, in __iter__ > self._iterator.next() > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 436, in iterconsume > yield self.ensure(_error_callback, _consume) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 380, in ensure > error_callback(e) > File > "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", > line 421, in _error_callback > raise rpc_common.Timeout() > Timeout: Timeout while waiting on RPC response. > 2013-06-04 12:51:44 WARNING [quantum.openstack.common.loopingcall] task > run outlasted interval by 56.108887 sec > 2013-06-04 12:51:44 INFO [quantum.agent.dhcp_agent] Synchronizing state > > > > > On Mon, Jun 3, 2013 at 11:28 PM, S Manoo wrote: > >> >> >> *dhcp-agent.log:* >> [root at grizzly ~(keystone_admin)]# cat dhcp-agent.log >> 2013-06-03 22:27:09 INFO [quantum.common.config] Logging enabled! >> 2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] >> Connected to AMQP server on 10.0.0.19:5672 >> 2013-06-03 22:27:09 INFO [quantum.openstack.common.rpc.impl_qpid] >> Connected to AMQP server on 10.0.0.19:5672 >> 2013-06-03 22:27:10 INFO [quantum.agent.dhcp_agent] DHCP agent started >> 2013-06-03 22:28:10 ERROR [quantum.agent.dhcp_agent] Failed reporting >> state! 
>> Traceback (most recent call last): >> File "/usr/lib/python2.6/site-packages/quantum/agent/dhcp_agent.py", >> line 700, in _report_state >> self.agent_state) >> File "/usr/lib/python2.6/site-packages/quantum/agent/rpc.py", line 66, >> in report_state >> topic=self.topic) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/proxy.py", >> line 80, in call >> return rpc.call(context, self._get_topic(topic), msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/__init__.py", >> line 140, in call >> return _get_impl().call(CONF, context, topic, msg, timeout) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 611, in call >> rpc_amqp.get_connection_pool(conf, Connection)) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 613, in call >> rv = list(rv) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 555, in __iter__ >> self.done() >> File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ >> self.gen.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/amqp.py", >> line 552, in __iter__ >> self._iterator.next() >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 436, in iterconsume >> yield self.ensure(_error_callback, _consume) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 380, in ensure >> error_callback(e) >> File >> "/usr/lib/python2.6/site-packages/quantum/openstack/common/rpc/impl_qpid.py", >> line 421, in _error_callback >> raise rpc_common.Timeout() >> Timeout: Timeout while waiting on RPC response. >> 2013-06-03 22:28:10 WARNING [quantum.openstack.common.loopingcall] task >> run outlasted interval by 56.133099 sec >> 2013-06-03 22:28:10 INFO [quantum.agent.dhcp_agent] Synchronizing >> state >> [root at grizzly ~(keystone_admin)]# >> > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Sun Jun 9 18:30:22 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Sun, 9 Jun 2013 14:30:22 -0400 Subject: [rhos-list] I found a problem in EPEL In-Reply-To: References: Message-ID: I found another one of these packages: libart_lgpl-2.3.20-5.1.el6.x86_64. It's in EPEL and the base OS. It pops up as an issue when I tried to kickstart with RDO Grizzly and EPEL. On Thu, Jun 6, 2013 at 6:14 PM, Paul Robert Marino wrote: > lzop-1.02-0.9.rc1.el6.x86_64 exists in all EL6 variants and there is > another one from a different compile in EPEL. > This kills builds of RDO servers via Spacewalk. > > Note that if it affects Spacewalk then this also affects Red Hat Network > Satellite as well. > > I'll open a ticket with EPEL later, but in case anyone has been having > difficulties with this, deleting it from your local copy of EPEL should work > around it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Mon Jun 10 09:25:50 2013 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 10 Jun 2013 11:25:50 +0200 Subject: [rhos-list] Horizon issues with packstack In-Reply-To: References: Message-ID: <51B59B9E.6070904@redhat.com> On 06/06/2013 05:16 PM, Angel Rosario wrote: > Hello All: > > I am experiencing a redirect issue in horizon. 
When I attempt to login I > get the 'Something went wrong!' page, I then check the apache logs, and > horizon logs respectively and the only errors I get are the following: > > incomplete redirection target of '/dashboard/' for URI '/' modified to > 'http://nova/dashboard/' > > Any and all assistance is greatly appreciated. What version are you using? (rpm -q openstack-dashboard) How are you trying to access the login page? http://..../dashboard ? Or what are you trying to accomplish? Did you make any changes to the http configs? And did you tweak the hosts file? It looks like 'nova' is a host in your network, right? Best regards, Matthias From lchristoph at arago.de Tue Jun 11 11:44:22 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 11 Jun 2013 11:44:22 +0000 Subject: [rhos-list] Glance API can't authenticate with swift proxy Message-ID: Hi! I'm trying to set up OpenStack Grizzly with the Red Hat packages, and I'm flabbergasted by a problem between the glance API daemon and the swift proxy. Here is a piece of strace from the proxy: 25807 recvfrom(9, "POST /v1/tokens HTTP/1.1\r\nHost: 192.168.101.118:8080\r\nContent-Length: 105\r\nContent-Type: application/json\r\nAccept-Encoding: gzip, deflate, compress\r\nAccept: */*\r\nUser-Agent: python-keystoneclient\r\n\r\n{\"auth\": {\"tenantName\": \"service\", \"passwordCredentials\": {\"username\": \"swift\", \"password\": \"bar\"}}}", 8192, 0, NULL, NULL) = 304 25807 getsockname(9, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("192.168.101.118")}, [16]) = 0 25807 gettimeofday({1370936801, 180804}, NULL) = 0 25807 gettimeofday({1370936801, 181775}, NULL) = 0 25807 sendto(7, "<132>proxy-server Unable to find authentication token in headers\0", 65, 0, NULL, 0) = 65 25807 gettimeofday({1370936801, 182507}, NULL) = 0 25807 sendto(7, "<134>proxy-server Invalid user token - rejecting request\0", 57, 0, NULL, 0) = 57 25807 gettimeofday({1370936801, 183435}, NULL) = 0 25807 sendto(9, "HTTP/1.1 401 Unauthorized\r\nContent-Type: text/html; charset=UTF-8\r\nWww-Authenticate: Keystone uri='http://127.0.0.1:35357'\r\nContent-Length: 387\r\nDate: Tue, 11 Jun 2013 07:46:41 GMT\r\n\r\n\n \n 401 Unauthorized\n \n \n
401 Unauthorized \n This server could not verify that you are authorized to\r\naccess the document you requested. Either you supplied the\r\nwrong credentials (e.g., bad password), or your browser\r\ndoes not understand how to supply the credentials required.\r\n
\nAuthentication required\n\n\n \n", 571, 0, NULL, 0) = 571 The proxy code seems to want an X-Auth-Token header, which the glance code duly sends to other daemons. Here is the config on the glance side (assuming the proxy is right to complain): sql_connection = mysql://glance:foo at 192.168.101.118/glance default_store = swift swift_store_auth_version = 2 swift_store_auth_address = http://192.168.101.118:8080/v1/ swift_store_user = service:swift swift_store_key = foo Any clues? I've been stuck for a day now. Best regards / Mit freundlichen Grüßen Lutz Christoph -- Lutz Christoph arago Institut für komplexes Datenmanagement AG Eschersheimer Landstraße 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: Königstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From flavio at redhat.com Tue Jun 11 12:12:07 2013 From: flavio at redhat.com (Flavio Percoco) Date: Tue, 11 Jun 2013 14:12:07 +0200 Subject: [rhos-list] Glance API can't authenticate with swift proxy In-Reply-To: References: Message-ID: <20130611121207.GB9679@redhat.com> On 11/06/13 11:44 +0000, Lutz Christoph wrote: > Hi! > I'm trying to set up OpenStack Grizzly with the Red Hat packages, and > I'm flabbergasted by a problem between the glance API daemon and the > swift proxy. Here is a piece of strace from the proxy: > 25807 recvfrom(9, "POST /v1/tokens HTTP/1.1\r\nHost: > 192.168.101.118:8080\r\nContent-Length: 105\r\nContent-Type: > application/json\r\nAccept-Encoding: gzip, deflate, compress\r\nAccept: > */*\r\nUser-Agent: python-keystoneclient\r\n\r\n{\"auth\": > {\"tenantName\": \"service\", \"passwordCredentials\": {\"username\": > \"swift\", \"password\": \"bar\"}}}", 8192, 0, NULL, NULL) = 304 > 25807 getsockname(9, {sa_family=AF_INET, sin_port=htons(8080), > sin_addr=inet_addr("192.168.101.118")}, [16]) = 0 > 25807 gettimeofday({1370936801, 180804}, NULL) = 0 > 25807 gettimeofday({1370936801, 181775}, NULL) = 0 > 25807 sendto(7, "<132>proxy-server Unable to find authentication token > in headers\0", 65, 0, NULL, 0) = 65 > 25807 gettimeofday({1370936801, 182507}, NULL) = 0 > 25807 sendto(7, "<134>proxy-server Invalid user token - rejecting > request\0", 57, 0, NULL, 0) = 57 > 25807 gettimeofday({1370936801, 183435}, NULL) = 0 > 25807 sendto(9, "HTTP/1.1 401 Unauthorized\r\nContent-Type: text/html; > charset=UTF-8\r\nWww-Authenticate: Keystone > uri='http://127.0.0.1:35357'\r\nContent-Length: 387\r\nDate: Tue, 11 > Jun 2013 07:46:41 GMT\r\n\r\n\n \n 401 > Unauthorized\n \n \n
401 Unauthorized \n > This server could not verify that you are authorized to\r\naccess the > document you requested. Either you supplied the\r\nwrong credentials > (e.g., bad password), or your browser\r\ndoes not understand how to > supply the credentials required.\r\n
\nAuthentication > required\n\n\n \n", 571, 0, NULL, 0) = 571 > The proxy code seems to want a X-Auth-Token header, which the glance > code duely send to other daemons. > Here is the config on the glance side (assuming the proxy is right to > complain): > sql_connection = mysql://glance:foo at 192.168.101.118/glance > default_store = swift > swift_store_auth_version = 2 > swift_store_auth_address = http://192.168.101.118:8080/v1/ > swift_store_user = service:swift > swift_store_key = foo > Any clues? I've been stuck for a day now. Hi, Could you please send the output of the glanceclient command as well? (please use -d -v flags). This seems to be something related to keystone. Are you able to authenticate to keystone and use other glance actions not requiring swift? Cheers, FF -- @flaper87 Flavio Percoco From lchristoph at arago.de Tue Jun 11 12:29:27 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 11 Jun 2013 12:29:27 +0000 Subject: [rhos-list] Glance API can't authenticate with swift proxy In-Reply-To: <20130611121207.GB9679@redhat.com> References: , <20130611121207.GB9679@redhat.com> Message-ID: <456edeeeffc24efea37d0ac94b62c61c@DB3PR07MB010.eurprd07.prod.outlook.com> > Von: Flavio Percoco > Gesendet: Dienstag, 11. Juni 2013 14:12 > An: Lutz Christoph > Cc: rhos-list at redhat.com; Holger Schulz > Betreff: Re: [rhos-list] Glance API can't authenticate with swift proxy > Could you please send the output of the glanceclient command as well? > (please use -d -v flags). [root at rhopenstack ~(keystone_admin)]# glance -d -v image-create --name "grml64" --is-public true --disk-format iso --container-format bare --file /tmp/images/grml64-small_2013.02.iso curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 174063616' -H 'x-image-meta-is_public: True' -H 'X-Auth-Token: 
MIIGzwYJKoZIhvcNAQcCoIIGwDCCBrwCAQExCTAHBgUrDgMCGjCCBagGCSqGSIb3DQEHAaCCBZkEggWVeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNi0xMVQxMjoxOToyMS4wNTQ1OTQiLCAiZXhwaXJlcyI6ICIyMDEzLTA2LTEyVDEyOjE5OjIxWiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguMTAxLjExODo4MDgwL3YxL0FVVEhfNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjgwODAvdjEvQVVUSF80OTIyYTY0NDNiOTM0N2QxOGY2N2M4NmJmYjcyMDIyYiIsICJpZCI6ICJhZDdhYWE2YzQzZGQ0YTdhOWJmNTQyN2NiOTI4ZGE1NiIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMTAxLjExODo4MDgwL3YxL0FVVEhfNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAib2JqZWN0LXN0b3JlIiwgIm5hbWUiOiAic3dpZnQifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjkyOTIiLCAiaWQiOiAiZDUyNTljZTFhMWIxNGU2NGIzMTMyZDJhZTk2OTQ5OGMiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjEwMS4xMTg6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9yaG9wZW5zdGFjazozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3Job3BlbnN0YWNrOjUwMDAvdjIuMCIsICJpZCI6ICJiMDEzYWIyMzY0NTI0NGMxYmVjYzBhZTlmZjY3NDhiMyIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3Job3BlbnN0YWNrOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogImY0MWVmYmE3NDY4YjQxYWNiNWU4MGY0YjgwZTA4YmFlIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjY0NWMxNmYxZTBlNDk1ZWE0OWVmYzY1MmNiYmViNWEiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYBLNbyLN9OWUXjleSw8hIZ5cq1bROIsW3HZ-YsGgcpYemKEkr1cEGSRuIvpAfRpvo+HnbRACEKfhQzoIYxE4HE9Y3REkLnRxqNthxQIwu-USLiiMAyNyXKfqZG1mlZDNEIhT33M4Xp5ZyensLFrB4aVIJoLAtJTvLNdCM2898BIaA==' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: iso' -H 'x-image-meta-name: grml64' -d '' http://192.168.101.118:9292/v1/images HTTP/1.1 500 Internal Server Error date: Tue, 11 Jun 2013 12:19:21 GMT content-length: 114 content-type: text/plain; charset=UTF-8 x-openstack-request-id: req-9e5ae3c4-14c0-4e8a-87b6-5d4e12cfc06e 500 Internal Server Error The server has either erred or is incapable of performing the requested operation. Request returned failure status. 500 Internal Server Error The server has either erred or is incapable of performing the requested operation. (HTTP 500) > This seems to be something related to keystone. Are you able to > authenticate to keystone and use other glance actions not requiring > swift? glance image-list runs without complaint,. but returns an empty list (as expected). 
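(An aside on the trace above: glance is POSTing /v1/tokens to the swift proxy on port 8080, while with swift_store_auth_version = 2 the swift_store_auth_address would normally point at a keystone v2.0 endpoint, so that setting is worth double-checking. The token request glance attempts can also be reproduced by hand -- a minimal sketch, in which the keystone URL, tenant name and password are assumptions to be adjusted to your environment:

curl -s -X POST http://192.168.101.118:5000/v2.0/tokens \
     -H 'Content-Type: application/json' \
     -d '{"auth": {"tenantName": "service", "passwordCredentials": {"username": "swift", "password": "foo"}}}'

If this returns a token and a service catalog, the swift credentials themselves are fine and the problem is on the glance store configuration side.)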
The glance client uses Keystone and passes the token to the glance daemons: 30080 recvfrom(6, "POST //v1/images HTTP/1.1\r\nHost: 192.168.101.118:9292\r\nAccept-Encoding: identity\r\nx-image-meta-container_format: bare\r\nTransfer-Encoding: chunked\r\nUser-Agent: python-glanceclient\r\nx-image-meta-size: 174063616\r\nx-image-meta-is_public: True\r\nX-Auth-Token: MIIGzwYJKoZIhvcNAQcCoIIGwDCCBrwCAQExCTAHBgUrDgMCGjCCBagGCSqGSIb3DQEHAaCCBZkEggWVeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNi0xMVQwODozOTo1NS44MjQ1NzUiLCAiZXhwaXJlcyI6ICIyMDEzLTA2LTEyVDA4OjM5OjU1WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguMTAxLjExODo4MDgwL3YxL0FVVEhfNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjgwODAvdjEvQVVUSF80OTIyYTY0NDNiOTM0N2QxOGY2N2M4NmJmYjcyMDIyYiIsICJpZCI6ICJhZDdhYWE2YzQzZGQ0YTdhOWJmNTQyN2NiOTI4ZGE1NiIsICJwdWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMTAxLjExODo4MDgwL3YxL0FVVEhfNDkyMmE2NDQzYjkzNDdkMThmNjdjODZiZmI3MjAyMmIifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAib2JqZWN0LXN0b3JlIiwgIm5hbWUiOiAic3dpZnQifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC4xMDEuMTE4OjkyOTIiLCAiaWQiOiAiZDUyNTljZTFhMWIxNGU2NGIzMTMyZDJhZTk2OTQ5OGMiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjEwMS4xMTg6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9yaG9wZW5zdGFjazozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL3Job3BlbnN0YWNrOjUwMDAvdjIuMCIsICJpZCI6ICJiMDEzYWIyMzY0NTI0NGMxYmVjYzBhZTlmZjY3NDhiMyIsICJwdWJsaWNVUkwiOiAiaHR0cDovL3Job3BlbnN0YWNrOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogImY0MWVmYmE3NDY4YjQxYW"..., 8192, 0, NULL, NULL) = 8192 Thanks! Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 ________________________________________ From pmyers at redhat.com Wed Jun 12 03:04:36 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 11 Jun 2013 23:04:36 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: <51A21873.4080509@redhat.com> References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> <15554A73-0676-406D-A66C-8277845025CB@gmail.com> <51A21873.4080509@redhat.com> Message-ID: <51B7E544.6080009@redhat.com> On 05/26/2013 10:13 AM, Perry Myers wrote: > Just so folks know... > > We're working to get a kernel out on RDO based on the latest RHEL 6.4.z > kernel that contains the netns functionality. 
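(A quick aside for anyone checking a kernel before deploying: netns support can be smoke-tested from the shell -- a rough sketch, where the namespace name "nstest" is arbitrary:

# create, list and remove a throwaway network namespace
ip netns add nstest
ip netns list
ip netns delete nstest

On a kernel without namespace support the first command fails straight away, which is exactly what the Quantum L3 and DHCP agents would run into.)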
> > Hopefully in the next week or two we should be able to put this on the > RDO repos. > > We are just working out the minimal patch set required to backport the > functionality from upstream into the RHEL 6 kernel line, and validating > that netns works well enough to satisfy the use cases that OpenStack > Networking needs it for. > > More info as we get it Sorry for the delayed notification here, but just to let folks know. We do have a kernel in the RDO repos right now, that is based on the same base kernel as RHEL 6.4.z 2.6.32-358.6.2 kernel. This kernel is basically 358.6.2 + netns patches, and should work well for using Quantum in RDO. http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-2.6.32-358.6.2.openstack.el6.x86_64.rpm One note is that Red Hat has released to RHN the next 6.4.z update of the kernel (358.11.1), and we are working to get an updated netns enabled kernel based on this latest z-stream. An additional note is that this RDO kernel is specifically for the community. It is not a RHEL kernel and is not officially supported in any way. Installing this kernel on RHEL systems will impact the support of the baseOS and Red Hat support will need you to revert to a fully supported kernel in order to get support for any kernel issues you encounter. Cheers, Perry From rich.minton at lmco.com Fri Jun 14 15:34:15 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 14 Jun 2013 15:34:15 +0000 Subject: [rhos-list] Passing user-data. Message-ID: Has anyone had any success passing user-data within the Horizon Dashboard when launching an instance? If so, what is the secret? Thanks in advance. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Fri Jun 14 19:10:11 2013 From: pmyers at redhat.com (Perry Myers) Date: Fri, 14 Jun 2013 15:10:11 -0400 Subject: [rhos-list] Passing user-data. In-Reply-To: References: Message-ID: <51BB6A93.4070701@redhat.com> Adding some of our horizon folks in case they miss this On 06/14/2013 11:34 AM, Minton, Rich wrote: > Has anyone had any success passing user-data within the Horizon > Dashboard when launching an instance? If so, what is the secret? > > > > Thanks in advance. > > Rick From rich.minton at lmco.com Fri Jun 14 19:35:10 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 14 Jun 2013 19:35:10 +0000 Subject: [rhos-list] EXTERNAL: Re: Passing user-data. In-Reply-To: <51BB6A93.4070701@redhat.com> References: <51BB6A93.4070701@redhat.com> Message-ID: Additional info: I have the latest cloud-init installed in my images, cloud-init-0.7.1-2.el6.noarch. When I run "curl http://169.254.169.254/2009-04-04/user-data" inside my instance the command returns the metadata I passed in the "Configuration Script" field in Horizon Dashboard but the script is not run during boot of the instance. Rick -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: Friday, June 14, 2013 3:10 PM To: Minton, Rich; Julie Pichon; Matthias Runge Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Passing user-data. Adding some of our horizon folks in case they miss this On 06/14/2013 11:34 AM, Minton, Rich wrote: > Has anyone had any success passing user-data within the Horizon > Dashboard when launching an instance? If so, what is the secret? > > > > Thanks in advance. 
> > Rick From sdake at redhat.com Fri Jun 14 20:02:35 2013 From: sdake at redhat.com (Steven Dake) Date: Fri, 14 Jun 2013 13:02:35 -0700 Subject: [rhos-list] EXTERNAL: Re: Passing user-data. In-Reply-To: References: <51BB6A93.4070701@redhat.com> Message-ID: <51BB76DB.20303@redhat.com> On 06/14/2013 12:35 PM, Minton, Rich wrote: > Additional info: > > I have the latest cloud-init installed in my images, cloud-init-0.7.1-2.el6.noarch. > When I run "curl http://169.254.169.254/2009-04-04/user-data" inside my instance the command returns the metadata I passed in the "Configuration Script" field in Horizon Dashboard but the script is not run during boot of the instance. > > > Rick Rick, Couple possibilities a) Cloud-init is not downloading the userdata b) The userdata is not formatted in a way that cloud-init can understand To eliminate A, check for a file called /var/lib/cloud/userdata.txt If this file is present, cloudinit has downloaded the userdata In order to actually run the data, the userdata must be formatted as a mime multipart message with a cloud-config file as well as a cloud-boothook mime type. I don't think there is a generic way to run a userdata command with cloudinit without formatting it into a mime multipart message. It may be that horizon doesn't do this for you, since the formatting of the mime multipart message is cloud-init specific. See: http://sdake.wordpress.com/2013/03/03/how-we-use-cloudinit-in-openstack-heat/ I believe the OpenStack docs and tools like Horizon (correct me if I'm wrong Horizon core devs) are designed with using cirros images in mind, which do a curl on the address you mentioned and run that script in rc.local. This is different then how cloud-init behaves. Regards -steve -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: Friday, June 14, 2013 3:10 PM To: Minton, Rich; Julie Pichon; Matthias Runge Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Passing user-data. Adding some of our horizon folks in case they miss this On 06/14/2013 11:34 AM, Minton, Rich wrote: >> Has anyone had any success passing user-data within the Horizon >> Dashboard when launching an instance? If so, what is the secret? >> >> >> >> Thanks in advance. >> >> Rick > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Sun Jun 16 18:27:12 2013 From: rkukura at redhat.com (Robert Kukura) Date: Sun, 16 Jun 2013 14:27:12 -0400 Subject: [rhos-list] Are we having two options to add VM's to br-int In-Reply-To: <51BDF017.8070700@redhat.com> References: <51BD1E18.3040903@redhat.com> <51BDC003.1030707@redhat.com> <51BDF017.8070700@redhat.com> Message-ID: <51BE0380.50304@redhat.com> On 06/16/2013 01:04 PM, jhsiao at redhat.com wrote: > On 06/16/2013 09:39 AM, Perry Myers wrote: >> Taking to rhos-dev and adding the other Quantum engineers in addition >> to Bob >> >> Jean, would you be okay with me taking this discussion to rhos-list or >> rdo-list? > Hi Perry, > > NP >> I feel like we are having purely technical discussions on downstream >> lists too much, and we should try to move questions like the one below >> to our upstream lists > > Agree. > > Thanks! > > Jean >> Thanks, >> >> Perry >> >> On 06/15/2013 10:08 PM, jhsiao at redhat.com wrote: >>> Hi Bob, >>> >>> With Folsom VM's were added to the integration bridge indirectly. 
>>> >>> Then, with earlier Gizzly, they were added to the bridge directly. >>> >>> I can see the point by adding them directly --- for performance reason. >>> >>> Now, with the latest Gizzly, they are being added to the br-int >>> indirectly again. >>> >>> So, are we having two options? If so, how to pick which option to >>> deploy? Hi Jean, There are two options in the grizzly release when using the openvswitch plugin. Nova must be configured to use LibvirtHybridOVSBridgeDriver as the VIF driver if security group functionality is needed. If security groups are not needed, then the LibvirtGenericVIFDriver VIF driver can be used. The former adds a traditional bridge and veth so that iptables rules are applied, while the latter connects the vNIC directly to the integration bridge. -Bob >>> >>> Thanks! >>> >>> Jean > From mrunge at redhat.com Mon Jun 17 10:18:58 2013 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 17 Jun 2013 12:18:58 +0200 Subject: [rhos-list] EXTERNAL: Re: Passing user-data. In-Reply-To: <51BB76DB.20303@redhat.com> References: <51BB6A93.4070701@redhat.com> <51BB76DB.20303@redhat.com> Message-ID: <51BEE292.5090302@redhat.com> On 14/06/13 22:02, Steven Dake wrote: > On 06/14/2013 12:35 PM, Minton, Rich wrote: >> Additional info: >> >> I have the latest cloud-init installed in my images, >> cloud-init-0.7.1-2.el6.noarch. When I run "curl >> http://169.254.169.254/2009-04-04/user-data" inside my instance the >> command returns the metadata I passed in the "Configuration Script" >> field in Horizon Dashboard but the script is not run during boot of >> the instance. >> >> >> Rick > Rick, > > Couple possibilities a) Cloud-init is not downloading the userdata b) > The userdata is not formatted in a way that cloud-init can > understand > > To eliminate A, check for a file called /var/lib/cloud/userdata.txt > > If this file is present, cloudinit has downloaded the userdata I could reproduce this to this point. What I did: made a simple script: touch /tmp/hello to be executed at image startup. This can be found at /var/lib/cloud/instance/user-data.txt So something has worked, but still, I can not see, that it has been executed somehow. /var/log/cloud-init.log even shows: Jun 17 03:36:19 localhost [CLOUDINIT] __init__.py[DEBUG]: handling scripts-per-instance with freq=None and args=[] Jun 17 03:36:19 localhost [CLOUDINIT] __init__.py[DEBUG]: handling scripts-user with freq=None and args=[] In Horizon, we don't have a special handling for user data. It is just passed to novaclient: novaclient(request).servers.create( name, image, flavor, userdata=user_data, security_groups=security_groups, key_name=key_name, block_device_mapping=block_device_mapping, nics=nics, availability_zone=availability_zone, min_count=instance_count, admin_pass=admin_pass) What I can find about Cloudinit is this: https://help.ubuntu.com/community/CloudInit Sadly, esp. the description about user-data points to a 404. Matthias From mrunge at redhat.com Mon Jun 17 10:30:47 2013 From: mrunge at redhat.com (Matthias Runge) Date: Mon, 17 Jun 2013 12:30:47 +0200 Subject: [rhos-list] EXTERNAL: Re: Passing user-data. In-Reply-To: References: <51BB6A93.4070701@redhat.com> Message-ID: <51BEE557.7010805@redhat.com> On 14/06/13 21:35, Minton, Rich wrote: > Additional info: > > I have the latest cloud-init installed in my images, cloud-init-0.7.1-2.el6.noarch. 
> When I run "curl http://169.254.169.254/2009-04-04/user-data" inside my instance the command returns the metadata I passed in the "Configuration Script" field in Horizon Dashboard but the script is not run during boot of the instance. > > > Rick I guess the trick is to add this, from the cloud-init page: User-Data Script: begins with "#!" or "Content-Type: text/x-shellscript"; the script will be executed at "rc.local-like" level during first boot. "rc.local-like" means "very late in the boot sequence". I got it working immediately when prepending: Content-Type: text/x-shellscript Matthias From rich.minton at lmco.com Mon Jun 17 13:45:23 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Mon, 17 Jun 2013 13:45:23 +0000 Subject: [rhos-list] EXTERNAL: Re: Passing user-data. In-Reply-To: <51BB76DB.20303@redhat.com> References: <51BB6A93.4070701@redhat.com> <51BB76DB.20303@redhat.com> Message-ID: I don't have a file "userdata.txt" in /var/lib/cloud/. I put cloud-init in debug mode but all I see is /var/lib/cloud/instance/boot-finished being written at completion. From: Steven Dake [mailto:sdake at redhat.com] Sent: Friday, June 14, 2013 4:03 PM To: Minton, Rich Cc: Perry Myers; Julie Pichon; Matthias Runge; rhos-list at redhat.com Subject: Re: [rhos-list] EXTERNAL: Re: Passing user-data. On 06/14/2013 12:35 PM, Minton, Rich wrote: Additional info: I have the latest cloud-init installed in my images, cloud-init-0.7.1-2.el6.noarch. When I run "curl http://169.254.169.254/2009-04-04/user-data" inside my instance the command returns the metadata I passed in the "Configuration Script" field in Horizon Dashboard but the script is not run during boot of the instance. Rick Rick, Couple possibilities a) Cloud-init is not downloading the userdata b) The userdata is not formatted in a way that cloud-init can understand To eliminate A, check for a file called /var/lib/cloud/userdata.txt If this file is present, cloudinit has downloaded the userdata In order to actually run the data, the userdata must be formatted as a mime multipart message with a cloud-config file as well as a cloud-boothook mime type. I don't think there is a generic way to run a userdata command with cloudinit without formatting it into a mime multipart message. It may be that horizon doesn't do this for you, since the formatting of the mime multipart message is cloud-init specific. See: http://sdake.wordpress.com/2013/03/03/how-we-use-cloudinit-in-openstack-heat/ I believe the OpenStack docs and tools like Horizon (correct me if I'm wrong Horizon core devs) are designed with using cirros images in mind, which do a curl on the address you mentioned and run that script in rc.local. This is different then how cloud-init behaves. Regards -steve -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: Friday, June 14, 2013 3:10 PM To: Minton, Rich; Julie Pichon; Matthias Runge Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Passing user-data. Adding some of our horizon folks in case they miss this On 06/14/2013 11:34 AM, Minton, Rich wrote: Has anyone had any success passing user-data within the Horizon Dashboard when launching an instance? If so, what is the secret? Thanks in advance. Rick _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... 
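(To put Matthias' fix in one place: a minimal user-data payload of the form described above -- the script body is only an illustration, taken from the touch /tmp/hello test earlier in the thread:

Content-Type: text/x-shellscript

#!/bin/sh
# runs once, late in the first boot ("rc.local-like")
touch /tmp/hello

Per the cloud-init behaviour quoted above, starting the payload with either "#!" or the "Content-Type: text/x-shellscript" line should be enough for cloud-init to execute it as a user script.)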
URL: From pmyers at redhat.com Tue Jun 18 11:41:09 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 18 Jun 2013 07:41:09 -0400 Subject: [rhos-list] Fwd: [Bug 975338] New: "quantum security-group-rule-list" from the "admin" tenant shows the security group rules of all tenants In-Reply-To: References: Message-ID: <51C04755.5060707@redhat.com> Bob, Interesting. In the lab session recently done at Red Hat Summit in Boston, we noticed on RHOS 3.0 Preview/RHEL 6.4 installs that users running as the admin tenant were seeing two 'default' security groups in Quantum. Maybe the below is the reason? Perhaps they were seeing the 'default' group for the admin tenant as well as the other user they created in keystone? Perry -------- Original Message -------- Subject: [Bug 975338] New: "quantum security-group-rule-list" from the "admin" tenant shows the security group rules of all tenants Date: Tue, 18 Jun 2013 07:14:20 +0000 From: bugzilla at redhat.com To: pmyers at redhat.com https://bugzilla.redhat.com/show_bug.cgi?id=975338 Bug ID: 975338 Summary: "quantum security-group-rule-list" from the "admin" tenant shows the security group rules of all tenants Product: Red Hat OpenStack Version: 3.0 Component: python-cliff Severity: medium Priority: unspecified Assignee: rhos-maint at redhat.com Reporter: rvaknin at redhat.com Version: Grizzly on rhel6.4 with openstack-quantum-2013.1.2-3.el6ost and python-cliff-1.3-1.el6ost (puddle 2013-06-13.2). Description: "quantum security-group-rule-list" running in the "admin" user context show security group rules of all tenants while it should show security group rules of the admin tenant. The list of all security group rules should appear only when the "--all-tenant" argument is in use. [root ~(keystone_admin)]# quantum security-group-rule-list +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | id | security_group | direction | protocol | remote_ip_prefix | remote_group | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | 04b69c0e-4fe1-44ba-b772-794d844e5101 | default | ingress | tcp | 0.0.0.0/0 | | | 19d17912-2e20-46d0-bf8d-1fc6c52220ce | default | egress | | | | | 1f158243-cb24-4950-803a-e19025e1ac9f | default | egress | | | | | 5acf9b3d-347c-483b-9ab4-e79f4d044918 | default | ingress | | | default | | 5bb0e605-3bab-45ae-bedd-898f484daec0 | default | ingress | icmp | 0.0.0.0/0 | | | 5cccde9b-ebae-450a-8590-5d36797ddd9c | default | ingress | | | default | | 6b5b5d71-123e-41ff-9b93-0b1db724b540 | default | egress | | | | | 7057ea12-44c1-4090-a93c-dd80ae1c6414 | default | egress | | | | | 8c53ad7b-565e-433b-809c-b69b40518ad3 | default | ingress | | | default | | 9bccf920-2da7-4566-b590-eb2fb091f0b2 | default | ingress | | | default | | af095e7f-55d1-4d90-ac29-7741424ade57 | default | egress | | | | | b7d7742d-11c3-428f-835f-6191b4303d15 | default | egress | | | | | ce1708e0-db0b-41f2-894f-d630d63069fe | default | ingress | | | default | | dc4cd283-6aa1-49a4-ac2d-9d1fd2296e1d | default | ingress | | | default | | e221c58f-f08b-4b18-a501-7d88c2b6fa27 | default | ingress | icmp | 0.0.0.0/0 | | | e77e4065-37a8-4f0d-ac06-4e826328e218 | default | ingress | tcp | 0.0.0.0/0 | | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ [root ~(keystone_admin)]# . 
keystonerc_vlan_186 [root ~(keystone_vlan_186)]$ quantum security-group-rule-list +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | id | security_group | direction | protocol | remote_ip_prefix | remote_group | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ | 19d17912-2e20-46d0-bf8d-1fc6c52220ce | default | egress | | | | | 9bccf920-2da7-4566-b590-eb2fb091f0b2 | default | ingress | | | default | | b7d7742d-11c3-428f-835f-6191b4303d15 | default | egress | | | | | dc4cd283-6aa1-49a4-ac2d-9d1fd2296e1d | default | ingress | | | default | | e221c58f-f08b-4b18-a501-7d88c2b6fa27 | default | ingress | icmp | 0.0.0.0/0 | | | e77e4065-37a8-4f0d-ac06-4e826328e218 | default | ingress | tcp | 0.0.0.0/0 | | +--------------------------------------+----------------+-----------+----------+------------------+--------------+ For instance, security group rule id "e77e4065-37a8-4f0d-ac06-4e826328e218" appears in the output of "quantum security-group-rule-list" command while running it from both the "admin" tenant and other non-admin tenant. -- You are receiving this mail because: You are watching the assignee of the bug. From roxenham at redhat.com Tue Jun 18 13:01:53 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Tue, 18 Jun 2013 14:01:53 +0100 Subject: [rhos-list] RHOS w/Quantum + UCS Message-ID: Hi All, Struggling to find some information as to whether we're going to be supporting plugins such as Cisco's UCS plugin with RHOS; we have a potential opportunity in the UK for a large bank who are moving a lot of their x86 estate to Cisco UCS. I've found a lot of technical information about it as well as our partnership with them but was hoping to get some commercial information around the support model if possible. Kindest Regards, Rhys -- Rhys Oxenham Cloud Solution Architect, Red Hat UK e: roxenham at redhat.com m: +44 (0)7866 446625 From cdubuque at redhat.com Tue Jun 18 13:10:19 2013 From: cdubuque at redhat.com (Chuck Dubuque) Date: Tue, 18 Jun 2013 09:10:19 -0400 (EDT) Subject: [rhos-list] RHOS w/Quantum + UCS In-Reply-To: References: Message-ID: <769324829.294446.1371561019159.JavaMail.root@redhat.com> We are launching the certification program for OpenStack Networking and Storage (including test suites) officially next month. Cisco will certify to that program, then those solutions will be certified for Red Hat OpenStack. Support would be like for any ISV--Red Hat would work with Cisco through TSAnet to solve any issues. Chuck Dubuque Senior Manager, Product Marketing, Red Hat Virtualization Business Unit cdubuque at redhat.com -- 650-450-4022 (cell) Try RHEV 3 - 60 day supported trial! http://red.ht/uE1lu3 ----- Original Message ----- > From: "Rhys Oxenham" > To: rhos-list at redhat.com > Sent: Tuesday, June 18, 2013 9:01:53 AM > Subject: [rhos-list] RHOS w/Quantum + UCS > > Hi All, > > Struggling to find some information as to whether we're going to be > supporting plugins such as Cisco's UCS plugin with RHOS; we have a potential > opportunity in the UK for a large bank who are moving a lot of their x86 > estate to Cisco UCS. I've found a lot of technical information about it as > well as our partnership with them but was hoping to get some commercial > information around the support model if possible. 
> > Kindest Regards,
> > Rhys
> >
> > --
> >
> > Rhys Oxenham
> > Cloud Solution Architect, Red Hat UK
> > e: roxenham at redhat.com
> > m: +44 (0)7866 446625
> >
> > _______________________________________________
> > rhos-list mailing list
> > rhos-list at redhat.com
> > https://www.redhat.com/mailman/listinfo/rhos-list
>

From shshang at cisco.com  Tue Jun 18 14:33:05 2013
From: shshang at cisco.com (Shixiong Shang (shshang))
Date: Tue, 18 Jun 2013 14:33:05 +0000
Subject: [rhos-list] RHEL Grizzly Release date
Message-ID: <6190AA83EB69374DABAE074D7E900F7512566BD7@xmb-aln-x13.cisco.com>

Hi, guys:

May I ask when the Red Hat version of the Grizzly release will be ready for trial? Thanks!

Shixiong

Shixiong Shang
Solution Architect
WWSP Digital Media Solution Architect
Cisco Services
CCIE R&S - #17235
shshang at cisco.com
Phone: +1 919 392 5192
Mobile: +1 919 272 1358
Cisco Systems, Inc.
7200-4 Kit Creek Road
RTP, NC 27709-4987
United States
Cisco.com
!--- Stay Hungry Stay Foolish ---!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pmyers at redhat.com  Tue Jun 18 16:25:17 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 18 Jun 2013 12:25:17 -0400
Subject: Re: [rhos-list] RHEL Grizzly Release date
In-Reply-To: <6190AA83EB69374DABAE074D7E900F7512566BD7@xmb-aln-x13.cisco.com>
References: <6190AA83EB69374DABAE074D7E900F7512566BD7@xmb-aln-x13.cisco.com>
Message-ID: <51C089ED.9080509@redhat.com>

On 06/18/2013 10:33 AM, Shixiong Shang (shshang) wrote:
> Hi, guys:
>
> May I ask when the Red Hat version of the Grizzly release will be ready for trial?

RHOS 3.0 (based on Grizzly) is already available as a pre-release (Preview). You can get access to the bits via Red Hat Network by signing up for an evaluation here:

http://www.redhat.com/openstack

We are continuing to resolve bugs in preparation for the release, so we will periodically upload new RPMs to the RHOS 3.0 Preview channel. For example, we are working to get the network-namespace-enabled kernel pushed to the RHOS 3.0 Preview channel presently. Hopefully it will be there very soon :)

Our plan for GA is still in early to mid July.

Cheers,

Perry

From pmyers at redhat.com  Tue Jun 18 16:27:52 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 18 Jun 2013 12:27:52 -0400
Subject: [rhos-list] Fwd: [Rdo-list] #rdo on freenode irc
In-Reply-To: <51C05D1D.7080806@redhat.com>
References: <51C05D1D.7080806@redhat.com>
Message-ID: <51C08A88.7060407@redhat.com>

Just FYI for folks on rhos-list that might not be on rdo-list :)

-------- Original Message --------
Subject: [Rdo-list] #rdo on freenode irc
Date: Tue, 18 Jun 2013 09:14:05 -0400
From: Perry Myers
To: rdo-list at redhat.com

For folks who want to participate in the RDO community via a more direct medium (vs. email and forums), we now have the #rdo channel on freenode IRC.

So, please join us on #rdo, and ask questions or help others.

Complete oversight that we didn't create this channel sooner!
Cheers,
Perry

_______________________________________________
Rdo-list mailing list
Rdo-list at redhat.com
https://www.redhat.com/mailman/listinfo/rdo-list

From pmyers at redhat.com  Tue Jun 18 18:45:19 2013
From: pmyers at redhat.com (Perry Myers)
Date: Tue, 18 Jun 2013 14:45:19 -0400
Subject: [rhos-list] Red Hat Summit 2013: OpenStack Keynotes, Videos & Presentations
Message-ID: <51C0AABF.60502@redhat.com>

http://openstack.redhat.com/forum/discussion/222/red-hat-summit-2013-openstack-keynotes-videos-presentations

From rich.minton at lmco.com  Wed Jun 19 14:48:17 2013
From: rich.minton at lmco.com (Minton, Rich)
Date: Wed, 19 Jun 2013 14:48:17 +0000
Subject: [rhos-list] Instance in shutdown state.
Message-ID:

RHOS community,

I'm having a problem where, if a compute node panics and reboots itself, all of the instances that were running on that node come up with Status = Error and State = Shutdown. I can set an instance to "Active" using "nova reset-state --active" but I cannot seem to get the instance to come out of the Shutdown state. I have tried "nova reboot --hard" but they will not come out of shutdown. This happened on two different compute nodes.

Anyone have any ideas?

Thank you,
Rick

Richard Minton
LMICC Systems Administrator
4000 Geerdes Blvd, 13D31
King of Prussia, PA 19406
Phone: 610-354-5482

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From prmarino1 at gmail.com  Wed Jun 19 16:18:30 2013
From: prmarino1 at gmail.com (Paul Robert Marino)
Date: Wed, 19 Jun 2013 12:18:30 -0400
Subject: [rhos-list] Instance in shutdown state.
In-Reply-To:
Message-ID: <51c1d9d8.aabc340a.60d0.ffffcc03@mx.google.com>

An HTML attachment was scrubbed...
URL:

From jrfuller at redhat.com  Wed Jun 19 21:01:18 2013
From: jrfuller at redhat.com (Johnray Fuller)
Date: Wed, 19 Jun 2013 17:01:18 -0400
Subject: [rhos-list] Question around VLANs in Quantum
In-Reply-To: <51C216B8.4060705@redhat.com>
References: <89073807A615C347AB3302B616522A181290879A@OYWEX0203N3.msad.ms.com> <51C1F96F.60006@redhat.com> <51C216B8.4060705@redhat.com>
Message-ID: <51C21C1E.5010800@redhat.com>

Hello,

In "net-show", I see the provider:segmentation_id is set to 1024 as per the configuration.

When one looks at the output of ovs-vsctl show, I see tags:

        Port "tap9744e841-99"
            tag: 2
            Interface "tap9744e841-99"

From what I can tell, the tag field is the VLAN tag. It appears that this is set automatically by OVS. Do I need to sync OVS tag with the quantum "segmentation_id"?

I assume not, but wanted to verify.

Thanks,
Johnray

From rkukura at redhat.com  Wed Jun 19 21:32:22 2013
From: rkukura at redhat.com (Robert Kukura)
Date: Wed, 19 Jun 2013 17:32:22 -0400
Subject: [rhos-list] Question around VLANs in Quantum
In-Reply-To: <51C21C1E.5010800@redhat.com>
References: <89073807A615C347AB3302B616522A181290879A@OYWEX0203N3.msad.ms.com> <51C1F96F.60006@redhat.com> <51C216B8.4060705@redhat.com> <51C21C1E.5010800@redhat.com>
Message-ID: <51C22366.8070908@redhat.com>

On 06/19/2013 05:01 PM, Johnray Fuller wrote:
> Hello,
>
> In "net-show", I see the provider:segmentation_id is set to 1024 as per
> the configuration.
>
> When one looks at the output of ovs-vsctl show, I see tags:
>         Port "tap9744e841-99"
>             tag: 2
>             Interface "tap9744e841-99"
>
> From what I can tell, the tag field is the VLAN tag. It appears that
> this is set automatically by OVS. Do I need to sync OVS tag with the
> quantum "segmentation_id"?
>
> I assume not, but wanted to verify.
Hi Johnray, The VLAN tags you see used on br-int are locally assigned by the openvswitch-agent. Flow rules are created in br-int and in the physical network bridge (br-ethX typically) that translate VLAN tags as packets pass over the veth connecting these bridges. # ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=947602.247s, table=0, n_packets=150980, n_bytes=37605726, idle_age=1, hard_age=65534, priority=1 actions=NORMAL cookie=0x0, duration=947598.9s, table=0, n_packets=16652192, n_bytes=1367674080, idle_age=0, hard_age=65534, priority=2,in_port=2 actions=drop cookie=0x0, duration=110.96s, table=0, n_packets=4, n_bytes=1158, idle_age=48, priority=3,in_port=2,dl_vlan=1000 actions=mod_vlan_vid:2,NORMAL # ovs-ofctl dump-flows br-eth2 NXST_FLOW reply (xid=0x4): cookie=0x0, duration=947604.202s, table=0, n_packets=17266871, n_bytes=1429653351, idle_age=0, hard_age=65534, priority=1 actions=NORMAL cookie=0x0, duration=947601.253s, table=0, n_packets=31, n_bytes=2318, idle_age=113, hard_age=65534, priority=2,in_port=4 actions=drop cookie=0x0, duration=113.882s, table=0, n_packets=107, n_bytes=6118, idle_age=1, priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:1000,NORMAL In this case, the local VLAN tag on br-int is 2, and the VLAN tag on the physical network (br-eth2) is 1000. -Bob > > Thanks, > Johnray > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From chrisw at redhat.com Wed Jun 19 21:40:44 2013 From: chrisw at redhat.com (Chris Wright) Date: Wed, 19 Jun 2013 14:40:44 -0700 Subject: [rhos-list] Question around VLANs in Quantum In-Reply-To: <51C21C1E.5010800@redhat.com> References: <89073807A615C347AB3302B616522A181290879A@OYWEX0203N3.msad.ms.com> <51C1F96F.60006@redhat.com> <51C216B8.4060705@redhat.com> <51C21C1E.5010800@redhat.com> Message-ID: <20130619214044.GL3615@x200.localdomain> * Johnray Fuller (jrfuller at redhat.com) wrote: > In "net-show", I see the provider:segmentation_id is set to 1024 as per the > configuration. > > When one looks at the output of ovs-vsctl show, I see tags: > Port "tap9744e841-99" > tag: 2 > Interface "tap9744e841-99" This is the device on br-int? > From what I can tell, the tag field is the VLAN tag. It appears that this is > set automatically by OVS. Do I need to sync OVS tag with the quantum > "segmentation_id"? > > I assume not, but wanted to verify. There are different tags involved here. One is local to the br-int which allows that bridge to behave like a learning switch. The other is the connectivity to the outside where that tag is translated to something that goes out on the wire. You can see this if you dump the flows (ovs-ofctl dump-flows ...) thanks, -chris From gkotton at redhat.com Thu Jun 20 06:12:09 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 20 Jun 2013 09:12:09 +0300 Subject: [rhos-list] Question around VLANs in Quantum In-Reply-To: <20130619214044.GL3615@x200.localdomain> References: <89073807A615C347AB3302B616522A181290879A@OYWEX0203N3.msad.ms.com> <51C1F96F.60006@redhat.com> <51C216B8.4060705@redhat.com> <51C21C1E.5010800@redhat.com> <20130619214044.GL3615@x200.localdomain> Message-ID: <51C29D39.5050207@redhat.com> On 06/20/2013 12:40 AM, Chris Wright wrote: > * Johnray Fuller (jrfuller at redhat.com) wrote: >> In "net-show", I see the provider:segmentation_id is set to 1024 as per the >> configuration. 
>> >> When one looks at the output of ovs-vsctl show, I see tags: >> Port "tap9744e841-99" >> tag: 2 >> Interface "tap9744e841-99" > This is the device on br-int? Yes, this is on br-int. > >> From what I can tell, the tag field is the VLAN tag. It appears that this is >> set automatically by OVS. Do I need to sync OVS tag with the quantum >> "segmentation_id"? >> >> I assume not, but wanted to verify. > There are different tags involved here. One is local to the br-int > which allows that bridge to behave like a learning switch. The other is > the connectivity to the outside where that tag is translated to > something that goes out on the wire. You can see this if you dump the > flows (ovs-ofctl dump-flows ...) The provider network had 1024 configured. +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 6578063e-4139-4c16-8c00-a90de07190e0 | | name | net1 | | provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 1024 | | router:external | False | | shared | False | | status | ACTIVE | | subnets | b0925ed1-ffba-45c2-a3da-b0a8d37c1838 | | tenant_id | 52fd866ba84344e3bb3b798fc66e67b9 | +---------------------------+--------------------------------------+ > > thanks, > -chris > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list From jrfuller at redhat.com Fri Jun 21 16:02:29 2013 From: jrfuller at redhat.com (Johnray Fuller) Date: Fri, 21 Jun 2013 12:02:29 -0400 Subject: [rhos-list] Quantum Query: Message-ID: <51C47915.8070907@redhat.com> Hello, We have a set up similar to the set up outlined here titled "Scenario 1: one tenant, two networks, one router": http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178 The one exception is that we only have one internal network. In our set up, 162f67e0-c1 appears to be the prefix to the ID of the external router port, e.g.: # quantum port-list | grep 162f67e0-c1 | 162f67e0-c1a6-476a-8c40-98fffdb51c6b | | fa:16:3e:e5:27:83 | {"subnet_id": "afe6f126-9ba7-48ca-8a51-2c033d0ebc17", "ip_address": "129.40.19.48"} | Here "129.40.19.48" is the router external address. Why is this interface connected to br-int? Isn't it supposed to be in br-ex? Any feedback would be appreciated. Thank you, Johnray -- Johnray Fuller Solutions Architect Red Hat Inc. jrfuller at redhat.com c: 917-453-8216 From rkukura at redhat.com Fri Jun 21 16:43:30 2013 From: rkukura at redhat.com (Robert Kukura) Date: Fri, 21 Jun 2013 12:43:30 -0400 Subject: [rhos-list] Quantum Query: In-Reply-To: <51C47915.8070907@redhat.com> References: <51C47915.8070907@redhat.com> Message-ID: <51C482B2.80906@redhat.com> On 06/21/2013 12:02 PM, Johnray Fuller wrote: > Hello, > > We have a set up similar to the set up outlined here titled "Scenario 1: > one tenant, two networks, one router": > > http://docs.openstack.org/grizzly/openstack-network/admin/content/under_the_hood_openvswitch.html#d6e1178 > > > The one exception is that we only have one internal network. 
> > In our set up, 162f67e0-c1 appears to be the prefix to the ID of the > external router port, e.g.: > > # quantum port-list | grep 162f67e0-c1 > | 162f67e0-c1a6-476a-8c40-98fffdb51c6b | | fa:16:3e:e5:27:83 | > {"subnet_id": "afe6f126-9ba7-48ca-8a51-2c033d0ebc17", "ip_address": > "129.40.19.48"} | > > Here "129.40.19.48" is the router external address. > > Why is this interface connected to br-int? Isn't it supposed to be in > br-ex? There are two different approaches to connecting a router to an external network. One uses an external bridge (br-ex), and bypasses the L2 agent. The other uses a provider external network, and the interface driver and L2 agent handle this network just like any other network (using br-int). To use a provider external network, just set: external_network_bridge = in /etc/quantum/l3_agent.ini and create the external network with the provider attributes describing it (typically provider:network_type is flat or vlan). The provider external network approach is more flexible - the external network can be a VLAN and can coexist on the same physical network with tenant networks, VMs can be connected directly to the external network, different routers can use different external networks, etc.. Also, this approach works with both openvswitch and linuxbridge, whereas the external bridge approach only works with openvswitch. I see that chapter 5 in the referenced upstream documentation may be adding to the confusion by sort of mixing the two approaches. It is specifying the provider details on the external network: quantum net-create --tenant-id $tenant public01 \ --provider:network_type flat \ --provider:physical_network physnet1 \ --router:external=True but the l3 agent is accessing it via br-ex where the provider details don't matter because the l2 agent is bypassed. If using the external bridge, I'd recommend creating the external network with provider:network_type of local as is shown in appendix A: quantum net-create Ext-Net --provider:network_type local \ --router:external true Hope this helps, -Bob > > Any feedback would be appreciated. > > Thank you, > Johnray > From shshang at cisco.com Fri Jun 21 18:17:54 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Fri, 21 Jun 2013 18:17:54 +0000 Subject: [rhos-list] Quantum Query: In-Reply-To: <51C482B2.80906@redhat.com> References: <51C47915.8070907@redhat.com> <51C482B2.80906@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F751256C3FF@xmb-aln-x13.cisco.com> Hi, Bob: Thanks a ton for the reply (even I didn't raise the original question). It is very informational and educational. I read it many many times and it helps me clarify some confusion as you rightly pointed out. But I still have some questions and would like to make sure I precisely understand what you mean here. 1) In my current setup, I created external network net_ext, but later on I realized that router cannot ping its default gateway. I have to manually set the tag for qg- port on br-eth3 (mapped to physnet3) to 311 in order to make the ping happen. Based on your explanation, is it because provider attributes in "quantum net-create" command are ignored by L3 agent, so no VLAN tagging takes place? 
I have to manually set the tag for qg- port on br-eth3 (mapped to physnet3) to 311 in order to make the ping happen. Based on your explanation, is it because provider attributes in "quantum net-create" command are ignored by L3 agent, so no VLAN tagging takes place?
quantum net-create --tenant-id 110ea394615d4cefa9824cf8829c841f net_ext --provider:network_type vlan --provider:physical_network physnet3 --provider:segmentation_id 311 --router:external=True 2) When provider attributes in "quantum net-create" command are ignored by L3 agent, there is no linkage between external network/subnet to a specific bridge, and that is why we need to manually put in "external_network_bridge" in l3_agent.ini. Is that correct? 3) If 1) and 2) are correct, then one limitation I can see by using "provider external network" approach is, we can only have a single external bridge and a single external network on quantum network node in order to formulate 1-to-1 mapping. Is that correct? 4) In "external bridge" approach, is there specific reason you recommend using network_type of local? Thanks a lot again! Shixiong On Jun 21, 2013, at 12:43 PM, Robert Kukura > wrote: There are two different approaches to connecting a router to an external network. One uses an external bridge (br-ex), and bypasses the L2 agent. The other uses a provider external network, and the interface driver and L2 agent handle this network just like any other network (using br-int). To use a provider external network, just set: external_network_bridge = in /etc/quantum/l3_agent.ini and create the external network with the provider attributes describing it (typically provider:network_type is flat or vlan). The provider external network approach is more flexible - the external network can be a VLAN and can coexist on the same physical network with tenant networks, VMs can be connected directly to the external network, different routers can use different external networks, etc.. Also, this approach works with both openvswitch and linuxbridge, whereas the external bridge approach only works with openvswitch. I see that chapter 5 in the referenced upstream documentation may be adding to the confusion by sort of mixing the two approaches. It is specifying the provider details on the external network: quantum net-create --tenant-id $tenant public01 \ --provider:network_type flat \ --provider:physical_network physnet1 \ --router:external=True but the l3 agent is accessing it via br-ex where the provider details don't matter because the l2 agent is bypassed. If using the external bridge, I'd recommend creating the external network with provider:network_type of local as is shown in appendix A: quantum net-create Ext-Net --provider:network_type local \ --router:external true Hope this helps, -Bob -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Fri Jun 21 18:39:37 2013 From: rkukura at redhat.com (Robert Kukura) Date: Fri, 21 Jun 2013 14:39:37 -0400 Subject: [rhos-list] Quantum Query: In-Reply-To: <6190AA83EB69374DABAE074D7E900F751256C3FF@xmb-aln-x13.cisco.com> References: <51C47915.8070907@redhat.com> <51C482B2.80906@redhat.com> <6190AA83EB69374DABAE074D7E900F751256C3FF@xmb-aln-x13.cisco.com> Message-ID: <51C49DE9.7060402@redhat.com> On 06/21/2013 02:17 PM, Shixiong Shang (shshang) wrote: > Hi, Bob: > > Thanks a ton for the reply (even I didn't raise the original question). > It is very informational and educational. I read it many many times and > it helps me clarify some confusion as you rightly pointed out. But I > still have some questions and would like to make sure I precisely > understand what you mean here. > > 1) In my current setup, I created external network net_ext, but later on > I realized that router cannot ping its default gateway. 
I have to > manually set the tag for qg- port on br-eth3 (mapped to physnet3) to 311 > in order to make the ping happen. Based on your explanation, is it > because provider attributes in "quantum net-create" command are ignored > by L3 agent, so no VLAN tagging takes place? > > quantum net-create --tenant-id 110ea394615d4cefa9824cf8829c841f net_ext > --provider:network_type vlan --provider:physical_network physnet3 > --provider:segmentation_id 311 --router:external=True Yes, if external_network_bridge is not empty (i.e. is its default value of br-ex), then the provider attributes of the external network are completely ignored. This is because l3-agent plugs directly into the specified bridge rather than into br-int, and openvswitch-agent does not get involved. > > > 2) When provider attributes in "quantum net-create" command are ignored > by L3 agent, there is no linkage between external network/subnet to a > specific bridge, and that is why we need to manually put in > "external_network_bridge" in l3_agent.ini. Is that correct? I guess I'd turn it around, and say that external_network_bridge being set causes the provider attributes to be ignored. > > 3) If 1) and 2) are correct, then one limitation I can see by using > "provider external network" approach is, we can only have a single > external bridge and a single external network on quantum network node in > order to formulate 1-to-1 mapping. Is that correct? I don't think this is a limitation of the provider external network approach. By unsetting external_network_bridge and using the provider attributes instead, you can use any number of external networks, and they can even be on VLANs. When using a non-empty external_network_bridge, you can only have one external network (per l3-agent). > > 4) In "external bridge" approach, is there specific reason you recommend > using network_type of local? Yes - to avoid confusion and avoid wasting a valuable resource. The original upstream documentation for folsom showed creating the external network as a normal tenant network (no provider attributes). Doing so would allocate a tenant network that would never actually be used. If tenant_network_type is vlan, this wastes a VLAN tag from the pool. Creating the external network as a provider network with a network_type of local avoids allocating a tenant network. -Bob > > Thanks a lot again! > > Shixiong > > > > > > On Jun 21, 2013, at 12:43 PM, Robert Kukura > > wrote: > >> There are two different approaches to connecting a router to an external >> network. One uses an external bridge (br-ex), and bypasses the L2 agent. >> The other uses a provider external network, and the interface driver and >> L2 agent handle this network just like any other network (using br-int). >> To use a provider external network, just set: >> >> external_network_bridge = >> >> in /etc/quantum/l3_agent.ini and create the external network with the >> provider attributes describing it (typically provider:network_type is >> flat or vlan). >> >> The provider external network approach is more flexible - the external >> network can be a VLAN and can coexist on the same physical network with >> tenant networks, VMs can be connected directly to the external network, >> different routers can use different external networks, etc.. Also, this >> approach works with both openvswitch and linuxbridge, whereas the >> external bridge approach only works with openvswitch. 
>> >> I see that chapter 5 in the referenced upstream documentation may be >> adding to the confusion by sort of mixing the two approaches. It is >> specifying the provider details on the external network: >> >> quantum net-create --tenant-id $tenant public01 \ >> --provider:network_type flat \ >> --provider:physical_network physnet1 \ >> --router:external=True >> >> but the l3 agent is accessing it via br-ex where the provider details >> don't matter because the l2 agent is bypassed. If using the external >> bridge, I'd recommend creating the external network with >> provider:network_type of local as is shown in appendix A: >> >> quantum net-create Ext-Net --provider:network_type local \ >> --router:external true >> >> Hope this helps, >> >> -Bob > From shshang at cisco.com Sun Jun 23 03:33:19 2013 From: shshang at cisco.com (Shixiong Shang (shshang)) Date: Sun, 23 Jun 2013 03:33:19 +0000 Subject: [rhos-list] Quantum Query: In-Reply-To: <51C49DE9.7060402@redhat.com> References: <51C47915.8070907@redhat.com> <51C482B2.80906@redhat.com> <6190AA83EB69374DABAE074D7E900F751256C3FF@xmb-aln-x13.cisco.com> <51C49DE9.7060402@redhat.com> Message-ID: <6190AA83EB69374DABAE074D7E900F751256D4E5@xmb-aln-x13.cisco.com> Hi, Bob: Thank you very much for the further clarification! I tried both approach tonight and noticed that when "external_network_bridge" is unset, L3_agent failed to start due to the fact that I was using "br-eth3" for both tenant network and external network. L3 kept looking for "br-ex" and it didn't exist at all (br-eth3 and br-int are the only bridges). That being said, if we try to create more than one external networks, they all must point to br-ex. If this is the case, then the only way to realize multiple external networks is to use VLAN tagging on br-ex to separate the traffic. Is my observation correct? Thanks again! Shixiong On Jun 21, 2013, at 2:39 PM, Robert Kukura > wrote: On 06/21/2013 02:17 PM, Shixiong Shang (shshang) wrote: Hi, Bob: Thanks a ton for the reply (even I didn't raise the original question). It is very informational and educational. I read it many many times and it helps me clarify some confusion as you rightly pointed out. But I still have some questions and would like to make sure I precisely understand what you mean here. 1) In my current setup, I created external network net_ext, but later on I realized that router cannot ping its default gateway. I have to manually set the tag for qg- port on br-eth3 (mapped to physnet3) to 311 in order to make the ping happen. Based on your explanation, is it because provider attributes in "quantum net-create" command are ignored by L3 agent, so no VLAN tagging takes place? quantum net-create --tenant-id 110ea394615d4cefa9824cf8829c841f net_ext --provider:network_type vlan --provider:physical_network physnet3 --provider:segmentation_id 311 --router:external=True Yes, if external_network_bridge is not empty (i.e. is its default value of br-ex), then the provider attributes of the external network are completely ignored. This is because l3-agent plugs directly into the specified bridge rather than into br-int, and openvswitch-agent does not get involved. 2) When provider attributes in "quantum net-create" command are ignored by L3 agent, there is no linkage between external network/subnet to a specific bridge, and that is why we need to manually put in "external_network_bridge" in l3_agent.ini. Is that correct? 
I guess I'd turn it around, and say that external_network_bridge being set causes the provider attributes to be ignored. 3) If 1) and 2) are correct, then one limitation I can see by using "provider external network" approach is, we can only have a single external bridge and a single external network on quantum network node in order to formulate 1-to-1 mapping. Is that correct? I don't think this is a limitation of the provider external network approach. By unsetting external_network_bridge and using the provider attributes instead, you can use any number of external networks, and they can even be on VLANs. When using a non-empty external_network_bridge, you can only have one external network (per l3-agent). 4) In "external bridge" approach, is there specific reason you recommend using network_type of local? Yes - to avoid confusion and avoid wasting a valuable resource. The original upstream documentation for folsom showed creating the external network as a normal tenant network (no provider attributes). Doing so would allocate a tenant network that would never actually be used. If tenant_network_type is vlan, this wastes a VLAN tag from the pool. Creating the external network as a provider network with a network_type of local avoids allocating a tenant network. -Bob Thanks a lot again! Shixiong On Jun 21, 2013, at 12:43 PM, Robert Kukura > wrote: There are two different approaches to connecting a router to an external network. One uses an external bridge (br-ex), and bypasses the L2 agent. The other uses a provider external network, and the interface driver and L2 agent handle this network just like any other network (using br-int). To use a provider external network, just set: external_network_bridge = in /etc/quantum/l3_agent.ini and create the external network with the provider attributes describing it (typically provider:network_type is flat or vlan). The provider external network approach is more flexible - the external network can be a VLAN and can coexist on the same physical network with tenant networks, VMs can be connected directly to the external network, different routers can use different external networks, etc.. Also, this approach works with both openvswitch and linuxbridge, whereas the external bridge approach only works with openvswitch. I see that chapter 5 in the referenced upstream documentation may be adding to the confusion by sort of mixing the two approaches. It is specifying the provider details on the external network: quantum net-create --tenant-id $tenant public01 \ --provider:network_type flat \ --provider:physical_network physnet1 \ --router:external=True but the l3 agent is accessing it via br-ex where the provider details don't matter because the l2 agent is bypassed. If using the external bridge, I'd recommend creating the external network with provider:network_type of local as is shown in appendix A: quantum net-create Ext-Net --provider:network_type local \ --router:external true Hope this helps, -Bob -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From rkukura at redhat.com  Mon Jun 24 17:43:01 2013
From: rkukura at redhat.com (Robert Kukura)
Date: Mon, 24 Jun 2013 13:43:01 -0400
Subject: Re: [rhos-list] Quantum Query:
In-Reply-To: <6190AA83EB69374DABAE074D7E900F751256D4E5@xmb-aln-x13.cisco.com>
References: <51C47915.8070907@redhat.com> <51C482B2.80906@redhat.com> <6190AA83EB69374DABAE074D7E900F751256C3FF@xmb-aln-x13.cisco.com> <51C49DE9.7060402@redhat.com> <6190AA83EB69374DABAE074D7E900F751256D4E5@xmb-aln-x13.cisco.com>
Message-ID: <51C88525.1000206@redhat.com>

On 06/22/2013 11:33 PM, Shixiong Shang (shshang) wrote:
> Hi, Bob:
>
> Thank you very much for the further clarification!
>
> I tried both approach tonight and noticed that when
> "external_network_bridge" is unset, L3_agent failed to start due to the
> fact that I was using "br-eth3" for both tenant network and external
> network. L3 kept looking for "br-ex" and it didn't exist at all (br-eth3
> and br-int are the only bridges).

It sounds to me like /etc/quantum/l3_agent.ini does not contain:

external_network_bridge =

Note that the default value is "br-ex", so it needs to be explicitly overridden to set it to nothing.

>
> That being said, if we try to create more than one external networks,
> they all must point to br-ex. If this is the case, then the only way to
> realize multiple external networks is to use VLAN tagging on br-ex to
> separate the traffic. Is my observation correct?

If you are successfully using the provider external network approach, you are not using br-ex. With the provider approach, any physical network listed in network_vlan_ranges can be used for flat or vlan external networks. The physical networks need to be mapped to OVS bridges via bridge_mappings, and quantum-openvswitch-agent will take care of ensuring the proper tags are used.

-Bob

>
> Thanks again!
>
> Shixiong
>
> On Jun 21, 2013, at 2:39 PM, Robert Kukura > wrote:
>
>> On 06/21/2013 02:17 PM, Shixiong Shang (shshang) wrote:
>>> Hi, Bob:
>>>
>>> Thanks a ton for the reply (even I didn't raise the original question).
>>> It is very informational and educational. I read it many many times and
>>> it helps me clarify some confusion as you rightly pointed out. But I
>>> still have some questions and would like to make sure I precisely
>>> understand what you mean here.
>>>
>>> 1) In my current setup, I created external network net_ext, but later on
>>> I realized that router cannot ping its default gateway. I have to
>>> manually set the tag for qg- port on br-eth3 (mapped to physnet3) to 311
>>> in order to make the ping happen. Based on your explanation, is it
>>> because provider attributes in "quantum net-create" command are ignored
>>> by L3 agent, so no VLAN tagging takes place?
>>>
>>> quantum net-create --tenant-id 110ea394615d4cefa9824cf8829c841f net_ext
>>> --provider:network_type vlan --provider:physical_network physnet3
>>> --provider:segmentation_id 311 --router:external=True
>>
>> Yes, if external_network_bridge is not empty (i.e. is its default value
>> of br-ex), then the provider attributes of the external network are
>> completely ignored. This is because l3-agent plugs directly into the
>> specified bridge rather than into br-int, and openvswitch-agent does not
>> get involved.
>>
>>>
>>>
>>> 2) When provider attributes in "quantum net-create" command are ignored
>>> by L3 agent, there is no linkage between external network/subnet to a
>>> specific bridge, and that is why we need to manually put in
>>> "external_network_bridge" in l3_agent.ini. Is that correct?
>> >> I guess I'd turn it around, and say that external_network_bridge being >> set causes the provider attributes to be ignored. >> >>> >>> 3) If 1) and 2) are correct, then one limitation I can see by using >>> "provider external network" approach is, we can only have a single >>> external bridge and a single external network on quantum network node in >>> order to formulate 1-to-1 mapping. Is that correct? >> >> >> I don't think this is a limitation of the provider external network >> approach. By unsetting external_network_bridge and using the provider >> attributes instead, you can use any number of external networks, and >> they can even be on VLANs. >> >> When using a non-empty external_network_bridge, you can only have one >> external network (per l3-agent). >> >>> >>> 4) In "external bridge" approach, is there specific reason you recommend >>> using network_type of local? >> >> Yes - to avoid confusion and avoid wasting a valuable resource. The >> original upstream documentation for folsom showed creating the external >> network as a normal tenant network (no provider attributes). Doing so >> would allocate a tenant network that would never actually be used. If >> tenant_network_type is vlan, this wastes a VLAN tag from the pool. >> Creating the external network as a provider network with a network_type >> of local avoids allocating a tenant network. >> >> -Bob >> >>> >>> Thanks a lot again! >>> >>> Shixiong >>> >>> >>> >>> >>> >>> On Jun 21, 2013, at 12:43 PM, Robert Kukura >> >>> > >>> wrote: >>> >>>> There are two different approaches to connecting a router to an external >>>> network. One uses an external bridge (br-ex), and bypasses the L2 agent. >>>> The other uses a provider external network, and the interface driver and >>>> L2 agent handle this network just like any other network (using br-int). >>>> To use a provider external network, just set: >>>> >>>> external_network_bridge = >>>> >>>> in /etc/quantum/l3_agent.ini and create the external network with the >>>> provider attributes describing it (typically provider:network_type is >>>> flat or vlan). >>>> >>>> The provider external network approach is more flexible - the external >>>> network can be a VLAN and can coexist on the same physical network with >>>> tenant networks, VMs can be connected directly to the external network, >>>> different routers can use different external networks, etc.. Also, this >>>> approach works with both openvswitch and linuxbridge, whereas the >>>> external bridge approach only works with openvswitch. >>>> >>>> I see that chapter 5 in the referenced upstream documentation may be >>>> adding to the confusion by sort of mixing the two approaches. It is >>>> specifying the provider details on the external network: >>>> >>>> quantum net-create --tenant-id $tenant public01 \ >>>> --provider:network_type flat \ >>>> --provider:physical_network physnet1 \ >>>> --router:external=True >>>> >>>> but the l3 agent is accessing it via br-ex where the provider details >>>> don't matter because the l2 agent is bypassed. 
If using the external >>>> bridge, I'd recommend creating the external network with >>>> provider:network_type of local as is shown in appendix A: >>>> >>>> quantum net-create Ext-Net --provider:network_type local \ >>>> --router:external true >>>> >>>> Hope this helps, >>>> >>>> -Bob >>> >> > From lchristoph at arago.de Tue Jun 25 11:21:55 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 25 Jun 2013 11:21:55 +0000 Subject: [rhos-list] Can't log into newly installed OpenStack with Dashboard Message-ID: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> Hello! This is from a just completed Grizzly install, version 2013.1.1-4.el6ost. Right after I log into Dashboard, I get this: KeyError at /project/ 'tenant_usage' Request Method: GET Request URL: http://rhopenstack.lab.db.com/dashboard/project/ Django Version: 1.4.4 Exception Type: KeyError Exception Value: 'tenant_usage' Exception Location: /usr/lib/python2.6/site-packages/novaclient/base.py in _get, line 141 Python Executable: /usr/bin/python Python Version: 2.6.6 Python Path: ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..', '/usr/lib64/python26.zip', '/usr/lib64/python2.6', '/usr/lib64/python2.6/plat-linux2', '/usr/lib64/python2.6/lib-tk', '/usr/lib64/python2.6/lib-old', '/usr/lib64/python2.6/lib-dynload', '/usr/lib64/python2.6/site-packages', '/usr/lib64/python2.6/site-packages/gtk-2.0', '/usr/lib/python2.6/site-packages', '/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info', '/usr/share/openstack-dashboard/openstack_dashboard'] Server time: Tue, 25 Jun 2013 11:08:09 +0000 Here is the traceback: Environment: Request Method: GET Request URL: http://rhopenstack.lab.db.com/dashboard/project/ Django Version: 1.4.4 Python Version: 2.6.6 Installed Applications: ['openstack_dashboard', 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'django.contrib.humanize', 'compressor', 'horizon', 'openstack_dashboard.dashboards.project', 'openstack_dashboard.dashboards.admin', 'openstack_dashboard.dashboards.settings', 'openstack_auth'] Installed Middleware: ('django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'horizon.middleware.HorizonMiddleware', 'django.middleware.doc.XViewMiddleware', 'django.middleware.locale.LocaleMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware') Traceback: File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response 111. response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/horizon/decorators.py" in dec 38. return view_func(request, *args, **kwargs) File "/usr/lib/python2.6/site-packages/horizon/decorators.py" in dec 54. return view_func(request, *args, **kwargs) File "/usr/lib/python2.6/site-packages/horizon/decorators.py" in dec 38. return view_func(request, *args, **kwargs) File "/usr/lib/python2.6/site-packages/django/views/generic/base.py" in view 48. return self.dispatch(request, *args, **kwargs) File "/usr/lib/python2.6/site-packages/django/views/generic/base.py" in dispatch 69. return handler(request, *args, **kwargs) File "/usr/lib/python2.6/site-packages/horizon/tables/views.py" in get 155. 
handled = self.construct_tables() File "/usr/lib/python2.6/site-packages/horizon/tables/views.py" in construct_tables 146. handled = self.handle_table(table) File "/usr/lib/python2.6/site-packages/horizon/tables/views.py" in handle_table 118. data = self._get_data_dict() File "/usr/lib/python2.6/site-packages/horizon/tables/views.py" in _get_data_dict 182. self._data = {self.table_class._meta.name: self.get_data()} File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/overview/views.py" in get_data 32. super(ProjectOverview, self).get_data() File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/views.py" in get_data 33. self.usage.summarize(*self.usage.get_date_range()) File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py" in summarize 98. _('Unable to retrieve usage information.')) File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py" in summarize 95. self.usage_list = self.get_usage_list(start, end) File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/usage/base.py" in get_usage_list 142. usage = api.nova.usage_get(self.request, self.tenant_id, start, end) File "/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/nova.py" in usage_get 469. return NovaUsage(novaclient(request).usage.get(tenant_id, start, end)) File "/usr/lib/python2.6/site-packages/novaclient/v1_1/usage.py" in get 48. "tenant_usage") File "/usr/lib/python2.6/site-packages/novaclient/base.py" in _get 141. return self.resource_class(self, body[response_key], loaded=True) Exception Type: KeyError at /project/ Exception Value: 'tenant_usage' Even after looking at the code, I'm clueless how this works, why it is called, and most importantly how to repair it. Anybody to explain? Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at redhat.com Tue Jun 25 12:12:01 2013 From: mrunge at redhat.com (Matthias Runge) Date: Tue, 25 Jun 2013 14:12:01 +0200 Subject: [rhos-list] novaclient issue In-Reply-To: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> References: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> Message-ID: <51C98911.9000004@redhat.com> On 25/06/13 13:21, Lutz Christoph wrote: > Hello! > > This is from a just completed Grizzly install, version 2013.1.1-4.el6ost. > > Right after I log into Dashboard, I get this: > > > KeyError at /project/ > > 'tenant_usage' > Hey Lutz, could you please explain a bit more, how you installed your OpenStack deployment as well? ... is it a multi-node environment? SELinux enforcing? By any chance, is your Dashboard host able to connect to the nova host, given by keystone catalog (Service compute)? What about nova usage? Does that work for you? 
(Since the failing call is hard-coded in novaclient) and I'm not aware of anybody else seeing this issue. Best regards, Matthias From lchristoph at arago.de Tue Jun 25 12:41:31 2013 From: lchristoph at arago.de (Lutz Christoph) Date: Tue, 25 Jun 2013 12:41:31 +0000 Subject: [rhos-list] novaclient issue In-Reply-To: <51C98911.9000004@redhat.com> References: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com>, <51C98911.9000004@redhat.com> Message-ID: Hello! > Von: rhos-list-bounces at redhat.com im Auftrag von Matthias Runge > Gesendet: Dienstag, 25. Juni 2013 14:12 > An: rhos-list at redhat.com > Betreff: Re: [rhos-list] novaclient issue > could you please explain a bit more, how you installed your OpenStack > deployment as well? ... is it a multi-node environment? SELinux enforcing? Everything but nova is on a VM running on RHEL 6.4 and KVM (RHEV). I have one nova node that refers back to the "all the rest" VM running on hardware that supports KVM. > By any chance, is your Dashboard host able to connect to the nova host, > given by keystone catalog (Service compute)? Dashboard is doing a login, authenticated by keystone. The request from the browser that fails is this: GET /dashboard/admin/ HTTP/1.1 Host: rhopenstack.example.com User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.8,de-de;q=0.5,de;q=0.3 Accept-Encoding: gzip, deflate DNT: 1 Referer: http://rhopenstack.example.com/dashboard Cookie: csrftoken=KdRnLZQvRfAtBcyHmGlQuHEmoxU1L2QO; sessionid=52e5ec63b1e720e255dd1791cfe9ec56 Connection: keep-alive That triggers an internal request to the nova API daemon: GET /os-simple-tenant-usage?start=2013-06-01T00:00:00&end=2013-06-25T10:52:36.348852&detailed=1 HTTP/1.1 Host: 192.168.104.62:8774 X-Auth-Project-Id: 4922a6443b9347d18f67c86bfb72022b Accept-Encoding: gzip, deflate, compress Content-Length: 0 Accept: application/json User-Agent: python-novaclient X-Auth-Token: 28734c23bdf049d0b03b34a784c152b2 Nova answers with: HTTP/1.1 300 Multiple Choices Content-Type: application/json Content-Length: 337 Date: Tue, 25 Jun 2013 10:51:37 GMT {\"choices\": [{\"status\": \"CURRENT\", \"media-types\": [{\"base\": \"application/xml\", \"type\": \"application/vnd.openstack.compute+xml;version=2\"}, {\"base\": \"application/json\", \"type\": \"application/vnd.openstack.compute+json;version=2\"}], \"id\": \"v2.0\", \"links\": [{\"href\": \"http://192.168.104.62:8774/v2/os-simple-tenant-usage\", \"rel\": \"self\"}]}]} > What about nova usage? Does that work for you? (Since the failing call > is hard-coded in novaclient) and I'm not aware of anybody else seeing > this issue. I don't understand what you mean by "nova usage". I'm quite sure that the installation instructions from Red Hat are missing something, so far they proved not to be exact. Many copy-and-pastoes, etc. Very entertaining. Anyway, I have no idea *what* needs to be done to make the nova API daemon return the tenant_usage data. I can't use the dashboard to create any objects... 
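One more data point: in the trace above, novaclient requests http://192.168.104.62:8774/os-simple-tenant-usage with no /v2/<tenant_id> prefix, and the 300 "Multiple Choices" reply is just nova-api listing its available API versions. So I suspect the compute endpoint registered in keystone is missing the version and tenant suffix. I would check it along these lines (the host is from my setup, the IDs are placeholders, and the expected URL pattern is my assumption from the install docs, so please correct me if this is wrong):

# show the registered compute endpoint
keystone endpoint-list | grep 8774

# I believe the publicurl/adminurl/internalurl should look like:
#   http://192.168.104.62:8774/v2/%(tenant_id)s

# if the suffix is missing, delete and re-create the endpoint
keystone endpoint-delete <endpoint-id>
keystone endpoint-create --region RegionOne \
  --service_id <compute-service-id> \
  --publicurl 'http://192.168.104.62:8774/v2/%(tenant_id)s' \
  --adminurl 'http://192.168.104.62:8774/v2/%(tenant_id)s' \
  --internalurl 'http://192.168.104.62:8774/v2/%(tenant_id)s'

Can anybody confirm whether that is the right fix?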
Best regards / Mit freundlichen Gr??en Lutz Christoph -- Lutz Christoph arago Institut f?r komplexes Datenmanagement AG Eschersheimer Landstra?e 526 - 532 60433 Frankfurt am Main eMail: lchristoph at arago.de - www: http://www.arago.de Tel: 0172/6301004 Mobil: 0172/6301004 -- Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343 Vorstand: Hans-Christian Boos, Martin Friedrich Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: K?nigstein i.Ts Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435 From marun at redhat.com Tue Jun 25 16:05:27 2013 From: marun at redhat.com (Maru Newby) Date: Tue, 25 Jun 2013 12:05:27 -0400 Subject: [rhos-list] [Bug 975338] New: "quantum security-group-rule-list" from the "admin" tenant shows the security group rules of all tenants In-Reply-To: <51C04755.5060707@redhat.com> References: <51C04755.5060707@redhat.com> Message-ID: On Jun 18, 2013, at 7:41 AM, Perry Myers wrote: > Bob, > > Interesting. In the lab session recently done at Red Hat Summit in > Boston, we noticed on RHOS 3.0 Preview/RHEL 6.4 installs that users > running as the admin tenant were seeing two 'default' security groups in > Quantum. > > Maybe the below is the reason? Perhaps they were seeing the 'default' > group for the admin tenant as well as the other user they created in > keystone? > > Perry It definitely looks to me that the bug in question is the reason for seeing multiple default security groups. Maybe it makes sense to have the admin user see a user identifier for each row by default to limit this kind of confusion? On a related note I don't see any protection at the db level against duplicate security group names per tenant in upstream master (let alone in grizzly). A race condition exists that could allow the creation of multiple default groups per tenant. I'm going to follow up on the upstream list to see that this gets addressed. m. > > > -------- Original Message -------- > Subject: [Bug 975338] New: "quantum security-group-rule-list" from the > "admin" tenant shows the security group rules of all tenants > Date: Tue, 18 Jun 2013 07:14:20 +0000 > From: bugzilla at redhat.com > To: pmyers at redhat.com > > https://bugzilla.redhat.com/show_bug.cgi?id=975338 > > Bug ID: 975338 > Summary: "quantum security-group-rule-list" from the "admin" > tenant shows the security group rules of all tenants > Product: Red Hat OpenStack > Version: 3.0 > Component: python-cliff > Severity: medium > Priority: unspecified > Assignee: rhos-maint at redhat.com > Reporter: rvaknin at redhat.com > > Version: > Grizzly on rhel6.4 with openstack-quantum-2013.1.2-3.el6ost and > python-cliff-1.3-1.el6ost (puddle 2013-06-13.2). > > Description: > "quantum security-group-rule-list" running in the "admin" user context show > security group rules of all tenants while it should show security group > rules > of the admin tenant. > The list of all security group rules should appear only when the > "--all-tenant" > argument is in use. 
> > [root ~(keystone_admin)]# quantum security-group-rule-list > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > | id | security_group | direction | > protocol > | remote_ip_prefix | remote_group | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > | 04b69c0e-4fe1-44ba-b772-794d844e5101 | default | ingress | > tcp > | 0.0.0.0/0 | | > | 19d17912-2e20-46d0-bf8d-1fc6c52220ce | default | egress | > > | | | > | 1f158243-cb24-4950-803a-e19025e1ac9f | default | egress | > > | | | > | 5acf9b3d-347c-483b-9ab4-e79f4d044918 | default | ingress | > > | | default | > | 5bb0e605-3bab-45ae-bedd-898f484daec0 | default | ingress | > icmp > | 0.0.0.0/0 | | > | 5cccde9b-ebae-450a-8590-5d36797ddd9c | default | ingress | > > | | default | > | 6b5b5d71-123e-41ff-9b93-0b1db724b540 | default | egress | > > | | | > | 7057ea12-44c1-4090-a93c-dd80ae1c6414 | default | egress | > > | | | > | 8c53ad7b-565e-433b-809c-b69b40518ad3 | default | ingress | > > | | default | > | 9bccf920-2da7-4566-b590-eb2fb091f0b2 | default | ingress | > > | | default | > | af095e7f-55d1-4d90-ac29-7741424ade57 | default | egress | > > | | | > | b7d7742d-11c3-428f-835f-6191b4303d15 | default | egress | > > | | | > | ce1708e0-db0b-41f2-894f-d630d63069fe | default | ingress | > > | | default | > | dc4cd283-6aa1-49a4-ac2d-9d1fd2296e1d | default | ingress | > > | | default | > | e221c58f-f08b-4b18-a501-7d88c2b6fa27 | default | ingress | > icmp > | 0.0.0.0/0 | | > | e77e4065-37a8-4f0d-ac06-4e826328e218 | default | ingress | > tcp > | 0.0.0.0/0 | | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > [root ~(keystone_admin)]# . keystonerc_vlan_186 > [root ~(keystone_vlan_186)]$ quantum security-group-rule-list > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > | id | security_group | direction | > protocol > | remote_ip_prefix | remote_group | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > | 19d17912-2e20-46d0-bf8d-1fc6c52220ce | default | egress | > > | | | > | 9bccf920-2da7-4566-b590-eb2fb091f0b2 | default | ingress | > > | | default | > | b7d7742d-11c3-428f-835f-6191b4303d15 | default | egress | > > | | | > | dc4cd283-6aa1-49a4-ac2d-9d1fd2296e1d | default | ingress | > > | | default | > | e221c58f-f08b-4b18-a501-7d88c2b6fa27 | default | ingress | > icmp > | 0.0.0.0/0 | | > | e77e4065-37a8-4f0d-ac06-4e826328e218 | default | ingress | > tcp > | 0.0.0.0/0 | | > +--------------------------------------+----------------+-----------+----------+------------------+--------------+ > > For instance, security group rule id "e77e4065-37a8-4f0d-ac06-4e826328e218" > appears in the output of "quantum security-group-rule-list" command while > running it from both the "admin" tenant and other non-admin tenant. > > -- > You are receiving this mail because: > You are watching the assignee of the bug. > > From jrfuller at redhat.com Tue Jun 25 17:54:50 2013 From: jrfuller at redhat.com (Johnray Fuller) Date: Tue, 25 Jun 2013 13:54:50 -0400 Subject: [rhos-list] Quantum: Packet Fragmentation Issue? 
Message-ID: <51C9D96A.80204@redhat.com>

Hello,

I appear to have an issue with packet fragmentation

When we try to ssh from one VM to another where the VMs run on different hosts on the source host the physical link (eth4) shows:

13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], proto TCP (6), length 60)
10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 ecr 0,nop,wscale 6], length 0

While on the receiving end we see:

13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 (oui Unknown), ethertype Unknown (0x3edd), length 78:
0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc
0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y......
0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9.........
0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........

It seems that encapsulation causes the packet to break. Does anyone have any ideas on how to troubleshoot this?

These VMs are on different hosts.

We tried increasing the mtu on both hosts' eth4, but still no joy.

We found the following, https://review.openstack.org/#/c/31518/ , which might be related, but this patch was abandoned.

Any assistance would be greatly appreciated.

J

From gkotton at redhat.com  Tue Jun 25 19:07:56 2013
From: gkotton at redhat.com (Gary Kotton)
Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <51C9D96A.80204@redhat.com>
References: <51C9D96A.80204@redhat.com>
Message-ID: <51C9EA8C.4060506@redhat.com>

On 06/25/2013 08:54 PM, Johnray Fuller wrote:
> Hello,
>
> I appear to have an issue with packet fragmentation
>
> When we try to ssh from one VM to another where the VMs run on
> different hosts on the source host the physical link (eth4) shows:
>
> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80
> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0,
> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF],
> proto TCP (6), length 60)
> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct),
> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855
> ecr 0,nop,wscale 6], length 0
>
> While on the receiving end we see:
>
> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16
> (oui Unknown), ethertype Unknown (0x3edd), length 78:
> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc
> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y......
> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9.........
> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........

This looks like it could be related to VLAN splinters when using Open vSwitch. Are you using Open vSwitch? Maybe http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEAD can help.

Can you please paste the NIC details? I have cc'ed Chris and Thomas.

Thanks
Gary

> It seems that encapsulation causes the packet to break. Does anyone
> have any ideas on how to troubleshoot this?
>
> These VMs are on different hosts.
>
> We tried increasing the mtu on both hosts' eth4, but still no joy.
>
> We found the following, https://review.openstack.org/#/c/31518/ ,
> which might be related, but this patch was abandoned.
>
> Any assistance would be greatly appreciated.
>
> J

From tgraf at redhat.com  Tue Jun 25 19:17:38 2013
From: tgraf at redhat.com (Thomas Graf)
Date: Tue, 25 Jun 2013 21:17:38 +0200
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <51C9EA8C.4060506@redhat.com> References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> Message-ID: <51C9ECD2.2040905@redhat.com> On 06/25/2013 09:07 PM, Gary Kotton wrote: > On 06/25/2013 08:54 PM, Johnray Fuller wrote: >> Hello, >> >> >> I appear to have an issue with packet fragmentation >> >> When we try to ssh from one VM to another where the VMs run on >> different hosts on the source host the physical link (eth4) shows: >> >> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >> proto TCP (6), length 60) >> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >> ecr 0,nop,wscale 6], length 0 >> >> While on the receiving end we see: >> >> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >> (oui Unknown), ethertype Unknown (0x3edd), length 78: >> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc > 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >> > > This looks like it could be related to VLAN splinters when using > openvswitch. Are you using Openvswitch? Maybe > http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan > help. This is not related to VLAN splinters but VLANs usage in general. We have seen this before and it usually is caused by an intermediate device on the host having the same MTU as the interface inside the VM. Typically both have 1500. The VM outputs 1500 sized frames, OVS adds a VLAN header and that exceeds the MTU of any device on the host. Fix is to either a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to go out to a physical ethernet b) increase MTU of all intermediate interfaces on the host by at least 4 to avoid fragmentation. c) increase MTU of all soft devices on the host and enable jumbo frames on the physical ethernet device. I would choose b) for Neutron if tunneling is being used. If external VLANs are in play option c) is nice with a fallback to frags if jumbo frames are unsupported. > >> It seems that encapsulation causes the packet to break. Does anyone >> have any ideas on how to troubleshoot this? >> >> These VMs are on different hosts. >> >> We tried increasing the mtu on both hosts' eth4, but still no joy. >> >> We found the following, https://review.openstack.org/#/c/31518/ , >> which might be related, but this patch was abandoned. >> >> Any assistance would be greatly appreciated. >> >> J >> > From Balazs.Fulop at morganstanley.com Tue Jun 25 19:42:11 2013 From: Balazs.Fulop at morganstanley.com (Fulop, Balazs) Date: Tue, 25 Jun 2013 19:42:11 +0000 Subject: [rhos-list] Quantum: Packet Fragmentation Issue? In-Reply-To: <51C9ECD2.2040905@redhat.com> References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> Message-ID: <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> Dear All, Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). 
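For completeness, here is what I ran for option a), plus my reading of option b) (the guest interface name and the list of intermediate host devices below are guesses based on our setup, not something I have verified end to end):

# option a) inside the guest: 1500 minus 4 bytes for the 802.1Q tag
ip link set dev eth0 mtu 1496

# option b) on each host: raise every device in the tagged path by at least 4
ip link set dev eth4 mtu 1504
ip link set dev br-eth4 mtu 1504
ip link set dev br-int mtu 1504

If there is a supported way to push the smaller MTU to all guests automatically, for example via the DHCP agent's dnsmasq (dhcp-option-force=26,1496 in a dnsmasq_config_file), pointers would be welcome.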
From Balazs.Fulop at morganstanley.com Tue Jun 25 19:42:11 2013
From: Balazs.Fulop at morganstanley.com (Fulop, Balazs)
Date: Tue, 25 Jun 2013 19:42:11 +0000
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <51C9ECD2.2040905@redhat.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com>
Message-ID: <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com>

Dear All,

Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). The network card used on both hosts in this demo cluster is:
Emulex Corporation OneConnect 10Gb NIC

If you have any further ideas on a possible resolution / workaround, please kindly let me know.

Regards,

Balazs Fulop
Morgan Stanley | Enterprise Infrastructure
Lechner Odon fasor 8 | Floor 07
Budapest, 1095
Phone: +36 1 881-3941
Balazs.Fulop at morganstanley.com

Be carbon conscious. Please consider our environment before printing this email.

-----Original Message-----
From: Thomas Graf [mailto:tgraf at redhat.com]
Sent: Tuesday, June 25, 2013 9:18 PM
To: gkotton at redhat.com
Cc: Johnray Fuller; rhos-list at redhat.com; Fulop, Balazs (Enterprise Infrastructure); Chris Wright; Robert Kukura; jmh at redhat.com
Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?

On 06/25/2013 09:07 PM, Gary Kotton wrote:
> On 06/25/2013 08:54 PM, Johnray Fuller wrote:
>> Hello,
>>
>> I appear to have an issue with packet fragmentation.
>>
>> When we try to ssh from one VM to another, where the VMs run on
>> different hosts, on the source host the physical link (eth4) shows:
>>
>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80
>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0,
>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF],
>> proto TCP (6), length 60)
>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct),
>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855
>> ecr 0,nop,wscale 6], length 0
>>
>> While on the receiving end we see:
>>
>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16
>> (oui Unknown), ethertype Unknown (0x3edd), length 78:
>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc
>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y......
>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9.........
>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........
>
> This looks like it could be related to VLAN splinters when using
> openvswitch. Are you using Openvswitch? Maybe
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan
> help.

This is not related to VLAN splinters but VLANs usage in general.

We have seen this before and it usually is caused by an intermediate device on the host having the same MTU as the interface inside the VM. Typically both have 1500. The VM outputs 1500 sized frames, OVS adds a VLAN header and that exceeds the MTU of any device on the host.

Fix is to either

a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to go out to a physical ethernet

b) increase MTU of all intermediate interfaces on the host by at least 4 to avoid fragmentation.

c) increase MTU of all soft devices on the host and enable jumbo frames on the physical ethernet device.

I would choose b) for Neutron if tunneling is being used. If external VLANs are in play option c) is nice with a fallback to frags if jumbo frames are unsupported.

>> It seems that encapsulation causes the packet to break. Does anyone
>> have any ideas on how to troubleshoot this?
>>
>> These VMs are on different hosts.
>>
>> We tried increasing the mtu on both hosts' eth4, but still no joy.
>>
>> We found the following, https://review.openstack.org/#/c/31518/ ,
>> which might be related, but this patch was abandoned.
>>
>> Any assistance would be greatly appreciated.
>>
>> J

--------------------------------------------------------------------------------

NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing.

From Hao.Chen at NRCan-RNCan.gc.ca Tue Jun 25 20:01:01 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Tue, 25 Jun 2013 20:01:01 +0000
Subject: [rhos-list] Bypassing authentication
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca>

Greetings,

(1) Validating the OpenStack Identity Service shows a warning message "WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored)."

[root at cloud1 ~(keystone_user)]# keystone user-list
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
+----------------------------------+-------+---------+-------+
|                id                |  name | enabled | email |
+----------------------------------+-------+---------+-------+
| f22063c121b949a8a5b86df453b75a33 | admin |   True  |       |
| 4e0226b27ce546f99bac39270a2db50c |  aft  |   True  |       |
| 96fd8489a1d644bbb173c3c2c406d2dc |  nfis |   True  |       |
+----------------------------------+-------+---------+-------+

(2) keystone token-get error.

[root at cloud1 ~(keystone_user)]# keystone token-get
WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
Configuration error: Client configured to run without a service catalog. Run the client using --os-auth-url or OS_AUTH_URL, instead of --os-endpoint or OS_SERVICE_ENDPOINT, for example.

[root at cloud1 ~]# vi /etc/qpidd.conf
cluster-mechanism=PLAIN
auth=yes

I would be grateful if anyone could provide any suggestions or solutions to fix these problems.

Hao Chen

Natural Resources Canada / Ressources naturelles Canada
Canadian Forest Service / Service canadien des forêts
Pacific Forestry Centre / Centre de foresterie du Pacifique
506 W. Burnside Road / 506 rue Burnside Ouest
Victoria, BC V8Z 1M5 / Victoria, C-B V8Z 1M5

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From roxenham at redhat.com Tue Jun 25 20:09:42 2013
From: roxenham at redhat.com (Rhys Oxenham)
Date: Tue, 25 Jun 2013 21:09:42 +0100
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <9029F216-623E-48EC-811C-A7F9635BA072@redhat.com>

Hi Hao,

It looks like you've already got a token and endpoint set within your environment variables. Are you just finishing off an installation?
By the looks of it you've created an rc file which sets up the environment with an authentication URL and an associated username/password, but either you've hard-coded the service token and endpoint there or the values are still set. I suggest you check your rc file to ensure that it only contains your username, password, tenant and authentication URL, and unset the token and service endpoint environment variables. Then you can re-attempt your commands.

This is likely to be the cause of both of your problems. Let us know whether that works for you.

Kindest Regards,
Rhys

--
Rhys Oxenham
Cloud Solution Architect, Red Hat UK
e: roxenham at redhat.com
m: +44 (0)7866 446625

On 25 Jun 2013, at 21:01, "Chen, Hao" wrote:

> Greetings,
>
> (1) Validating the OpenStack Identity Service shows a warning message "WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored)."
> [root at cloud1 ~(keystone_user)]# keystone user-list
> WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
> +----------------------------------+-------+---------+-------+
> |                id                |  name | enabled | email |
> +----------------------------------+-------+---------+-------+
> | f22063c121b949a8a5b86df453b75a33 | admin |   True  |       |
> | 4e0226b27ce546f99bac39270a2db50c |  aft  |   True  |       |
> | 96fd8489a1d644bbb173c3c2c406d2dc |  nfis |   True  |       |
> +----------------------------------+-------+---------+-------+
> (2) keystone token-get error.
> [root at cloud1 ~(keystone_user)]# keystone token-get
> WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
> Configuration error: Client configured to run without a service catalog. Run the client using --os-auth-url or OS_AUTH_URL, instead of --os-endpoint or OS_SERVICE_ENDPOINT, for example.
>
> [root at cloud1 ~]# vi /etc/qpidd.conf
> cluster-mechanism=PLAIN
> auth=yes
>
> I would be grateful if anyone could provide any suggestions or solutions to fix these problems.
>
> Hao Chen
>
> Natural Resources Canada / Ressources naturelles Canada
> Canadian Forest Service / Service canadien des forêts
> Pacific Forestry Centre / Centre de foresterie du Pacifique
> 506 W. Burnside Road / 506 rue Burnside Ouest
> Victoria, BC V8Z 1M5 / Victoria, C-B V8Z 1M5
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list
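In practice, Rhys's suggestion comes down to clearing the bootstrap variables and sourcing an rc file that carries only user credentials. A minimal sketch, assuming the standard OpenStack variable names and a hypothetical ~/keystonerc; the username, password, tenant and URL are placeholders to adjust for the actual deployment:

    # drop the service-token shortcut so the client authenticates normally
    unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

    # ~/keystonerc should only contain credentials and the auth URL, e.g.:
    export OS_USERNAME=admin
    export OS_PASSWORD=secret          # placeholder
    export OS_TENANT_NAME=admin
    export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/

    source ~/keystonerc
    keystone token-get    # should now succeed without the bypass warning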
From zaitcev at redhat.com Tue Jun 25 22:16:24 2013
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Tue, 25 Jun 2013 16:16:24 -0600
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <20130625161624.78538560@lembas.zaitcev.lan>

On Tue, 25 Jun 2013 20:01:01 +0000 "Chen, Hao" wrote:

> [root at cloud1 ~(keystone_user)]# keystone user-list
> WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).

It's harmless in your case, I think. The message means that you supplied options to configure two methods of authentication simultaneously: the admin token and endpoint on the one hand, and URL, admin username and admin password on the other. This is a misconfiguration and may be unintentional. Typically the token method is used for bootstrapping the whole thing.

For example, my Keystone setup script has:

keystone="keystone --endpoint http://localhost:35357/v2.0 --token=$ATOK"

Once done with the setup, you put the url, user, and password into ~/keystonerc or even ~/.bashrc, but do not put token in there.

-- Pete
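Pete's pattern separates bootstrap from everyday use. A minimal sketch of both phases, assuming $ATOK holds the admin_token from keystone.conf as in his snippet; the user, password and user-list call are illustrative only:

    # bootstrap phase: no users exist yet, so authenticate with the service token
    keystone="keystone --endpoint http://localhost:35357/v2.0 --token=$ATOK"
    $keystone user-list

    # everyday use: credentials and auth URL only, no token or endpoint
    keystone --os-username admin --os-password secret \
             --os-tenant-name admin \
             --os-auth-url http://localhost:5000/v2.0/ token-get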
From tgraf at redhat.com Tue Jun 25 23:27:38 2013
From: tgraf at redhat.com (Thomas Graf)
Date: Wed, 26 Jun 2013 01:27:38 +0200
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com>
Message-ID: <51CA276A.7060508@redhat.com>

On 06/25/2013 09:42 PM, Fulop, Balazs wrote:
> Dear All,
>
> Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). The network card used on both hosts in this demo cluster is:
> Emulex Corporation OneConnect 10Gb NIC
>
> If you have any further ideas on a possible resolution / workaround, please kindly let me know.

Looking at your packet capture below it seems like a VLAN tag has been inserted but has been corrupted [0x3edd 0x3c4e].

Any chance you could capture the traffic on a switch between the hosts to see if the packet is corrupted on the sending or receiving side?

benet is currently not approved for use with OVS in RDO, you can find the latest list here:
https://access.redhat.com/site/articles/289823

As suggested earlier, use of VLAN splinters could help to work around this issue.

Best,
Thomas

> Regards,
>
> Balazs Fulop
> Morgan Stanley | Enterprise Infrastructure
> Lechner Odon fasor 8 | Floor 07
> Budapest, 1095
> Phone: +36 1 881-3941
> Balazs.Fulop at morganstanley.com
>
> Be carbon conscious. Please consider our environment before printing this email.
>
> -----Original Message-----
> From: Thomas Graf [mailto:tgraf at redhat.com]
> Sent: Tuesday, June 25, 2013 9:18 PM
> To: gkotton at redhat.com
> Cc: Johnray Fuller; rhos-list at redhat.com; Fulop, Balazs (Enterprise Infrastructure); Chris Wright; Robert Kukura; jmh at redhat.com
> Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?
>
> On 06/25/2013 09:07 PM, Gary Kotton wrote:
>> On 06/25/2013 08:54 PM, Johnray Fuller wrote:
>>> Hello,
>>>
>>> I appear to have an issue with packet fragmentation.
>>>
>>> When we try to ssh from one VM to another, where the VMs run on
>>> different hosts, on the source host the physical link (eth4) shows:
>>>
>>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80
>>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0,
>>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF],
>>> proto TCP (6), length 60)
>>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct),
>>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855
>>> ecr 0,nop,wscale 6], length 0
>>>
>>> While on the receiving end we see:
>>>
>>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16
>>> (oui Unknown), ethertype Unknown (0x3edd), length 78:
>>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc
>>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y......
>>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9.........
>>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........
>>
>> This looks like it could be related to VLAN splinters when using
>> openvswitch. Are you using Openvswitch? Maybe
>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan
>> help.
>
> This is not related to VLAN splinters but VLANs usage in general.
>
> We have seen this before and it usually is caused by an intermediate
> device on the host having the same MTU as the interface inside the VM.
> Typically both have 1500. The VM outputs 1500 sized frames, OVS adds
> a VLAN header and that exceeds the MTU of any device on the host.
>
> Fix is to either
>
> a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to
> go out to a physical ethernet
>
> b) increase MTU of all intermediate interfaces on the host by at least
> 4 to avoid fragmentation.
>
> c) increase MTU of all soft devices on the host and enable jumbo frames
> on the physical ethernet device.
>
> I would choose b) for Neutron if tunneling is being used. If external
> VLANs are in play option c) is nice with a fallback to frags if jumbo
> frames are unsupported.
>
>>> It seems that encapsulation causes the packet to break. Does anyone
>>> have any ideas on how to troubleshoot this?
>>>
>>> These VMs are on different hosts.
>>>
>>> We tried increasing the mtu on both hosts' eth4, but still no joy.
>>>
>>> We found the following, https://review.openstack.org/#/c/31518/ ,
>>> which might be related, but this patch was abandoned.
>>>
>>> Any assistance would be greatly appreciated.
>>>
>>> J
>
> --------------------------------------------------------------------------------
>
> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing.

From Balazs.Fulop at morganstanley.com Wed Jun 26 07:30:00 2013
From: Balazs.Fulop at morganstanley.com (Fulop, Balazs)
Date: Wed, 26 Jun 2013 07:30:00 +0000
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <51CA276A.7060508@redhat.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com>
Message-ID: <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com>

Dear Thomas,

>> Any chance you could capture the traffic on a switch between the hosts
>> to see if the packet is corrupted on the sending or receiving side?

Given I don't maintain the switch this will be tricky but I'll try.

>> benet is currently not approved for use with OVS in RDO

I'm not sure I follow. Can you please elaborate?

>> As suggested earlier, use of VLAN splinters could help to work around
>> this issue.

What are VLAN splinters? Could you please give us some documentation pointers?
The following returns 404 not found: http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan Regards, Balazs Fulop Morgan Stanley | Enterprise Infrastructure Lechner Odon fasor 8 | Floor 07 Budapest, 1095 Phone: +36 1 881-3941 Balazs.Fulop at morganstanley.com Be carbon conscious. Please consider our environment before printing this email. -----Original Message----- From: Thomas Graf [mailto:tgraf at redhat.com] Sent: Wednesday, June 26, 2013 1:28 AM To: Fulop, Balazs (Enterprise Infrastructure) Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? On 06/25/2013 09:42 PM, Fulop, Balazs wrote: > Dear All, > > Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). The network card used on both hosts in this demo cluster is: > Emulex Corporation OneConnect 10Gb NIC > > If you have any further ideas on a possible resolution / workaround, please kindly let me know. Looking at your packet capture below it seems like a VLAN tag has been inserted but has been corrupted [0x3edd 0x3c4e]. Any chance you could capture the traffic on a switch between the hosts to see if the packet is corrupted on the sending or receiving side? benet is currently not approved for use with OVS in RDO, you can find the latest list here: https://access.redhat.com/site/articles/289823 As suggested earlier, use of VLAN splinters could help to work around this issue. Best, Thomas > > Regards, > > Balazs Fulop > Morgan Stanley | Enterprise Infrastructure > Lechner Odon fasor 8 | Floor 07 > Budapest, 1095 > Phone: +36 1 881-3941 > Balazs.Fulop at morganstanley.com > > > Be carbon conscious. Please consider our environment before printing this email. > > > -----Original Message----- > From: Thomas Graf [mailto:tgraf at redhat.com] > Sent: Tuesday, June 25, 2013 9:18 PM > To: gkotton at redhat.com > Cc: Johnray Fuller; rhos-list at redhat.com; Fulop, Balazs (Enterprise Infrastructure); Chris Wright; Robert Kukura; jmh at redhat.com > Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? > > On 06/25/2013 09:07 PM, Gary Kotton wrote: >> On 06/25/2013 08:54 PM, Johnray Fuller wrote: >>> Hello, >>> >>> >>> I appear to have an issue with packet fragmentation >>> >>> When we try to ssh from one VM to another where the VMs run on >>> different hosts on the source host the physical link (eth4) shows: >>> >>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >>> proto TCP (6), length 60) >>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >>> ecr 0,nop,wscale 6], length 0 >>> >>> While on the receiving end we see: >>> >>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >>> (oui Unknown), ethertype Unknown (0x3edd), length 78: >>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc >> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >>> >> >> This looks like it could be related to VLAN splinters when using >> openvswitch. 
Are you using Openvswitch? Maybe >> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan >> help. > > This is not related to VLAN splinters but VLANs usage in general. > > We have seen this before and it usually is caused by an intermediate > device on the host having the same MTU as the interface inside the VM. > Typically both have 1500. The VM outputs 1500 sized frames, OVS adds > a VLAN header and that exceeds the MTU of any device on the host. > > Fix is to either > > a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to > go out to a physical ethernet > > b) increase MTU of all intermediate interfaces on the host by at least > 4 to avoid fragmentation. > > c) increase MTU of all soft devices on the host and enable jumbo frames > on the physical ethernet device. > > I would choose b) for Neutron if tunneling is being used. If external > VLANs are in play option c) is nice with a fallback to frags if jumbo > frames are unsupported. > > > >> >>> It seems that encapsulation causes the packet to break. Does anyone >>> have any ideas on how to troubleshoot this? >>> >>> These VMs are on different hosts. >>> >>> We tried increasing the mtu on both hosts' eth4, but still no joy. >>> >>> We found the following, https://review.openstack.org/#/c/31518/ , >>> which might be related, but this patch was abandoned. >>> >>> Any assistance would be greatly appreciated. >>> >>> J >>> >> > > > > -------------------------------------------------------------------------------- > > NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. > -------------------------------------------------------------------------------- NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. From roxenham at redhat.com Wed Jun 26 08:21:19 2013 From: roxenham at redhat.com (Rhys Oxenham) Date: Wed, 26 Jun 2013 04:21:19 -0400 (EDT) Subject: [rhos-list] Quantum: Packet Fragmentation Issue? 
In-Reply-To: <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> Message-ID: <19D5E254-D402-4B38-875E-6958BA0A325C@redhat.com> Hi Balazs, On 26 Jun 2013, at 08:30, "Fulop, Balazs" wrote: > Dear Thomas, > >>> Any chance you could capture the traffic on a switch between the hosts >>> to see if the packet is corrupted on the sending or receiving side? > > Given I don't maintain the switch this will be tricky but I'll try. > >>> benet is currently not approved for use with OVS in RDO > > I'm not sure I follow. Can you please elaborate? be2net is the driver that supports the Emulex OneConnect 10G card. It is on the approved list for use with Open vSwitch and RDO/RHOS. Can you confirm you're using this driver? > >>> As suggested earlier, use of VLAN splinters could help to work around >>> this issue. > > What are VLAN splinters? Could you please give us some documentation pointers? The following returns 404 not found: > http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan > This link works if you delete the 'can' at the end. Cheers Rhys. > Regards, > > Balazs Fulop > Morgan Stanley | Enterprise Infrastructure > Lechner Odon fasor 8 | Floor 07 > Budapest, 1095 > Phone: +36 1 881-3941 > Balazs.Fulop at morganstanley.com > > > Be carbon conscious. Please consider our environment before printing this email. > > > -----Original Message----- > From: Thomas Graf [mailto:tgraf at redhat.com] > Sent: Wednesday, June 26, 2013 1:28 AM > To: Fulop, Balazs (Enterprise Infrastructure) > Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com > Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? > > On 06/25/2013 09:42 PM, Fulop, Balazs wrote: >> Dear All, >> >> Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). The network card used on both hosts in this demo cluster is: >> Emulex Corporation OneConnect 10Gb NIC >> >> If you have any further ideas on a possible resolution / workaround, please kindly let me know. > > Looking at your packet capture below it seems like a VLAN tag has been > inserted but has been corrupted [0x3edd 0x3c4e]. > > Any chance you could capture the traffic on a switch between the hosts > to see if the packet is corrupted on the sending or receiving side? > > benet is currently not approved for use with OVS in RDO, you can find > the latest list here: > https://access.redhat.com/site/articles/289823 > > As suggested earlier, use of VLAN splinters could help to work around > this issue. > > Best, > Thomas > >> >> Regards, >> >> Balazs Fulop >> Morgan Stanley | Enterprise Infrastructure >> Lechner Odon fasor 8 | Floor 07 >> Budapest, 1095 >> Phone: +36 1 881-3941 >> Balazs.Fulop at morganstanley.com >> >> >> Be carbon conscious. Please consider our environment before printing this email. 
>> >> >> -----Original Message----- >> From: Thomas Graf [mailto:tgraf at redhat.com] >> Sent: Tuesday, June 25, 2013 9:18 PM >> To: gkotton at redhat.com >> Cc: Johnray Fuller; rhos-list at redhat.com; Fulop, Balazs (Enterprise Infrastructure); Chris Wright; Robert Kukura; jmh at redhat.com >> Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? >> >> On 06/25/2013 09:07 PM, Gary Kotton wrote: >>> On 06/25/2013 08:54 PM, Johnray Fuller wrote: >>>> Hello, >>>> >>>> >>>> I appear to have an issue with packet fragmentation >>>> >>>> When we try to ssh from one VM to another where the VMs run on >>>> different hosts on the source host the physical link (eth4) shows: >>>> >>>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >>>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >>>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >>>> proto TCP (6), length 60) >>>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >>>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >>>> ecr 0,nop,wscale 6], length 0 >>>> >>>> While on the receiving end we see: >>>> >>>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >>>> (oui Unknown), ethertype Unknown (0x3edd), length 78: >>>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc >>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >>>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >>>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >>> >>> This looks like it could be related to VLAN splinters when using >>> openvswitch. Are you using Openvswitch? Maybe >>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan >>> help. >> >> This is not related to VLAN splinters but VLANs usage in general. >> >> We have seen this before and it usually is caused by an intermediate >> device on the host having the same MTU as the interface inside the VM. >> Typically both have 1500. The VM outputs 1500 sized frames, OVS adds >> a VLAN header and that exceeds the MTU of any device on the host. >> >> Fix is to either >> >> a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to >> go out to a physical ethernet >> >> b) increase MTU of all intermediate interfaces on the host by at least >> 4 to avoid fragmentation. >> >> c) increase MTU of all soft devices on the host and enable jumbo frames >> on the physical ethernet device. >> >> I would choose b) for Neutron if tunneling is being used. If external >> VLANs are in play option c) is nice with a fallback to frags if jumbo >> frames are unsupported. >> >> >> >>> >>>> It seems that encapsulation causes the packet to break. Does anyone >>>> have any ideas on how to troubleshoot this? >>>> >>>> These VMs are on different hosts. >>>> >>>> We tried increasing the mtu on both hosts' eth4, but still no joy. >>>> >>>> We found the following, https://review.openstack.org/#/c/31518/ , >>>> which might be related, but this patch was abandoned. >>>> >>>> Any assistance would be greatly appreciated. >>>> >>>> J >> >> >> >> -------------------------------------------------------------------------------- >> >> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. 
If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing.
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list
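Rhys's question about the driver can be answered directly from the host. A minimal sketch, again assuming eth4 is the NIC in question:

    # -i reports the kernel driver bound to the interface
    ethtool -i eth4
    # the output should include a line such as:  driver: be2net

The same output also shows the driver version and firmware version, which is useful detail to include when reporting NIC issues to the list.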
From Balazs.Fulop at morganstanley.com Wed Jun 26 08:55:47 2013
From: Balazs.Fulop at morganstanley.com (Fulop, Balazs)
Date: Wed, 26 Jun 2013 08:55:47 +0000
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <19D5E254-D402-4B38-875E-6958BA0A325C@redhat.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> <19D5E254-D402-4B38-875E-6958BA0A325C@redhat.com>
Message-ID: <89073807A615C347AB3302B616522A181292CCF1@OYWEX0203N3.msad.ms.com>

Dear Rhys,

Thanks for the quick turnaround. Yes, we're using be2net for eth4. We'll give enable-vlan-splinters=true a try.

Regards,

Balazs Fulop
Morgan Stanley | Enterprise Infrastructure
Lechner Odon fasor 8 | Floor 07
Budapest, 1095
Phone: +36 1 881-3941
Balazs.Fulop at morganstanley.com

Be carbon conscious. Please consider our environment before printing this email.

-----Original Message-----
From: Rhys Oxenham [mailto:roxenham at redhat.com]
Sent: Wednesday, June 26, 2013 10:21 AM
To: Fulop, Balazs (Enterprise Infrastructure)
Cc: Thomas Graf; Chris Wright; Szombath, Lajos (Enterprise Infrastructure); rhos-list at redhat.com
Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?

Hi Balazs,

On 26 Jun 2013, at 08:30, "Fulop, Balazs" wrote:

> Dear Thomas,
>
>>> Any chance you could capture the traffic on a switch between the hosts
>>> to see if the packet is corrupted on the sending or receiving side?
>
> Given I don't maintain the switch this will be tricky but I'll try.
>
>>> benet is currently not approved for use with OVS in RDO
>
> I'm not sure I follow. Can you please elaborate?

be2net is the driver that supports the Emulex OneConnect 10G card. It is on the approved list for use with Open vSwitch and RDO/RHOS. Can you confirm you're using this driver?

>
>>> As suggested earlier, use of VLAN splinters could help to work around
>>> this issue.
>
> What are VLAN splinters? Could you please give us some documentation pointers? The following returns 404 not found:
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan
>

This link works if you delete the 'can' at the end.

Cheers
Rhys.

> Regards,
>
> Balazs Fulop
> Morgan Stanley | Enterprise Infrastructure
> Lechner Odon fasor 8 | Floor 07
> Budapest, 1095
> Phone: +36 1 881-3941
> Balazs.Fulop at morganstanley.com
>
>
> Be carbon conscious. Please consider our environment before printing this email.
>
>
> -----Original Message-----
> From: Thomas Graf [mailto:tgraf at redhat.com]
> Sent: Wednesday, June 26, 2013 1:28 AM
> To: Fulop, Balazs (Enterprise Infrastructure)
> Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com
> Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?
>> >> On 06/25/2013 09:07 PM, Gary Kotton wrote: >>> On 06/25/2013 08:54 PM, Johnray Fuller wrote: >>>> Hello, >>>> >>>> >>>> I appear to have an issue with packet fragmentation >>>> >>>> When we try to ssh from one VM to another where the VMs run on >>>> different hosts on the source host the physical link (eth4) shows: >>>> >>>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >>>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >>>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >>>> proto TCP (6), length 60) >>>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >>>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >>>> ecr 0,nop,wscale 6], length 0 >>>> >>>> While on the receiving end we see: >>>> >>>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >>>> (oui Unknown), ethertype Unknown (0x3edd), length 78: >>>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc >>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >>>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >>>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >>> >>> This looks like it could be related to VLAN splinters when using >>> openvswitch. Are you using Openvswitch? Maybe >>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan >>> help. >> >> This is not related to VLAN splinters but VLANs usage in general. >> >> We have seen this before and it usually is caused by an intermediate >> device on the host having the same MTU as the interface inside the VM. >> Typically both have 1500. The VM outputs 1500 sized frames, OVS adds >> a VLAN header and that exceeds the MTU of any device on the host. >> >> Fix is to either >> >> a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to >> go out to a physical ethernet >> >> b) increase MTU of all intermediate interfaces on the host by at least >> 4 to avoid fragmentation. >> >> c) increase MTU of all soft devices on the host and enable jumbo frames >> on the physical ethernet device. >> >> I would choose b) for Neutron if tunneling is being used. If external >> VLANs are in play option c) is nice with a fallback to frags if jumbo >> frames are unsupported. >> >> >> >>> >>>> It seems that encapsulation causes the packet to break. Does anyone >>>> have any ideas on how to troubleshoot this? >>>> >>>> These VMs are on different hosts. >>>> >>>> We tried increasing the mtu on both hosts' eth4, but still no joy. >>>> >>>> We found the following, https://review.openstack.org/#/c/31518/ , >>>> which might be related, but this patch was abandoned. >>>> >>>> Any assistance would be greatly appreciated. >>>> >>>> J >> >> >> >> -------------------------------------------------------------------------------- >> >> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. 
This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. > > > > -------------------------------------------------------------------------------- > > NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------------------------------------------------------------------------- NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. From tgraf at redhat.com Wed Jun 26 09:52:19 2013 From: tgraf at redhat.com (Thomas Graf) Date: Wed, 26 Jun 2013 11:52:19 +0200 Subject: [rhos-list] Quantum: Packet Fragmentation Issue? In-Reply-To: <19D5E254-D402-4B38-875E-6958BA0A325C@redhat.com> References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> <19D5E254-D402-4B38-875E-6958BA0A325C@redhat.com> Message-ID: <51CAB9D3.3050208@redhat.com> On 06/26/2013 10:21 AM, Rhys Oxenham wrote: > Hi Balazs, > > On 26 Jun 2013, at 08:30, "Fulop, Balazs" wrote: > >> Dear Thomas, >> >>>> Any chance you could capture the traffic on a switch between the hosts >>>> to see if the packet is corrupted on the sending or receiving side? >> >> Given I don't maintain the switch this will be tricky but I'll try. >> >>>> benet is currently not approved for use with OVS in RDO >> >> I'm not sure I follow. Can you please elaborate? > > be2net is the driver that supports the Emulex OneConnect 10G card. It is on the approved list for use with Open vSwitch and RDO/RHOS. 
be2net was on the approved list for RHEL 6.4, but a regression causes the driver to be non-functional for VLAN usage in the current RDO kernel. We are working on a fix.

The use of VLAN splinters should work around the problem.

> Can you confirm you're using this driver?
>
>>
>>>> As suggested earlier, use of VLAN splinters could help to work around
>>>> this issue.
>>
>> What are VLAN splinters? Could you please give us some documentation pointers? The following returns 404 not found:
>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan
>>
>
> This link works if you delete the 'can' at the end.
>
> Cheers
> Rhys.
>
>> Regards,
>>
>> Balazs Fulop
>> Morgan Stanley | Enterprise Infrastructure
>> Lechner Odon fasor 8 | Floor 07
>> Budapest, 1095
>> Phone: +36 1 881-3941
>> Balazs.Fulop at morganstanley.com
>>
>>
>> Be carbon conscious. Please consider our environment before printing this email.
>>
>>
>> -----Original Message-----
>> From: Thomas Graf [mailto:tgraf at redhat.com]
>> Sent: Wednesday, June 26, 2013 1:28 AM
>> To: Fulop, Balazs (Enterprise Infrastructure)
>> Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com
>> Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?
>>> >>> On 06/25/2013 09:07 PM, Gary Kotton wrote: >>>> On 06/25/2013 08:54 PM, Johnray Fuller wrote: >>>>> Hello, >>>>> >>>>> >>>>> I appear to have an issue with packet fragmentation >>>>> >>>>> When we try to ssh from one VM to another where the VMs run on >>>>> different hosts on the source host the physical link (eth4) shows: >>>>> >>>>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >>>>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >>>>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >>>>> proto TCP (6), length 60) >>>>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >>>>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >>>>> ecr 0,nop,wscale 6], length 0 >>>>> >>>>> While on the receiving end we see: >>>>> >>>>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >>>>> (oui Unknown), ethertype Unknown (0x3edd), length 78: >>>>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc >>>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >>>>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >>>>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >>>> >>>> This looks like it could be related to VLAN splinters when using >>>> openvswitch. Are you using Openvswitch? Maybe >>>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan >>>> help. >>> >>> This is not related to VLAN splinters but VLANs usage in general. >>> >>> We have seen this before and it usually is caused by an intermediate >>> device on the host having the same MTU as the interface inside the VM. >>> Typically both have 1500. The VM outputs 1500 sized frames, OVS adds >>> a VLAN header and that exceeds the MTU of any device on the host. >>> >>> Fix is to either >>> >>> a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to >>> go out to a physical ethernet >>> >>> b) increase MTU of all intermediate interfaces on the host by at least >>> 4 to avoid fragmentation. >>> >>> c) increase MTU of all soft devices on the host and enable jumbo frames >>> on the physical ethernet device. >>> >>> I would choose b) for Neutron if tunneling is being used. If external >>> VLANs are in play option c) is nice with a fallback to frags if jumbo >>> frames are unsupported. >>> >>> >>> >>>> >>>>> It seems that encapsulation causes the packet to break. Does anyone >>>>> have any ideas on how to troubleshoot this? >>>>> >>>>> These VMs are on different hosts. >>>>> >>>>> We tried increasing the mtu on both hosts' eth4, but still no joy. >>>>> >>>>> We found the following, https://review.openstack.org/#/c/31518/ , >>>>> which might be related, but this patch was abandoned. >>>>> >>>>> Any assistance would be greatly appreciated. >>>>> >>>>> J >>> >>> >>> >>> -------------------------------------------------------------------------------- >>> >>> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. 
This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. >> >> >> >> -------------------------------------------------------------------------------- >> >> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. If you cannot access these links, please notify us by reply message and we will send the contents to you. By messaging with Morgan Stanley you consent to the foregoing. >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list From jmh at redhat.com Wed Jun 26 11:38:03 2013 From: jmh at redhat.com (Jan Mark Holzer) Date: Wed, 26 Jun 2013 07:38:03 -0400 Subject: [rhos-list] Quantum: Packet Fragmentation Issue? In-Reply-To: <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> Message-ID: <51CAD29B.3010902@redhat.com> Hello, On 06/26/2013 03:30 AM, Fulop, Balazs wrote: > Dear Thomas, > >>> Any chance you could capture the traffic on a switch between the hosts >>> to see if the packet is corrupted on the sending or receiving side? > Given I don't maintain the switch this will be tricky but I'll try. > >>> benet is currently not approved for use with OVS in RDO > I'm not sure I follow. Can you please elaborate? We have a list of drivers which are known to work with OVS and there is a KBase article available at https://access.redhat.com/site/articles/289823 which lists the drivers and a workaround for others The driver you're using (benet) is currently not on the list of "working" drivers. However many of these drivers are in the process of being fixed In the meantime you could try to use the VLAN splinter workaround as described in https://access.redhat.com/site/articles/289823 (ie # ovs-vsctl set int [$DEV] other-config:enable-vlan-splinters=true ) If you could report back if the workaround did help that would be great >>> As suggested earlier, use of VLAN splinters could help to work around >>> this issue. > What are VLAN splinters? Could you please give us some documentation pointers? 
The following returns 404 not found: > http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan I'll try the pointer again and hopefully it will work this time around :) http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEAD Hth, Jan > Regards, > > Balazs Fulop > Morgan Stanley | Enterprise Infrastructure > Lechner Odon fasor 8 | Floor 07 > Budapest, 1095 > Phone: +36 1 881-3941 > Balazs.Fulop at morganstanley.com > > > Be carbon conscious. Please consider our environment before printing this email. > > > -----Original Message----- > From: Thomas Graf [mailto:tgraf at redhat.com] > Sent: Wednesday, June 26, 2013 1:28 AM > To: Fulop, Balazs (Enterprise Infrastructure) > Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com > Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? > > On 06/25/2013 09:42 PM, Fulop, Balazs wrote: >> Dear All, >> >> Thank you for looking into this. I tried option a) but unfortunately the packets still appear to get scrambled (tcpdump effectively shows the same and ssh will never connect). The network card used on both hosts in this demo cluster is: >> Emulex Corporation OneConnect 10Gb NIC >> >> If you have any further ideas on a possible resolution / workaround, please kindly let me know. > Looking at your packet capture below it seems like a VLAN tag has been > inserted but has been corrupted [0x3edd 0x3c4e]. > > Any chance you could capture the traffic on a switch between the hosts > to see if the packet is corrupted on the sending or receiving side? > > benet is currently not approved for use with OVS in RDO, you can find > the latest list here: > https://access.redhat.com/site/articles/289823 > > As suggested earlier, use of VLAN splinters could help to work around > this issue. > > Best, > Thomas > >> Regards, >> >> Balazs Fulop >> Morgan Stanley | Enterprise Infrastructure >> Lechner Odon fasor 8 | Floor 07 >> Budapest, 1095 >> Phone: +36 1 881-3941 >> Balazs.Fulop at morganstanley.com >> >> >> Be carbon conscious. Please consider our environment before printing this email. >> >> >> -----Original Message----- >> From: Thomas Graf [mailto:tgraf at redhat.com] >> Sent: Tuesday, June 25, 2013 9:18 PM >> To: gkotton at redhat.com >> Cc: Johnray Fuller; rhos-list at redhat.com; Fulop, Balazs (Enterprise Infrastructure); Chris Wright; Robert Kukura; jmh at redhat.com >> Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue? 
>> >> On 06/25/2013 09:07 PM, Gary Kotton wrote: >>> On 06/25/2013 08:54 PM, Johnray Fuller wrote: >>>> Hello, >>>> >>>> >>>> I appear to have an issue with packet fragmentation >>>> >>>> When we try to ssh from one VM to another where the VMs run on >>>> different hosts on the source host the physical link (eth4) shows: >>>> >>>> 13:24:19.520812 fa:16:3e:dd:3c:4e (oui Unknown) > fa:16:3e:6c:eb:80 >>>> (oui Unknown), ethertype 802.1Q (0x8100), length 78: vlan 2, p 0, >>>> ethertype IPv4, (tos 0x0, ttl 64, id 31721, offset 0, flags [DF], >>>> proto TCP (6), length 60) >>>> 10.0.0.11.60025 > 10.0.0.12.ssh: Flags [S], cksum 0xdb91 (correct), >>>> seq 3087184310, win 14600, options [mss 1460,sackOK,TS val 11061855 >>>> ecr 0,nop,wscale 6], length 0 >>>> >>>> While on the receiving end we see: >>>> >>>> 13:24:20.555105 3e:6c:eb:80:fa:16 (oui Unknown) > 3c:4e:fa:16:fa:16 >>>> (oui Unknown), ethertype Unknown (0x3edd), length 78: >>>> 0x0000: 3c4e 0800 4500 003c 7be9 4000 4006 aabc >>> 0x0010: 0a00 000b 0a00 000c ea79 0016 b802 b1b6 .........y...... >>>> 0x0020: 0000 0000 a002 3908 db91 0000 0204 05b4 ......9......... >>>> 0x0030: 0402 080a 00a8 ca5f 0000 0000 0103 0306 ......._........ >>>> >>> This looks like it could be related to VLAN splinters when using >>> openvswitch. Are you using Openvswitch? Maybe >>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan >>> help. >> This is not related to VLAN splinters but VLANs usage in general. >> >> We have seen this before and it usually is caused by an intermediate >> device on the host having the same MTU as the interface inside the VM. >> Typically both have 1500. The VM outputs 1500 sized frames, OVS adds >> a VLAN header and that exceeds the MTU of any device on the host. >> >> Fix is to either >> >> a) decrease the MTU inside the VM by 4 if VLAN tagged packets are to >> go out to a physical ethernet >> >> b) increase MTU of all intermediate interfaces on the host by at least >> 4 to avoid fragmentation. >> >> c) increase MTU of all soft devices on the host and enable jumbo frames >> on the physical ethernet device. >> >> I would choose b) for Neutron if tunneling is being used. If external >> VLANs are in play option c) is nice with a fallback to frags if jumbo >> frames are unsupported. >> >> >> >>>> It seems that encapsulation causes the packet to break. Does anyone >>>> have any ideas on how to troubleshoot this? >>>> >>>> These VMs are on different hosts. >>>> >>>> We tried increasing the mtu on both hosts' eth4, but still no joy. >>>> >>>> We found the following, https://review.openstack.org/#/c/31518/ , >>>> which might be related, but this patch was abandoned. >>>> >>>> Any assistance would be greatly appreciated. >>>> >>>> J >>>> >> >> >> -------------------------------------------------------------------------------- >> >> NOTICE: Morgan Stanley is not acting as a municipal advisor and the opinions or views contained herein are not intended to be, and do not constitute, advice within the meaning of Section 975 of the Dodd-Frank Wall Street Reform and Consumer Protection Act. If you have received this communication in error, please destroy all electronic and paper copies and notify the sender immediately. Mistransmission is not intended to waive confidentiality or privilege. Morgan Stanley reserves the right, to the extent permitted under applicable law, to monitor electronic communications. This message is subject to terms available at the following link: http://www.morganstanley.com/disclaimers. 
From Balazs.Fulop at morganstanley.com Wed Jun 26 13:49:57 2013
From: Balazs.Fulop at morganstanley.com (Fulop, Balazs)
Date: Wed, 26 Jun 2013 13:49:57 +0000
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <51CAD29B.3010902@redhat.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> <51CAD29B.3010902@redhat.com>
Message-ID: <89073807A615C347AB3302B616522A181292E0ED@OYWEX0203N3.msad.ms.com>

Dear All,

Thanks for all the responses. The "VLAN splinters" trick worked and this networking issue has been resolved.

Regards,

Balazs Fulop
Morgan Stanley | Enterprise Infrastructure
Lechner Odon fasor 8 | Floor 07
Budapest, 1095
Phone: +36 1 881-3941
Balazs.Fulop at morganstanley.com

Be carbon conscious. Please consider our environment before printing this email.

From: Jan Mark Holzer [mailto:jmh at redhat.com]
Sent: Wednesday, June 26, 2013 1:38 PM
To: Fulop, Balazs (Enterprise Infrastructure)
Cc: Thomas Graf; gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; Szombath, Lajos (Enterprise Infrastructure)
Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?

Hello,

On 06/26/2013 03:30 AM, Fulop, Balazs wrote:
> Dear Thomas,
>
>> Any chance you could capture the traffic on a switch between the hosts
>> to see if the packet is corrupted on the sending or receiving side?
>
> Given I don't maintain the switch this will be tricky, but I'll try.
>
>> benet is currently not approved for use with OVS in RDO
>
> I'm not sure I follow. Can you please elaborate?

We have a list of drivers which are known to work with OVS; there is a KBase article available at https://access.redhat.com/site/articles/289823 which lists the drivers and a workaround for the others. The driver you're using (benet) is currently not on the list of "working" drivers. However, many of these drivers are in the process of being fixed. In the meantime you could try to use the VLAN splinter workaround as described in https://access.redhat.com/site/articles/289823, i.e.

  # ovs-vsctl set int [$DEV] other-config:enable-vlan-splinters=true

If you could report back whether the workaround helped, that would be great.

>> As suggested earlier, use of VLAN splinters could help to work around
>> this issue.
>
> What are VLAN splinters? Could you please give us some documentation
> pointers? The following returns 404 not found:
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEADcan

I'll try the pointer again and hopefully it will work this time around :)
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEAD

Hth,
Jan

-----Original Message-----
From: Thomas Graf [mailto:tgraf at redhat.com]
Sent: Wednesday, June 26, 2013 1:28 AM
To: Fulop, Balazs (Enterprise Infrastructure)
Cc: gkotton at redhat.com; Johnray Fuller; rhos-list at redhat.com; Chris Wright; Robert Kukura; jmh at redhat.com
Subject: Re: [rhos-list] Quantum: Packet Fragmentation Issue?

On 06/25/2013 09:42 PM, Fulop, Balazs wrote:
> Dear All,
>
> Thank you for looking into this. I tried option a) but unfortunately the
> packets still appear to get scrambled (tcpdump effectively shows the same
> and ssh will never connect). The network card used on both hosts in this
> demo cluster is:
> Emulex Corporation OneConnect 10Gb NIC
>
> If you have any further ideas on a possible resolution / workaround,
> please kindly let me know.

Looking at your packet capture below, it seems a VLAN tag has been inserted but has been corrupted [0x3edd 0x3c4e].

Any chance you could capture the traffic on a switch between the hosts to see if the packet is corrupted on the sending or receiving side?

benet is currently not approved for use with OVS in RDO; you can find the latest list here:
https://access.redhat.com/site/articles/289823

As suggested earlier, use of VLAN splinters could help to work around this issue.

Best,
Thomas
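A short sketch of Jan's workaround, assuming eth4 is the interface attached to OVS (the interface name is illustrative; only the ovs-vsctl command itself comes from the thread):

  # confirm which kernel driver the interface uses (benet in this thread)
  ethtool -i eth4
  # enable the VLAN splinters workaround on that interface
  ovs-vsctl set interface eth4 other-config:enable-vlan-splinters=true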
From tgraf at redhat.com Wed Jun 26 14:07:52 2013
From: tgraf at redhat.com (Thomas Graf)
Date: Wed, 26 Jun 2013 16:07:52 +0200
Subject: [rhos-list] Quantum: Packet Fragmentation Issue?
In-Reply-To: <89073807A615C347AB3302B616522A181292E0ED@OYWEX0203N3.msad.ms.com>
References: <51C9D96A.80204@redhat.com> <51C9EA8C.4060506@redhat.com> <51C9ECD2.2040905@redhat.com> <89073807A615C347AB3302B616522A181292B3FC@OYWEX0203N3.msad.ms.com> <51CA276A.7060508@redhat.com> <89073807A615C347AB3302B616522A181292CC00@OYWEX0203N3.msad.ms.com> <51CAD29B.3010902@redhat.com> <89073807A615C347AB3302B616522A181292E0ED@OYWEX0203N3.msad.ms.com>
Message-ID: <51CAF5B8.7040702@redhat.com>

On 06/26/2013 03:49 PM, Fulop, Balazs wrote:
> Dear All,
>
> Thanks for all the responses. The "VLAN splinters" trick worked and this
> networking issue has been resolved.

Thanks for the confirmation. We will let you know when an updated driver is available that no longer requires the VLAN splinters workaround.

Best,
Thomas
From rich.minton at lmco.com Wed Jun 26 14:52:19 2013
From: rich.minton at lmco.com (Minton, Rich)
Date: Wed, 26 Jun 2013 14:52:19 +0000
Subject: [rhos-list] Packstack variables.
Message-ID:

Is there any chance we can add the capability to set the Region in packstack? I have multiple sites that I want to name something other than "RegionOne".

Also, how about the capability to make the public endpoint URL something different than the private endpoint URL? We use Chef to configure VMs and also allow VMs to make calls to the nova CLI. I have to change the public URL manually after packstack finishes the install.

Just some thoughts.

Thank you,
Rick

Richard Minton
LMICC Systems Administrator
4000 Geerdes Blvd, 13D31
King of Prussia, PA 19406
Phone: 610-354-5482

From pmyers at redhat.com Wed Jun 26 14:58:20 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 26 Jun 2013 10:58:20 -0400
Subject: [rhos-list] Packstack variables.
In-Reply-To:
References:
Message-ID: <51CB018C.7080208@redhat.com>

On 06/26/2013 10:52 AM, Minton, Rich wrote:
> Is there any chance we can add the capability to set the Region in
> packstack? I have multiple sites that I want to name something other
> than "RegionOne".
I think this came up in the past, but I don't remember the discussion and whether there was a good reason not to do this, or if it was just a feature that we need to add. Can you file a feature request on this?

https://bugzilla.redhat.com/enter_bug.cgi?product=Red+Hat+OpenStack

We can track it there :)

> Also, how about the capability to make the public endpoint URL something
> different than the private endpoint URL? We use Chef to configure VMs
> and also allow VMs to make calls to the nova CLI. I have to change the
> public URL manually after packstack finishes the install.

This one I'm not sure about; I've cc'd some folks onto the thread.

Perry
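For the second request, the manual step Rick describes is normally done with the keystone client when recreating the compute endpoint; a hedged sketch (the region name, service ID, and URLs are placeholders, not values from this thread):

  keystone endpoint-create \
    --region SiteA \
    --service-id <compute-service-id> \
    --publicurl 'http://public.example.com:8774/v2/$(tenant_id)s' \
    --internalurl 'http://10.0.0.10:8774/v2/$(tenant_id)s' \
    --adminurl 'http://10.0.0.10:8774/v2/$(tenant_id)s'

The single quotes keep the shell from expanding $(tenant_id)s before Keystone stores it.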
From Hao.Chen at NRCan-RNCan.gc.ca Wed Jun 26 22:13:32 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Wed, 26 Jun 2013 22:13:32 +0000
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <9029F216-623E-48EC-811C-A7F9635BA072@redhat.com>
References: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca> <9029F216-623E-48EC-811C-A7F9635BA072@redhat.com>
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB646FF94B6@S-BSC-MBX2.nrn.nrcan.gc.ca>

Hi Rhys and Pete,

Thanks very much for your quick responses. Really appreciate your help. We are trying to install Red Hat OpenStack and see if it works for us.

After unsetting the token and service endpoint environment variables, I ran into the following error:

[root at cloud1 ~(keystone_user)]# keystone user-list
Unable to communicate with identity service: {"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}. (HTTP 404)

From the log file:

2013-06-26 14:07:29 INFO [access] 10.2.0.196 - - [26/Jun/2013:21:07:29 +0000] "POST http://cloud1.nfis.org:35357/v2.0/tokens HTTP/1.0" 200 2197
2013-06-26 14:07:29 INFO [access] 10.2.0.196 - - [26/Jun/2013:21:07:29 +0000] "GET http://cloud1.nfis.org:35357/v3.0/users HTTP/1.0" 404 93

The identity service is looking for v3.0. However, this endpoint does not seem to exist, even though no error occurred when running "keystone endpoint-create ..." for v3.0.

http://cloud1.nfis.org:35357/v3.0

The keystone rc file is like this:

export OS_USERNAME=nfis
export OS_TENANT_NAME=nfis
export OS_PASSWORD=password
export OS_AUTH_URL=http://cloud1.nfis.org:5000/v2.0/

Thanks and best regards,
Hao

-----Original Message-----
From: Rhys Oxenham [mailto:roxenham at redhat.com]
Sent: June 25, 2013 13:10
To: Chen, Hao
Cc: rhos-list at redhat.com
Subject: Re: [rhos-list] Bypassing authentication

Hi Hao,

It looks like you've already got a token and endpoint set within your environment variables. Are you just finishing off an installation?

By the looks of it you've created an rc file which sets up the environment with an authentication URL and an associated username/password, but either you've hard-coded the service token and endpoint here or the values are still set. I suggest you check your rc file to ensure that it only contains your username, password, tenant and authentication URL, and unset the token and service endpoint environment variables. Then you can re-attempt your commands.

This is likely to be the cause of both of your problems. Let us know whether that works for you.

Kindest Regards,
Rhys

--
Rhys Oxenham
Cloud Solution Architect, Red Hat UK
e: roxenham at redhat.com
m: +44 (0)7866 446625

On 25 Jun 2013, at 21:01, "Chen, Hao" wrote:

> Greetings,
>
> (1) Validating the OpenStack Identity Service shows a warning message "WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored)."
> [root at cloud1 ~(keystone_user)]# keystone user-list
> WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
> +----------------------------------+-------+---------+-------+
> |                id                | name  | enabled | email |
> +----------------------------------+-------+---------+-------+
> | f22063c121b949a8a5b86df453b75a33 | admin |  True   |       |
> | 4e0226b27ce546f99bac39270a2db50c | aft   |  True   |       |
> | 96fd8489a1d644bbb173c3c2c406d2dc | nfis  |  True   |       |
> +----------------------------------+-------+---------+-------+
>
> (2) keystone token-get error.
> [root at cloud1 ~(keystone_user)]# keystone token-get
> WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored).
> Configuration error: Client configured to run without a service catalog. Run the client using --os-auth-url or OS_AUTH_URL, instead of --os-endpoint or OS_SERVICE_ENDPOINT, for example.
>
> [root at cloud1 ~]# vi /etc/qpidd.conf
> cluster-mechanism=PLAIN
> auth=yes
>
> We would be very grateful if anyone could provide any suggestions or solutions to fix these problems.
>
> Hao Chen
>
> Natural Resources Canada / Ressources naturelles Canada
> Canadian Forest Service / Service canadien des forêts
> Pacific Forestry Centre / Centre de foresterie du Pacifique
> 506 W. Burnside Road / 506 rue Burnside Ouest
> Victoria, BC V8Z 1M5 / Victoria, C-B V8Z 1M5

From zaitcev at redhat.com Wed Jun 26 22:29:45 2013
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Wed, 26 Jun 2013 16:29:45 -0600
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <76CC67FD1C99DB4DB4D43FEF354AADB646FF94B6@S-BSC-MBX2.nrn.nrcan.gc.ca>
References: <76CC67FD1C99DB4DB4D43FEF354AADB646FF8DB5@S-BSC-MBX2.nrn.nrcan.gc.ca> <9029F216-623E-48EC-811C-A7F9635BA072@redhat.com> <76CC67FD1C99DB4DB4D43FEF354AADB646FF94B6@S-BSC-MBX2.nrn.nrcan.gc.ca>
Message-ID: <20130626162945.52758c17@lembas.zaitcev.lan>

On Wed, 26 Jun 2013 22:13:32 +0000
"Chen, Hao" wrote:

> 2013-06-26 14:07:29 INFO [access] 10.2.0.196 - - [26/Jun/2013:21:07:29 +0000] "GET http://cloud1.nfis.org:35357/v3.0/users HTTP/1.0" 404 93
>
> The identity service is looking for v3.0. However, this endpoint does not
> seem to exist, even though no error occurred when running "keystone
> endpoint-create ..." for v3.0.

I am not sure if we even support Keystone v3 in Grizzly, never mind Folsom (RHOS 2.1). You need to ask someone who knows for sure. Sorry, I'm not an expert in Keystone.

Until then, I would just stick to v2, which is known to work.

-- Pete
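One quick way to see which identity API versions the server actually exposes is to ask the endpoint root directly; a small sketch against the admin port from Hao's log (the exact JSON layout varies by release):

  # lists the API versions the server serves; v2.0 should appear, and a
  # v3 entry only if the server really supports it
  curl http://cloud1.nfis.org:35357/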
From gcheng at salesforce.com Wed Jun 26 23:49:08 2013
From: gcheng at salesforce.com (Guolin Cheng)
Date: Wed, 26 Jun 2013 16:49:08 -0700
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <20130626162945.52758c17@lembas.zaitcev.lan>
Message-ID:

Hi Pete and all,

Same problems on my evaluation site too.

The document Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf -- page 52 -- instructs to create V3.0 service endpoints.

But when I follow it and create v3.0 service endpoints, I see the same problems when I run 'keystone user-list' or other commands.

The quick-and-dirty fix that worked here was to remove all the v3.0 endpoints from the mysql database directly. Before that I sourced the keystonerc file and tried the 'keystone endpoint-delete <endpoint-id>' operation, but that operation failed.

After the clean-up operation above, all the keystone operations work without a glitch.

Is this an upstream OpenStack Grizzly problem, or a Red Hat packaging / documentation issue? I followed the Red Hat installation documents step by step exactly.

Thanks.
Guolin

On 6/26/13 3:29 PM, "Pete Zaitcev" wrote:

> I am not sure if we even support Keystone v3 in Grizzly, never mind
> Folsom (RHOS 2.1). You need to ask someone who knows for sure.
> Sorry, I'm not an expert in Keystone.
>
> Until then, I would just stick to v2, which is known to work.
>
> -- Pete

From zaitcev at redhat.com Thu Jun 27 00:09:11 2013
From: zaitcev at redhat.com (Pete Zaitcev)
Date: Wed, 26 Jun 2013 18:09:11 -0600
Subject: [rhos-list] Bypassing authentication
In-Reply-To:
References: <20130626162945.52758c17@lembas.zaitcev.lan>
Message-ID: <20130626180911.71584394@lembas.zaitcev.lan>

On Wed, 26 Jun 2013 16:49:08 -0700
Guolin Cheng wrote:

> The document Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf
> -- page 52 -- instructs to create V3.0 service endpoints.

Hmm, indeed. This is clearly expected to work. Not being an expert in Keystone, I cannot help to debug this further, sorry. I would just make sure again I wasn't trying to install RHOS 2.1 packages using the RHOS 3 instructions.

> Is this an upstream OpenStack Grizzly problem, or a Red Hat packaging /
> documentation issue? I followed the Red Hat installation documents step by
> step exactly.

If docs are wrong, it is our problem, IMHO. Opening a bug would help fixing it up, but I suspect that it's not a bug in docs at this point.

-- Pete

From pmyers at redhat.com Thu Jun 27 00:53:41 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 26 Jun 2013 20:53:41 -0400
Subject: [rhos-list] Bypassing authentication
In-Reply-To:
References:
Message-ID: <51CB8D15.8020201@redhat.com>

On 06/26/2013 07:49 PM, Guolin Cheng wrote:
> Hi Pete and all,
>
> Same problems on my evaluation site too.
>
> The document Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf
> -- page 52 -- instructs to create V3.0 service endpoints.
>
> But when I follow it and create v3.0 service endpoints, I see the same
> problems when I run 'keystone user-list' or other commands.
>
> Is this an upstream OpenStack Grizzly problem, or a Red Hat packaging /
> documentation issue? I followed the Red Hat installation documents step by
> step exactly.

Adding some keystone experts to the thread :)
From gcheng at salesforce.com Thu Jun 27 01:07:18 2013
From: gcheng at salesforce.com (Guolin Cheng)
Date: Wed, 26 Jun 2013 18:07:18 -0700
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <20130626180911.71584394@lembas.zaitcev.lan>
Message-ID:

Hi Pete,

I assume that the header version in the Red Hat OpenStack repo is 'grizzly' - see the OpenStack release/name table at http://docs.openstack.org/trunk/openstack-compute/install/yum/content/version.html.

[coolguy at testcloud ~]$ keystone-manage --version
2013.1.1

So I assume that the installed OpenStack keystone RPM version is compatible with the Red Hat OpenStack installation guide version.

BTW, could you help explain RHOS 2.1 versus RHOS 3? Does RHOS 3 mean Grizzly, and RHOS 2.1 some patch level of Folsom?

Thanks.
--Guolin

On 6/26/13 5:09 PM, "Pete Zaitcev" wrote:

> Hmm, indeed. This is clearly expected to work. Not being an expert
> in Keystone, I cannot help to debug this further, sorry. I would just
> make sure again I wasn't trying to install RHOS 2.1 packages using
> the RHOS 3 instructions.
>
> If docs are wrong, it is our problem, IMHO. Opening a bug would help
> fixing it up, but I suspect that it's not a bug in docs at this point.
>
> -- Pete

From sgordon at redhat.com Thu Jun 27 01:11:05 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 26 Jun 2013 21:11:05 -0400 (EDT)
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <20130626180911.71584394@lembas.zaitcev.lan>
References: <20130626162945.52758c17@lembas.zaitcev.lan> <20130626180911.71584394@lembas.zaitcev.lan>
Message-ID: <176038331.28897141.1372295465804.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Pete Zaitcev"
> To: "Guolin Cheng"
> Cc: "Hao Chen" , rhos-list at redhat.com
> Sent: Wednesday, June 26, 2013 8:09:11 PM
> Subject: Re: [rhos-list] Bypassing authentication
>
> Hmm, indeed. This is clearly expected to work. Not being an expert
> in Keystone, I cannot help to debug this further, sorry. I would just
> make sure again I wasn't trying to install RHOS 2.1 packages using
> the RHOS 3 instructions.
>
> If docs are wrong, it is our problem, IMHO. Opening a bug would help
> fixing it up, but I suspect that it's not a bug in docs at this point.

v3.0 is grizzly only; in the procedure we create both a v2.0 and a v3.0 endpoint. My understanding was/is that this means clients can use the one they know (until v2.0 is deprecated in a future release anyway). The bug that resulted in v3.0 endpoint creation being added to the docs is here:

https://bugzilla.redhat.com/show_bug.cgi?id=961441

Not saying there isn't potentially a mistake/issue here, just attempting to clarify why creation of a v3.0 endpoint was added to the guide. Note that there was no RHOS 2.1 version of this particular guide.

Thanks,
Steve

From sgordon at redhat.com Thu Jun 27 01:31:22 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 26 Jun 2013 21:31:22 -0400 (EDT)
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <51CB8D15.8020201@redhat.com>
References: <51CB8D15.8020201@redhat.com>
Message-ID: <363212517.28901056.1372296682255.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Perry Myers"
> To: "Guolin Cheng" , "Adam Young" , "Alan Pevec"
> Cc: "Hao Chen" , rhos-list at redhat.com
> Sent: Wednesday, June 26, 2013 8:53:41 PM
> Subject: Re: [rhos-list] Bypassing authentication
>
> On 06/26/2013 07:49 PM, Guolin Cheng wrote:
> > The document Red_Hat_OpenStack-3-Installation_and_Configuration_Guide-en-US.pdf
> > -- page 52 -- instructs to create V3.0 service endpoints.
> >
> > But when I follow it and create v3.0 service endpoints, I see the same
> > problems when I run 'keystone user-list' or other commands.
>
> Adding some keystone experts to the thread :)

Looking closer, I think one of the issues is that in the endpoint definition I used "v3.0" instead of "v3". The other question I have for the Keystone gurus, though, is: to support both versions of the API do you need both endpoint definitions, or only "v3"? If we can confirm this one way or another I will update the documentation.

Thanks,
Steve

From pmyers at redhat.com Thu Jun 27 01:41:34 2013
From: pmyers at redhat.com (Perry Myers)
Date: Wed, 26 Jun 2013 21:41:34 -0400
Subject: [rhos-list] Bypassing authentication
In-Reply-To:
References:
Message-ID: <51CB984E.2090901@redhat.com>

On 06/26/2013 09:07 PM, Guolin Cheng wrote:
> Hi Pete,
>
> I assume that the header version in the Red Hat OpenStack repo is 'grizzly'
> - see the OpenStack release/name table at
> http://docs.openstack.org/trunk/openstack-compute/install/yum/content/version.html.
> [coolguy at testcloud ~]$ keystone-manage --version
> 2013.1.1
>
> So I assume that the installed OpenStack keystone RPM version is
> compatible with the Red Hat OpenStack installation guide version.

Yes, but we should (in a day or so) have updated packages that update to the 2013.1.2 stable branch releases.

> BTW, could you help explain RHOS 2.1 versus RHOS 3? Does RHOS 3 mean
> Grizzly, and RHOS 2.1 some patch level of Folsom?

Yes, that is correct. You can see that more explicitly on the main docs page here:
https://access.redhat.com/site/documentation/Red_Hat_OpenStack/

From gcheng at salesforce.com Thu Jun 27 02:11:52 2013
From: gcheng at salesforce.com (Guolin Cheng)
Date: Wed, 26 Jun 2013 19:11:52 -0700
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <363212517.28901056.1372296682255.JavaMail.root@redhat.com>
Message-ID:

Hi Steve,

So, on page 56 of the document, it should be 'v3' instead of 'v3.0'? That will be a quick fix, then.

I'm just following the document for an under-the-hood installation evaluation, so that I can have a better understanding of Red Hat OpenStack in case something goes south.

If the Grizzly version supports both the version 2.0 API and the version 3 API, sure, I'd like to have both enabled.

Thanks.
Guolin

On 6/26/13 6:31 PM, "Steve Gordon" wrote:

> Looking closer, I think one of the issues is that in the endpoint definition
> I used "v3.0" instead of "v3". The other question I have for the Keystone
> gurus, though, is: to support both versions of the API do you need both
> endpoint definitions, or only "v3"? If we can confirm this one way or
> another I will update the documentation.
>
> Thanks,
> Steve
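If Steve's reading is right, the corrected step from the guide would look roughly like this - a sketch only, reusing the host and ports from Hao's earlier message, with the service ID as a placeholder:

  keystone endpoint-create \
    --service-id <identity-service-id> \
    --publicurl 'http://cloud1.nfis.org:5000/v3' \
    --adminurl 'http://cloud1.nfis.org:35357/v3' \
    --internalurl 'http://cloud1.nfis.org:5000/v3'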
From sgordon at redhat.com Thu Jun 27 03:22:21 2013
From: sgordon at redhat.com (Steve Gordon)
Date: Wed, 26 Jun 2013 23:22:21 -0400 (EDT)
Subject: [rhos-list] Bypassing authentication
In-Reply-To:
References:
Message-ID: <1919708911.28934965.1372303341872.JavaMail.root@redhat.com>

----- Original Message -----
> From: "Guolin Cheng"
> To: "Steve Gordon" , "Perry Myers"
> Cc: "Adam Young" , "Alan Pevec" , "Hao Chen" ,
> rhos-list at redhat.com
> Sent: Wednesday, June 26, 2013 10:11:52 PM
> Subject: Re: [rhos-list] Bypassing authentication
>
> Hi Steve,
>
> So, on page 56 of the document, it should be 'v3' instead of 'v3.0'? That
> will be a quick fix, then.

Yes, it appears that is the case. I also want to confirm, though, whether you need *both* endpoints or just v3, with the clients auto-negotiating the finer details, but I'll raise a bug now to track the fact that we definitely need to make that change. Apologies for the inconvenience!

Thanks,
Steve

From jpichon at redhat.com Thu Jun 27 09:55:36 2013
From: jpichon at redhat.com (Julie Pichon)
Date: Thu, 27 Jun 2013 05:55:36 -0400 (EDT)
Subject: [rhos-list] novaclient issue
In-Reply-To:
References: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> <51C98911.9000004@redhat.com>
Message-ID: <1715277141.5945558.1372326936176.JavaMail.root@redhat.com>

Hello,

"Lutz Christoph" wrote:
>> could you please explain a bit more, how you installed your OpenStack
>> deployment as well? ... is it a multi-node environment? SELinux enforcing?
>
> Everything but nova is on a VM running on RHEL 6.4 and KVM (RHEV). I have one
> nova node that refers back to the "all the rest" VM running on hardware that
> supports KVM.

Could you confirm the versions for nova on your nova node, and the nova client on your "all the rest" VM?

>> By any chance, is your Dashboard host able to connect to the nova host,
>> given by keystone catalog (Service compute)?
>
> Dashboard is doing a login, authenticated by keystone.
> The request from the browser that fails is this:
>
> GET /dashboard/admin/ HTTP/1.1
> Host: rhopenstack.example.com
> User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20100101 Firefox/21.0
> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language: en-us,en;q=0.8,de-de;q=0.5,de;q=0.3
> Accept-Encoding: gzip, deflate
> DNT: 1
> Referer: http://rhopenstack.example.com/dashboard
> Cookie: csrftoken=KdRnLZQvRfAtBcyHmGlQuHEmoxU1L2QO; sessionid=52e5ec63b1e720e255dd1791cfe9ec56
> Connection: keep-alive
>
> That triggers an internal request to the nova API daemon:
>
> GET /os-simple-tenant-usage?start=2013-06-01T00:00:00&end=2013-06-25T10:52:36.348852&detailed=1 HTTP/1.1
> Host: 192.168.104.62:8774
> X-Auth-Project-Id: 4922a6443b9347d18f67c86bfb72022b
> Accept-Encoding: gzip, deflate, compress
> Content-Length: 0
> Accept: application/json
> User-Agent: python-novaclient
> X-Auth-Token: 28734c23bdf049d0b03b34a784c152b2
>
> Nova answers with:
>
> HTTP/1.1 300 Multiple Choices
> Content-Type: application/json
> Content-Length: 337
> Date: Tue, 25 Jun 2013 10:51:37 GMT
>
> {"choices": [{"status": "CURRENT", "media-types": [{"base": "application/xml", "type": "application/vnd.openstack.compute+xml;version=2"}, {"base": "application/json", "type": "application/vnd.openstack.compute+json;version=2"}], "id": "v2.0", "links": [{"href": "http://192.168.104.62:8774/v2/os-simple-tenant-usage", "rel": "self"}]}]}

I think perhaps your nova endpoint is not configured properly. It should have the tenant id in it. See the last comment in https://bugs.launchpad.net/horizon/+bug/967391: the endpoint is likely defined as

http://192.168.104.62:8774/v2.0

when it should be

http://192.168.104.62:8774/v2/$(tenant_id)s

>> What about nova usage? Does that work for you? (Since the failing call
>> is hard-coded in novaclient) and I'm not aware of anybody else seeing
>> this issue.
>
> I don't understand what you mean by "nova usage". I'm quite sure that the
> installation instructions from Red Hat are missing something, so far they
> proved not to be exact. Many copy-and-pastoes, etc. Very entertaining.
> Anyway, I have no idea *what* needs to be done to make the nova API daemon
> return the tenant_usage data. I can't use the dashboard to create any
> objects...

I think with "nova usage", Matthias meant to ask if this is working when using the command-line tools. If the above didn't help, could you try running "$ nova usage" and "$ nova usage-list" and see if it works or if the same problem occurs? Also running these commands with the --debug flag may give more information.

If you found deficiencies in the documentation, it would be really appreciated if you could file bugs about them! We'd love to fix it and help make the process smoother.

Thanks,
Julie
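Recreating the compute endpoint with the tenant placeholder Julie points at would look roughly like this (the service ID is a placeholder; the URLs reuse the host from the capture above, and the single quotes stop the shell from expanding $(tenant_id)s):

  keystone endpoint-create \
    --service-id <compute-service-id> \
    --publicurl 'http://192.168.104.62:8774/v2/$(tenant_id)s' \
    --internalurl 'http://192.168.104.62:8774/v2/$(tenant_id)s' \
    --adminurl 'http://192.168.104.62:8774/v2/$(tenant_id)s'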
From lchristoph at arago.de Thu Jun 27 11:09:49 2013
From: lchristoph at arago.de (Lutz Christoph)
Date: Thu, 27 Jun 2013 11:09:49 +0000
Subject: [rhos-list] novaclient issue
In-Reply-To: <1715277141.5945558.1372326936176.JavaMail.root@redhat.com>
References: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> <51C98911.9000004@redhat.com> <1715277141.5945558.1372326936176.JavaMail.root@redhat.com>
Message-ID: <1299e64a17814ddcbfbafc460445ce3d@DB3PR07MB010.eurprd07.prod.outlook.com>

Hello!

I'm sorry, but I can't give you the RPM versions on the nova node. I recycled it to (successfully, BTW) test RDO. But I can tell you where I got them from: the "Red Hat OpenStack 3.0 Preview" - see https://rhn.redhat.com/network/software/channels/details.pxt?cid=18771

The "all the rest" VM is installed from the same repo.

I will try changing the endpoint. Keystone lists it as http://192.168.104.62:8774, though. No v2 or v2.0. As I can't test right now, I can only assume the novaclient code adds that. I have to build a new nova node first, though.

Same goes for "nova usage", of course (sorry for the misunderstanding).

I can't promise to file bugs for the documentation - I would have to repeat all the configuration. I don't know if I can find the time to repeat it to run into the problems again that I could solve on my own.

Best regards / Mit freundlichen Grüßen
Lutz Christoph

--
Lutz Christoph
arago Institut für komplexes Datenmanagement AG
Eschersheimer Landstraße 526 - 532
60433 Frankfurt am Main
eMail: lchristoph at arago.de - www: http://www.arago.de
Tel: 0172/6301004
Mobil: 0172/6301004

--
Bankverbindung: Frankfurter Sparkasse, BLZ: 500 502 01, Kto.-Nr.: 79343
Vorstand: Hans-Christian Boos, Martin Friedrich
Vorsitzender des Aufsichtsrats: Dr. Bernhard Walther
Sitz: Kronberg im Taunus - HRB 5731 - Registergericht: Königstein i.Ts.
Ust.Idnr. DE 178572359 - Steuernummer 2603 003 228 43435
The request from the > browser that fails is this: > > GET /dashboard/admin/ HTTP/1.1 > Host: rhopenstack.example.com > User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:21.0) Gecko/20100101 > Firefox/21.0 > Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 > Accept-Language: > en-us,en;q=0.8,de-de;q=0.5,de;q=0.3 > Accept-Encoding: gzip, deflate > DNT: 1 > Referer: http://rhopenstack.example.com/dashboard > Cookie: csrftoken=KdRnLZQvRfAtBcyHmGlQuHEmoxU1L2QO; > sessionid=52e5ec63b1e720e255dd1791cfe9ec56 > Connection: keep-alive > > That triggers an internal request to the nova API daemon: > > GET > /os-simple-tenant-usage?start=2013-06-01T00:00:00&end=2013-06-25T10:52:36.348852&detailed=1 > HTTP/1.1 > Host: 192.168.104.62:8774 > X-Auth-Project-Id: 4922a6443b9347d18f67c86bfb72022b > Accept-Encoding: gzip, deflate, compress > Content-Length: 0 > Accept: application/json > User-Agent: python-novaclient > X-Auth-Token: 28734c23bdf049d0b03b34a784c152b2 > > Nova answers with: > > HTTP/1.1 300 Multiple Choices > Content-Type: application/json > Content-Length: 337 > Date: Tue, 25 Jun 2013 10:51:37 GMT > > {\"choices\": [{\"status\": \"CURRENT\", \"media-types\": [{\"base\": > \"application/xml\", \"type\": > \"application/vnd.openstack.compute+xml;version=2\"}, {\"base\": > \"application/json\", \"type\": > \"application/vnd.openstack.compute+json;version=2\"}], \"id\": \"v2.0\", > \"links\": [{\"href\": > \"http://192.168.104.62:8774/v2/os-simple-tenant-usage\", \"rel\": > \"self\"}]}]} I think perhaps your nova endpoint is not configured properly. It should have the tenant id in it. See the last comment in https://bugs.launchpad.net/horizon/+bug/967391: http://192.168.104.62:8774/v2.0 instead of: http://192.168.104.62:8774/v2/$(tenant_id)s > > What about nova usage? Does that work for you? (Since the failing call > > is hard-coded in novaclient) and I'm not aware of anybody else seeing > > this issue. > > I don't understand what you mean by "nova usage". I'm quite sure that the > installation instructions from Red Hat are missing something, so far they > proved not to be exact. Many copy-and-pastoes, etc. Very entertaining. > Anyway, I have no idea *what* needs to be done to make the nova API daemon > return the tenant_usage data. I can't use the dashboard to create any > objects... I think with "nova usage", Matthias meant to ask if this is working when using the command-line tools. If the above didn't help, could you try running "$ nova usage" and "$ nova usage-list" and see if it works or if the same problem occurs? Also running these commands with the --debug flag may give more information. If you found deficiencies in the documentation, it would be really appreciated if you could file bugs about them! We'd love to fix it and help make the process smoother. Thanks, Julie From jpichon at redhat.com Thu Jun 27 13:10:33 2013 From: jpichon at redhat.com (Julie Pichon) Date: Thu, 27 Jun 2013 09:10:33 -0400 (EDT) Subject: [rhos-list] novaclient issue In-Reply-To: <1299e64a17814ddcbfbafc460445ce3d@DB3PR07MB010.eurprd07.prod.outlook.com> References: <0e0562aefbb940ecb4040803f5ad0334@DB3PR07MB010.eurprd07.prod.outlook.com> <51C98911.9000004@redhat.com> <1715277141.5945558.1372326936176.JavaMail.root@redhat.com> <1299e64a17814ddcbfbafc460445ce3d@DB3PR07MB010.eurprd07.prod.outlook.com> Message-ID: <1091220295.6014755.1372338633154.JavaMail.root@redhat.com> Hello, "Lutz Christoph" wrote: > I'm sorry, but I can't give you the RPM versions on the nova node. 
> I recycled it to (successfully, BTW) test RDO. But I can tell you where
> I got them from: the "Red Hat OpenStack 3.0 Preview" - see
> https://rhn.redhat.com/network/software/channels/details.pxt?cid=18771
>
> The "all the rest" VM is installed from the same repo.

Thank you for the extra information (and glad that the RDO tests went successfully). That makes my theory of conflicting versions unlikely, then.

> I will try changing the endpoint. Keystone lists it as
> http://192.168.104.62:8774, though. No v2 or v2.0. As I can't test right
> now, I can only assume the novaclient code adds that. I have to build a new
> nova node first, though.
>
> Same goes for "nova usage", of course (sorry for the misunderstanding).

No problem, thanks for the follow-up, and let us know how it goes. I checked multiple machines and the nova endpoints always have a version + %(tenant_id)s syntax; I believe this is required.

> I can't promise to file bugs for the documentation - I would have to repeat
> all the configuration. I don't know if I can find the time to repeat it to
> run into the problems again that I could solve on my own.

Of course, absolutely. Just something to consider doing as you go along, if you have issues with docs again in the future. I think there is an issue with the Nova endpoints in the current docs; I filed BZ 979004 to address it.

Thanks & regards,
Julie

From Hao.Chen at NRCan-RNCan.gc.ca Fri Jun 28 17:08:59 2013
From: Hao.Chen at NRCan-RNCan.gc.ca (Chen, Hao)
Date: Fri, 28 Jun 2013 17:08:59 +0000
Subject: [rhos-list] Bypassing authentication
In-Reply-To: <1919708911.28934965.1372303341872.JavaMail.root@redhat.com>
References: <1919708911.28934965.1372303341872.JavaMail.root@redhat.com>
Message-ID: <76CC67FD1C99DB4DB4D43FEF354AADB646FF9F38@S-BSC-MBX2.nrn.nrcan.gc.ca>

Thank you all for the input. When using v3 instead of v3.0 for keystone endpoint-create, the keystone service works.

Hao

-----Original Message-----
From: Steve Gordon [mailto:sgordon at redhat.com]
Sent: June 26, 2013 20:22
To: Guolin Cheng
Cc: Perry Myers; Adam Young; Alan Pevec; Chen, Hao; rhos-list at redhat.com
Subject: Re: [rhos-list] Bypassing authentication

> Hi Steve,
>
> So, on page 56 of the document, it should be 'v3' instead of 'v3.0'?
> That will be a quick fix, then.

Yes, it appears that is the case. I also want to confirm, though, whether you need *both* endpoints or just v3, with the clients auto-negotiating the finer details, but I'll raise a bug now to track the fact that we definitely need to make that change. Apologies for the inconvenience!

Thanks,
Steve