From kimi.zhang at nsn.com Thu May 2 05:47:21 2013
From: kimi.zhang at nsn.com (Zhang, Kimi (NSN - CN/Cheng Du))
Date: Thu, 2 May 2013 05:47:21 +0000
Subject: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan
In-Reply-To: <517D2872.9000401@redhat.com>
References: <90CF2062F86FD8498897037C7FBBC088046F52@SGSIMBX001.nsn-intra.net> <517CC6A4.9040201@redhat.com> <90CF2062F86FD8498897037C7FBBC088046F91@SGSIMBX001.nsn-intra.net> <517CC934.4070809@redhat.com> <90CF2062F86FD8498897037C7FBBC088046FC9@SGSIMBX001.nsn-intra.net> <517CCACE.8000203@redhat.com> <90CF2062F86FD8498897037C7FBBC088046FEE@SGSIMBX001.nsn-intra.net> <517CCE14.2040105@redhat.com> <517CD09C.9080609@redhat.com> <90CF2062F86FD8498897037C7FBBC08804705E@SGSIMBX001.nsn-intra.net> <90CF2062F86FD8498897037C7FBBC0880470B0@SGSIMBX001.nsn-intra.net> <517CDE34.1080506@redhat.com> <90CF2062F86FD8498897037C7FBBC0880470D8@SGSIMBX001.nsn-intra.net> <517CE212.2030000@redhat.com> <90CF2062F86FD8498897037C7FBBC0880470FB@SGSIMBX001.nsn-intra.net> <517CEB42.1010005@redhat.com> <517D003B.6030802@redhat.com> <517D2872.9000401@redhat.com>
Message-ID: <90CF2062F86FD8498897037C7FBBC088048AB7@SGSIMBX001.nsn-intra.net>

Hi, Gary

Finally I found the root cause: the VLAN handling of RHEL 6.4. In my setup, on both the gateway and the compute node, I use NIC p3p1 as the internal traffic port; it's added into br-int.

Analysis: when I run tcpdump on the compute node's p3p1 port, the frames going out of the port do carry the VLAN tag, but when I run tcpdump on the gateway node's (receiving) p3p1 port, the frames have no VLAN tag on them. Somehow the OS strips the VLAN tag from frames arriving on p3p1, so Open vSwitch on the gateway node cannot handle these untagged frames. Tests show the same result for traffic from the gateway node to the compute node.

What I did to fix this: on both the gateway and the compute node I added a VLAN interface p3p1.195 (VLAN 195 is not actually used). After this, the OS handles the VLAN tags on frames received from p3p1 correctly. It seems the NIC does not know how to handle VLAN tags unless there is at least one VLAN configured on the port. It could be an OS bug or just a configuration problem. I don't know if there is a better way to activate VLAN handling on the port without actually creating a VLAN interface on it. Any ideas?

About your findings:
1. quantum-ovs-cleanup script: yes, it has a bug. I made the same fix too and enabled it to run at startup.
2. Yes, I use dnsmasq 2.48 since it comes with the OS image. Running it does cause a problem: the VM can't get the right gateway IP and always uses the DHCP server (dhcp agent) IP as the gateway IP. After I upgraded it to 2.65, it works normally.
3. Yes, I noticed the same thing: the quantum client does not support security groups, even though a client from the Folsom release supports that. This should be updated.
4. The 192.168.122.x network in dnsmasq was the default network created by libvirtd. I already removed it after the libvirtd installation by running: virsh net-destroy default; virsh net-undefine default. I don't think it is the cause of the problem.

Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182
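For reference, a minimal sketch of creating such a placeholder VLAN subinterface on RHEL 6.4 is shown below. The interface name p3p1 and the otherwise-unused VLAN ID 195 come from the setup described above; the ifcfg file name and location follow the usual RHEL convention and are not confirmed anywhere in this thread:

# one-off, for the running system (assumes the vconfig package is installed)
vconfig add p3p1 195
ip link set p3p1.195 up

# persistent across reboots: let the initscripts create the VLAN device
cat > /etc/sysconfig/network-scripts/ifcfg-p3p1.195 <<EOF
DEVICE=p3p1.195
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
EOF
ifup p3p1.195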
From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of ext Gary Kotton
Sent: Sunday, April 28, 2013 9:48 PM
To: gkotton at redhat.com
Cc: rdo-list at redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

Hi, In addition to that, I have discovered the following:

[root at dhcp-4-126 ~]# ps aux |grep dns
nobody 2320 0.0 0.0 12888 576 ? S 09:31 0:00 /usr/sbin/dnsmasq --strict-order --local=// --domain-needed --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --bind-interfaces --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override --dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile --addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
nobody 2718 0.0 0.0 12884 600 ? S 09:32 0:00 /usr/sbin/dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tapd0ed5836-38 --except-interface=lo --pid-file=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/host --dhcp-optsfile=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s --conf-file= --domain=openstacklocal
root 2719 0.0 0.0 12884 208 ? S 09:32 0:00 /usr/sbin/dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tapd0ed5836-38 --except-interface=lo --pid-file=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/pid --dhcp-hostsfile=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/host --dhcp-optsfile=/var/lib/quantum/dhcp/45f9b635-c996-4230-89df-b8c6ac1adb71/opts --dhcp-script=/usr/bin/quantum-dhcp-agent-dnsmasq-lease-update --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,120s --conf-file= --domain=openstacklocal
root 6054 0.0 0.0 103248 840 pts/0 S+ 09:39 0:00 grep dns

When process 2320 is killed, the VM receives its address. So in short we have some hardening to do :) With patches for the issues below and shutting down the aforementioned process, I have a VM getting an address. Thanks Gary

On 04/28/2013 01:55 PM, Gary Kotton wrote: Hi, I have found a few problems and hopefully one or more may be related to the case that you have experienced:
1. When using OVS it is important that you run the quantum-ovs-cleanup service when the host boots. This is due to the fact that OVS stores all the tap devices. This causes havoc when restarting hosts (in particular ones that have dhcp and l3 agents). So please make sure you have run "chkconfig quantum-ovs-cleanup on" on all hosts that are running the OVS. You can verify whether this is the case by checking if the DHCP agent has created an IP address on the host. [Please note that we have a problem here - in the file /etc/init.d/quantum-ovs-cleanup "--config-file /usr/share/$proj/$proj-dist.conf" needs to be removed].
2. Which dnsmasq version are you using? If this is 2.48 then there is a problem with the DHCP agent running. We are in the process of resolving this. If you make use of a version with tag support then this will work.
3. The quantum client needs to be updated to support the security groups.
Hopefully we will have solutions for all of the above ASAP. Thanks Gary

On 04/28/2013 12:26 PM, Gary Kotton wrote: Hi, I have been able to reproduce the problem. I'll get back to you as soon as I have any information. Thanks Gary

On 04/28/2013 11:56 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: Yes, I did run quantum-dhcp-setup on the network node. Thanks, good luck there.
Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182 From: ext Gary Kotton [mailto:gkotton at redhat.com] Sent: Sunday, April 28, 2013 4:47 PM To: Zhang, Kimi (NSN - CN/Cheng Du) Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan Thanks. One more question - on the network node, did you run quantum-dhcp-setup? I am nearly ready with my setup. Hopefully I'll have a reproduction or some additional questions. Thanks Gary On 04/28/2013 11:41 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: Sure, my answers below. :) Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182 From: ext Gary Kotton [mailto:gkotton at redhat.com] Sent: Sunday, April 28, 2013 4:31 PM To: Zhang, Kimi (NSN - CN/Cheng Du) Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan Hi, I have a few questions (please be patient with me): 1. On the compute node, which services are running? nova-compute, nova-novncproxy, quantum-openvswitch-agent, openvswitch 2. Can you please print the iptables on the compute node? I disabled it already, here's output before I do it. [root at computer-2 ~]# iptables-save # Generated by iptables-save v1.4.7 on Sun Apr 28 16:37:18 2013 *filter :INPUT ACCEPT [22634:3487580] :FORWARD ACCEPT [22:704] :OUTPUT ACCEPT [22619:5860198] :nova-compute-FORWARD - [0:0] :nova-compute-INPUT - [0:0] :nova-compute-OUTPUT - [0:0] :nova-compute-inst-26 - [0:0] :nova-compute-local - [0:0] :nova-compute-provider - [0:0] :nova-compute-sg-fallback - [0:0] :nova-filter-top - [0:0] -A INPUT -j nova-compute-INPUT -A FORWARD -j nova-filter-top -A FORWARD -j nova-compute-FORWARD -A OUTPUT -j nova-filter-top -A OUTPUT -j nova-compute-OUTPUT -A nova-compute-FORWARD -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m udp --sport 68 --dport 67 -j ACCEPT -A nova-compute-INPUT -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m udp --sport 68 --dport 67 -j ACCEPT -A nova-compute-inst-26 -m state --state INVALID -j DROP -A nova-compute-inst-26 -m state --state RELATED,ESTABLISHED -j ACCEPT -A nova-compute-inst-26 -j nova-compute-provider -A nova-compute-inst-26 -s 172.1.1.3/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT -A nova-compute-inst-26 -s 172.1.1.0/24 -j ACCEPT -A nova-compute-inst-26 -p icmp -j ACCEPT -A nova-compute-inst-26 -p tcp -m tcp --dport 22 -j ACCEPT -A nova-compute-inst-26 -j nova-compute-sg-fallback -A nova-compute-local -d 172.1.1.5/32 -j nova-compute-inst-26 -A nova-compute-sg-fallback -j DROP -A nova-filter-top -j nova-compute-local COMMIT # Completed on Sun Apr 28 16:37:18 2013 # Generated by iptables-save v1.4.7 on Sun Apr 28 16:37:18 2013 *mangle :PREROUTING ACCEPT [22733:3519752] :INPUT ACCEPT [22733:3519752] :FORWARD ACCEPT [175:50468] :OUTPUT ACCEPT [22705:5868566] :POSTROUTING ACCEPT [22880:5919034] :nova-compute-POSTROUTING - [0:0] -A POSTROUTING -j nova-compute-POSTROUTING COMMIT # Completed on Sun Apr 28 16:37:18 2013 # Generated by iptables-save v1.4.7 on Sun Apr 28 16:37:18 2013 *nat :PREROUTING ACCEPT [16:14570] :POSTROUTING ACCEPT [338:22855] :OUTPUT ACCEPT [331:20579] :nova-compute-OUTPUT - [0:0] :nova-compute-POSTROUTING - [0:0] :nova-compute-PREROUTING - [0:0] :nova-compute-float-snat - [0:0] :nova-compute-snat - [0:0] :nova-postrouting-bottom - [0:0] -A PREROUTING -j nova-compute-PREROUTING -A POSTROUTING -j nova-compute-POSTROUTING -A POSTROUTING -j nova-postrouting-bottom -A OUTPUT -j nova-compute-OUTPUT -A nova-compute-snat -j 
nova-compute-float-snat -A nova-postrouting-bottom -j nova-compute-snat COMMIT # Completed on Sun Apr 28 16:37:18 2013 3. Can you please print the flow table rules (ovs-dpctl dump-flows br-int)? I suppose you mean ovs-ofctl dump-flows br-int ? [root at computer-2 ~]# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4125.444s, table=0, n_packets=1707, n_bytes=90606, idle_age=12, priority=1 actions=NORMAL cookie=0x0, duration=4123.006s, table=0, n_packets=143, n_bytes=8688, idle_age=20, priority=2,in_port=1 actions=drop cookie=0x0, duration=3349.566s, table=0, n_packets=0, n_bytes=0, idle_age=3349, priority=3,in_port=1,dl_vlan=1001 actions=mod_vlan_vid:1,NORMAL Here?s also ovs-dpctl show: [root at computer-2 ~]# ovs-dpctl show system at br-p3p1: lookups: hit:3967 missed:314 lost:0 flows: 1 port 0: br-p3p1 (internal) port 1: p3p1 port 2: phy-br-p3p1 system at br-int: lookups: hit:1575 missed:302 lost:0 flows: 0 port 0: br-int (internal) port 1: int-br-p3p1 port 4: qvo39242f22-ec Thanks Gary On 04/28/2013 11:17 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: Hi? Gary I tried capture packet while keeping VM to restart it?s network. I can see dhcp request broadcast packet on tap, qbr, qvb and qvo interfaces. Failed to see packet on int-br-p3p1 on bridge br-int. Not sure if it has something to do with openflow setting? I attach some ovs-ofctl outputs I have not seen ?veth? port anywhere? ---Record--- [root at computer-2 ~]# brctl show bridge name bridge id STP enabled interfaces qbr39242f22-ec 8000.c6f95e6a859a no qvb39242f22-ec tap39242f22-ec virbr0 8000.525400c47f62 yes virbr0-nic [root at computer-2 ~]# ovs-vsctl show 5660d1b5-1f26-46fc-bcb7-0ccfd06fe57b Bridge br-int Port br-int Interface br-int type: internal Port "int-br-p3p1" Interface "int-br-p3p1" Port "qvo39242f22-ec" tag: 1 Interface "qvo39242f22-ec" Bridge "br-p3p1" Port "phy-br-p3p1" Interface "phy-br-p3p1" Port "p3p1" Interface "p3p1" Port "br-p3p1" Interface "br-p3p1" type: internal ovs_version: "1.9.0" [root at computer-2 ~]# tcpdump -i tap39242f22-ec port 67 tcpdump: WARNING: tap39242f22-ec: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on tap39242f22-ec, link-type EN10MB (Ethernet), capture size 65535 bytes 16:12:21.455212 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 16:12:21.455289 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 ^C 2 packets captured 2 packets received by filter 0 packets dropped by kernel [root at computer-2 ~]# tcpdump -i qbr39242f22-ec port 67 tcpdump: WARNING: qbr39242f22-ec: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qbr39242f22-ec, link-type EN10MB (Ethernet), capture size 65535 bytes 16:12:34.456228 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 ^C 1 packets captured 1 packets received by filter 0 packets dropped by kernel [root at computer-2 ~]# tcpdump -i qvb39242f22-ec port 67 tcpdump: WARNING: qvb39242f22-ec: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvb39242f22-ec, link-type EN10MB (Ethernet), capture size 65535 bytes 16:12:43.460251 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 ^C 1 packets captured 1 packets 
received by filter 0 packets dropped by kernel [root at computer-2 ~]# tcpdump -i qvo39242f22-ec port 67 tcpdump: WARNING: qvo39242f22-ec: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on qvo39242f22-ec, link-type EN10MB (Ethernet), capture size 65535 bytes 16:13:03.712272 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 16:13:08.455932 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from fa:16:3e:15:82:82 (oui Unknown), length 300 ^C 2 packets captured 2 packets received by filter 0 packets dropped by kernel [root at computer-2 ~]# tcpdump -i int-br-p3p1 port 67 tcpdump: WARNING: int-br-p3p1: no IPv4 address assigned tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on int-br-p3p1, link-type EN10MB (Ethernet), capture size 65535 bytes ^C 0 packets captured 0 packets received by filter 0 packets dropped by kernel ---output of ovs-ofctl--- [root at computer-2 ~]# ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x1): dpid:000086401820f142 n_tables:255, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 1(int-br-p3p1): addr:de:42:e4:9d:b7:1d config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 100 Mbps max 4(qvo39242f22-ec): addr:ea:5d:b8:7e:4a:78 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 100 Mbps max LOCAL(br-int): addr:86:40:18:20:f1:42 config: PORT_DOWN state: LINK_DOWN speed: 100 Mbps now, 100 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0 [root at computer-2 ~]# [root at computer-2 ~]# ovs-ofctl show br-p3p1 OFPT_FEATURES_REPLY (xid=0x1): dpid:0000a0369f15d424 n_tables:255, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 1(p3p1): addr:a0:36:9f:15:d4:24 config: 0 state: 0 current: 10GB-FD advertised: 10GB-FD FIBER supported: 10GB-FD FIBER speed: 10000 Mbps now, 10000 Mbps max 2(phy-br-p3p1): addr:be:3c:f9:8d:d9:d0 config: 0 state: 0 current: 10GB-FD COPPER speed: 10000 Mbps now, 100 Mbps max LOCAL(br-p3p1): addr:a0:36:9f:15:d4:24 config: PORT_DOWN state: LINK_DOWN speed: 100 Mbps now, 100 Mbps max OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0 [root at computer-2 ~]# ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4125.444s, table=0, n_packets=1707, n_bytes=90606, idle_age=12, priority=1 actions=NORMAL cookie=0x0, duration=4123.006s, table=0, n_packets=143, n_bytes=8688, idle_age=20, priority=2,in_port=1 actions=drop cookie=0x0, duration=3349.566s, table=0, n_packets=0, n_bytes=0, idle_age=3349, priority=3,in_port=1,dl_vlan=1001 actions=mod_vlan_vid:1,NORMAL [root at computer-2 ~]# ovs-ofctl dump-flows br-p3p1 NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4129.629s, table=0, n_packets=2175, n_bytes=138652, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=4127.415s, table=0, n_packets=16, n_bytes=1224, idle_age=1045, priority=2,in_port=2 actions=drop cookie=0x0, duration=3354.578s, table=0, n_packets=1697, n_bytes=96638, idle_age=17, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1001,NORMAL Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): 
sip:+86018608008182

From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of ext Zhang, Kimi (NSN - CN/Cheng Du)
Sent: Sunday, April 28, 2013 3:40 PM
To: gkotton at redhat.com; rdo-list at redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

Very nice pic. I am going to try to capture packets on each port. I did not configure quantum to manage the firewall, I just left it to nova-compute; I will try your configs later.

Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182

From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of ext Gary Kotton
Sent: Sunday, April 28, 2013 3:33 PM
To: rdo-list at redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

Hi, Can you also please check that firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver is configured in the plugin.ini file, and that security_group_api = quantum is set in nova.conf. Thanks Gary

On 04/28/2013 10:21 AM, Gary Kotton wrote: On 04/28/2013 10:16 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: Hi, Gary I tried to disable iptables on both the network and compute nodes; it still does not work out :( Can you please look at https://docs.google.com/drawings/d/1wax2Nlk-LRJeOXwF_6X9L05cAf9HKl2FI_0B51rG4XE/edit?usp=sharing When using the OVS there are a number of devices. Would it be possible for you to capture on each device so that we can try and see where the packet is discarded? I will have a setup ready in about an hour. From the quantum openvswitch agent logs, the following messages keep coming out repeatedly every 2-3 seconds; not sure if they matter or not. The messages below are OK - this is how the OVS agent works. It polls the OVS every interval to check if new ports are created.

2013-04-28 15:15:39 DEBUG [quantum.openstack.common.rpc.amqp] Making synchronous call on q-plugin ... 2013-04-28 15:15:39 DEBUG [quantum.openstack.common.rpc.amqp] MSG_ID is 92f4e83cf92c46f1b9304c879f9b7a41 2013-04-28 15:15:39 DEBUG [quantum.openstack.common.rpc.amqp] UNIQUE_ID is b27f9545ca9d4745961ac574abdc103b.
2013-04-28 15:15:40 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] 2013-04-28 15:15:40 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] Exit code: 0 Stdout: 'int-br-p3p1\n' Stderr: '' 2013-04-28 15:15:40 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] 2013-04-28 15:15:41 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] Exit code: 0 Stdout: '{}\n' Stderr: '' 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] Exit code: 0 Stdout: 'int-br-p3p1\n' Stderr: '' 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'list-ports', 'br-int'] Exit code: 0 Stdout: 'int-br-p3p1\n' Stderr: '' 2013-04-28 15:15:42 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] 2013-04-28 15:15:43 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] Exit code: 0 Stdout: '{}\n' Stderr: '' 2013-04-28 15:15:43 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ovs-vsctl', '--timeout=2', 'get', 'Interface', 'int-br-p3p1', 'external_ids'] Exit code: 0 Stdout: '{}\n' Stderr: '' Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182 From: ext Gary Kotton [mailto:gkotton at redhat.com] Sent: Sunday, April 28, 2013 3:08 PM To: Zhang, Kimi (NSN - CN/Cheng Du) Cc: rdo-list at redhat.com Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan On 04/28/2013 10:04 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: I tried that too, no lucky. From tcpdump ,it seems br-int does not forward any packet to interfaces connect to br-p3p1, which connects to physical network? There could be a number of issues here: 1. The iptables are dropping the traffic (I am in the process of getting a setup up and running) 2. The network connectivity In order to ensure that it is not the first one can you try and see which iptables rules are matched or disable the iptables? 
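One low-tech way to see which iptables rules are being matched (a sketch; the instance chain name nova-compute-inst-26 is taken from the iptables-save output earlier in this thread) is to zero the packet counters, retry the DHCP request from inside the VM, and then list the chains with their counters:

iptables -Z
iptables -t nat -Z
# retry DHCP inside the VM (e.g. restart its network), then:
iptables -L -v -n --line-numbers | less
iptables -t nat -L -v -n
# or keep an eye on a single chain, for example:
watch -n 1 'iptables -v -n -L nova-compute-inst-26'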
Regards, Kimi Zhang MP: +86 186 0800 8182 Call me(NCS): sip:+86018608008182

From: ext Gary Kotton [mailto:gkotton at redhat.com]
Sent: Sunday, April 28, 2013 3:01 PM
To: Zhang, Kimi (NSN - CN/Cheng Du)
Cc: rdo-list at redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

On 04/28/2013 09:54 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: Hi, Gary Yes, I'm aware that packstack does not support quantum yet. The whole setup was installed manually. I did run quantum-server-setup and quantum-host-setup. I tried the linuxbridge plugin too; it has no issue with the VM getting an IP address, but openvswitch has issues with this. OK. If you configure an IP address manually on the VM, are you able to ping the port of the DHCP agent? You can get the IP from quantum port-list.

Regards, Kimi

From: rdo-list-bounces at redhat.com [mailto:rdo-list-bounces at redhat.com] On Behalf Of ext Gary Kotton
Sent: Sunday, April 28, 2013 2:50 PM
To: rdo-list at redhat.com
Subject: Re: [Rdo-list] [Grizzly] Network problem with Quantum + Openvswitch + Vlan

Hi Kimi, Thanks for the mail. Please see the inline comments below. Please note that at the moment we do not have packstack support for Quantum, so there is a little manual plumbing that needs to be done (not sure if you have done this already). On the host where the quantum service is running you need to run quantum-server-setup, and on the compute nodes you need to run quantum-host-setup (please note that the relevant keystone credentials need to be set too). Thanks Gary

On 04/28/2013 09:38 AM, Zhang, Kimi (NSN - CN/Cheng Du) wrote: converted from rtf When I start a VM instance, the VM can't get an IP address. Could someone help me on this? I am trying a 3-node setup with RHEL 6.4 OS + the RDO Grizzly repository.

* Controller node: Services: Keystone + Glance + Cinder + Quantum server + Nova services. Network: bond0 (10.68.125.11 for O&M)
* Network node: Services: quantum-openvswitch-agent, quantum-l3-agent, quantum-dhcp-agent, quantum-metadata-agent. Network: bond0 (10.68.125.15 for O&M), p3p1 for the VM internal network, p3p2 for the external network. Please note that RHEL currently does not support namespaces so there are a number of limitations. We are addressing this at the moment. If namespaces are not used then it is suggested that one does not run the DHCP agent and the L3 agent on the same host. The reason for this is that there is no network isolation.
* Compute node: Services: nova-compute and quantum-openvswitch-agent. Network: bond0 (10.68.125.16 for O&M), p3p1 for the VM internal network
* Switch setup: tagging for VLANs 1000-2999 on the p3p1 ports (VM network) of the network and compute nodes.

1. Quantum.conf: [DEFAULT] debug = True verbose = True lock_path = $state_path/lock bind_host = 0.0.0.0 bind_port = 9696 core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 api_paste_config = api-paste.ini rpc_backend = quantum.openstack.common.rpc.impl_kombu Are you using rabbit or qpid? control_exchange = quantum rabbit_host = 10.68.125.11 notification_driver = quantum.openstack.common.notifier.rpc_notifier default_notification_level = INFO notification_topics = notifications [QUOTAS] [DEFAULT_SERVICETYPE] [AGENT] polling_interval = 2 root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf [keystone_authtoken] auth_host = 10.68.125.11 auth_port = 35357 auth_protocol = http signing_dir = /var/lib/quantum/keystone-signing admin_tenant_name = service admin_user = quantum admin_password = password 2.
ovs_quantum_plugin.ini [DATABASE] sql_connection = mysql://quantum:quantum at 10.68.125.11:3306/ovs_quantum reconnect_interval = 2 [OVS] tenant_network_type = vlan network_vlan_ranges = physnet1:1000:2999 bridge_mappings = physnet1:br-p3p1 [AGENT] polling_interval = 2 [SECURITYGROUP] 3. nova.conf [DEFAULT] verbose=true logdir = /var/log/nova state_path = /var/lib/nova lock_path = /var/lib/nova/tmp volumes_dir = /etc/nova/volumes dhcpbridge = /usr/bin/nova-dhcpbridge dhcpbridge_flagfile = /etc/nova/nova.conf force_dhcp_release = True injected_network_template = /usr/share/nova/interfaces.template libvirt_nonblocking = True libvirt_inject_partition = -1 network_manager = nova.network.manager.FlatDHCPManager iscsi_helper = tgtadm compute_driver = libvirt.LibvirtDriver libvirt_type=kvm libvirt_ovs_bridge=br-int firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver manager=nova.conductor.manager.ConductorManager rpc_backend = nova.openstack.common.rpc.impl_kombu rabbit_host = 10.68.125.11 rootwrap_config = /etc/nova/rootwrap.conf use_deprecated_auth=false auth_strategy=keystone glance_api_servers=10.68.125.11:9292 image_service=nova.image.glance.GlanceImageService novnc_enabled=true novncproxy_base_url=http://10.68.125.11:6080/vnc_auto.html novncproxy_port=6080 vncserver_proxyclient_address=10.68.125.16 vncserver_listen=0.0.0.0 libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver libvirt_use_virtio_for_bridges=True network_api_class=nova.network.quantumv2.api.API quantum_url=http://10.68.125.11:9696 quantum_auth_strategy=keystone quantum_admin_tenant_name=service quantum_admin_username=quantum quantum_admin_password=password quantum_admin_auth_url=http://10.68.125.11:35357/v2.0 linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver libvirt_vif_type=ethernet service_quantum_metadata_proxy = True quantum_metadata_proxy_shared_secret = helloOpenStack metadata_host = 10.68.125.11 metadata_listen = 0.0.0.0 metadata_listen_port = 8775 [keystone_authtoken] admin_tenant_name = service admin_user = nova admin_password = password auth_host = 10.68.125.11 auth_port = 35357 auth_protocol = http signing_dir = /tmp/keystone-signing-nova 4. ovs-vsctl show on network node: aeeb6cf7-271b-405a-aa17-1b95bcd9e301 Bridge "br-p3p1" Port "p3p1" Interface "p3p1" Port "phy-br-p3p1" Interface "phy-br-p3p1" Port "br-p3p1" Interface "br-p3p1" type: internal Bridge br-ex Port br-ex Interface br-ex type: internal Port "qg-a83c0abd-f4" Interface "qg-a83c0abd-f4" type: internal Port "p3p2" Interface "p3p2" Bridge br-int Port br-int Interface br-int type: internal Port "int-br-p3p1" Interface "int-br-p3p1" Port "tap1f386a2a-12" tag: 1 Interface "tap1f386a2a-12" type: internal ovs_version: "1.9.0" 5. ovs-vsctl show on compute node: 8d6c2637-ff69-4a2d-a7db-e4f181273bc0 Bridge "br-p3p1" Port "br-p3p1" Interface "br-p3p1" type: internal Port "phy-br-p3p1" Interface "phy-br-p3p1" Port "p3p1" Interface "p3p1" Bridge br-int Port "qvo56a4572c-dc" tag: 2 Interface "qvo56a4572c-dc" Port "int-br-p3p1" Interface "int-br-p3p1" Port br-int Interface br-int type: internal ovs_version: "1.9.0" On compute node, I can see dhcp request packet from tcpdump on qvo56a4572c-dc, but it seems the packet is not forwarded out since I can?t see packet from int-br-p3p1 on br-int or any port from br-p3p1. Any chance to get the DHCP and the L3 agent configuration files? Please check that use_namespaces = False in both of these files. Are there any log errors? Thank you! 
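For reference, the agent settings being asked about would look roughly like this on the network node. This is a sketch that reuses the OVS interface driver and the br-ex bridge shown elsewhere in this thread; only use_namespaces = False is the point in question:

# /etc/quantum/dhcp_agent.ini
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
use_namespaces = False

# /etc/quantum/l3_agent.ini
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = False
external_network_bridge = br-ex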
Regards, Kimi _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list _______________________________________________ Rdo-list mailing list Rdo-list at redhat.com https://www.redhat.com/mailman/listinfo/rdo-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Mon May 6 10:39:50 2013 From: dneary at redhat.com (Dave Neary) Date: Mon, 06 May 2013 12:39:50 +0200 Subject: [Rdo-list] Report from access.redhat.com re release RPM dependencies Message-ID: <51878876.708@redhat.com> Hi, I got a report of an unavailable RPM for the release-2 RPM via access.redhat.com - I haven't been able to check it out on a bare RHEL install yet. Can someone please check this out, and confirm/deny its existence? I'd like to see us update the RPM soon if there's an issue. Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From dennisml at conversis.de Tue May 7 16:15:28 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Tue, 07 May 2013 18:15:28 +0200 Subject: [Rdo-list] Issue with cinder? Message-ID: <518928A0.20504@conversis.de> Hi, I'm having trouble getting cinder to work with the rdo packages. When I run "cinder list" I get "ERROR: Malformed request url" REQ: curl -i http://10.16.171.3:8776/v1/11b39f6529ea4eb6a527de82122ba6f6/volumes/detail -X GET -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: MIIIcQYJ...5nBwM=" RESP: [400] {'date': 'Tue, 07 May 2013 16:09:47 GMT', 'content-length': '65', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-255a4d4d-3d0d-42dd-85af-772be8fff677'} RESP BODY: {"badRequest": {"message": "Malformed request url", "code": 400}} (I shortened the auth token in the request output above) In the cinder-api debug log I get this: 2013-05-07 17:49:31 DEBUG [routes.middleware] Match dict: {'action': u'detail', 'controller': , 'project_id': u'11b39f6529ea4eb6a527de82122ba6f6'} 2013-05-07 17:49:31 INFO [cinder.api.openstack.wsgi] GET http://10.16.171.3:8776/v1/11b39f6529ea4eb6a527de82122ba6f6/volumes/detail 2013-05-07 17:49:31 DEBUG [cinder.api.openstack.wsgi] Unrecognized Content-Type provided in request 2013-05-07 17:50:07 DEBUG [routes.middleware] Matched GET /11b39f6529ea4eb6a527de82122ba6f6/volumes/detail 2013-05-07 17:50:07 DEBUG [routes.middleware] Route path: '/{project_id}/volumes/detail', defaults: {'action': u'detail', 'controller': } 2013-05-07 17:50:07 DEBUG [routes.middleware] Match dict: {'action': u'detail', 'controller': , 'project_id': u'11b39f6529ea4eb6a527de82122ba6f6'} 2013-05-07 17:50:07 INFO [cinder.api.openstack.wsgi] GET http://10.16.171.3:8776/v1/11b39f6529ea4eb6a527de82122ba6f6/volumes/detail 2013-05-07 17:50:07 DEBUG [cinder.api.openstack.wsgi] Unrecognized Content-Type provided in request Any ideas what is wrong with this request? 
Regards, Dennis From kimi.zhang at nsn.com Mon May 13 06:50:42 2013 From: kimi.zhang at nsn.com (Zhang, Kimi (NSN - CN/Cheng Du)) Date: Mon, 13 May 2013 06:50:42 +0000 Subject: [Rdo-list] Problem of integrating EMC VNX as Cinder backend: Message-ID: <90CF2062F86FD8498897037C7FBBC08804C926@SGSIMBX001.nsn-intra.net> When I try to use EMC driver to integrate EMC VNX to be cinder backend, I got following error in /var/log/cinder/volume.log 2013-05-13 14:40:52 CRITICAL [cinder] No module named emc Traceback (most recent call last): File "/usr/bin/cinder-volume", line 57, in server = service.Service.create(binary='cinder-volume') File "/usr/lib/python2.6/site-packages/cinder/service.py", line 435, in create service_name=service_name) File "/usr/lib/python2.6/site-packages/cinder/service.py", line 330, in __init__ *args, **kwargs) File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 129, in __init__ configuration=self.configuration) File "/usr/lib/python2.6/site-packages/cinder/openstack/common/importutils.py", line 40, in import_object return import_class(import_str)(*args, **kwargs) File "/usr/lib/python2.6/site-packages/cinder/openstack/common/importutils.py", line 30, in import_class __import__(mod_str) ImportError: No module named emc My cinder.conf : [DEFAULT] logdir = /var/log/cinder state_path = /var/lib/cinder lock_path = /var/lib/cinder/tmp volumes_dir = /etc/cinder/volumes iscsi_helper = tgtadm sql_connection = mysql://cinder:cinder at 127.0.0.1/cinder rootwrap_config = /etc/cinder/rootwrap.conf # VG based backend #volume_name_template = volume-%s #volume_group = cinder-volumes verbose = True auth_strategy = keystone # RabbitMQ rpc_backend = cinder.openstack.common.rpc.impl_kombu rabbit_host = 10.68.125.11 #EMC backend driver iscsi_target_prefix = iqn.1992-04.com.emc iscsi_ip_address = 198.19.10.111 volume_driver = cinder.volume.emc.EMCISCSIDriver #volume_driver = cinder.volume.emc.EMCSMISISCSIDriver cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml [keystone_authtoken] admin_tenant_name = service admin_user = cinder admin_password = password auth_host = 10.68.125.11 auth_port = 35357 auth_protocol = http signing_dirname = /tmp/keystone-signing-cinder The cinder version is: openstack-cinder-2013.1-2.el6.noarch Kimi Zhang +86 186 0800 8182 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Wed May 15 13:35:52 2013 From: dneary at redhat.com (Dave Neary) Date: Wed, 15 May 2013 15:35:52 +0200 Subject: [Rdo-list] Issue with cinder? In-Reply-To: <518928A0.20504@conversis.de> References: <518928A0.20504@conversis.de> Message-ID: <51938F38.7040200@redhat.com> Hi Dennis, On 05/07/2013 06:15 PM, Dennis Jacobfeuerborn wrote: > Hi, > I'm having trouble getting cinder to work with the rdo packages. > When I run "cinder list" I get "ERROR: Malformed request url" Did you ever get an answer to this question? It looks similar to this issue which was reported on the forum: http://openstack.redhat.com/forum/discussion/96 - except that one is for new server creation. Unfortunately, the post didn't get any response. Did you ever figure this out? Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From dennisml at conversis.de Wed May 15 23:17:30 2013 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Thu, 16 May 2013 01:17:30 +0200 Subject: [Rdo-list] Issue with cinder? 
In-Reply-To: <51938F38.7040200@redhat.com>
References: <518928A0.20504@conversis.de> <51938F38.7040200@redhat.com>
Message-ID: <5194178A.7090103@conversis.de>

On 15.05.2013 15:35, Dave Neary wrote: > Hi Dennis, > > On 05/07/2013 06:15 PM, Dennis Jacobfeuerborn wrote: >> Hi, >> I'm having trouble getting cinder to work with the rdo packages. >> When I run "cinder list" I get "ERROR: Malformed request url" > > Did you ever get an answer to this question? > > It looks similar to this issue which was reported on the forum: > http://openstack.redhat.com/forum/discussion/96 - except that one is for > new server creation. Unfortunately, the post didn't get any response. > > Did you ever figure this out?

Yes, I posted the solution to the openstack mailing list where I also asked, but forgot to post it here as well, sorry. The solution is a missing "auth_strategy = keystone" directive in the [DEFAULT] section of cinder.conf. With that, the error went away. After that I could create a volume, but it always immediately went into the "error" state. The problem here was that the system time wasn't synchronized on the machines (ntpd was running but apparently couldn't reach the time servers). Once I fixed that I could successfully create a volume.

Regards, Dennis

From dneary at redhat.com Thu May 16 08:37:22 2013
From: dneary at redhat.com (Dave Neary)
Date: Thu, 16 May 2013 10:37:22 +0200
Subject: [Rdo-list] [rhos-list] LDAP integration
In-Reply-To: References: <5188DCAE.2090208@redhat.com> , <5193B5E7.5050202@redhat.com>
Message-ID: <51949AC2.3060601@redhat.com>

Hi Nicolas, Bringing the topic back to the mailing list (you're using RDO, so I added rdo-list also).

On 05/15/2013 06:50 PM, Vogel Nicolas wrote: > I installed Grizzly with the RDO packstack installation guide on CentOS 6.4. > nova --version = 2.13.0 > keystone --version = 0.2.3 > If you need more information you can ask any time. >> On 05/07/2013 08:00 AM, Vogel Nicolas wrote: >>> After successfully installing an "all-in-one" node using Packstack, >>> I want to use LDAP to manage my users. >>> >>> The LDAP backend isn't available in the keystone.conf. Do I have to >>> replace the SQL backend with the LDAP backend? >>> >>> When I switch to LDAP, is my admin user created by Packstack usable >>> yet, or do I have to modify everything so that one of my LDAP users >>> becomes the admin?

I'm pretty sure that Adam Young can answer your question. AFAIK, when you switch to the LDAP back-end for Keystone, you will have to take care of mapping your schema to Keystone attributes and access control. This page seems to be pretty complete: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html

Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13

From gkotton at redhat.com Sat May 18 16:24:33 2013
From: gkotton at redhat.com (Gary Kotton)
Date: Sat, 18 May 2013 19:24:33 +0300
Subject: [Rdo-list] Nova boot problems with devstack
Message-ID: <5197AB41.3040405@redhat.com>

Hi, I have encountered a problem and maybe someone can help (RHEL-6.4 with upstream devstack). I deploy a VM and it works. If I restart my host, install devstack and then try and deploy a VM, then it fails. If I try again it works. After the next reboot, in order to get a VM up and running I need to deploy 3 VM's.
The trace on nova compute is: 2013-05-16 08:18:11.742 ERROR nova.openstack.common.rpc.amqp [req-cd75ed38-1d04-4611-9f34-cce01b388413 demo demo] Exception during message handling 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last): 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 433, in _process_data 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp **args) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 148, in dispatch 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/exception.py", line 98, in wrapped 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp temp_level, payload) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/exception.py", line 75, in wrapped 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 213, in decorated_function 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp pass 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 199, in decorated_function 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 264, in decorated_function 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp function(self, context, *args, **kwargs) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 241, in decorated_function 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info()) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 228, in decorated_function 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 1320, in run_instance 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp do_run_instance() 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in inner 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp retval = f(*args, **kwargs) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 1319, in do_run_instance 2013-05-16 08:18:11.742 
TRACE nova.openstack.common.rpc.amqp admin_password, is_first_time, node, instance) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 877, in _run_instance 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp notify("error", msg=unicode(e)) # notify that build failed 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp self.gen.next() 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 853, in _run_instance 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp image_meta = self._prebuild_instance(context, instance) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 880, in _prebuild_instance 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp self._check_instance_exists(context, instance) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File "/opt/stack/nova/nova/compute/manager.py", line 1083, in _check_instance_exists 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp raise exception.InstanceExists(name=instance['name']) 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp InstanceExists: Instance instance-00000001 already exists. 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp Problem seems to be that instances hang around: [openstack at dhcp-4-126 devstack]$ virsh list --all Id Name State ---------------------------------------------------- [openstack at dhcp-4-126 devstack]$ sudo virsh list --all Id Name State ---------------------------------------------------- - instance-00000001 shut off - instance-00000002 shut off - instance-00000003 shut off - instance-00000004 shut off Problem is that I am unable to delete any of these. Any ideas? Thanks Gary From gkotton at redhat.com Sun May 19 15:00:48 2013 From: gkotton at redhat.com (Gary Kotton) Date: Sun, 19 May 2013 18:00:48 +0300 Subject: [Rdo-list] Fwd: Re: Nova boot problems with devstack In-Reply-To: <5198E8D6.6070205@redhat.com> References: <5198E8D6.6070205@redhat.com> Message-ID: <5198E920.2020506@redhat.com> On 05/18/2013 07:24 PM, Gary Kotton wrote: > Hi, > I Have encountered a problem and maybe someone can help (RHEL-6.4 with > upstream devstack). I deploy a VM and it works. If I restart my host, > install devstack and then try and deploy a VM then it fails. If I try > again it works. After the next reboot in order to get a VM up and > running I need to deploy 3 VM's. 
> > The trace on nova compute is: > > 2013-05-16 08:18:11.742 ERROR nova.openstack.common.rpc.amqp > [req-cd75ed38-1d04-4611-9f34-cce01b388413 demo demo] Exception during > message handling > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp Traceback > (most recent call last): > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 433, in > _process_data > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp **args) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 148, > in dispatch > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > return getattr(proxyobj, method)(ctxt, **kwargs) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/exception.py", line 98, in wrapped > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > temp_level, payload) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > self.gen.next() > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/exception.py", line 75, in wrapped > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > return f(self, context, *args, **kw) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 213, in > decorated_function > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp pass > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > self.gen.next() > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 199, in > decorated_function > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > return function(self, context, *args, **kwargs) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 264, in > decorated_function > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > function(self, context, *args, **kwargs) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 241, in > decorated_function > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp e, > sys.exc_info()) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > self.gen.next() > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 228, in > decorated_function > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > return function(self, context, *args, **kwargs) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 1320, in run_instance > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > do_run_instance() > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/openstack/common/lockutils.py", line 246, in inner > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > retval = f(*args, **kwargs) > 2013-05-16 
08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 1319, in do_run_instance > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > admin_password, is_first_time, node, instance) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 877, in _run_instance > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > notify("error", msg=unicode(e)) # notify that build failed > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__ > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > self.gen.next() > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 853, in _run_instance > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > image_meta = self._prebuild_instance(context, instance) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 880, in > _prebuild_instance > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > self._check_instance_exists(context, instance) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp File > "/opt/stack/nova/nova/compute/manager.py", line 1083, in > _check_instance_exists > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp raise > exception.InstanceExists(name=instance['name']) > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > InstanceExists: Instance instance-00000001 already exists. > 2013-05-16 08:18:11.742 TRACE nova.openstack.common.rpc.amqp > > Problem seems to be that instances hang around: > > [openstack at dhcp-4-126 devstack]$ virsh list --all > Id Name State > ---------------------------------------------------- > > [openstack at dhcp-4-126 devstack]$ sudo virsh list --all > Id Name State > ---------------------------------------------------- > - instance-00000001 shut off > - instance-00000002 shut off > - instance-00000003 shut off > - instance-00000004 shut off Issue appears to be that when one reboots a host running devstack (with running VMS) that the domains are saved: ./etc/libvirt/qemu/instance-00000001.xml ./var/lib/libvirt/qemu/save/instance-00000001.save ./var/log/libvirt/qemu/instance-00000001.log By deleting the files it enables one to run VM's after reboot. Is there a libvirt configuration variable that will ensure that this is not persistent? Or do we need to purge these each time devstack is restarted? > > Problem is that I am unable to delete any of these. > > Any ideas? > Thanks > Gary From dneary at redhat.com Mon May 27 13:29:41 2013 From: dneary at redhat.com (Dave Neary) Date: Mon, 27 May 2013 15:29:41 +0200 Subject: [Rdo-list] Webinar: "RDO: An OpenStack community project" Message-ID: <51A35FC5.2050603@redhat.com> Hi everyone, This Wednesday, May 29th, Keith Basil, OpenStack Product Manager with Red Hat, and myself will be delivering a webinar on OpenStack, the RDO project, and Red Hat's involvement in the OpenStack project. You are all welcome to register to join here: http://www.redhat.com/about/events-webinars/webinars/203-05-29-rdo-openstack-community Thanks! Dave. 
-- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From msolberg at redhat.com Thu May 30 21:13:59 2013 From: msolberg at redhat.com (Michael Solberg) Date: Thu, 30 May 2013 17:13:59 -0400 Subject: [Rdo-list] RDO with Red Hat IDM Message-ID: <51A7C117.6060100@redhat.com> Hi list. I've spent a day or two now trying to use Red Hat IDM as a backing store for Keystone in RDO and I'm about to pull my hair out. I started with Adam Young's blog post here: http://adam.younglogic.com/2012/02/freeipa-keystone-ldap/ Then I watched his Summit video here: http://www.openstack.org/summit/portland-2013/session-videos/presentation/securing-openstack-with-freeipa Then I tried to follow this document: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html I definitely ran into the domain_id problem described here: https://lists.launchpad.net/openstack/msg23387.html I also ran into the issue around the RFC 4519 schema not allowing a "enabled" attribute. I think I've mitigated this by setting the "attribute_ignore" settings in keystone.conf. I've tried tackling the architecture from a few different directions and I've gotten to the point where I can create roles, create tenants, and list users in my IDM domain, but not assign roles to users. I think this is because I'm trying to separate out the tenants and roles from the users in the directory tree. I don't mind keystone creating objects in it's own tree, but I don't want it updating user accounts from IDM. Has anyone gotten this configuration working? I'm willing to wade through details, but I'm curious if someone else has this working and I could just replicate their setup. Michael. -- Michael Solberg Principal Architect, Red Hat, Inc. From ayoung at redhat.com Fri May 31 00:04:12 2013 From: ayoung at redhat.com (Adam Young) Date: Thu, 30 May 2013 20:04:12 -0400 Subject: [Rdo-list] Fwd: RDO with Red Hat IDM In-Reply-To: <51A7CB8F.7030501@redhat.com> References: <51A7C117.6060100@redhat.com> <51A7CB8F.7030501@redhat.com> Message-ID: <51A7E8FC.6070601@redhat.com> On 05/30/2013 05:58 PM, Dave Neary wrote: > Hi Adam, > > Can you have a look at this post on rdo-list and see if you can figure > out what's going wrong, please? > > Thanks! > Dave. > > > > -------- Original Message -------- > Subject: [Rdo-list] RDO with Red Hat IDM > Date: Thu, 30 May 2013 17:13:59 -0400 > From: Michael Solberg > To: rdo-list at redhat.com > > Hi list. > > I've spent a day or two now trying to use Red Hat IDM as a backing store > for Keystone in RDO and I'm about to pull my hair out. > > I started with Adam Young's blog post here: > http://adam.younglogic.com/2012/02/freeipa-keystone-ldap/ > > Then I watched his Summit video here: > http://www.openstack.org/summit/portland-2013/session-videos/presentation/securing-openstack-with-freeipa > > Then I tried to follow this document: > http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html > > I definitely ran into the domain_id problem described here: > https://lists.launchpad.net/openstack/msg23387.html > > I also ran into the issue around the RFC 4519 schema not allowing a > "enabled" attribute. I think I've mitigated this by setting the > "attribute_ignore" settings in keystone.conf. 
> > I've tried tackling the architecture from a few different directions and > I've gotten to the point where I can create roles, create tenants, and > list users in my IDM domain, but not assign roles to users. I think > this is because I'm trying to separate out the tenants and roles from > the users in the directory tree. I don't mind keystone creating objects > in its own tree, but I don't want it updating user accounts from IDM.

So, you have put projects into their own subtree? Can the LDAP user from Keystone modify that tree? I would think you would want to make a user that has ACLs set up permitting them to make modifications to that tree, but not to add users. Configure Keystone to use that user to talk to LDAP. Assuming you call that user KeystoneManager, you would then set the [LDAP] config value user = KeystoneManager in the Keystone config. The ACL stuff in IPA is kind of cool. I did some write-ups here: http://adam.younglogic.com/2012/02/group-managers-in-freeipa/ That should give you an indication of how you want to proceed.

> > Has anyone gotten this configuration working? I'm willing to wade > through details, but I'm curious if someone else has this working and I > could just replicate their setup. > > Michael. >

From dneary at redhat.com Fri May 31 07:00:31 2013
From: dneary at redhat.com (Dave Neary)
Date: Fri, 31 May 2013 09:00:31 +0200
Subject: [Rdo-list] Reports of F18 problems
Message-ID: <51A84A8F.3020106@redhat.com>

Hi all, I have had a few reports related to installation on Fedora 18 in recent days - anyone here had success installing RDO on that platform in the last week or so? The issues reported are:
* After installation, the default security group is created, but is not visible in the dashboard until I create a second security group: http://openstack.redhat.com/forum/discussion/155/packstack-installation-no-default-security-group-created/p1
* Networking does not work out of the box: http://openstack.redhat.com/forum/discussion/156/router-interface-status-down/p1
Anyone else encountering these issues? Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13

From kimi.zhang at nsn.com Fri May 31 07:48:12 2013
From: kimi.zhang at nsn.com (Zhang, Kimi (NSN - CN/Cheng Du))
Date: Fri, 31 May 2013 07:48:12 +0000
Subject: [Rdo-list] QEMU-KVM RBD support
Message-ID: <90CF2062F86FD8498897037C7FBBC08805ECFB@SGSIMBX001.nsn-intra.net>

I am testing Ceph as a backend for Cinder. The KVM that comes with RHEL 6.4 has no RBD support; is there any plan to add this support? I tried to build a newer version of KVM from source with RBD support, but there are problems with launching VM instances and attaching volumes. Kimi Zhang
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ronac07 at gmail.com Fri May 31 10:30:44 2013
From: ronac07 at gmail.com (ronac07 at gmail.com)
Date: Fri, 31 May 2013 06:30:44 -0400
Subject: [Rdo-list] Reports of F18 problems
In-Reply-To: <51A84A8F.3020106@redhat.com>
References: <51A84A8F.3020106@redhat.com>
Message-ID:

Exact same problems, but installing on CentOS 6.4 with its latest updates. I ran an "allinone" build with Packstack. Per the forum post, if I add a "new" security group the default then shows up. No network is created by default. If I try to create a network from the GUI I can launch VMs, but they do not get an address. I did not try adding a router. I've manually configured quantum/openvswitch for a flat shared network. At this point my VMs are getting addresses and I have connectivity between VMs, but I cannot ping or SSH a VM from the server.

Sent from my iPad

On May 31, 2013, at 3:00 AM, Dave Neary wrote: > Hi all, > > I have had a few reports related to installation on Fedora 18 in recent > days - anyone here had success installing RDO on that platform in the > last week or so? > > The issues reported are: > * After installation, the default security group is created, but is not > visible in the dashboard until I create a second security group: > http://openstack.redhat.com/forum/discussion/155/packstack-installation-no-default-security-group-created/p1 > * Networking does not work out of the box: > http://openstack.redhat.com/forum/discussion/156/router-interface-status-down/p1 > > Anyone else encountering these issues? > > Thanks, > Dave. > > -- > Dave Neary - Community Action and Impact > Open Source and Standards, Red Hat - http://community.redhat.com > Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 > > _______________________________________________ > Rdo-list mailing list > Rdo-list at redhat.com > https://www.redhat.com/mailman/listinfo/rdo-list
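For context, a typical way to create such a flat, shared network with the Grizzly quantum client is sketched below. The network name, the physical network label physnet1, and the addressing are illustrative assumptions, and the OVS plugin must already have a bridge mapping for that physical network (as in the bridge_mappings seen earlier in this digest):

quantum net-create sharednet --shared --provider:network_type flat --provider:physical_network physnet1
quantum subnet-create sharednet 192.168.100.0/24 --name sharedsubnet \
  --gateway 192.168.100.1 --allocation-pool start=192.168.100.10,end=192.168.100.200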
From ronac07 at gmail.com Fri May 31 10:30:44 2013
From: ronac07 at gmail.com (ronac07 at gmail.com)
Date: Fri, 31 May 2013 06:30:44 -0400
Subject: [Rdo-list] Reports of F18 problems
In-Reply-To: <51A84A8F.3020106@redhat.com>
References: <51A84A8F.3020106@redhat.com>
Message-ID:

Exact same problems here, but installing on CentOS 6.4 with its latest updates. I ran an "allinone" build with Packstack. Per the forum post, if I add a "new" security group, the default then shows up. No network is created by default. If I try to create a network from the GUI, I can launch VMs but they do not get an address. I did not try adding a router.

I've manually configured quantum/openvswitch for a flat shared network. At this point my VMs are getting addresses and I have connectivity between VMs, but I cannot ping or SSH a VM from the server.

Sent from my iPad

On May 31, 2013, at 3:00 AM, Dave Neary wrote:

> Hi all,
>
> I have had a few reports related to installation on Fedora 18 in recent
> days - anyone here had success installing RDO on that platform in the
> last week or so?
>
> The issues reported are:
> * After installation, the default security group is created, but is not
> visible in the dashboard until I create a second security group:
> http://openstack.redhat.com/forum/discussion/155/packstack-installation-no-default-security-group-created/p1
> * Networking does not work out of the box:
> http://openstack.redhat.com/forum/discussion/156/router-interface-status-down/p1
>
> Anyone else encountering these issues?
>
> Thanks,
> Dave.
>
> --
> Dave Neary - Community Action and Impact
> Open Source and Standards, Red Hat - http://community.redhat.com
> Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13

From pbrady at redhat.com Fri May 31 10:53:16 2013
From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=)
Date: Fri, 31 May 2013 11:53:16 +0100
Subject: [Rdo-list] [package announce] network namespace support
Message-ID: <51A8811C.8090603@redhat.com>

The RDO repositories were updated with a kernel and support utilities to support network namespaces, as used by OpenStack networking.

The affected packages were:

update: iputils-20071127-17.el6_4
add: kernel-2.6.32-358.6.2.openstack.el6

From pbrady at redhat.com Fri May 31 10:56:07 2013
From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=)
Date: Fri, 31 May 2013 11:56:07 +0100
Subject: [Rdo-list] [package announce] nova network dhcp lease control
Message-ID: <51A881C7.3060900@redhat.com>

The dnsmasq-utils package, which provides direct control over DHCP leases, is now installed automatically from the RDO repositories.

add: dnsmasq-utils-2.48-13.el6
update: openstack-nova-2013.1.1-3.el6

From red at fedoraproject.org Fri May 31 10:56:34 2013
From: red at fedoraproject.org (Sandro "red" Mathys)
Date: Fri, 31 May 2013 12:56:34 +0200
Subject: [Rdo-list] [package announce] network namespace support
In-Reply-To: <51A8811C.8090603@redhat.com>
References: <51A8811C.8090603@redhat.com>
Message-ID:

Nice one. No GRE support while at it, though?

On Fri, May 31, 2013 at 12:53 PM, Pádraig Brady wrote:
> The RDO repositories were updated with a kernel
> and support utilities to support network namespaces,
> as used by OpenStack networking.
>
> The affected packages were:
>
> update: iputils-20071127-17.el6_4
> add: kernel-2.6.32-358.6.2.openstack.el6

From pbrady at redhat.com Fri May 31 11:05:45 2013
From: pbrady at redhat.com (=?UTF-8?B?UMOhZHJhaWcgQnJhZHk=?=)
Date: Fri, 31 May 2013 12:05:45 +0100
Subject: [Rdo-list] [package announce] network namespace support
In-Reply-To:
References: <51A8811C.8090603@redhat.com>
Message-ID: <51A88409.6000204@redhat.com>

On 05/31/2013 11:56 AM, Sandro "red" Mathys wrote:
> Nice one. No GRE support while at it, though?

There are no concrete plans for this at present.

thanks,
Pádraig.
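For anyone comparing notes with the manual quantum/openvswitch flat shared network mentioned in the CentOS report above, a rough sketch of that kind of setup on Grizzly is below. The physical network name (physnet1), bridge (br-eth1), NIC and subnet are made-up placeholders and have to match your ovs_quantum_plugin.ini, so treat this as an outline rather than a recipe.

  # In /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (all nodes):
  #   network_vlan_ranges = physnet1
  #   bridge_mappings = physnet1:br-eth1
  # The mapped bridge must exist, with the physical NIC added to it:
  ovs-vsctl add-br br-eth1
  ovs-vsctl add-port br-eth1 eth1

  # Create a shared flat provider network and a subnet on it:
  quantum net-create sharednet1 --shared \
      --provider:network_type flat --provider:physical_network physnet1
  quantum subnet-create sharednet1 192.168.100.0/24 --name sharedsubnet1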
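On the dnsmasq-utils announcement above: that package ships the dhcp_release helper, which nova-network can use (via force_dhcp_release) to drop a lease as soon as an instance is deleted instead of waiting for it to expire. Invoked by hand it looks roughly like this; the bridge, address and MAC are made-up examples.

  # dhcp_release <interface> <address> <MAC>
  dhcp_release br100 10.0.0.3 fa:16:3e:12:34:56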
From msolberg at redhat.com Fri May 31 13:51:53 2013
From: msolberg at redhat.com (Michael Solberg)
Date: Fri, 31 May 2013 09:51:53 -0400
Subject: [Rdo-list] Fwd: RDO with Red Hat IDM
In-Reply-To: <51A7E8FC.6070601@redhat.com>
References: <51A7C117.6060100@redhat.com> <51A7CB8F.7030501@redhat.com> <51A7E8FC.6070601@redhat.com>
Message-ID: <51A8AAF9.5080305@redhat.com>

On 05/30/2013 08:04 PM, Adam Young wrote:
> On 05/30/2013 05:58 PM, Dave Neary wrote:
>> Hi Adam,
>>
>> Can you have a look at this post on rdo-list and see if you can figure
>> out what's going wrong, please?
>>
>> Thanks!
>> Dave.
>>
>> -------- Original Message --------
>> Subject: [Rdo-list] RDO with Red Hat IDM
>> Date: Thu, 30 May 2013 17:13:59 -0400
>> From: Michael Solberg
>> To: rdo-list at redhat.com
>>
>> Hi list.
>>
>> I've spent a day or two now trying to use Red Hat IDM as a backing store
>> for Keystone in RDO and I'm about to pull my hair out.
>>
>> I started with Adam Young's blog post here:
>> http://adam.younglogic.com/2012/02/freeipa-keystone-ldap/
>>
>> Then I watched his Summit video here:
>> http://www.openstack.org/summit/portland-2013/session-videos/presentation/securing-openstack-with-freeipa
>>
>> Then I tried to follow this document:
>> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html
>>
>> I definitely ran into the domain_id problem described here:
>> https://lists.launchpad.net/openstack/msg23387.html
>>
>> I also ran into the issue around the RFC 4519 schema not allowing an
>> "enabled" attribute. I think I've mitigated this by setting the
>> "attribute_ignore" settings in keystone.conf.
>>
>> I've tried tackling the architecture from a few different directions and
>> I've gotten to the point where I can create roles, create tenants, and
>> list users in my IDM domain, but not assign roles to users. I think
>> this is because I'm trying to separate out the tenants and roles from
>> the users in the directory tree. I don't mind keystone creating objects
>> in its own tree, but I don't want it updating user accounts from IDM.
>
> So, you have put projects into their own subtree? Can the LDAP user
> from Keystone modify that tree?

Yes - for right now, I'm just using the cn=Directory Manager account. I figured I'd work on the ACLs once I got the mappings correct.

All of my issues so far have been around Keystone trying to create or read objects in the tree that don't conform to the standard directory types that we ship in IDM (groupOfNames, posixAccount, etc.). That's why I was curious if someone had a working configuration that I could look at. It looks like we've documented using AD upstream, but not IDM.
I think what I want is something like this:

user_tree_dn = cn=users,cn=accounts,dc=atl,dc=salab,dc=redhat,dc=com
user_objectclass = person
user_domain_id_attribute = businessCategory
user_id_attribute = uid
user_name_attribute = uid
user_mail_attribute = email
user_pass_attribute = userPassword
user_attribute_ignore = enabled
user_allow_create = False
user_allow_update = False
user_allow_delete = False

(This is the IDM-managed list of users.)

tenant_tree_dn = ou=tenants,cn=openstack,dc=atl,dc=salab,dc=redhat,dc=com
tenant_attribute_ignore = enabled

(This is the Keystone-managed list of tenants.)

role_tree_dn = ou=roles,cn=openstack,dc=atl,dc=salab,dc=redhat,dc=com
role_attribute_ignore = enabled

(This is the Keystone-managed list of roles.)

> I would think you would want to make a user that has ACLs set up
> permitting it to make modifications to that tree, but not to add
> users. Configure Keystone to use that user to talk to LDAP.

Yep. Once I figure out what a working configuration looks like, I was going to go down that road.

Michael.
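Pulling the pieces of this thread together, a sketch of what a Grizzly-era keystone.conf could look like with IDM holding the users read-only, Keystone owning its own tenant and role subtrees, and a dedicated bind account as Adam suggests. Every DN, the suffix and the password below are placeholders, the tenant/role objectclasses are simply Keystone's defaults, and this has not been tested as a whole - a starting point only, not a known-good configuration.

  [identity]
  driver = keystone.identity.backends.ldap.Identity

  [ldap]
  url = ldap://ipa.example.com
  # Dedicated bind account, with ACIs allowing writes only under cn=openstack
  user = uid=keystonemanager,cn=users,cn=accounts,dc=example,dc=com
  password = secret
  suffix = dc=example,dc=com

  # Users come from IDM and are never written by Keystone
  user_tree_dn = cn=users,cn=accounts,dc=example,dc=com
  user_objectclass = person
  user_id_attribute = uid
  user_name_attribute = uid
  # IPA stores the address in the standard 'mail' attribute
  user_mail_attribute = mail
  user_pass_attribute = userPassword
  user_attribute_ignore = enabled
  user_allow_create = False
  user_allow_update = False
  user_allow_delete = False

  # Tenants and roles live in a Keystone-owned subtree
  tenant_tree_dn = ou=tenants,cn=openstack,dc=example,dc=com
  tenant_objectclass = groupOfNames
  tenant_attribute_ignore = enabled
  role_tree_dn = ou=roles,cn=openstack,dc=example,dc=com
  role_objectclass = organizationalRole
  role_attribute_ignore = enabled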