From gkotton at redhat.com Wed May 1 06:03:02 2013 From: gkotton at redhat.com (Gary Kotton) Date: Wed, 01 May 2013 09:03:02 +0300 Subject: [rhos-list] Fwd: Quantum Metadata service In-Reply-To: <5180A98A.1050005@redhat.com> References: <5180085B.1030003@redhat.com> <5180A98A.1050005@redhat.com> Message-ID: <5180B016.1060209@redhat.com> -------- Original Message -------- >> Subject: [rhos-list] Quantum Metadata service >> Date: Tue, 30 Apr 2013 16:54:49 +0000 >> From: Minton, Rich >> To: rhos-list at redhat.com >> >> >> >> Regarding Metadata and Openstack Networking (Quantum), is it necessary >> to have the L3-agent running in order to access metadata from a VM? Yes. In RHOS 2.1 this is done via the L3 agent. In RHOS 3.0 there will be an option to do this via the DHCP agent too. >> >> >> >> Also, the Openstack Networking documentation says to add the following >> to nova.conf: Some of the variables below are specific to Grizzly (3.0) and not Folsom (2.1). I'll explain each below. >> >> >> >> firewall_driver = nova.virt.firewall.NoopFirewallDriver This is to disable the security group driver in Nova as it will be done via Quantum >> >> security_group_api = quantum This indicates that Quantum will do the security group implementation. Nova will just be a proxy to Quantum. >> >> service_quantum_metadata_proxy = true The flag indicates that Quantum will proxy the metadata requests and will resolve the instance ID (this is only relevant to 3.0/Grizzly) >> >> quantum_metadata_proxy_shared_secret = "password" Shared secret to validate the proxy requests (ditto regarding the version) >> >> network_api_class = nova.network.quantumv2.api.API This indicates that Quantum will do the network management and not Nova >> >> >> >> Also, if quantum proxies calls to metadata, do I still need this line: >> >> enabled_apis=ec2,osapi_compute,metadata I think so. I think that this is specific to the nova api. >> >> >> >> Basically do I need to add these to every compute node and is this all I >> need to get metadata service up and running? Yes regarding the compute nodes. In addition to this you will need to do the following regarding quantum: 1. You will need to update the file /etc/quantum/metadata_agent.ini with all of the relevant credentials to enable Quantum to access the metadata service 2. You will need to configure the l3 agent/dhcp agent (depending on your deployment choice) to interface with the metadata agent 3. You will need to launch the metadata proxy - for example: python /opt/sck/quantum/bin/quantum-metadata-agent --config-file /etc/quantum/quantum.conf --config-file=/etc/quantum/metadata_agent.ini Please note that the above is only relevant for Grizzly. In Folsom this is done via the L3 agent - the limitation here is there is no overlapping IP support. Thanks Gary >> >> >> >> Thanks for the help. >> >> Rick >> >> >> >> _Richard Minton_ >> >> LMICC Systems Administrator >> >> 4000 Geerdes Blvd, 13D31 >> >> King of Prussia, PA 19406 >> >> Phone: 610-354-5482 >> >> >> >> >> > From rich.minton at lmco.com Thu May 2 16:54:22 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 2 May 2013 16:54:22 +0000 Subject: [rhos-list] Metadata redux. Message-ID: I can see the light at the end of the tunnel... Just need some clarification on implementing metadata when using Quantum. 
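For reference, a minimal sketch of the pieces Gary lists above, using Grizzly-era option names; the controller address, credentials and shared secret below are placeholders, not values from this thread:

On each compute node, in nova.conf:

    service_quantum_metadata_proxy = true
    quantum_metadata_proxy_shared_secret = METADATA_SECRET

On the node running the metadata agent, in /etc/quantum/metadata_agent.ini:

    [DEFAULT]
    auth_url = http://CONTROLLER_IP:35357/v2.0
    auth_region = RegionOne
    admin_tenant_name = services
    admin_user = quantum
    admin_password = QUANTUM_PASS
    nova_metadata_ip = CONTROLLER_IP
    nova_metadata_port = 8775
    metadata_proxy_shared_secret = METADATA_SECRET

The same secret must appear in both files, since nova-api uses it to validate the requests the proxy forwards. The agent itself is then started with the quantum-metadata-agent command Gary shows above (or its init script, where packaged).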
I have the following Openstack configuration: 1 Controller/compute node running Keystone, compute, cinder, nova-api for metadata, glance, Horizon, Openvswitch, L2-agent (and depending on the answer to this email, the L3-agent). 1 Network node running openvswith and dhcp agents 3 Compute nodes running compute, cinder, openvswitch, and L2-agent Two NICs - one NIC, eth0 is setup to carrie three VLANs, one for host management network, one for Storage network, and one for external datacenter traffic. The other NIC, eth1 is for VM to VM communications and for VMs to access external networks. Eth1 has br-int interface. We currently have no br-ex interface defined. We are using a "flat" provider network for VMs. Is there a way to reach Metadata service without using the Quantum L3 agent? If not can I attach br-ex to the same interface as br-int or can I just use NATs to route through br-int? Let me know if you need more information... config files, etc. Thank you, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Thu May 2 18:06:14 2013 From: rkukura at redhat.com (Robert Kukura) Date: Thu, 02 May 2013 14:06:14 -0400 Subject: [rhos-list] Metadata redux. In-Reply-To: References: Message-ID: <5182AB16.7070604@redhat.com> On 05/02/2013 12:54 PM, Minton, Rich wrote: > I can see the light at the end of the tunnel? > > > > Just need some clarification on implementing metadata when using Quantum. > > > > I have the following Openstack configuration: > > > > 1 Controller/compute node running Keystone, compute, cinder, nova-api > for metadata, glance, Horizon, Openvswitch, L2-agent (and depending on > the answer to this email, the L3-agent). > > 1 Network node running openvswith and dhcp agents > > 3 Compute nodes running compute, cinder, openvswitch, and L2-agent > > > > Two NICs ? one NIC, eth0 is setup to carrie three VLANs, one for host > management network, one for Storage network, and one for external > datacenter traffic. The other NIC, eth1 is for VM to VM communications > and for VMs to access external networks. Eth1 has br-int interface. We > currently have no br-ex interface defined. Hi Rich, I'm not very familiar with the metadata service, so will leave that for someone else (Gary hopefully). But I'm concerned about your basic L2 setup. You should not be putting a physical network interface directly on br-int. The VLAN tags used on br-int are managed locally by quantum-openvswitch-agent, and are not the same VLAN tags you want on your physical network. The physical interface should be on a different OVS bridge (i.e. br-eth1 or br-physnet1), and the bridge_mappings configuration variable should map your physical network name to the name of this bridge on each node where the quantum-openvswitch-agent runs. The quantum-openvswitch-agent takes care of creating a veth to connect br-int to br-eth1, and creating flow rules that translate the VLAN tags as packets cross this veth in either direction. > > > > We are using a ?flat? provider network for VMs. The above applies even with flat networks. The packets on br-int will still be tagged with a local VLAN id, and the flow rules will add/remove the tag as packets cross the veth. > > > > Is there a way to reach Metadata service without using the Quantum L3 > agent? If not can I attach br-ex to the same interface as br-int or can > I just use NATs to route through br-int? 
Again, I don't know about the metadata service. But if you are using quantum-l3-agent, there is no need to use br-ex for the external network. Instead, you can set external_bridge = "" in l3_agent.ini to disable it, and create your external network as a flat or vlan provider network. -Bob > > > > Let me know if you need more information? config files, etc. > > > > Thank you, > > Rick > > > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From prashanth.prahal at gmail.com Thu May 2 21:31:03 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Thu, 2 May 2013 14:31:03 -0700 Subject: [rhos-list] openstack networking with openvswitch Message-ID: Hi Folks, I' m in the process of setting up a openvswitch deployment and following this guide ( https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/2/pdf/Release_Notes/Red_Hat_OpenStack_Preview-2-Release_Notes-en-US.pdf ) My plan was to create network segmented by different vlan'ids using OVS. This is my configuration : ---------------------------------------------------------------------------------------------- [compute node] [nova-compute and other nova utilities] [quantum-server] [quantum-dhcp-agent] ---------------------------------------------------------------------------------------------- 10.9.10.43 eth5 | | [mgmt] [data] | | 10.9.10.129 eth1 ---------------------------------------------------------------------------------------------- [network node] [quantum-l3-agent] ---------------------------------------------------------------------------------------------- I've the configuration files pasted at the end of this email for more clarity, but here's what I was expecting to accomplish. 
Step 1 : Create a network *quantum net-create opn1 --provider:network-type vlan --provider:physical-network physnet5 --provider:segmentation-id 500* Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | True | | id | 5d47f63f-c804-4d23-8aaa-86373bc96b3b | | name | opn1 | | provider:network_type | vlan | | provider:physical_network | physnet5 | | provider:segmentation_id | 500 | | router:external | False | | shared | False | | status | ACTIVE | | subnets | | | tenant_id | b26737806380406dbed3d273308a6a2f | +---------------------------+--------------------------------------+ Step 2 : Create a subnet *quantum subnet-create opn1 65.1.1.0/24* Created a new subnet: +------------------+--------------------------------------------+ | Field | Value | +------------------+--------------------------------------------+ | allocation_pools | {"start": "65.1.1.2", "end": "65.1.1.254"} | | cidr | 65.1.1.0/24 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 65.1.1.1 | | host_routes | | | id | 5df16c75-31eb-4332-b76c-c0986525e2de | | ip_version | 4 | | name | | | network_id | 5d47f63f-c804-4d23-8aaa-86373bc96b3b | C | tenant_id | b26737806380406dbed3d273308a6a2f | Step 3: boot an image and attach it to this network *nova boot --image cirros --flavor m1.tiny --nic net-id=5d47f63f-c804-4d23-8aaa-86373bc96b3b --key-name test my_1_server* At this point, the vm comes up with an address on the subnet and is accessible locally from within the compute node : +--------------------------------------+-------------+--------+---------------+ | ID | Name | Status | Networks | +--------------------------------------+-------------+--------+---------------+ | 72224a7c-273e-4dea-922b-09c38bd77538 | my_1_server | ACTIVE | opn1=65.1.1.3 | +--------------------------------------+-------------+--------+---------------+ But, I was expecting to see a vnic on eth5 for the vlan 500 which we created in Step1 - that didn't seem to have happened from the ovs-vsctl show output. f83d2ba4-ff86-4e2c-8f00-0c572e30533f Bridge "br-eth5" Port "br-eth5" Interface "br-eth5" type: internal Bridge br-ex Port br-ex Interface br-ex type: internal Bridge br-int Port "tapa12a740a-c2" tag: 5 Interface "tapa12a740a-c2" type: internal Port "qvoccf5b741-60" tag: 4095 Interface "qvoccf5b741-60" Port "qvodc963159-16" tag: 4095 Interface "qvodc963159-16" Port "qvod7544ba5-c1" tag: 5 Interface "qvod7544ba5-c1" Port "qvo4894fa5d-40" tag: 4095 Interface "qvo4894fa5d-40" Port br-int Interface br-int type: internal ovs_version: "1.9.0" My question is how do we expect the VMs on this compute node to talk with other VMs on a different compute node if the physical interface is not plugged into the br-int. Am I missing something here ? 
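A quick way to check whether the agent wired br-int to the physical bridge is sketched below, assuming the bridge names used above and the usual RDO log location:

    ovs-vsctl list-ports br-int     # expect an int-br-eth5 port
    ovs-vsctl list-ports br-eth5    # expect phy-br-eth5 plus eth5 itself
    grep "Bridge mappings" /var/log/quantum/openvswitch-agent.log

If the int-br-eth5/phy-br-eth5 veth pair is missing, the agent never attached physnet5, which is consistent with the ovs-vsctl output above.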
Regards, Prashanth Below is a snapshot of the different configuration files : * * *[Compute Node]* *quantum.conf* [DEFAULT] rpc_backend = quantum.openstack.common.rpc.impl_qpid qpid_hostname = 10.9.10.43 core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 auth_strategy = keystone verbose = True debug = True bind_port = 9696 [keystone_authtoken] admin_tenant_name = openstack_network admin_user = openstack_network admin_password = test123 *dhcp_agent.ini* [DEFAULT] auth_url = http://localhost:35357/v2.0/ admin_tenant_name = openstack_network admin_user = openstack_network admin_password = test123 interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver use_namespaces = False dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq admin_username = quantum */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* [DATABASE] sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum [OVS] tenant_network_type = vlan network_vlan_ranges = physnet5:100:1000 bridge_mapping = physnet5:br-eth5 *nova.conf* [DEFAULT] network_api_class = nova.network.quantumv2.api.API quantum_admin_username = openstack_network quantum_admin_password = test123 quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ quantum_auth_strategy = keystone quantum_admin_tenant_name = openstack_network quantum_url = http://10.9.10.43:9696/ libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver libvirt_use_virtio_for_bridges=true On the network node, this is the l3 configuration : *l3_agent.ini* [DEFAULT] auth_url = http://10.9.10.43:35357/v2.0/ admin_user = openstack_network admin_password = test123 admin_tenant_name = openstack_network auth_strategy = keystone interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver use_namespaces = False verbose = True debug = False interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver auth_region = regionOne router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 external_network_bridge = br-ex -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Fri May 3 12:42:46 2013 From: rkukura at redhat.com (Robert Kukura) Date: Fri, 03 May 2013 08:42:46 -0400 Subject: [rhos-list] openstack networking with openvswitch In-Reply-To: References: Message-ID: <5183B0C6.1030400@redhat.com> On 05/02/2013 05:31 PM, Prashanth Prahalad wrote: > Hi Folks, > > I' m in the process of setting up a openvswitch deployment and following > this guide > (https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/2/pdf/Release_Notes/Red_Hat_OpenStack_Preview-2-Release_Notes-en-US.pdf) > > My plan was to create network segmented by different vlan'ids using OVS. > > This is my configuration : > > ---------------------------------------------------------------------------------------------- > [compute node] [nova-compute and other nova > utilities] > [quantum-server] > [quantum-dhcp-agent] > ---------------------------------------------------------------------------------------------- > 10.9.10.43 eth5 > | | > [mgmt] [data] > | | > 10.9.10.129 eth1 > ---------------------------------------------------------------------------------------------- > [network node] [quantum-l3-agent] > ---------------------------------------------------------------------------------------------- > > I've the configuration files pasted at the end of this email for more > clarity, but here's what I was expecting to accomplish. 
> > Step 1 : Create a network > *quantum net-create opn1 --provider:network-type vlan > --provider:physical-network physnet5 --provider:segmentation-id 500* > Created a new network: > +---------------------------+--------------------------------------+ > | Field | Value | > +---------------------------+--------------------------------------+ > | admin_state_up | True | > | id | 5d47f63f-c804-4d23-8aaa-86373bc96b3b | > | name | opn1 | > | provider:network_type | vlan | > | provider:physical_network | physnet5 | > | provider:segmentation_id | 500 | > | router:external | False | > | shared | False | > | status | ACTIVE | > | subnets | | > | tenant_id | b26737806380406dbed3d273308a6a2f | > +---------------------------+--------------------------------------+ > > Step 2 : > Create a subnet > *quantum subnet-create opn1 65.1.1.0/24 * > Created a new subnet: > +------------------+--------------------------------------------+ > | Field | Value | > +------------------+--------------------------------------------+ > | allocation_pools | {"start": "65.1.1.2", "end": "65.1.1.254"} | > | cidr | 65.1.1.0/24 > | > | dns_nameservers | | > | enable_dhcp | True | > | gateway_ip | 65.1.1.1 | > | host_routes | | > | id | 5df16c75-31eb-4332-b76c-c0986525e2de | > | ip_version | 4 | > | name | | > | network_id | 5d47f63f-c804-4d23-8aaa-86373bc96b3b | > C > | tenant_id | b26737806380406dbed3d273308a6a2f | > > > Step 3: boot an image and attach it to this network > *nova boot --image cirros --flavor m1.tiny --nic > net-id=5d47f63f-c804-4d23-8aaa-86373bc96b3b --key-name test my_1_server* > > At this point, the vm comes up with an address on the subnet and is > accessible locally from within the compute node : > > +--------------------------------------+-------------+--------+---------------+ > | ID | Name | Status | Networks > | > +--------------------------------------+-------------+--------+---------------+ > | 72224a7c-273e-4dea-922b-09c38bd77538 | my_1_server | ACTIVE | > opn1=65.1.1.3 | > +--------------------------------------+-------------+--------+---------------+ > > But, I was expecting to see a vnic on eth5 for the vlan 500 which we > created in Step1 - that didn't seem to have happened from the ovs-vsctl > show output. > > f83d2ba4-ff86-4e2c-8f00-0c572e30533f > Bridge "br-eth5" > Port "br-eth5" > Interface "br-eth5" > type: internal > Bridge br-ex > Port br-ex > Interface br-ex > type: internal > Bridge br-int > Port "tapa12a740a-c2" > tag: 5 > Interface "tapa12a740a-c2" > type: internal > Port "qvoccf5b741-60" > tag: 4095 > Interface "qvoccf5b741-60" > Port "qvodc963159-16" > tag: 4095 > Interface "qvodc963159-16" > Port "qvod7544ba5-c1" > tag: 5 > Interface "qvod7544ba5-c1" > Port "qvo4894fa5d-40" > tag: 4095 > Interface "qvo4894fa5d-40" > Port br-int > Interface br-int > type: internal > ovs_version: "1.9.0" > > > My question is how do we expect the VMs on this compute node to talk > with other VMs on a different compute node if the physical interface is > not plugged into the br-int. Am I missing something here ? Hi Prashanth, You are missing something! The quantum-openvswitch-agent should be creating a veth to connect br-int to br-eth5, along with flow rules to translate VLAN tags as packets cross the veth. 
You should see ports named phy-br-eth5 and int-br-eth5 similar to those shown here: # ovs-vsctl show 212029ed-2bd2-4ce1-beee-f75aea4d5535 Bridge br-int Port "tapb806d2cd-66" tag: 4 Interface "tapb806d2cd-66" type: internal Port br-int Interface br-int type: internal Port "qvo72a8ea81-4f" tag: 4 Interface "qvo72a8ea81-4f" Port "int-br-eth2" Interface "int-br-eth2" Port "int-br-eth1" Interface "int-br-eth1" Bridge "br-eth1" Port "eth1" Interface "eth1" Port "phy-br-eth1" Interface "phy-br-eth1" Port "br-eth1" Interface "br-eth1" type: internal Bridge "br-eth2" Port "eth2" Interface "eth2" Port "br-eth2" Interface "br-eth2" type: internal Port "phy-br-eth2" Interface "phy-br-eth2" ovs_version: "1.9.0" You should also see the phy-br-eth5 and int-br-eth5 devices when you run "ip link". Please make sure quantum-openvswitch-agent is running on all compute and network nodes, is getting the proper configuration (/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini below looks OK), and check its log for errors. -Bob > > Regards, > Prashanth > > > Below is a snapshot of the different configuration files : > * > * > *[Compute Node]* > *quantum.conf* > > [DEFAULT] > rpc_backend = quantum.openstack.common.rpc.impl_qpid > qpid_hostname = 10.9.10.43 > core_plugin = > quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 > auth_strategy = keystone > verbose = True > debug = True > bind_port = 9696 > [keystone_authtoken] > admin_tenant_name = openstack_network > admin_user = openstack_network > admin_password = test123 > > *dhcp_agent.ini* > [DEFAULT] > auth_url = http://localhost:35357/v2.0/ > admin_tenant_name = openstack_network > admin_user = openstack_network > admin_password = test123 > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > use_namespaces = False > dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq > admin_username = quantum > > */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* > [DATABASE] > sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum > [OVS] > tenant_network_type = vlan > network_vlan_ranges = physnet5:100:1000 > bridge_mapping = physnet5:br-eth5 > > *nova.conf* > [DEFAULT] > > network_api_class = nova.network.quantumv2.api.API > quantum_admin_username = openstack_network > quantum_admin_password = test123 > quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ > quantum_auth_strategy = keystone > quantum_admin_tenant_name = openstack_network > quantum_url = http://10.9.10.43:9696/ > libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver > libvirt_use_virtio_for_bridges=true > > > On the network node, this is the l3 configuration : > *l3_agent.ini* > [DEFAULT] > auth_url = http://10.9.10.43:35357/v2.0/ > admin_user = openstack_network > admin_password = test123 > admin_tenant_name = openstack_network > auth_strategy = keystone > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > use_namespaces = False > verbose = True > debug = False > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > auth_region = regionOne > router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 > external_network_bridge = br-ex > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From rich.minton at lmco.com Fri May 3 17:42:05 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 3 May 2013 17:42:05 +0000 Subject: [rhos-list] Quantum Networking Interfaces. 
Message-ID: Quick question regarding quantum networking interfaces, specifically br-int. I have a 5 node cluster - 1 Controller/compute node which runs the Quantum server and the l3-agent, 1 Network node which runs the dhcp-agent, and 3 compute nodes. The all have openvswitch installed and I have created the br-int interfaces on all. One thing I noticed is that only the quantum server node (controller) and the network node have an entry for "br-int" when I run the route command. The compute nodes do not. The VMs on my other compute nodes get an IP from DHCP but I am not able to ping my gateway (10.0.56.1) from within the VM. I can ping the same gateway from each of the compute hosts. I double-checked all of my config files and they appear to be correct. I can send any of those along if needed. Controller - with Quantum Server Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 link-local * 255.255.0.0 U 1003 0 0 eth0 link-local * 255.255.0.0 U 1004 0 0 eth1 link-local * 255.255.0.0 U 1009 0 0 br-int link-local * 255.255.0.0 U 1010 0 0 eth0.159 link-local * 255.255.0.0 U 1011 0 0 eth0.500 link-local * 255.255.0.0 U 1012 0 0 eth0.502 link-local * 255.255.0.0 U 1028 0 0 br-ex default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Network Node Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 10.0.56.0 * 255.255.248.0 U 0 0 0 tapaee8f28f-74 link-local * 255.255.0.0 U 1003 0 0 eth0 link-local * 255.255.0.0 U 1004 0 0 eth1 link-local * 255.255.0.0 U 1009 0 0 br-int link-local * 255.255.0.0 U 1010 0 0 eth0.159 link-local * 255.255.0.0 U 1011 0 0 eth0.500 link-local * 255.255.0.0 U 1012 0 0 eth0.502 default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Compute Nodes Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 link-local * 255.255.0.0 U 1005 0 0 eth0 link-local * 255.255.0.0 U 1006 0 0 eth1 link-local * 255.255.0.0 U 1042 0 0 eth0.159 link-local * 255.255.0.0 U 1043 0 0 eth0.500 link-local * 255.255.0.0 U 1044 0 0 eth0.502 default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Ok, so this may not be a quick question... Once again, all or your help is greatly appreciated. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prashanth.prahal at gmail.com Fri May 3 18:38:34 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Fri, 3 May 2013 11:38:34 -0700 Subject: [rhos-list] openstack networking with openvswitch In-Reply-To: <5183B0C6.1030400@redhat.com> References: <5183B0C6.1030400@redhat.com> Message-ID: Thanks for taking a look at this Bob. On Fri, May 3, 2013 at 5:42 AM, Robert Kukura wrote: > > Hi Prashanth, > > You are missing something! The quantum-openvswitch-agent should be > creating a veth to connect br-int to br-eth5, along with flow rules to > translate VLAN tags as packets cross the veth. 
You should see ports > named phy-br-eth5 and int-br-eth5 similar to those shown here: > > Interestingly, both these "phy-br-eth5" and "int-br-eth5" are not showing up either in ovs-vsctl output or in ip link output. Do I have to create these (phy-br-eth5 and int-br-eth5) or does the quantum-openvswitch-agent take care of the plumbing underneath ? *[root at r5-20 /]# ip link | grep eth5* 7: eth5: mtu 1500 qdisc mq state UP qlen 1000 53: eth5.100 at eth5: mtu 1500 qdisc noqueue state UP 126: eth5.101 at eth5: mtu 1500 qdisc noqueue state UP 129: eth5.102 at eth5: mtu 1500 qdisc noqueue state UP There is no trace of "phy-br-eth5" or "int-br-eth5" either in /var/log/quantum/* or /var/log/openvswitch/* This is the list of services I have running. Configured devices: lo eth0 eth1 eth2 eth3 eth4 Ethan Currently active devices: lo eth0 eth5 virbr0 ns-5dda48ca-af tap5dda48ca-af ns-df3efddd-ea tapdf3efddd-ea ns-7fc3a799-ff tap7fc3a799- ff ns-75ad2ce4-79 tap75ad2ce4-79 brqd2837208-60 eth5.100 at eth5 br-int qbrdc963159-16 qvodc963159-16 qvbdc963 159-16 qbrccf5b741-60 qvoccf5b741-60 qvbccf5b741-60 qbr4894fa5d-40 qvo4894fa5d-40 qvb4894fa5d-40 brq3d55f554-2d tap9bd180cd-c0 eth5.101 at eth5brq399c778c-8a tap3018422e-8c eth5.102 at eth5qbr9bd180cd-c0 qvo9bd180cd-c0 qvb9bd180cd-c0 qbr3018422e-8c qvo3018422e-8c qvb3018422e-8c <..> openstack-cinder-api (pid 3200) is running... openstack-cinder-scheduler (pid 3208) is running... openstack-cinder-volume dead but pid file exists openstack-glance-api (pid 3230) is running... openstack-glance-registry (pid 3244) is running... openstack-glance-scrubber is stopped keystone (pid 3252) is running... openstack-nova-api (pid 18776) is running... openstack-nova-cert (pid 3273) is running... openstack-nova-compute (pid 18760) is running... openstack-nova-console is stopped openstack-nova-consoleauth (pid 3312) is running... openstack-nova-metadata-api is stopped openstack-nova-network is stopped openstack-nova-novncproxy (pid 3320) is running... openstack-nova-scheduler (pid 3330) is running... openstack-nova-xvpvncproxy is stopped ovsdb-server is running with pid 17062 ovs-vswitchd is running with pid 17071 quantum-dhcp-agent (pid 18854) is running... quantum-l3-agent (pid 18816) is running... quantum-linuxbridge-agent is stopped quantum-openvswitch-agent (pid 19121) is running... quantum-server (pid 18796) is running... Please let me know if you have any other ideas. Regards, Prashanth > # ovs-vsctl show > 212029ed-2bd2-4ce1-beee-f75aea4d5535 > Bridge br-int > Port "tapb806d2cd-66" > tag: 4 > Interface "tapb806d2cd-66" > type: internal > Port br-int > Interface br-int > type: internal > Port "qvo72a8ea81-4f" > tag: 4 > Interface "qvo72a8ea81-4f" > Port "int-br-eth2" > Interface "int-br-eth2" > Port "int-br-eth1" > Interface "int-br-eth1" > Bridge "br-eth1" > Port "eth1" > Interface "eth1" > Port "phy-br-eth1" > Interface "phy-br-eth1" > Port "br-eth1" > Interface "br-eth1" > type: internal > Bridge "br-eth2" > Port "eth2" > Interface "eth2" > Port "br-eth2" > Interface "br-eth2" > type: internal > Port "phy-br-eth2" > Interface "phy-br-eth2" > ovs_version: "1.9.0" > > You should also see the phy-br-eth5 and int-br-eth5 devices when you run > "ip link". > > Please make sure quantum-openvswitch-agent is running on all compute and > network nodes, is getting the proper configuration > (/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini below looks > OK), and check its log for errors. 
> > -Bob > > > > > Regards, > > Prashanth > > > > > > Below is a snapshot of the different configuration files : > > * > > * > > *[Compute Node]* > > *quantum.conf* > > > > [DEFAULT] > > rpc_backend = quantum.openstack.common.rpc.impl_qpid > > qpid_hostname = 10.9.10.43 > > core_plugin = > > quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 > > auth_strategy = keystone > > verbose = True > > debug = True > > bind_port = 9696 > > [keystone_authtoken] > > admin_tenant_name = openstack_network > > admin_user = openstack_network > > admin_password = test123 > > > > *dhcp_agent.ini* > > [DEFAULT] > > auth_url = http://localhost:35357/v2.0/ > > admin_tenant_name = openstack_network > > admin_user = openstack_network > > admin_password = test123 > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > use_namespaces = False > > dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq > > admin_username = quantum > > > > */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* > > [DATABASE] > > sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum > > [OVS] > > tenant_network_type = vlan > > network_vlan_ranges = physnet5:100:1000 > > bridge_mapping = physnet5:br-eth5 > > > > *nova.conf* > > [DEFAULT] > > > > network_api_class = nova.network.quantumv2.api.API > > quantum_admin_username = openstack_network > > quantum_admin_password = test123 > > quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ > > quantum_auth_strategy = keystone > > quantum_admin_tenant_name = openstack_network > > quantum_url = http://10.9.10.43:9696/ > > libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver > > libvirt_use_virtio_for_bridges=true > > > > > > On the network node, this is the l3 configuration : > > *l3_agent.ini* > > [DEFAULT] > > auth_url = http://10.9.10.43:35357/v2.0/ > > admin_user = openstack_network > > admin_password = test123 > > admin_tenant_name = openstack_network > > auth_strategy = keystone > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > use_namespaces = False > > verbose = True > > debug = False > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > auth_region = regionOne > > router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 > > external_network_bridge = br-ex > > > > > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Fri May 3 19:38:35 2013 From: rkukura at redhat.com (Robert Kukura) Date: Fri, 03 May 2013 15:38:35 -0400 Subject: [rhos-list] openstack networking with openvswitch In-Reply-To: References: <5183B0C6.1030400@redhat.com> Message-ID: <5184123B.4060307@redhat.com> On 05/03/2013 02:38 PM, Prashanth Prahalad wrote: > Thanks for taking a look at this Bob. > > > On Fri, May 3, 2013 at 5:42 AM, Robert Kukura > wrote: > > > Hi Prashanth, > > You are missing something! The quantum-openvswitch-agent should be > creating a veth to connect br-int to br-eth5, along with flow rules to > translate VLAN tags as packets cross the veth. 
You should see ports > named phy-br-eth5 and int-br-eth5 similar to those shown here: > > > Interestingly, both these "phy-br-eth5" and "int-br-eth5" are not > showing up either in ovs-vsctl output or in ip link output. Do I have to > create these (phy-br-eth5 and int-br-eth5) or does the > quantum-openvswitch-agent take care of the plumbing underneath ? That's what I meant by "You are missing something!". The quantum-openvswitch-agent should take care of creating the veth and connecting it to br-int and br-eth5 (which you already created). I do see below that quantum-openvswitch-agent is running. It creates the veth(s) based on the content of its bridge-mappings configuration variable. I just noticed below that you have this spelled "bridge-mapping" rather than "bridge-mappings". I suspect that is the problem. One of the first lines output to the log when quantum-openvswitch-agent starts up is the parsed content of the bridge mappings: 2013-04-03 15:32:06 INFO [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge mappings: {'physnet2': 'br-eth2', 'physnet1': 'br-eth1'} I suspect yours is empty. > > *[root at r5-20 /]# ip link | grep eth5* > 7: eth5: mtu 1500 qdisc mq state UP > qlen 1000 > 53: eth5.100 at eth5: mtu 1500 > qdisc noqueue state UP > 126: eth5.101 at eth5: mtu 1500 qdisc > noqueue state UP > 129: eth5.102 at eth5: mtu 1500 qdisc > noqueue state UP > > There is no trace of "phy-br-eth5" or "int-br-eth5" either in > /var/log/quantum/* or /var/log/openvswitch/* > > This is the list of services I have running. > Configured devices: > lo eth0 eth1 eth2 eth3 eth4 Ethan > Currently active devices: > lo eth0 eth5 virbr0 ns-5dda48ca-af tap5dda48ca-af ns-df3efddd-ea > tapdf3efddd-ea ns-7fc3a799-ff tap7fc3a799- > ff ns-75ad2ce4-79 tap75ad2ce4-79 brqd2837208-60 eth5.100 at eth5 br-int > qbrdc963159-16 qvodc963159-16 qvbdc963 > 159-16 qbrccf5b741-60 qvoccf5b741-60 qvbccf5b741-60 qbr4894fa5d-40 > qvo4894fa5d-40 qvb4894fa5d-40 brq3d55f554-2d tap9bd180cd-c0 > eth5.101 at eth5 brq399c778c-8a tap3018422e-8c eth5.102 at eth5 qbr9bd180cd-c0 > qvo9bd180cd-c0 qvb9bd180cd-c0 qbr3018422e-8c qvo3018422e-8c qvb3018422e-8c > <..> > openstack-cinder-api (pid 3200) is running... > openstack-cinder-scheduler (pid 3208) is running... > openstack-cinder-volume dead but pid file exists > openstack-glance-api (pid 3230) is running... > openstack-glance-registry (pid 3244) is running... > openstack-glance-scrubber is stopped > keystone (pid 3252) is running... > openstack-nova-api (pid 18776) is running... > openstack-nova-cert (pid 3273) is running... > openstack-nova-compute (pid 18760) is running... > openstack-nova-console is stopped > openstack-nova-consoleauth (pid 3312) is running... > openstack-nova-metadata-api is stopped > openstack-nova-network is stopped > openstack-nova-novncproxy (pid 3320) is running... > openstack-nova-scheduler (pid 3330) is running... > openstack-nova-xvpvncproxy is stopped > ovsdb-server is running with pid 17062 > ovs-vswitchd is running with pid 17071 > quantum-dhcp-agent (pid 18854) is running... > quantum-l3-agent (pid 18816) is running... > quantum-linuxbridge-agent is stopped > quantum-openvswitch-agent (pid 19121) is running... > quantum-server (pid 18796) is running... > > Please let me know if you have any other ideas. 
> > Regards, > Prashanth > > > > # ovs-vsctl show > 212029ed-2bd2-4ce1-beee-f75aea4d5535 > Bridge br-int > Port "tapb806d2cd-66" > tag: 4 > Interface "tapb806d2cd-66" > type: internal > Port br-int > Interface br-int > type: internal > Port "qvo72a8ea81-4f" > tag: 4 > Interface "qvo72a8ea81-4f" > Port "int-br-eth2" > Interface "int-br-eth2" > Port "int-br-eth1" > Interface "int-br-eth1" > Bridge "br-eth1" > Port "eth1" > Interface "eth1" > Port "phy-br-eth1" > Interface "phy-br-eth1" > Port "br-eth1" > Interface "br-eth1" > type: internal > Bridge "br-eth2" > Port "eth2" > Interface "eth2" > Port "br-eth2" > Interface "br-eth2" > type: internal > Port "phy-br-eth2" > Interface "phy-br-eth2" > ovs_version: "1.9.0" > > You should also see the phy-br-eth5 and int-br-eth5 devices when you run > "ip link". > > Please make sure quantum-openvswitch-agent is running on all compute and > network nodes, is getting the proper configuration > (/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini below looks > OK), and check its log for errors. > > -Bob > > > > > Regards, > > Prashanth > > > > > > Below is a snapshot of the different configuration files : > > * > > * > > *[Compute Node]* > > *quantum.conf* > > > > [DEFAULT] > > rpc_backend = quantum.openstack.common.rpc.impl_qpid > > qpid_hostname = 10.9.10.43 > > core_plugin = > > quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 > > auth_strategy = keystone > > verbose = True > > debug = True > > bind_port = 9696 > > [keystone_authtoken] > > admin_tenant_name = openstack_network > > admin_user = openstack_network > > admin_password = test123 > > > > *dhcp_agent.ini* > > [DEFAULT] > > auth_url = http://localhost:35357/v2.0/ > > admin_tenant_name = openstack_network > > admin_user = openstack_network > > admin_password = test123 > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > use_namespaces = False > > dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq > > admin_username = quantum > > > > */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* > > [DATABASE] > > sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum > > [OVS] > > tenant_network_type = vlan > > network_vlan_ranges = physnet5:100:1000 > > bridge_mapping = physnet5:br-eth5 The above should be: bridge_mappings = physnet5:br-eth5 -Bob > > > > *nova.conf* > > [DEFAULT] > > > > network_api_class = nova.network.quantumv2.api.API > > quantum_admin_username = openstack_network > > quantum_admin_password = test123 > > quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ > > quantum_auth_strategy = keystone > > quantum_admin_tenant_name = openstack_network > > quantum_url = http://10.9.10.43:9696/ > > libvirt_vif_driver = > nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver > > libvirt_use_virtio_for_bridges=true > > > > > > On the network node, this is the l3 configuration : > > *l3_agent.ini* > > [DEFAULT] > > auth_url = http://10.9.10.43:35357/v2.0/ > > admin_user = openstack_network > > admin_password = test123 > > admin_tenant_name = openstack_network > > auth_strategy = keystone > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > use_namespaces = False > > verbose = True > > debug = False > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > auth_region = regionOne > > router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 > > external_network_bridge = br-ex > > > > > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > 
https://www.redhat.com/mailman/listinfo/rhos-list > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > From gkotton at redhat.com Sun May 5 07:39:36 2013 From: gkotton at redhat.com (Gary Kotton) Date: Sun, 05 May 2013 10:39:36 +0300 Subject: [rhos-list] Quantum Networking Interfaces. In-Reply-To: References: Message-ID: <51860CB8.3040003@redhat.com> On 05/03/2013 08:42 PM, Minton, Rich wrote: > > Quick question regarding quantum networking interfaces, specifically > br-int. > > I have a 5 node cluster -- 1 Controller/compute node which runs the > Quantum server and the l3-agent, 1 Network node which runs the > dhcp-agent, and 3 compute nodes. The all have openvswitch installed > and I have created the br-int interfaces on all. One thing I noticed > is that only the quantum server node (controller) and the network node > have an entry for "br-int" when I run the route command. The compute > nodes do not. > This could be a result of the following: 1. On the nodes where you have a routing entry for br-int you most probably added an interface, that is, you have created a file /etc/sysconfig/network-scripts/ifcfg-br-int 2. On the other nodes you most probably just added br-int to OVS > The VMs on my other compute nodes get an IP from DHCP but I am not > able to ping my gateway (10.0.56.1) from within the VM. > I have a few questions which will hopefully provide some more details (sorry for the silly questions - it is just a bit difficult to debug remotely): Which version are you using? Are you able to send pings between VM's on different hosts? Did you try and capture traffic on the interfaces of the various hosts? This may help isolate where the ICMP is being discarded. Thanks Gary > I can ping the same gateway from each of the compute hosts. I > double-checked all of my config files and they appear to be correct. I > can send any of those along if needed.
> > Controller - with Quantum Server > > Kernel IP routing table > > Destination Gateway Genmask Flags Metric Ref > Use Iface > > 10.0.0.0 * 255.255.255.0 U 0 0 > 0 eth0.502 > > 10.255.254.0 * 255.255.255.0 U 0 0 > 0 eth0.159 > > 172.17.0.0 * 255.255.255.0 U 0 0 > 0 eth0.500 > > link-local * 255.255.0.0 U 1003 0 > 0 eth0 > > link-local * 255.255.0.0 U 1004 0 > 0 eth1 > > link-local * 255.255.0.0 U 1009 0 > 0 br-int > > link-local * 255.255.0.0 U 1010 0 > 0 eth0.159 > > link-local * 255.255.0.0 U 1011 0 > 0 eth0.500 > > link-local * 255.255.0.0 U 1012 0 > 0 eth0.502 > > link-local * 255.255.0.0 U 1028 0 > 0 br-ex > > default 172.17.0.1 0.0.0.0 UG 0 0 > 0 eth0.500 > > Network Node > > Kernel IP routing table > > Destination Gateway Genmask Flags Metric Ref > Use Iface > > 10.0.0.0 * 255.255.255.0 U 0 0 > 0 eth0.502 > > 10.255.254.0 * 255.255.255.0 U 0 0 > 0 eth0.159 > > 172.17.0.0 * 255.255.255.0 U 0 0 > 0 eth0.500 > > 10.0.56.0 * 255.255.248.0 U 0 0 > 0 tapaee8f28f-74 > > link-local * 255.255.0.0 U 1003 0 > 0 eth0 > > link-local * 255.255.0.0 U 1004 0 > 0 eth1 > > link-local * 255.255.0.0 U 1009 0 > 0 br-int > > link-local * 255.255.0.0 U 1010 0 > 0 eth0.159 > > link-local * 255.255.0.0 U 1011 0 > 0 eth0.500 > > link-local * 255.255.0.0 U 1012 0 > 0 eth0.502 > > default 172.17.0.1 0.0.0.0 UG 0 0 > 0 eth0.500 > > Compute Nodes > > Kernel IP routing table > > Destination Gateway Genmask Flags Metric Ref > Use Iface > > 10.0.0.0 * 255.255.255.0 U 0 0 > 0 eth0.502 > > 10.255.254.0 * 255.255.255.0 U 0 0 > 0 eth0.159 > > 172.17.0.0 * 255.255.255.0 U 0 0 > 0 eth0.500 > > link-local * 255.255.0.0 U 1005 0 > 0 eth0 > > link-local * 255.255.0.0 U 1006 0 > 0 eth1 > > link-local * 255.255.0.0 U 1042 0 > 0 eth0.159 > > link-local * 255.255.0.0 U 1043 0 > 0 eth0.500 > > link-local * 255.255.0.0 U 1044 0 > 0 eth0.502 > > default 172.17.0.1 0.0.0.0 UG 0 0 > 0 eth0.500 > > Ok, so this may not be a quick question... > > Once again, all or your help is greatly appreciated. > > Rick > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From prashanth.prahal at gmail.com Mon May 6 16:40:49 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Mon, 6 May 2013 09:40:49 -0700 Subject: [rhos-list] openstack networking with openvswitch In-Reply-To: <5184123B.4060307@redhat.com> References: <5183B0C6.1030400@redhat.com> <5184123B.4060307@redhat.com> Message-ID: Thanks On Fri, May 3, 2013 at 12:38 PM, Robert Kukura wrote: > That's what I meant by "You are missing something!". The > quantum-openvswitch-agent should take care of creating the veth and > connecting it to br-int and br-eth5 (which you already created). > > I do see below that quantum-openvswitch-agent is running. > > It creates the veth(s) based on the content of its bridge-mappings > configuration variable. I just noticed below that you have this spelled > "bridge-mapping" rather than "bridge-mappings". I suspect that is the > problem. 
> > One of the first lines output to the log when quantum-openvswitch-agent > starts up is the parsed content of the bridge mappings: > > 2013-04-03 15:32:06 INFO > [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge mappings: > {'physnet2': 'br-eth2', 'physnet1': 'br-eth1'} > > I suspect yours is empty. > > So, yes, this was the problem (changing it to bridge-mappings helped). But I still had to create br-eth5 manually. Appreciate the help ! Prashanth > > > > *[root at r5-20 /]# ip link | grep eth5* > > 7: eth5: mtu 1500 qdisc mq state UP > > qlen 1000 > > 53: eth5.100 at eth5: mtu 1500 > > qdisc noqueue state UP > > 126: eth5.101 at eth5: mtu 1500 qdisc > > noqueue state UP > > 129: eth5.102 at eth5: mtu 1500 qdisc > > noqueue state UP > > > > There is no trace of "phy-br-eth5" or "int-br-eth5" either in > > /var/log/quantum/* or /var/log/openvswitch/* > > > > This is the list of services I have running. > > Configured devices: > > lo eth0 eth1 eth2 eth3 eth4 Ethan > > Currently active devices: > > lo eth0 eth5 virbr0 ns-5dda48ca-af tap5dda48ca-af ns-df3efddd-ea > > tapdf3efddd-ea ns-7fc3a799-ff tap7fc3a799- > > ff ns-75ad2ce4-79 tap75ad2ce4-79 brqd2837208-60 eth5.100 at eth5 br-int > > qbrdc963159-16 qvodc963159-16 qvbdc963 > > 159-16 qbrccf5b741-60 qvoccf5b741-60 qvbccf5b741-60 qbr4894fa5d-40 > > qvo4894fa5d-40 qvb4894fa5d-40 brq3d55f554-2d tap9bd180cd-c0 > > eth5.101 at eth5 brq399c778c-8a tap3018422e-8c eth5.102 at eth5 qbr9bd180cd-c0 > > qvo9bd180cd-c0 qvb9bd180cd-c0 qbr3018422e-8c qvo3018422e-8c > qvb3018422e-8c > > <..> > > openstack-cinder-api (pid 3200) is running... > > openstack-cinder-scheduler (pid 3208) is running... > > openstack-cinder-volume dead but pid file exists > > openstack-glance-api (pid 3230) is running... > > openstack-glance-registry (pid 3244) is running... > > openstack-glance-scrubber is stopped > > keystone (pid 3252) is running... > > openstack-nova-api (pid 18776) is running... > > openstack-nova-cert (pid 3273) is running... > > openstack-nova-compute (pid 18760) is running... > > openstack-nova-console is stopped > > openstack-nova-consoleauth (pid 3312) is running... > > openstack-nova-metadata-api is stopped > > openstack-nova-network is stopped > > openstack-nova-novncproxy (pid 3320) is running... > > openstack-nova-scheduler (pid 3330) is running... > > openstack-nova-xvpvncproxy is stopped > > ovsdb-server is running with pid 17062 > > ovs-vswitchd is running with pid 17071 > > quantum-dhcp-agent (pid 18854) is running... > > quantum-l3-agent (pid 18816) is running... > > quantum-linuxbridge-agent is stopped > > quantum-openvswitch-agent (pid 19121) is running... > > quantum-server (pid 18796) is running... > > > > Please let me know if you have any other ideas. 
> > > > Regards, > > Prashanth > > > > > > > > # ovs-vsctl show > > 212029ed-2bd2-4ce1-beee-f75aea4d5535 > > Bridge br-int > > Port "tapb806d2cd-66" > > tag: 4 > > Interface "tapb806d2cd-66" > > type: internal > > Port br-int > > Interface br-int > > type: internal > > Port "qvo72a8ea81-4f" > > tag: 4 > > Interface "qvo72a8ea81-4f" > > Port "int-br-eth2" > > Interface "int-br-eth2" > > Port "int-br-eth1" > > Interface "int-br-eth1" > > Bridge "br-eth1" > > Port "eth1" > > Interface "eth1" > > Port "phy-br-eth1" > > Interface "phy-br-eth1" > > Port "br-eth1" > > Interface "br-eth1" > > type: internal > > Bridge "br-eth2" > > Port "eth2" > > Interface "eth2" > > Port "br-eth2" > > Interface "br-eth2" > > type: internal > > Port "phy-br-eth2" > > Interface "phy-br-eth2" > > ovs_version: "1.9.0" > > > > You should also see the phy-br-eth5 and int-br-eth5 devices when you > run > > "ip link". > > > > Please make sure quantum-openvswitch-agent is running on all compute > and > > network nodes, is getting the proper configuration > > (/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini below looks > > OK), and check its log for errors. > > > > -Bob > > > > > > > > Regards, > > > Prashanth > > > > > > > > > Below is a snapshot of the different configuration files : > > > * > > > * > > > *[Compute Node]* > > > *quantum.conf* > > > > > > [DEFAULT] > > > rpc_backend = quantum.openstack.common.rpc.impl_qpid > > > qpid_hostname = 10.9.10.43 > > > core_plugin = > > > quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 > > > auth_strategy = keystone > > > verbose = True > > > debug = True > > > bind_port = 9696 > > > [keystone_authtoken] > > > admin_tenant_name = openstack_network > > > admin_user = openstack_network > > > admin_password = test123 > > > > > > *dhcp_agent.ini* > > > [DEFAULT] > > > auth_url = http://localhost:35357/v2.0/ > > > admin_tenant_name = openstack_network > > > admin_user = openstack_network > > > admin_password = test123 > > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > > use_namespaces = False > > > dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq > > > admin_username = quantum > > > > > > */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* > > > [DATABASE] > > > sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum > > > [OVS] > > > tenant_network_type = vlan > > > network_vlan_ranges = physnet5:100:1000 > > > bridge_mapping = physnet5:br-eth5 > > The above should be: > > bridge_mappings = physnet5:br-eth5 > > -Bob > > > > > > > *nova.conf* > > > [DEFAULT] > > > > > > network_api_class = nova.network.quantumv2.api.API > > > quantum_admin_username = openstack_network > > > quantum_admin_password = test123 > > > quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ > > > quantum_auth_strategy = keystone > > > quantum_admin_tenant_name = openstack_network > > > quantum_url = http://10.9.10.43:9696/ > > > libvirt_vif_driver = > > nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver > > > libvirt_use_virtio_for_bridges=true > > > > > > > > > On the network node, this is the l3 configuration : > > > *l3_agent.ini* > > > [DEFAULT] > > > auth_url = http://10.9.10.43:35357/v2.0/ > > > admin_user = openstack_network > > > admin_password = test123 > > > admin_tenant_name = openstack_network > > > auth_strategy = keystone > > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > > use_namespaces = False > > > verbose = True > > > debug = False > > > interface_driver = 
quantum.agent.linux.interface.OVSInterfaceDriver > > > auth_region = regionOne > > > router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 > > > external_network_bridge = br-ex > > > > > > > > > > > > _______________________________________________ > > > rhos-list mailing list > > > rhos-list at redhat.com > > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rkukura at redhat.com Mon May 6 17:04:44 2013 From: rkukura at redhat.com (Robert Kukura) Date: Mon, 06 May 2013 13:04:44 -0400 Subject: [rhos-list] openstack networking with openvswitch In-Reply-To: References: <5183B0C6.1030400@redhat.com> <5184123B.4060307@redhat.com> Message-ID: <5187E2AC.4020308@redhat.com> On 05/06/2013 12:40 PM, Prashanth Prahalad wrote: > Thanks > > > On Fri, May 3, 2013 at 12:38 PM, Robert Kukura > wrote: > > That's what I meant by "You are missing something!". The > quantum-openvswitch-agent should take care of creating the veth and > connecting it to br-int and br-eth5 (which you already created). > > I do see below that quantum-openvswitch-agent is running. > > It creates the veth(s) based on the content of its bridge-mappings > configuration variable. I just noticed below that you have this spelled > "bridge-mapping" rather than "bridge-mappings". I suspect that is the > problem. > > One of the first lines output to the log when quantum-openvswitch-agent > starts up is the parsed content of the bridge mappings: > > 2013-04-03 15:32:06 INFO > [quantum.plugins.openvswitch.agent.ovs_quantum_agent] Bridge mappings: > {'physnet2': 'br-eth2', 'physnet1': 'br-eth1'} > > I suspect yours is empty. > > > So, yes, this was the problem (changing it to bridge-mappings helped). Glad to hear it! > > But I still had to create br-eth5 manually. Yes, the physical network bridge must be created manually and the network interface must be added to it manually. This allows flexibility to bond interfaces, configure an IP address on the bridge at boot via a network script if needed, etc.. > > Appreciate the help ! > Prashanth No problem! -Bob > > > > > > > *[root at r5-20 /]# ip link | grep eth5* > > 7: eth5: mtu 1500 qdisc mq state UP > > qlen 1000 > > 53: eth5.100 at eth5: mtu 1500 > > qdisc noqueue state UP > > 126: eth5.101 at eth5: mtu 1500 qdisc > > noqueue state UP > > 129: eth5.102 at eth5: mtu 1500 qdisc > > noqueue state UP > > > > There is no trace of "phy-br-eth5" or "int-br-eth5" either in > > /var/log/quantum/* or /var/log/openvswitch/* > > > > This is the list of services I have running. > > Configured devices: > > lo eth0 eth1 eth2 eth3 eth4 Ethan > > Currently active devices: > > lo eth0 eth5 virbr0 ns-5dda48ca-af tap5dda48ca-af ns-df3efddd-ea > > tapdf3efddd-ea ns-7fc3a799-ff tap7fc3a799- > > ff ns-75ad2ce4-79 tap75ad2ce4-79 brqd2837208-60 eth5.100 at eth5 br-int > > qbrdc963159-16 qvodc963159-16 qvbdc963 > > 159-16 qbrccf5b741-60 qvoccf5b741-60 qvbccf5b741-60 qbr4894fa5d-40 > > qvo4894fa5d-40 qvb4894fa5d-40 brq3d55f554-2d tap9bd180cd-c0 > > eth5.101 at eth5 brq399c778c-8a tap3018422e-8c eth5.102 at eth5 > qbr9bd180cd-c0 > > qvo9bd180cd-c0 qvb9bd180cd-c0 qbr3018422e-8c qvo3018422e-8c > qvb3018422e-8c > > <..> > > openstack-cinder-api (pid 3200) is running... > > openstack-cinder-scheduler (pid 3208) is running... 
> > openstack-cinder-volume dead but pid file exists > > openstack-glance-api (pid 3230) is running... > > openstack-glance-registry (pid 3244) is running... > > openstack-glance-scrubber is stopped > > keystone (pid 3252) is running... > > openstack-nova-api (pid 18776) is running... > > openstack-nova-cert (pid 3273) is running... > > openstack-nova-compute (pid 18760) is running... > > openstack-nova-console is stopped > > openstack-nova-consoleauth (pid 3312) is running... > > openstack-nova-metadata-api is stopped > > openstack-nova-network is stopped > > openstack-nova-novncproxy (pid 3320) is running... > > openstack-nova-scheduler (pid 3330) is running... > > openstack-nova-xvpvncproxy is stopped > > ovsdb-server is running with pid 17062 > > ovs-vswitchd is running with pid 17071 > > quantum-dhcp-agent (pid 18854) is running... > > quantum-l3-agent (pid 18816) is running... > > quantum-linuxbridge-agent is stopped > > quantum-openvswitch-agent (pid 19121) is running... > > quantum-server (pid 18796) is running... > > > > Please let me know if you have any other ideas. > > > > Regards, > > Prashanth > > > > > > > > # ovs-vsctl show > > 212029ed-2bd2-4ce1-beee-f75aea4d5535 > > Bridge br-int > > Port "tapb806d2cd-66" > > tag: 4 > > Interface "tapb806d2cd-66" > > type: internal > > Port br-int > > Interface br-int > > type: internal > > Port "qvo72a8ea81-4f" > > tag: 4 > > Interface "qvo72a8ea81-4f" > > Port "int-br-eth2" > > Interface "int-br-eth2" > > Port "int-br-eth1" > > Interface "int-br-eth1" > > Bridge "br-eth1" > > Port "eth1" > > Interface "eth1" > > Port "phy-br-eth1" > > Interface "phy-br-eth1" > > Port "br-eth1" > > Interface "br-eth1" > > type: internal > > Bridge "br-eth2" > > Port "eth2" > > Interface "eth2" > > Port "br-eth2" > > Interface "br-eth2" > > type: internal > > Port "phy-br-eth2" > > Interface "phy-br-eth2" > > ovs_version: "1.9.0" > > > > You should also see the phy-br-eth5 and int-br-eth5 devices > when you run > > "ip link". > > > > Please make sure quantum-openvswitch-agent is running on all > compute and > > network nodes, is getting the proper configuration > > (/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini below > looks > > OK), and check its log for errors. 
> > > > -Bob > > > > > > > > Regards, > > > Prashanth > > > > > > > > > Below is a snapshot of the different configuration files : > > > * > > > * > > > *[Compute Node]* > > > *quantum.conf* > > > > > > [DEFAULT] > > > rpc_backend = quantum.openstack.common.rpc.impl_qpid > > > qpid_hostname = 10.9.10.43 > > > core_plugin = > > > > quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2 > > > auth_strategy = keystone > > > verbose = True > > > debug = True > > > bind_port = 9696 > > > [keystone_authtoken] > > > admin_tenant_name = openstack_network > > > admin_user = openstack_network > > > admin_password = test123 > > > > > > *dhcp_agent.ini* > > > [DEFAULT] > > > auth_url = http://localhost:35357/v2.0/ > > > admin_tenant_name = openstack_network > > > admin_user = openstack_network > > > admin_password = test123 > > > interface_driver = > quantum.agent.linux.interface.OVSInterfaceDriver > > > use_namespaces = False > > > dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq > > > admin_username = quantum > > > > > > */etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini* > > > [DATABASE] > > > sql_connection = mysql://quantum:quantum at r5-20/ovs_quantum > > > [OVS] > > > tenant_network_type = vlan > > > network_vlan_ranges = physnet5:100:1000 > > > bridge_mapping = physnet5:br-eth5 > > The above should be: > > bridge_mappings = physnet5:br-eth5 > > -Bob > > > > > > > *nova.conf* > > > [DEFAULT] > > > > > > network_api_class = nova.network.quantumv2.api.API > > > quantum_admin_username = openstack_network > > > quantum_admin_password = test123 > > > quantum_admin_auth_url = http://127.0.0.1:35357/v2.0/ > > > quantum_auth_strategy = keystone > > > quantum_admin_tenant_name = openstack_network > > > quantum_url = http://10.9.10.43:9696/ > > > libvirt_vif_driver = > > nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver > > > libvirt_use_virtio_for_bridges=true > > > > > > > > > On the network node, this is the l3 configuration : > > > *l3_agent.ini* > > > [DEFAULT] > > > auth_url = http://10.9.10.43:35357/v2.0/ > > > admin_user = openstack_network > > > admin_password = test123 > > > admin_tenant_name = openstack_network > > > auth_strategy = keystone > > > interface_driver = > quantum.agent.linux.interface.OVSInterfaceDriver > > > use_namespaces = False > > > verbose = True > > > debug = False > > > interface_driver = > quantum.agent.linux.interface.OVSInterfaceDriver > > > auth_region = regionOne > > > router_id = 0496b7f6-1b27-487f-8a95-d7430302b080 > > > external_network_bridge = br-ex > > > > > > > > > > > > _______________________________________________ > > > rhos-list mailing list > > > rhos-list at redhat.com > > > > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > > > _______________________________________________ > > rhos-list mailing list > > rhos-list at redhat.com > > > > https://www.redhat.com/mailman/listinfo/rhos-list > > > > > > From nicolas.vogel at heig-vd.ch Tue May 7 06:00:48 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Tue, 7 May 2013 06:00:48 +0000 Subject: [rhos-list] LDAP integration Message-ID: Hi, After successfully installing an ? all-in-one Node ? using Packstack, I want to user LDAP to manage my users. The LDAP backend isn?t available in the keystone.conf. Do I have to replace the SQL backend with the LDAP backend? Wenn I switch to LDAP, is my admin user created by Packstack usable yet or do I have to modify everything so that one of my LDAP user becomes the admin ? Cheers, Nicolas. 
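A minimal sketch of what pointing Keystone at a directory can look like in /etc/keystone/keystone.conf, using Grizzly option names; the server URL, bind DN, password and tree DNs below are placeholders for an existing LDAP tree, not values from this thread:

    [identity]
    driver = keystone.identity.backends.ldap.Identity

    [ldap]
    url = ldap://ldap.example.com
    user = cn=Manager,dc=example,dc=com
    password = LDAP_BIND_PASSWORD
    suffix = dc=example,dc=com
    user_tree_dn = ou=Users,dc=example,dc=com
    tenant_tree_dn = ou=Tenants,dc=example,dc=com
    role_tree_dn = ou=Roles,dc=example,dc=com

Note that the admin user, tenants and roles packstack created live in the SQL identity backend, so once the driver is switched they have to exist in the directory (or be recreated there) before keystone commands will authenticate again.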
-------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Tue May 7 10:51:26 2013 From: dneary at redhat.com (Dave Neary) Date: Tue, 07 May 2013 12:51:26 +0200 Subject: [rhos-list] LDAP integration In-Reply-To: References: Message-ID: <5188DCAE.2090208@redhat.com> Hi Nicolas, You mention an all in one install - are you installing RHOS Folsom, or RDO Grizzly? Thanks, Dave. On 05/07/2013 08:00 AM, Vogel Nicolas wrote: > Hi, > > > > After successfully installing an "all-in-one" node using Packstack, I > want to use LDAP to manage my users. > > The LDAP backend isn't available in the keystone.conf. Do I have to > replace the SQL backend with the LDAP backend? > > When I switch to LDAP, is my admin user created by Packstack still usable > or do I have to modify everything so that one of my LDAP users becomes > the admin? > > > > Cheers, > > > > Nicolas.** > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From nicolas.vogel at heig-vd.ch Tue May 7 11:06:22 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Tue, 7 May 2013 11:06:22 +0000 Subject: [rhos-list] LDAP integration In-Reply-To: <5188DCAE.2090208@redhat.com> References: <5188DCAE.2090208@redhat.com> Message-ID: I installed RDO Grizzly as described here: http://openstack.redhat.com/Quickstart In my keystone.conf file I had two lines with the sql backend (exactly the same lines) but nothing for the ldap backend... driver = keystone.identity.backends.sql.Identity #driver = keystone.identity.backends.sql.Identity I tried to manually replace "sql" with "ldap" but after that my admin user has lost his rights to work with Keystone: [root at Test-srv ~(keystone_admin)]# keystone user-list Unable to communicate with identity service: {"error": {"message": "The request you have made requires authentication.", "code": 401, "title": "Not Authorized"}}. (HTTP 401) Thanks, Nicolas. -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Dave Neary Sent: Tuesday, 7 May 2013 12:51 To: rhos-list at redhat.com Subject: Re: [rhos-list] LDAP integration Hi Nicolas, You mention an all in one install - are you installing RHOS Folsom, or RDO Grizzly? Thanks, Dave. On 05/07/2013 08:00 AM, Vogel Nicolas wrote: > Hi, > > > > After successfully installing an "all-in-one" node using Packstack, > I want to use LDAP to manage my users. > > The LDAP backend isn't available in the keystone.conf. Do I have to > replace the SQL backend with the LDAP backend? > > When I switch to LDAP, is my admin user created by Packstack still usable > or do I have to modify everything so that one of my LDAP users > becomes > the admin?
> > > > Cheers, > > > > Nicolas.** > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From rich.minton at lmco.com Wed May 8 13:19:53 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 8 May 2013 13:19:53 +0000 Subject: [rhos-list] Questions about Quantum. Message-ID: We are running into difficulty implementing RHEL Openstack using Quantum Networking and need to verify the level of Quantum Support provided by RHEL. Which of the RHEL Openstack Folsum Quantum implementations are supported (FLAT, FLAT DHCP, VLAN, GRE)? Which of those implementations support the Openstack Metadata Service? Can Metadata be supported without using the Quantum L3 agent, such as in the FLAT/FLATDHCP models? Which Openstack Quantum Use Cases are supported? Single Flat Network Multiple Flat Network Mixed Flat and Private Provider Router with Private Networks Per-tenant Routers with Private Networks Thanks for the help. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Wed May 8 13:24:45 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Wed, 8 May 2013 13:24:45 +0000 Subject: [rhos-list] EXTERNAL: Re: Quantum Networking Interfaces. In-Reply-To: <51860CB8.3040003@redhat.com> References: <51860CB8.3040003@redhat.com> Message-ID: Which version are you using? openstack-quantum-2012.2.3-10.el6ost.noarch Are you able to send pings between VM's on different hosts? No Did you try and capture traffic on the interfaces of the various hosts? This may help isolate where the ICMP is being discarded. Doing that now. From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Sunday, May 05, 2013 3:40 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Quantum Networking Interfaces. On 05/03/2013 08:42 PM, Minton, Rich wrote: Quick question regarding quantum networking interfaces, specifically br-int. I have a 5 node cluster - 1 Controller/compute node which runs the Quantum server and the l3-agent, 1 Network node which runs the dhcp-agent, and 3 compute nodes. The all have openvswitch installed and I have created the br-int interfaces on all. One thing I noticed is that only the quantum server node (controller) and the network node have an entry for "br-int" when I run the route command. The compute nodes do not. This could be a result of the following: 1. On the nodes that you have routing entry for br-int you most probably added a interface , that is, you have created a file /etc/sysconfig/network-scripts/ifcfg-br-int 2. On the other nodes you most probably just added br-int to the ovs The VMs on my other compute nodes get an IP from DHCP but I am not able to ping my gateway (10.0.56.1) from within the VM. I have a few questions which hopefully could provide some more details (sorry for the silly questions - it is just a bit difficult to debug remotely):- Which version are you using? openstack-quantum-2012.2.3-10.el6ost.noarch Are you able to send pings between VM's on different hosts? 
No Did you try and capture traffic on the interfaces of the various hosts? This may help isolate where the ICMP is being discarded. Doing that now. Thanks Gary I can ping the same gateway from each of the compute hosts. I double-checked all of my config files and they appear to be correct. I can send any of those along if needed. Controller - with Quantum Server Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 link-local * 255.255.0.0 U 1003 0 0 eth0 link-local * 255.255.0.0 U 1004 0 0 eth1 link-local * 255.255.0.0 U 1009 0 0 br-int link-local * 255.255.0.0 U 1010 0 0 eth0.159 link-local * 255.255.0.0 U 1011 0 0 eth0.500 link-local * 255.255.0.0 U 1012 0 0 eth0.502 link-local * 255.255.0.0 U 1028 0 0 br-ex default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Network Node Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 10.0.56.0 * 255.255.248.0 U 0 0 0 tapaee8f28f-74 link-local * 255.255.0.0 U 1003 0 0 eth0 link-local * 255.255.0.0 U 1004 0 0 eth1 link-local * 255.255.0.0 U 1009 0 0 br-int link-local * 255.255.0.0 U 1010 0 0 eth0.159 link-local * 255.255.0.0 U 1011 0 0 eth0.500 link-local * 255.255.0.0 U 1012 0 0 eth0.502 default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Compute Nodes Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.0.0.0 * 255.255.255.0 U 0 0 0 eth0.502 10.255.254.0 * 255.255.255.0 U 0 0 0 eth0.159 172.17.0.0 * 255.255.255.0 U 0 0 0 eth0.500 link-local * 255.255.0.0 U 1005 0 0 eth0 link-local * 255.255.0.0 U 1006 0 0 eth1 link-local * 255.255.0.0 U 1042 0 0 eth0.159 link-local * 255.255.0.0 U 1043 0 0 eth0.500 link-local * 255.255.0.0 U 1044 0 0 eth0.502 default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0.500 Ok, so this may not be a quick question... Once again, all or your help is greatly appreciated. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Wed May 8 14:01:41 2013 From: gkotton at redhat.com (Gary Kotton) Date: Wed, 08 May 2013 17:01:41 +0300 Subject: [rhos-list] Questions about Quantum. In-Reply-To: References: Message-ID: <518A5AC5.6060702@redhat.com> On 05/08/2013 04:19 PM, Minton, Rich wrote: > > We are running into difficulty implementing RHEL Openstack using > Quantum Networking and need to verify the level of Quantum Support > provided by RHEL. > Sorry to hear that. Hopefully we will be able to help out. > Which of the RHEL Openstack Folsum Quantum implementations are > supported (FLAT, FLAT DHCP, VLAN, GRE)? > GRE is not supported. This is due to the fact that there are some missing parts from the current kernel. > Which of those implementations support the Openstack Metadata Service? > In Folsom a VM can only access the metadata service if the L3 agent is running. In Grizzly this can be done via the L3 agent or the DHCP agent (there may be cases where one would not want to run the L3 agent). 
Please note that due to the fact that there is no namespace support in RHEL this support does not work when there are networks with overlapping IP ranges. > Can Metadata be supported without using the Quantum L3 agent, such as > in the FLAT/FLATDHCP models? > Only with Grizzly. > Which Openstack Quantum Use Cases are supported? > > Single Flat Network > > Multiple Flat Network > > Mixed Flat and Private > > Provider Router with Private Networks > > Per-tenant Routers with Private Networks > I think all of the above. > Thanks for the help. > > Rick > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Thu May 9 13:00:14 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 9 May 2013 13:00:14 +0000 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: <518A5AC5.6060702@redhat.com> References: <518A5AC5.6060702@redhat.com> Message-ID: Gary, I was hoping you could shed some light on what I'm seeing. I was doing some comparison between my controller/compute node and another compute node - using "nova-manage config list". I noticed some configuration entries that look like they are leftovers from Nova Network. When I created our Openstack cluster originally I used packstack to start and then converted from Nova Network to Quantum. Is it possible that some of these leftover config entries are causing conflicts with Quantum Networking? Some of the questionable entries are: network_manager = nova.network.manager.FlatDHCPManager network_topic = network dhcpbridge = /usr/bin/nova-dhcpbridge num_networks = 1 dhcpbridge_flagfile = ['/usr/share/nova/nova-dist.conf', '/etc/nova/nova.conf'] instance_dns_manager = nova.network.dns_driver.DNSDriver auto_assign_floating_ip = False routing_source_ip = 10.255.254.36 networks_path = /var/lib/nova/networks floating_range = 4.4.4.0/24 flat_network_dns = 8.8.4.4 floating_ip_dns_manager = nova.network.dns_driver.DNSDriver public_interface = eth0 network_size = 256 fixed_range = 10.0.0.0/8 default_floating_pool = nova I also saw some entries from Nova Volume but I'll leave that for another time. Thank you, Rick From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Wednesday, May 08, 2013 10:02 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Questions about Quantum. On 05/08/2013 04:19 PM, Minton, Rich wrote: We are running into difficulty implementing RHEL Openstack using Quantum Networking and need to verify the level of Quantum Support provided by RHEL. Sorry to hear that. Hopefully we will be able to help out. Which of the RHEL Openstack Folsum Quantum implementations are supported (FLAT, FLAT DHCP, VLAN, GRE)? GRE is not supported. This is due to the fact that there are some missing parts from the current kernel. Which of those implementations support the Openstack Metadata Service? In Folsom a VM can only access the metadata service if the L3 agent is running. In Grizzly this can be done via the L3 agent or the DHCP agent (there may be cases where one would not want to run the L3 agent). Please note that due to the fact that there is no namespace support in RHEL this support does not work when there are networks with overlapping IP ranges. 
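For the Grizzly/RHOS 3.0 case, serving metadata through the DHCP agent is mostly a configuration switch. A minimal sketch, with the option name assumed from the Grizzly dhcp_agent.ini rather than taken from this thread:

    # /etc/quantum/dhcp_agent.ini
    enable_isolated_metadata = True

The quantum-metadata-agent still needs to be configured and running for the proxied requests to reach nova-api.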
Can Metadata be supported without using the Quantum L3 agent, such as in the FLAT/FLATDHCP models? Only with Grizzly. Which Openstack Quantum Use Cases are supported? Single Flat Network Multiple Flat Network Mixed Flat and Private Provider Router with Private Networks Per-tenant Routers with Private Networks I think all of the above. Thanks for the help. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Thu May 9 15:06:15 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 9 May 2013 15:06:15 +0000 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: <518A5AC5.6060702@redhat.com> References: <518A5AC5.6060702@redhat.com> Message-ID: Here is more information regarding our problems: Overview of Environment. Five ibm hs22v blades 72gb mem two 10gb nics (eth0,eth1) rhel 6.3 openstack 2.1 folsum Quantum with openvswitch blade28 (controller, horizon, keystone, glance, quantum server, quantum l2, compute) blade27 (quantum l2, dhcp agent) blade26 (quantum l2, compute) blade25 (quantum l2, compute) blade24 (quantum l2, compute) eth0.500 (rhel server administration, data center access, internet access) (172.17.0.0/24) eth0.502 (rhel server access to data center nfs storage systems) (10.0.0.0/24) eth0.159 (openstack management network) (10.255.254.0/24) eth1 (vm access to datacenter and internet, vm access host to host) (10.0.56.0/21) vms up and running in environment blade28 (10.0.56.5 and 10.0.56.50) blade27 (no vms but active dhcp server 10.0.56.2) blade26 (10.0.56.53 and 10.0.56.54) blade25 (10.0.56.55 and 10.0.56.56) blade24 (10.0.56.58 and 10.0.56.59) data center core router @ 10.0.56.1/21 data center core router has all arp entries (10.0.56.(2,5,50,53,54,55,56,58,59)) data center core router can ping 10.0.56.2, 10.0.56.5, and 10.0.56.50) data center core router cannot ping vms on blades 26,25,24) All vms able to reach dhcp server All vms able to get dhcp address blade28 vms able to communicate to data center thru 10.0.56.1 as gateway blade28 vms able to reach internet thru 10.0.56.1 as gateway issues blade 26 vms can reach each other, but not 10.0.56.1, 10.0.56.2, or other host vms, or data center, or internet) blade 25 vms can reach each other, but not 10.0.56.1, 10.0.56.2, or other host vms, or data center, or internet) blade 24 vms can reach each other, but not 10.0.56.1, 10.0.56.2, or other host vms, or data center, or internet) data center core router (initial ping to vm on blade 28 (10.0.56.50) is rejected by blade25 eth0.500 address subsequest pings ok data center core router (initial ping to vm on blade 28 (10.0.56.5) is rejected by blade24 eth0.500 address subsequest pings ok ovs-vsctl show commands on blades 28,27,25 show qvo/tap interfaces with tag 1 ovs-vsctl show commands on blades 26,24 show qvo/tap interfaces with tag 2 priorities resolve vm ip connectivity between blades, access to data center networks, access to internet determine how to modify environment to support vm access to metadata service (use of quantum l3) we have been following the attached document - bk-quantum-admin-guide-trunk.pdf, 4 Feb 2013 use cases are on pages 5 thru 8 demo setups are on pages 55 thru 74 we were attempting to set up "Use case: Single Flat 
Network" (5) we were attempting to set up "Demo Setup: Single Flat Network" (55-60) this may have to change based on metadata solution From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Wednesday, May 08, 2013 10:02 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Questions about Quantum. On 05/08/2013 04:19 PM, Minton, Rich wrote: We are running into difficulty implementing RHEL Openstack using Quantum Networking and need to verify the level of Quantum Support provided by RHEL. Sorry to hear that. Hopefully we will be able to help out. Which of the RHEL Openstack Folsum Quantum implementations are supported (FLAT, FLAT DHCP, VLAN, GRE)? GRE is not supported. This is due to the fact that there are some missing parts from the current kernel. Which of those implementations support the Openstack Metadata Service? In Folsom a VM can only access the metadata service if the L3 agent is running. In Grizzly this can be done via the L3 agent or the DHCP agent (there may be cases where one would not want to run the L3 agent). Please note that due to the fact that there is no namespace support in RHEL this support does not work when there are networks with overlapping IP ranges. Can Metadata be supported without using the Quantum L3 agent, such as in the FLAT/FLATDHCP models? Only with Grizzly. Which Openstack Quantum Use Cases are supported? Single Flat Network Multiple Flat Network Mixed Flat and Private Provider Router with Private Networks Per-tenant Routers with Private Networks I think all of the above. Thanks for the help. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: bk-quantum-admin-guide-trunk.pdf Type: application/pdf Size: 777291 bytes Desc: bk-quantum-admin-guide-trunk.pdf URL: From prmarino1 at gmail.com Thu May 9 16:09:00 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 9 May 2013 12:09:00 -0400 Subject: [rhos-list] I think I found something missing in iproute2 Message-ID: I got this error in my logs while doing testing " 2013-04-22 12:35:01 INFO [quantum.common.config] Logging enabled! 2013-04-22 12:35:01 DEBUG [quantum.agent.linux.utils] Running command: sudo quantum-rootwrap /etc/quantum/rootwrap.conf ip netns list 2013-04-22 12:35:02 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'list'] Exit code: 255 Stdout: '' Stderr: 'Object "netns" is unknown, try "ip help".\n' " this is on a RHEL 6.4 host and im using Linux bridge with the DHCP agent and ive seen simmilar errors in both it looks as though the we may need to update iproute2 to support network namespaces -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prmarino1 at gmail.com Thu May 9 16:09:58 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 9 May 2013 12:09:58 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: Message-ID: Note: this happened with Folsom and the Quantum Linux bridge plugin with namespaces enabled. On Thu, May 9, 2013 at 12:09 PM, Paul Robert Marino wrote: > I got this error in my logs while doing testing > " > 2013-04-22 12:35:01 INFO [quantum.common.config] Logging enabled! > 2013-04-22 12:35:01 DEBUG [quantum.agent.linux.utils] Running command: > sudo quantum-rootwrap /etc/quantum/rootwrap.conf ip netns list > 2013-04-22 12:35:02 DEBUG [quantum.agent.linux.utils] > Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', > 'netns', 'list'] > Exit code: 255 > Stdout: '' > Stderr: 'Object "netns" is unknown, try "ip help".\n' > " > this is on a RHEL 6.4 host and im using Linux bridge with the DHCP agent > and ive seen simmilar errors in both it looks as though the we may need to > update iproute2 to support network namespaces > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Thu May 9 17:03:46 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 09 May 2013 20:03:46 +0300 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: Message-ID: <518BD6F2.6090600@redhat.com> On 05/09/2013 07:09 PM, Paul Robert Marino wrote: > note this happened with folsum and quantum linux bridge with > namespaces enabled > > > On Thu, May 9, 2013 at 12:09 PM, Paul Robert Marino > > wrote: > > I got this error in my logs while doing testing > " > 2013-04-22 12:35:01 INFO [quantum.common.config] Logging enabled! > 2013-04-22 12:35:01 DEBUG [quantum.agent.linux.utils] Running > command: sudo quantum-rootwrap /etc/quantum/rootwrap.conf ip netns > list > 2013-04-22 12:35:02 DEBUG [quantum.agent.linux.utils] > Command: ['sudo', 'quantum-rootwrap', > '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'list'] > Exit code: 255 > Stdout: '' > Stderr: 'Object "netns" is unknown, try "ip help".\n' > " > this is on a RHEL 6.4 host and im using Linux bridge with the DHCP > agent and ive seen simmilar errors in both it looks as though the > we may need to update iproute2 to support network namespaces > This is a known issue and we are currently working on it. It is being tracked by: https://bugzilla.redhat.com/show_bug.cgi?id=869004 Thanks Gary > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From prmarino1 at gmail.com Thu May 9 17:26:23 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 9 May 2013 13:26:23 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: <518BD6F2.6090600@redhat.com> References: <518BD6F2.6090600@redhat.com> Message-ID: Thanks Gary I found the bug report right before you responded. Out of curiosity, does anyone know if the plan is to patch or to update iproute2 to version 3.x? It's really a funny bug because netns is in the man page in the original source (no patches) but it's not in the command, and it looks like Ubuntu hit this bug too. It seems like someone from the upstream project copied the man page from their bleeding-edge development version into the release at the time without checking whether it was accurate.
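A quick way to check whether a given host's iproute build has the feature at all is simply:

    ip netns list

If it prints 'Object "netns" is unknown, try "ip help".', as in the agent log above, the installed userspace tool has no netns support regardless of what the kernel config advertises.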
On Thu, May 9, 2013 at 1:03 PM, Gary Kotton wrote: > On 05/09/2013 07:09 PM, Paul Robert Marino wrote: > > note this happened with folsum and quantum linux bridge with namespaces > enabled > > > On Thu, May 9, 2013 at 12:09 PM, Paul Robert Marino wrote: > >> I got this error in my logs while doing testing >> " >> 2013-04-22 12:35:01 INFO [quantum.common.config] Logging enabled! >> 2013-04-22 12:35:01 DEBUG [quantum.agent.linux.utils] Running command: >> sudo quantum-rootwrap /etc/quantum/rootwrap.conf ip netns list >> 2013-04-22 12:35:02 DEBUG [quantum.agent.linux.utils] >> Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', >> 'netns', 'list'] >> Exit code: 255 >> Stdout: '' >> Stderr: 'Object "netns" is unknown, try "ip help".\n' >> " >> this is on a RHEL 6.4 host and im using Linux bridge with the DHCP agent >> and ive seen simmilar errors in both it looks as though the we may need to >> update iproute2 to support network namespaces >> > > This is a known issue and we are currently working on it. It is being > tracked by: > > https://bugzilla.redhat.com/show_bug.cgi?id=869004 > > Thanks > Gary > > > > > _______________________________________________ > rhos-list mailing listrhos-list at redhat.comhttps://www.redhat.com/mailman/listinfo/rhos-list > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Thu May 9 17:39:10 2013 From: gkotton at redhat.com (Gary Kotton) Date: Thu, 09 May 2013 20:39:10 +0300 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> Message-ID: <518BDF3E.6010007@redhat.com> On 05/09/2013 08:26 PM, Paul Robert Marino wrote: > Thanks Gary > I found the bug report right before you responded. > out of curiosity does any one know if the plan to patch or update > iproute2 to version 3.x ? The plan is to update the iproute2 to provide the necessary support. At the moment it is work in progress. > Its really a funny bug because netns is in the man page in the > original source (no patches) but its not in the command, and it looks > like Ubuntu hit this bug too. it seems like someone from the upstream > project had coppied the man page at the thim from there bleeding edge > development version and copied it to the release with out checking if > it was accurate. > > > > > > > > > On Thu, May 9, 2013 at 1:03 PM, Gary Kotton > wrote: > > On 05/09/2013 07:09 PM, Paul Robert Marino wrote: >> note this happened with folsum and quantum linux bridge with >> namespaces enabled >> >> >> On Thu, May 9, 2013 at 12:09 PM, Paul Robert Marino >> > wrote: >> >> I got this error in my logs while doing testing >> " >> 2013-04-22 12:35:01 INFO [quantum.common.config] Logging >> enabled! >> 2013-04-22 12:35:01 DEBUG [quantum.agent.linux.utils] >> Running command: sudo quantum-rootwrap >> /etc/quantum/rootwrap.conf ip netns list >> 2013-04-22 12:35:02 DEBUG [quantum.agent.linux.utils] >> Command: ['sudo', 'quantum-rootwrap', >> '/etc/quantum/rootwrap.conf', 'ip', 'netns', 'list'] >> Exit code: 255 >> Stdout: '' >> Stderr: 'Object "netns" is unknown, try "ip help".\n' >> " >> this is on a RHEL 6.4 host and im using Linux bridge with the >> DHCP agent and ive seen simmilar errors in both it looks as >> though the we may need to update iproute2 to support network >> namespaces >> > > This is a known issue and we are currently working on it. 
It is > being tracked by: > > https://bugzilla.redhat.com/show_bug.cgi?id=869004 > > Thanks > Gary > >> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisw at redhat.com Thu May 9 17:45:02 2013 From: chrisw at redhat.com (Chris Wright) Date: Thu, 9 May 2013 10:45:02 -0700 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> Message-ID: <20130509174502.GM4016@x200.localdomain> * Paul Robert Marino (prmarino1 at gmail.com) wrote: > I found the bug report right before you responded. > out of curiosity does any one know if the plan to patch or update iproute2 > to version 3.x ? I don't expect a rebase, I expect backport patches. Please note, that simply updating iproute2 is insufficient to give netns support. There's kernel support required which is not in RHEL 6.4. RH engineers are scoping this effort to see if this work is eligible for an update. thanks, -chris From prmarino1 at gmail.com Thu May 9 17:57:02 2013 From: prmarino1 at gmail.com (Paul Robert Marino) Date: Thu, 9 May 2013 13:57:02 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: <20130509174502.GM4016@x200.localdomain> References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> Message-ID: Chris I'm confused by your statement " # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 CONFIG_NAMESPACES=y CONFIG_NET_NS=y " It looks to me as though its already enabled in the kernel compile configuration, and I thought supporting it was part of the original plan for RHEL 6.4 specifically because OpenStack needs it. On Thu, May 9, 2013 at 1:45 PM, Chris Wright wrote: > * Paul Robert Marino (prmarino1 at gmail.com) wrote: > > I found the bug report right before you responded. > > out of curiosity does any one know if the plan to patch or update > iproute2 > > to version 3.x ? > > I don't expect a rebase, I expect backport patches. > > Please note, that simply updating iproute2 is insufficient to give > netns support. There's kernel support required which is not in RHEL 6.4. > RH engineers are scoping this effort to see if this work is eligible > for an update. > > thanks, > -chris > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisw at redhat.com Thu May 9 18:16:44 2013 From: chrisw at redhat.com (Chris Wright) Date: Thu, 9 May 2013 11:16:44 -0700 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> Message-ID: <20130509181644.GN4016@x200.localdomain> * Paul Robert Marino (prmarino1 at gmail.com) wrote: > # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 > CONFIG_NAMESPACES=y > CONFIG_NET_NS=y That is the right config items enabled, however, there are internal implementation details that are missing. So what is in 2.6.32-358.6.1.el6.x86_64 is not sufficient. > It looks to me as though its already enabled in the kernel compile > configuration, and I thought supporting it was part of the original plan > for RHEL 6.4 specifically because OpenStack needs it. RHEL 6.4 has already shipped and does not support it. If you are interested in helping testing the feature, please let me know. 
I have packages here for iproute2: http://et.redhat.com/~chrisw/rhel6/6.4/bz869004/iproute/netns.1/ And will try to push out test kernel asap. thanks, -chris From rich.minton at lmco.com Thu May 9 20:06:56 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 9 May 2013 20:06:56 +0000 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: <518A5AC5.6060702@redhat.com> References: <518A5AC5.6060702@redhat.com> Message-ID: Just wanted to let you know we found our problem. We were using pings to check network connectivity and the ICMP packets were being rejected by a rule in iptables on our compute nodes. Our controller/compute node did not have this rule so everything functioned properly on that node. Once I removed the REJECT rule all pings returned replies. Just goes to show you... don't always use ping to test network connectivity. If we had tried to ssh to another VM on another host it probably would have worked fine. More lessons learned. Rick From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Wednesday, May 08, 2013 10:02 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Questions about Quantum. On 05/08/2013 04:19 PM, Minton, Rich wrote: We are running into difficulty implementing RHEL Openstack using Quantum Networking and need to verify the level of Quantum Support provided by RHEL. Sorry to hear that. Hopefully we will be able to help out. Which of the RHEL Openstack Folsum Quantum implementations are supported (FLAT, FLAT DHCP, VLAN, GRE)? GRE is not supported. This is due to the fact that there are some missing parts from the current kernel. Which of those implementations support the Openstack Metadata Service? In Folsom a VM can only access the metadata service if the L3 agent is running. In Grizzly this can be done via the L3 agent or the DHCP agent (there may be cases where one would not want to run the L3 agent). Please note that due to the fact that there is no namespace support in RHEL this support does not work when there are networks with overlapping IP ranges. Can Metadata be supported without using the Quantum L3 agent, such as in the FLAT/FLATDHCP models? Only with Grizzly. Which Openstack Quantum Use Cases are supported? Single Flat Network Multiple Flat Network Mixed Flat and Private Provider Router with Private Networks Per-tenant Routers with Private Networks I think all of the above. Thanks for the help. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Thu May 9 21:59:02 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 09 May 2013 17:59:02 -0400 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: References: <518A5AC5.6060702@redhat.com> Message-ID: <518C1C26.3010908@redhat.com> On 05/09/2013 04:06 PM, Minton, Rich wrote: > Just wanted to let you know we found our problem. We were using pings to > check network connectivity and the ICMP packets were being rejected by a > rule in iptables on our compute nodes. Our controller/compute node did > not have this rule so everything functioned properly on that node. Once > I removed the REJECT rule all pings returned replies. Just goes to show > you? don?t always use ping to test network connectivity. 
If we had tried > to ssh to another VM on another host it probably would have worked fine. Thanks for the info :) Would this have been fixed by adding icmp to your security group configuration? (i.e. did you have port 22 open to your guests but not icmp?) From gkotton at redhat.com Fri May 10 05:46:35 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 10 May 2013 08:46:35 +0300 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: <518C1C26.3010908@redhat.com> References: <518A5AC5.6060702@redhat.com> <518C1C26.3010908@redhat.com> Message-ID: <518C89BB.5000109@redhat.com> On 05/10/2013 12:59 AM, Perry Myers wrote: > On 05/09/2013 04:06 PM, Minton, Rich wrote: >> Just wanted to let you know we found our problem. We were using pings to >> check network connectivity and the ICMP packets were being rejected by a >> rule in iptables on our compute nodes. Our controller/compute node did >> not have this rule so everything functioned properly on that node. Once >> I removed the REJECT rule all pings returned replies. Just goes to show >> you? don?t always use ping to test network connectivity. If we had tried >> to ssh to another VM on another host it probably would have worked fine. Rich, thanks for the update. I have just seen this now. Would it be possible to let us know which rule caused the problem. Please note that RHOS 3.0 will have security groups implemented in Quantum. This hopefully will work better than the current implementation. Have a good weekend. Thanks Gary > Thanks for the info :) > > Would this have been fixed by adding icmp to your security group > configuration? (i.e. did you have port 22 open to your guests but not > icmp?) From rich.minton at lmco.com Fri May 10 12:52:57 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 10 May 2013 12:52:57 +0000 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: <518C1C26.3010908@redhat.com> References: <518A5AC5.6060702@redhat.com> <518C1C26.3010908@redhat.com> Message-ID: These are the rules in iptables: -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited If these entries are there you need to flush the rules or delete the rules from iptables and then do a "service iptables save" otherwise they will keep showing up when the iptables service is restarted. I did have the icmp rule set in my security group. This did not affect VM to VM pings on the same host. Only when trying to ping VMs on separate hosts did this show up. Rick -----Original Message----- From: Perry Myers [mailto:pmyers at redhat.com] Sent: Thursday, May 09, 2013 5:59 PM To: Minton, Rich Cc: gkotton at redhat.com; rhos-list at redhat.com Subject: Re: [rhos-list] EXTERNAL: Re: Questions about Quantum. On 05/09/2013 04:06 PM, Minton, Rich wrote: > Just wanted to let you know we found our problem. We were using pings > to check network connectivity and the ICMP packets were being rejected > by a rule in iptables on our compute nodes. Our controller/compute > node did not have this rule so everything functioned properly on that > node. Once I removed the REJECT rule all pings returned replies. Just > goes to show you... don't always use ping to test network connectivity. > If we had tried to ssh to another VM on another host it probably would have worked fine. Thanks for the info :) Would this have been fixed by adding icmp to your security group configuration? (i.e. did you have port 22 open to your guests but not icmp?) 
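For anyone wanting to reproduce the fix, the per-compute-node sequence would look roughly like this (the REJECT rules are exactly as quoted above; the last command assumes the 'default' security group and the Folsom-era nova CLI syntax):

    iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
    iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited
    service iptables save
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

As noted above, the security-group ICMP rule alone did not help here, because the blanket REJECT rules sat in the host chains and caught the inter-host VM traffic before it ever reached the instances.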
From rich.minton at lmco.com Fri May 10 13:18:48 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 10 May 2013 13:18:48 +0000 Subject: [rhos-list] Metadata with Quantum. Message-ID: Guys and Gals, I'm looking for some direction with regards to implementing Metadata with Quantum. I'm using Openstack Networking with a Flat provider network, which is working great at the moment. I have a Controller/compute node running the quantum server, a Network node running openvswitch and dhcp agents, and three compute nodes running the openvswitch agent. I was going to install the L3 agent on the controller node since I read somewhere that for this implementation the L3 agent should not be run with the DHCP agent on the same host. From there I need some help with the configuration. I have these entries in my nova.conf file on the Controller host (L3 agent host) enabled_apis=ec2,osapi_compute,metadata metadata_host=172.17.0.68 # This is the external IP of my Controller host metadata_port=8775 metadata_listen=172.17.0.68 service_quantum_metadata_proxy = true Is this all I need in nova? Do I need a port on br-ex that routes to my external network? Do I need to create a router in quantum? My External network is 172.17.0.0/24 My management network is 10.255.254.0/24 (this is used for the hosts to talk to each other, i.e., qpid and mysql) My guest network is 10.0.56.0/21 My l3-agent.conf file: [DEFAULT] #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum # Show more verbose log output (sets INFO log level output). verbose = True # Show debugging output in log (sets DEBUG log level output). debug = True # L3 agent requires that an interface driver be set. Choose the one # that best matches your plugin. There is no default. # interface_driver = # # OVS interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver # LinuxBridge # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver # The Quantum user information for accessing the Quantum API. auth_strategy = keystone auth_url = http://10.255.254.38:35357/v2.0/ auth_region = lmicc admin_tenant_name = services admin_user = quantum admin_password = XXXXXXXXXX # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real # root filter facility. # Change to "sudo" to skip the filtering and just run the comand directly # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf # Without network namespaces, each L3 agent can only configure one # router. This is done by setting the specific router_id. # router_id = # Each L3 agent can be associated with at most one external network. This # value should be set to the UUID of that external network. If empty, # the agent will enforce that only a single external networks exists and # use that external network id. # gateway_external_network_id = # Indicates that this L3 agent should also handle routers that do not have # an external network gateway configured. This option should be True only # for a single agent in a Quantum deployment, and may be False for all agents # if all routers must have an external network gateway. # handle_internal_only_routers = True # Name of bridge used for external network traffic. This should be set to # empty value for the linuxbridge plugin. # external_network_bridge = br-ex # IP address used by Nova metadata server. metadata_ip = 172.17.0.68 # TCP Port used by Nova metadata server. metadata_port = 8775 use_namespaces = False # The time in seconds between state poll requests. 
# polling_interval = 3 Thank you for your help and patience. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Fri May 10 13:36:37 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 10 May 2013 16:36:37 +0300 Subject: [rhos-list] EXTERNAL: Re: Questions about Quantum. In-Reply-To: References: <518A5AC5.6060702@redhat.com> <518C1C26.3010908@redhat.com> Message-ID: <518CF7E5.60107@redhat.com> On 05/10/2013 03:52 PM, Minton, Rich wrote: > These are the rules in iptables: > > -A INPUT -j REJECT --reject-with icmp-host-prohibited > -A FORWARD -j REJECT --reject-with icmp-host-prohibited Thanks! > > If these entries are there you need to flush the rules or delete the rules from iptables and then do a "service iptables save" otherwise they will keep showing up when the iptables service is restarted. > > I did have the icmp rule set in my security group. This did not affect VM to VM pings on the same host. Only when trying to ping VMs on separate hosts did this show up. > > Rick > > -----Original Message----- > From: Perry Myers [mailto:pmyers at redhat.com] > Sent: Thursday, May 09, 2013 5:59 PM > To: Minton, Rich > Cc: gkotton at redhat.com; rhos-list at redhat.com > Subject: Re: [rhos-list] EXTERNAL: Re: Questions about Quantum. > > On 05/09/2013 04:06 PM, Minton, Rich wrote: >> Just wanted to let you know we found our problem. We were using pings >> to check network connectivity and the ICMP packets were being rejected >> by a rule in iptables on our compute nodes. Our controller/compute >> node did not have this rule so everything functioned properly on that >> node. Once I removed the REJECT rule all pings returned replies. Just >> goes to show you... don't always use ping to test network connectivity. >> If we had tried to ssh to another VM on another host it probably would have worked fine. > Thanks for the info :) > > Would this have been fixed by adding icmp to your security group configuration? (i.e. did you have port 22 open to your guests but not > icmp?) From gkotton at redhat.com Fri May 10 13:44:51 2013 From: gkotton at redhat.com (Gary Kotton) Date: Fri, 10 May 2013 16:44:51 +0300 Subject: [rhos-list] Metadata with Quantum. In-Reply-To: References: Message-ID: <518CF9D3.3010600@redhat.com> On 05/10/2013 04:18 PM, Minton, Rich wrote: > > Guys and Gals, > > I'm looking for some direction with regards to implementing Metadata > with Quantum. > > I'm using Openstack Networking with a Flat provider network, which is > working great at the moment. I have a Controller/compute node running > the quantum server, a Network node running openvswitch and dhcp > agents, and three compute nodes running the openvswitch agent. I was > going to install the L3 agent on the controller node since I read > somewhere that for this implementation the L3 agent should not be run > with the DHCP agent on the same host. From there I need some help with > the configuration. > Yes, this is correct. At the moment RHEL does not support namespaces so in order to have network isolation is is recommended that the l3 agent and the dhcp agent do not run on the same host. If this is for a POC then you can certainly do this as there is no risk of a security hole. Hopefully in the coming versions we will have a better solution for this. 
Please note that in the RHOS 3 version will will be able to invoke the metadata service form the DHCP agent if you choose. > I have these entries in my nova.conf file on the Controller host (L3 > agent host) > > enabled_apis=ec2,osapi_compute,metadata > > metadata_host=172.17.0.68 # This is the external IP of my Controller host > > metadata_port=8775 > > metadata_listen=172.17.0.68 > > service_quantum_metadata_proxy = true > > Is this all I need in nova? > I think so. > Do I need a port on br-ex that routes to my external network? > You only need the br-ex on the host that is running the l3-agent. > Do I need to create a router in quantum? > Yes, you need to do this and you need to assign the router to the subnet with the private ip. This will ensure that the traffic is sent to the l3 -agent which in turn will redirect it to the metadata service. > My External network is 172.17.0.0/24 > > My management network is 10.255.254.0/24 (this is used for the hosts > to talk to each other, i.e., qpid and mysql) > > My guest network is 10.0.56.0/21 > > My l3-agent.conf file: > > [DEFAULT] > > #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum > > # Show more verbose log output (sets INFO log level output). > > verbose = True > > # Show debugging output in log (sets DEBUG log level output). > > debug = True > > # L3 agent requires that an interface driver be set. Choose the one > > # that best matches your plugin. There is no default. > > # interface_driver = > > # > > # OVS > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > # LinuxBridge > > # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver > > # The Quantum user information for accessing the Quantum API. > > auth_strategy = keystone > > auth_url = http://10.255.254.38:35357/v2.0/ > > auth_region = lmicc > > admin_tenant_name = services > > admin_user = quantum > > admin_password = XXXXXXXXXX > > # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real > > # root filter facility. > > # Change to "sudo" to skip the filtering and just run the comand directly > > # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf > > # Without network namespaces, each L3 agent can only configure one > > # router. This is done by setting the specific router_id. > > # router_id = > Due to the fact that namespaces is not supported you need to create a router and then update this with the router id and restart the service (sorry it is a real pain). Hopefully in the near future we will have packstack support for Quantum that will do all of the above automatically. > > # Each L3 agent can be associated with at most one external network. This > > # value should be set to the UUID of that external network. If empty, > > # the agent will enforce that only a single external networks exists and > > # use that external network id. > > # gateway_external_network_id = > > # Indicates that this L3 agent should also handle routers that do not have > > # an external network gateway configured. This option should be True only > > # for a single agent in a Quantum deployment, and may be False for all > agents > > # if all routers must have an external network gateway. > > # handle_internal_only_routers = True > > # Name of bridge used for external network traffic. This should be set to > > # empty value for the linuxbridge plugin. > > # external_network_bridge = br-ex > > # IP address used by Nova metadata server. > > metadata_ip = 172.17.0.68 > > # TCP Port used by Nova metadata server. 
> > metadata_port = 8775 > > use_namespaces = False > > # The time in seconds between state poll requests. > > # polling_interval = 3 > > Thank you for your help and patience. > > Rick > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Mon May 13 14:19:56 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Mon, 13 May 2013 14:19:56 +0000 Subject: [rhos-list] EXTERNAL: Re: Metadata with Quantum. In-Reply-To: <518CF9D3.3010600@redhat.com> References: <518CF9D3.3010600@redhat.com> Message-ID: Gary, Right now, I have my VMs on a flat network (10.0.56.0/21). Our external physical router acts as the gateway (10.0.56.1) for VMs to get to the external network. If I create an L3 router with the 10.0.56.1 IP as the gateway I get conflicts on my physical router. Is using the L3 agent and an L3 router the only way to access the metadata service on my external network? Is it possible to put a NAT on my physical router to accomplish the same thing or is it absolutely necessary to route through the L3 router? Thanks, Rick From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Gary Kotton Sent: Friday, May 10, 2013 9:45 AM To: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Metadata with Quantum. On 05/10/2013 04:18 PM, Minton, Rich wrote: Guys and Gals, I'm looking for some direction with regards to implementing Metadata with Quantum. I'm using Openstack Networking with a Flat provider network, which is working great at the moment. I have a Controller/compute node running the quantum server, a Network node running openvswitch and dhcp agents, and three compute nodes running the openvswitch agent. I was going to install the L3 agent on the controller node since I read somewhere that for this implementation the L3 agent should not be run with the DHCP agent on the same host. From there I need some help with the configuration. Yes, this is correct. At the moment RHEL does not support namespaces so in order to have network isolation is is recommended that the l3 agent and the dhcp agent do not run on the same host. If this is for a POC then you can certainly do this as there is no risk of a security hole. Hopefully in the coming versions we will have a better solution for this. Please note that in the RHOS 3 version will will be able to invoke the metadata service form the DHCP agent if you choose. I have these entries in my nova.conf file on the Controller host (L3 agent host) enabled_apis=ec2,osapi_compute,metadata metadata_host=172.17.0.68 # This is the external IP of my Controller host metadata_port=8775 metadata_listen=172.17.0.68 service_quantum_metadata_proxy = true Is this all I need in nova? I think so. Do I need a port on br-ex that routes to my external network? You only need the br-ex on the host that is running the l3-agent. Do I need to create a router in quantum? Yes, you need to do this and you need to assign the router to the subnet with the private ip. This will ensure that the traffic is sent to the l3 -agent which in turn will redirect it to the metadata service. 
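As a concrete sketch of that router step (the names are placeholders; substitute the real private subnet ID, and since namespaces are disabled the resulting router UUID then has to be copied into router_id in the l3 agent configuration before restarting the agent):

    quantum router-create router1
    quantum router-interface-add router1 <private-subnet-id>
    # put the router's id into router_id in /etc/quantum/l3_agent.ini, then:
    service quantum-l3-agent restart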
My External network is 172.17.0.0/24 My management network is 10.255.254.0/24 (this is used for the hosts to talk to each other, i.e., qpid and mysql) My guest network is 10.0.56.0/21 My l3-agent.conf file: [DEFAULT] #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum # Show more verbose log output (sets INFO log level output). verbose = True # Show debugging output in log (sets DEBUG log level output). debug = True # L3 agent requires that an interface driver be set. Choose the one # that best matches your plugin. There is no default. # interface_driver = # # OVS interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver # LinuxBridge # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver # The Quantum user information for accessing the Quantum API. auth_strategy = keystone auth_url = http://10.255.254.38:35357/v2.0/ auth_region = lmicc admin_tenant_name = services admin_user = quantum admin_password = XXXXXXXXXX # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real # root filter facility. # Change to "sudo" to skip the filtering and just run the comand directly # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf # Without network namespaces, each L3 agent can only configure one # router. This is done by setting the specific router_id. # router_id = Due to the fact that namespaces is not supported you need to create a router and then update this with the router id and restart the service (sorry it is a real pain). Hopefully in the near future we will have packstack support for Quantum that will do all of the above automatically. # Each L3 agent can be associated with at most one external network. This # value should be set to the UUID of that external network. If empty, # the agent will enforce that only a single external networks exists and # use that external network id. # gateway_external_network_id = # Indicates that this L3 agent should also handle routers that do not have # an external network gateway configured. This option should be True only # for a single agent in a Quantum deployment, and may be False for all agents # if all routers must have an external network gateway. # handle_internal_only_routers = True # Name of bridge used for external network traffic. This should be set to # empty value for the linuxbridge plugin. # external_network_bridge = br-ex # IP address used by Nova metadata server. metadata_ip = 172.17.0.68 # TCP Port used by Nova metadata server. metadata_port = 8775 use_namespaces = False # The time in seconds between state poll requests. # polling_interval = 3 Thank you for your help and patience. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Mon May 13 14:26:22 2013 From: gkotton at redhat.com (Gary Kotton) Date: Mon, 13 May 2013 17:26:22 +0300 Subject: [rhos-list] EXTERNAL: Re: Metadata with Quantum. In-Reply-To: References: <518CF9D3.3010600@redhat.com> Message-ID: <5190F80E.30701@redhat.com> On 05/13/2013 05:19 PM, Minton, Rich wrote: > > Gary, > > Right now, I have my VMs on a flat network (10.0.56.0/21). Our > external physical router acts as the gateway (10.0.56.1) for VMs to > get to the external network. 
If I create an L3 router with the > 10.0.56.1 IP as the gateway I get conflicts on my physical router. Is > using the L3 agent and an L3 router the only way to access the > metadata service on my external network? > In RHOS 2.0 this is the only way. In RHOS 3.0 you will be able to do this via the DHCP agent. > Is it possible to put a NAT on my physical router to accomplish the > same thing or is it absolutely necessary to route through the L3 router? > Yes, that is certainly possible. I am actually happy that you mentioned this as it is something that I would have done. I think that you can do this pretty easily: 1. If your router will be the default gateway for the VMs (this can be ensured when you create your subnet) 2. If you create a NAT rule on the router - all traffic that is destined to the metadata service should be re routed to the the meta data service My understanding is that some hardware vendors are implementing l3 functionality in their routers (well it is something that they have had for decades and do it a lot better and more efficiently that the l3 agent - with the added bonus of HA) The problem with the above is that it is something that is done manually and is not automated via quantum at the moment. Thanks Gary > Thanks, > > Rick > > *From:*rhos-list-bounces at redhat.com > [mailto:rhos-list-bounces at redhat.com] *On Behalf Of *Gary Kotton > *Sent:* Friday, May 10, 2013 9:45 AM > *To:* rhos-list at redhat.com > *Subject:* EXTERNAL: Re: [rhos-list] Metadata with Quantum. > > On 05/10/2013 04:18 PM, Minton, Rich wrote: > > Guys and Gals, > > I'm looking for some direction with regards to implementing Metadata > with Quantum. > > I'm using Openstack Networking with a Flat provider network, which is > working great at the moment. I have a Controller/compute node running > the quantum server, a Network node running openvswitch and dhcp > agents, and three compute nodes running the openvswitch agent. I was > going to install the L3 agent on the controller node since I read > somewhere that for this implementation the L3 agent should not be run > with the DHCP agent on the same host. From there I need some help with > the configuration. > > > Yes, this is correct. At the moment RHEL does not support namespaces > so in order to have network isolation is is recommended that the l3 > agent and the dhcp agent do not run on the same host. If this is for a > POC then you can certainly do this as there is no risk of a security hole. > > Hopefully in the coming versions we will have a better solution for this. > > Please note that in the RHOS 3 version will will be able to invoke the > metadata service form the DHCP agent if you choose. > > > I have these entries in my nova.conf file on the Controller host (L3 > agent host) > > enabled_apis=ec2,osapi_compute,metadata > > metadata_host=172.17.0.68 # This is the external IP of my Controller host > > metadata_port=8775 > > metadata_listen=172.17.0.68 > > service_quantum_metadata_proxy = true > > Is this all I need in nova? > > > I think so. > > > Do I need a port on br-ex that routes to my external network? > > > You only need the br-ex on the host that is running the l3-agent. > > > Do I need to create a router in quantum? > > > Yes, you need to do this and you need to assign the router to the > subnet with the private ip. This will ensure that the traffic is sent > to the l3 -agent which in turn will redirect it to the metadata service. 
> > > My External network is 172.17.0.0/24 > > My management network is 10.255.254.0/24 (this is used for the hosts > to talk to each other, i.e., qpid and mysql) > > My guest network is 10.0.56.0/21 > > My l3-agent.conf file: > > [DEFAULT] > > #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum > > > # Show more verbose log output (sets INFO log level output). > > verbose = True > > # Show debugging output in log (sets DEBUG log level output). > > debug = True > > # L3 agent requires that an interface driver be set. Choose the one > > # that best matches your plugin. There is no default. > > # interface_driver = > > # > > # OVS > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > # LinuxBridge > > # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver > > # The Quantum user information for accessing the Quantum API. > > auth_strategy = keystone > > auth_url = http://10.255.254.38:35357/v2.0/ > > auth_region = lmicc > > admin_tenant_name = services > > admin_user = quantum > > admin_password = XXXXXXXXXX > > # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real > > # root filter facility. > > # Change to "sudo" to skip the filtering and just run the comand directly > > # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf > > # Without network namespaces, each L3 agent can only configure one > > # router. This is done by setting the specific router_id. > > # router_id = > > > Due to the fact that namespaces is not supported you need to create a > router and then update this with the router id and restart the service > (sorry it is a real pain). Hopefully in the near future we will have > packstack support for Quantum that will do all of the above automatically. > > # Each L3 agent can be associated with at most one external network. This > > # value should be set to the UUID of that external network. If empty, > > # the agent will enforce that only a single external networks exists and > > # use that external network id. > > # gateway_external_network_id = > > # Indicates that this L3 agent should also handle routers that do not have > > # an external network gateway configured. This option should be True only > > # for a single agent in a Quantum deployment, and may be False for all > agents > > # if all routers must have an external network gateway. > > # handle_internal_only_routers = True > > # Name of bridge used for external network traffic. This should be set to > > # empty value for the linuxbridge plugin. > > # external_network_bridge = br-ex > > # IP address used by Nova metadata server. > > metadata_ip = 172.17.0.68 > > # TCP Port used by Nova metadata server. > > metadata_port = 8775 > > use_namespaces = False > > # The time in seconds between state poll requests. > > # polling_interval = 3 > > Thank you for your help and patience. > > Rick > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From prashanth.prahal at gmail.com Mon May 13 19:37:00 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Mon, 13 May 2013 12:37:00 -0700 Subject: [rhos-list] quantum multi-node setup... Message-ID: Hi Folks, Needed a quick help. 
I'm following the guide here to setup RedHat openstack ( https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack_Preview/2/html-single/Getting_Started_Guide/index.html#idm17139632 ). section 12.3.1 - 12.3.3 - talks about installing quantum service on the network node. section 12.3.4 - talks on configuring the compute node section 12.3.5 (Installing Openstack networking agents) - this is not clear on where the agents need to be installed. I'm assuming that these need to be the network node (in a multi node setup). Is that right ? Another question I had was that in a multi-node setup, can I issue quantum client commands (quantum net-create ..., quantum subnet-create .... etc) from any node(compute or otherwise) or does it have to be the node on which the quantum-server is running on ? Thanks ! Prashanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Mon May 13 20:08:40 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Mon, 13 May 2013 20:08:40 +0000 Subject: [rhos-list] EXTERNAL: Re: Metadata with Quantum. In-Reply-To: <5190F80E.30701@redhat.com> References: <518CF9D3.3010600@redhat.com> <5190F80E.30701@redhat.com> Message-ID: Ok, success. I was able to get the metadata service up and running. I'm using a Flat network, no GRE tunnels or VLANs, except for my host external interfaces. 1. Installed L3-agent on my controller/compute node 2. Nova-api is running on controller/compute node 3. "quantum router-create router1" 4. "quantum router-interface-add router1 " 5. Ensure port eth1 is attached to br-eth1 using "ovs-vsctl add-port br-eth1 eth1" (only if eth1 is your VM NIC). I loose eth1 off of br-eth1 after a service network restart or a host reboot. Any ideas on this one? 6. We also ran "ip addr add 169.254.169.254/32 dev eth0.500" to make route all requests to 169... to my external interface. I think this was the ticket for us. Hope this helps somebody. Rick From: Gary Kotton [mailto:gkotton at redhat.com] Sent: Monday, May 13, 2013 10:26 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Metadata with Quantum. On 05/13/2013 05:19 PM, Minton, Rich wrote: Gary, Right now, I have my VMs on a flat network (10.0.56.0/21). Our external physical router acts as the gateway (10.0.56.1) for VMs to get to the external network. If I create an L3 router with the 10.0.56.1 IP as the gateway I get conflicts on my physical router. Is using the L3 agent and an L3 router the only way to access the metadata service on my external network? In RHOS 2.0 this is the only way. In RHOS 3.0 you will be able to do this via the DHCP agent. Is it possible to put a NAT on my physical router to accomplish the same thing or is it absolutely necessary to route through the L3 router? Yes, that is certainly possible. I am actually happy that you mentioned this as it is something that I would have done. I think that you can do this pretty easily: 1. If your router will be the default gateway for the VMs (this can be ensured when you create your subnet) 2. 
If you create a NAT rule on the router - all traffic that is destined to the metadata service should be re routed to the the meta data service My understanding is that some hardware vendors are implementing l3 functionality in their routers (well it is something that they have had for decades and do it a lot better and more efficiently that the l3 agent - with the added bonus of HA) The problem with the above is that it is something that is done manually and is not automated via quantum at the moment. Thanks Gary Thanks, Rick From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Gary Kotton Sent: Friday, May 10, 2013 9:45 AM To: rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Metadata with Quantum. On 05/10/2013 04:18 PM, Minton, Rich wrote: Guys and Gals, I'm looking for some direction with regards to implementing Metadata with Quantum. I'm using Openstack Networking with a Flat provider network, which is working great at the moment. I have a Controller/compute node running the quantum server, a Network node running openvswitch and dhcp agents, and three compute nodes running the openvswitch agent. I was going to install the L3 agent on the controller node since I read somewhere that for this implementation the L3 agent should not be run with the DHCP agent on the same host. From there I need some help with the configuration. Yes, this is correct. At the moment RHEL does not support namespaces so in order to have network isolation is is recommended that the l3 agent and the dhcp agent do not run on the same host. If this is for a POC then you can certainly do this as there is no risk of a security hole. Hopefully in the coming versions we will have a better solution for this. Please note that in the RHOS 3 version will will be able to invoke the metadata service form the DHCP agent if you choose. I have these entries in my nova.conf file on the Controller host (L3 agent host) enabled_apis=ec2,osapi_compute,metadata metadata_host=172.17.0.68 # This is the external IP of my Controller host metadata_port=8775 metadata_listen=172.17.0.68 service_quantum_metadata_proxy = true Is this all I need in nova? I think so. Do I need a port on br-ex that routes to my external network? You only need the br-ex on the host that is running the l3-agent. Do I need to create a router in quantum? Yes, you need to do this and you need to assign the router to the subnet with the private ip. This will ensure that the traffic is sent to the l3 -agent which in turn will redirect it to the metadata service. My External network is 172.17.0.0/24 My management network is 10.255.254.0/24 (this is used for the hosts to talk to each other, i.e., qpid and mysql) My guest network is 10.0.56.0/21 My l3-agent.conf file: [DEFAULT] #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum # Show more verbose log output (sets INFO log level output). verbose = True # Show debugging output in log (sets DEBUG log level output). debug = True # L3 agent requires that an interface driver be set. Choose the one # that best matches your plugin. There is no default. # interface_driver = # # OVS interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver # LinuxBridge # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver # The Quantum user information for accessing the Quantum API. 
auth_strategy = keystone auth_url = http://10.255.254.38:35357/v2.0/ auth_region = lmicc admin_tenant_name = services admin_user = quantum admin_password = XXXXXXXXXX # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real # root filter facility. # Change to "sudo" to skip the filtering and just run the comand directly # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf # Without network namespaces, each L3 agent can only configure one # router. This is done by setting the specific router_id. # router_id = Due to the fact that namespaces is not supported you need to create a router and then update this with the router id and restart the service (sorry it is a real pain). Hopefully in the near future we will have packstack support for Quantum that will do all of the above automatically. # Each L3 agent can be associated with at most one external network. This # value should be set to the UUID of that external network. If empty, # the agent will enforce that only a single external networks exists and # use that external network id. # gateway_external_network_id = # Indicates that this L3 agent should also handle routers that do not have # an external network gateway configured. This option should be True only # for a single agent in a Quantum deployment, and may be False for all agents # if all routers must have an external network gateway. # handle_internal_only_routers = True # Name of bridge used for external network traffic. This should be set to # empty value for the linuxbridge plugin. # external_network_bridge = br-ex # IP address used by Nova metadata server. metadata_ip = 172.17.0.68 # TCP Port used by Nova metadata server. metadata_port = 8775 use_namespaces = False # The time in seconds between state poll requests. # polling_interval = 3 Thank you for your help and patience. Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list -------------- next part -------------- An HTML attachment was scrubbed... URL: From prashanth.prahal at gmail.com Tue May 14 03:26:42 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Mon, 13 May 2013 20:26:42 -0700 Subject: [rhos-list] libvirt error Message-ID: Hi Folks, I'm seeing this error when I restart openstack-nova-compute : /var/log/libvirtd/libvirtd.log 2013-05-14 03:24:20.314+0000: 3097: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error 2013-05-14 03:24:20.315+0000: 3097: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error And I guess subsequently, creating VMs seem to be failing. Any clues on what could be going on ? Thanks ! Prashanth -------------- next part -------------- An HTML attachment was scrubbed... URL: From prashanth.prahal at gmail.com Tue May 14 06:36:24 2013 From: prashanth.prahal at gmail.com (Prashanth Prahalad) Date: Mon, 13 May 2013 23:36:24 -0700 Subject: [rhos-list] keystone dead after a reboot Message-ID: Hi Folks, Keystone on my machine came up dead after a reboot. It didn't log anything at /var/log/keystone/keystone.log. 
[prashp at r5-20 ~(keystone_admin)]$ sudo service openstack-keystone start Starting keystone: [ OK ] You have new mail in /var/spool/mail/root [prashp at r5-20 ~(keystone_admin)]$ ps -ef | grep keystone prashp 18436 12452 0 23:31 pts/4 00:00:00 grep keystone [prashp at r5-20 ~(keystone_admin)]$ sudo service openstack-keystone status keystone dead but pid file exists Tried to start the service by-hand and it throws up this error : [root at r5-20 ~]# /usr/bin/python /usr/bin/keystone-all --config-file /etc/keystone/keystone.conf Traceback (most recent call last): File "/usr/bin/keystone-all", line 102, in options = deploy.appconfig('config:%s' % paste_config) File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 261, in appconfig global_conf=global_conf) File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 296, in loadcontext global_conf=global_conf) File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 320, in _loadconfig return loader.get_context(object_type, name, global_conf) File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 408, in get_context object_type, name=name) File "/usr/lib/python2.6/site-packages/PasteDeploy-1.5.0-py2.6.egg/paste/deploy/loadwsgi.py", line 587, in find_config_section self.filename)) LookupError: No section 'main' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /etc/keystone/keystone.conf Nothing's changed in the keystone.conf and it was working fine before reboot. Interestingly, I have another machine which is running keystone with almost identical keystone conf and that seems to be fine. Here's my keystone.conf: [DEFAULT] admin_token = 2f3896f0eaf14c55a17f3df693eee01b bind_host = 0.0.0.0 public_port = 5000 admin_port = 35357 compute_port = 3000 verbose = False debug = False log_file = /var/log/keystone/keystone.log log_dir = /var/log/keystone [sql] connection = mysql://keystone_admin:b25ec036c7404ed5 at 10.9.10.43/keystone idle_timeout = 200 [identity] [catalog] driver = keystone.catalog.backends.sql.Catalog [token] [policy] [ec2] [ssl] [signing] [ldap] [paste_deploy] -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at redhat.com Tue May 14 07:20:21 2013 From: gkotton at redhat.com (Gary Kotton) Date: Tue, 14 May 2013 10:20:21 +0300 Subject: [rhos-list] EXTERNAL: Re: Metadata with Quantum. In-Reply-To: References: <518CF9D3.3010600@redhat.com> <5190F80E.30701@redhat.com> Message-ID: <5191E5B5.3050903@redhat.com> Hi, Thanks for the inputs, please see below. I think that we are in two different time zones (we also have a holiday here this afternoon and tomorrow). Thanks Gary On 05/13/2013 11:08 PM, Minton, Rich wrote: > > Ok, success. > Cool > I was able to get the metadata service up and running. I'm using a > Flat network, no GRE tunnels or VLANs, except for my host external > interfaces. > > 1.Installed L3-agent on my controller/compute node > > 2.Nova-api is running on controller/compute node > > 3."quantum router-create router1" > > 4."quantum router-interface-add router1 " > > 5.Ensure port eth1 is attached to br-eth1 using "ovs-vsctl add-port > br-eth1 eth1" (only if eth1 is your VM NIC). I loose eth1 off of > br-eth1 after a service network restart or a host reboot. *Any ideas > on this one?* > I have a number of questions and comments regarding this one. i. 
If you have an interface /etc/sysconfig/network-scripts/ifcfg-br-int configured then each time that you run the network restart then the ovs bridges will be purged of all of their tap devices. ii. When the quantum agent restarts the interface is added to the bridge. It is not really clear why this is happening at reboot. I'll try and reproduce on my side. > 6.We also ran "ip addr add 169.254.169.254/32 dev eth0.500" to make > route all requests to 169... to my external interface. I think this > was the ticket for us. > > Hope this helps somebody. > Yes, it sure does. Thank you Gary > Rick > > *From:*Gary Kotton [mailto:gkotton at redhat.com] > *Sent:* Monday, May 13, 2013 10:26 AM > *To:* Minton, Rich > *Cc:* rhos-list at redhat.com > *Subject:* Re: EXTERNAL: Re: [rhos-list] Metadata with Quantum. > > On 05/13/2013 05:19 PM, Minton, Rich wrote: > > Gary, > > Right now, I have my VMs on a flat network (10.0.56.0/21). Our > external physical router acts as the gateway (10.0.56.1) for VMs to > get to the external network. If I create an L3 router with the > 10.0.56.1 IP as the gateway I get conflicts on my physical router. Is > using the L3 agent and an L3 router the only way to access the > metadata service on my external network? > > > In RHOS 2.0 this is the only way. In RHOS 3.0 you will be able to do > this via the DHCP agent. > > > Is it possible to put a NAT on my physical router to accomplish the > same thing or is it absolutely necessary to route through the L3 router? > > > Yes, that is certainly possible. I am actually happy that you > mentioned this as it is something that I would have done. I think that > you can do this pretty easily: > 1. If your router will be the default gateway for the VMs (this can be > ensured when you create your subnet) > 2. If you create a NAT rule on the router - all traffic that is > destined to the metadata service should be re routed to the the meta > data service > > My understanding is that some hardware vendors are implementing l3 > functionality in their routers (well it is something that they have > had for decades and do it a lot better and more efficiently that the > l3 agent - with the added bonus of HA) > > The problem with the above is that it is something that is done > manually and is not automated via quantum at the moment. > > Thanks > Gary > > > Thanks, > > Rick > > *From:*rhos-list-bounces at redhat.com > > [mailto:rhos-list-bounces at redhat.com] *On Behalf Of *Gary Kotton > *Sent:* Friday, May 10, 2013 9:45 AM > *To:* rhos-list at redhat.com > *Subject:* EXTERNAL: Re: [rhos-list] Metadata with Quantum. > > On 05/10/2013 04:18 PM, Minton, Rich wrote: > > Guys and Gals, > > I'm looking for some direction with regards to implementing Metadata > with Quantum. > > I'm using Openstack Networking with a Flat provider network, which is > working great at the moment. I have a Controller/compute node running > the quantum server, a Network node running openvswitch and dhcp > agents, and three compute nodes running the openvswitch agent. I was > going to install the L3 agent on the controller node since I read > somewhere that for this implementation the L3 agent should not be run > with the DHCP agent on the same host. From there I need some help with > the configuration. > > > Yes, this is correct. At the moment RHEL does not support namespaces > so in order to have network isolation is is recommended that the l3 > agent and the dhcp agent do not run on the same host. 
If this is for a > POC then you can certainly do this as there is no risk of a security hole. > > Hopefully in the coming versions we will have a better solution for this. > > Please note that in the RHOS 3 version will will be able to invoke the > metadata service form the DHCP agent if you choose. > > > > I have these entries in my nova.conf file on the Controller host (L3 > agent host) > > enabled_apis=ec2,osapi_compute,metadata > > metadata_host=172.17.0.68 # This is the external IP of my Controller host > > metadata_port=8775 > > metadata_listen=172.17.0.68 > > service_quantum_metadata_proxy = true > > Is this all I need in nova? > > > I think so. > > > > Do I need a port on br-ex that routes to my external network? > > > You only need the br-ex on the host that is running the l3-agent. > > > > Do I need to create a router in quantum? > > > Yes, you need to do this and you need to assign the router to the > subnet with the private ip. This will ensure that the traffic is sent > to the l3 -agent which in turn will redirect it to the metadata service. > > > > My External network is 172.17.0.0/24 > > My management network is 10.255.254.0/24 (this is used for the hosts > to talk to each other, i.e., qpid and mysql) > > My guest network is 10.0.56.0/21 > > My l3-agent.conf file: > > [DEFAULT] > > #sql_connection = mysql://quantum:XXXXXXXX at 10.255.254.38/ovs_quantum > > > # Show more verbose log output (sets INFO log level output). > > verbose = True > > # Show debugging output in log (sets DEBUG log level output). > > debug = True > > # L3 agent requires that an interface driver be set. Choose the one > > # that best matches your plugin. There is no default. > > # interface_driver = > > # > > # OVS > > interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver > > # LinuxBridge > > # interface_driver = quantum.agent.linux.interface.BridgeInterfaceDriver > > # The Quantum user information for accessing the Quantum API. > > auth_strategy = keystone > > auth_url = http://10.255.254.38:35357/v2.0/ > > auth_region = lmicc > > admin_tenant_name = services > > admin_user = quantum > > admin_password = XXXXXXXXXX > > # Use "sudo quantum-rootwrap /etc/quantum/rootwrap.conf" to use the real > > # root filter facility. > > # Change to "sudo" to skip the filtering and just run the comand directly > > # root_helper = sudo quantum-rootwrap /etc/quantum/rootwrap.conf > > # Without network namespaces, each L3 agent can only configure one > > # router. This is done by setting the specific router_id. > > # router_id = > > > Due to the fact that namespaces is not supported you need to create a > router and then update this with the router id and restart the service > (sorry it is a real pain). Hopefully in the near future we will have > packstack support for Quantum that will do all of the above automatically. > > > # Each L3 agent can be associated with at most one external network. This > > # value should be set to the UUID of that external network. If empty, > > # the agent will enforce that only a single external networks exists and > > # use that external network id. > > # gateway_external_network_id = > > # Indicates that this L3 agent should also handle routers that do not have > > # an external network gateway configured. This option should be True only > > # for a single agent in a Quantum deployment, and may be False for all > agents > > # if all routers must have an external network gateway. 
> > # handle_internal_only_routers = True > > # Name of bridge used for external network traffic. This should be set to > > # empty value for the linuxbridge plugin. > > # external_network_bridge = br-ex > > # IP address used by Nova metadata server. > > metadata_ip = 172.17.0.68 > > # TCP Port used by Nova metadata server. > > metadata_port = 8775 > > use_namespaces = False > > # The time in seconds between state poll requests. > > # polling_interval = 3 > > Thank you for your help and patience. > > Rick > > _Richard Minton_ > > LMICC Systems Administrator > > 4000 Geerdes Blvd, 13D31 > > King of Prussia, PA 19406 > > Phone: 610-354-5482 > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dneary at redhat.com Thu May 16 08:37:22 2013 From: dneary at redhat.com (Dave Neary) Date: Thu, 16 May 2013 10:37:22 +0200 Subject: [rhos-list] LDAP integration In-Reply-To: References: <5188DCAE.2090208@redhat.com> , <5193B5E7.5050202@redhat.com> Message-ID: <51949AC2.3060601@redhat.com> Hi Nicolas, Bringing the topic back to the mailing list (you're using RDO, so I added rdo-list also). On 05/15/2013 06:50 PM, Vogel Nicolas wrote: > I installed Grizzly with the RDO packstack installation guide on CentOS 6.4. > nova --version = 2.13.0 > keystone --version = 0.2.3 > If you need more information you can ask any time. >> On 05/07/2013 08:00 AM, Vogel Nicolas wrote: >>> After successfully installing an ? all-in-one Node ? using Packstack, >>> I want to user LDAP to manage my users. >>> >>> The LDAP backend isn?t available in the keystone.conf. Do I have to >>> replace the SQL backend with the LDAP backend? >>> >>> Wenn I switch to LDAP, is my admin user created by Packstack usable >>> yet or do I have to modify everything so that one of my LDAP user >>> becomes the admin ? I'm pretty sure that Adam Young can answer your question. AFAIK, when you switch to the LDAP back-end for Keystone, that you will have to take care of mapping your schema to Keystone attributes and access control. This page seems to be pretty complete: http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-keystone-for-ldap-backend.html Thanks, Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From gregory.andrus at lmco.com Thu May 16 17:17:56 2013 From: gregory.andrus at lmco.com (Andrus, Gregory) Date: Thu, 16 May 2013 17:17:56 +0000 Subject: [rhos-list] Does openstack 2.1 folsum with quantum support internal and external networks on same NIC? Message-ID: Hi all, I have a blade environment where each blade has only 2 - 10gb nics The plan was to use them as follows eth0.500 (rhel server administration, rhel server data center access, rhel server internet access) (172.17.0.0/24) eth0.502 (rhel server access to data center nfs storage systems) (10.0.0.0/24) eth0.159 (openstack management network) (10.255.254.0/24) eth1 (vm access to datacenter and internet, vm access host to host) We are using metadata to configure vms therefore we were told we must use quantum L3 agent. Is there a way to configure quantum and ovs to use eth1 for both br-int as well as br-ex traffic. 
All the examples I have come across are like the following where the ports added to the bridges refer directly to an Ethernet interface, not an Ethernet vlan interface on the nic such as eth1.100 or eth1.200 etc: ovs-vsctl add-br br-int ovs-vsctl add-port br-int eth0 ovs-vsctl add-br br-ex ovs-vsctl add-port br-ex eth1 [cid:image001.png at 01CE5237.C2C5B9A0] Thank you grega J. Gregory Andrus Senior Staff Systems Administrator Lockheed Martin IS&GS Bldg D - Rm 13D31 PO Box 61511 King of Prussia, Pa. 19406-0911 (610) 531-3666 (v) gregory.andrus at lmco.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 61814 bytes Desc: image001.png URL: From rkukura at redhat.com Thu May 16 18:01:26 2013 From: rkukura at redhat.com (Robert Kukura) Date: Thu, 16 May 2013 14:01:26 -0400 Subject: [rhos-list] Does openstack 2.1 folsum with quantum support internal and external networks on same NIC? In-Reply-To: References: Message-ID: <51951EF6.8000303@redhat.com> On 05/16/2013 01:17 PM, Andrus, Gregory wrote: > Hi all, > > > > I have a blade environment where each blade has only 2 ? 10gb nics > > The plan was to use them as follows > > > > eth0.500 (rhel server administration, rhel server data center access, > rhel server internet access) (172.17.0.0/24) > > eth0.502 (rhel server access to data center nfs storage systems) > (10.0.0.0/24) > > eth0.159 (openstack management network) (10.255.254.0/24) > > > > eth1 (vm access to datacenter and internet, vm access host to host) > > > > We are using metadata to configure vms therefore we were told we must > use quantum L3 agent. > > > > Is there a way to configure quantum and ovs to use eth1 for both br-int > as well as br-ex traffic. Yes, its easy to use the same network interface for both your data and external networks. The key is to to use a provider network for your external network rather than using br-ex. Disable use of br-ex by setting the following in /etc/quantum/l3_agent.ini and restarting the l3-agent: external_network_bridge = Which you can do with: openstack-config --set /etc/quantum/l3_agent.ini DEFAULT external_network_bridge "" Then, decide on a name for the physical network that will be accessed via eth1. We'll call it "physnet1" here. When you create your external network, pass provider attributes describing the external network (here we are using VLAN 123 for the external network): quantum net-create MyExternalNet --router:external True --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 123 You can also specify a flat (i.e. untagged) network, with "... provider:network_type flat --provider:physical_network physnet1". In fact, I'd don't recommend using br-ex for your external network, even if its the only network on the network interface. Then create the external subnet with something like: quantum subnet-create --gateway 10.1.1.254 --allocation-pool start=10.1.1.100,end=10.1.1.110 --disable-dhcp MyExternalNet 10.1.1.0/24 and create and configure your router. > > All the examples I have come across are like the following where the > ports added to the bridges refer directly to an Ethernet interface, not > an Ethernet vlan interface on the nic such as eth1.100 or eth1.200 etc: > > > > ovs-vsctl add-br br-int > > ovs-vsctl add-port br-int eth0 You should definitely never add any physical interface directly to br-int. 
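A quick way to check for this (a sketch; substitute the bridge and NIC names your deployment actually uses) is to list what is attached to each OVS bridge and detach any physical NIC that ended up on br-int:

ovs-vsctl show
ovs-vsctl list-ports br-int
# if a physical interface such as eth1 appears on br-int, remove it before attaching it to the proper physical bridge
ovs-vsctl del-port br-int eth1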
Please let me know where you are seeing examples of this. What you need to do is: ovs-vsctl add-br br-eth1 ovs-vsctl add-port br-eth1 eth1 Finally, make make sure physnet1 is listed in network_vlan_ranges in /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini on the controller and is mapped to the appropriate network interface via bridge_mappings on each compute and networking node. Your ovs_quantum_plugin.ini on a combined controller/networking node might look something like: tenant_network_type = vlan network_vlan_ranges = physnet1:1000:1999 bridge_mappings = physnet1:br-eth1 Don't forget to restart daemons after making changes to their configurations. > > > > ovs-vsctl add-br br-ex > > ovs-vsctl add-port br-ex eth1 If using a provider network as your external network, don't create br-ex or add eth1 to any bridge other than br-eth1. Hope this helps, -Bob > > > > > > > > Thank you > > > > grega > > > > > > J. Gregory Andrus > Senior Staff Systems Administrator > Lockheed Martin IS&GS > Bldg D - Rm 13D31 > PO Box 61511 > King of Prussia, Pa. 19406-0911 > (610) 531-3666 (v) > gregory.andrus at lmco.com > > > > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > From rich.minton at lmco.com Thu May 23 17:11:05 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 23 May 2013 17:11:05 +0000 Subject: [rhos-list] Red Hat Linux VM freezes. Message-ID: I'm seeing a strange occurrence since I moved to Folsom with Quantum networking. I launch a red hat 6 VM, I can log into it using the console just fine, run commands, etc. When I open an ssh session to the VM using putty I can log in ok, I can run a couple of commands, but when I run a command such as "ls -al /etc" it looks like it's trying to return the result, I might get a couple lines or no result, the VM locks up. No manner of CTRL-C or D or anything can get me out of it. I can still log into the console but I don't see the command I ran in a process list so that leads me to believe that the process finished. I have created a new image with a fresh install and I didn't update the OS thinking that a recent update is causing this... same thing. My previous cluster with Folsom and Nova-network and using the same image files works fine. Any ideas? Thank you, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich.minton at lmco.com Thu May 23 18:28:44 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 23 May 2013 18:28:44 +0000 Subject: [rhos-list] Libvirt Error (warning). Message-ID: Does anyone know what this means and how to fix it... if it needs to be fixed? These are from "libvirtd.log" warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges I also get these errors: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "tap6197909b-f6" not in key map error : virNetDevGetIndex:653 : Unable to get index for interface tap6197909b-f6: No such device Thank you, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pmyers at redhat.com Thu May 23 18:57:33 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 23 May 2013 14:57:33 -0400 Subject: [rhos-list] Red Hat Linux VM freezes. In-Reply-To: References: Message-ID: <519E669D.3050704@redhat.com> On 05/23/2013 01:11 PM, Minton, Rich wrote: > I?m seeing a strange occurrence since I moved to Folsom with Quantum > networking. I launch a red hat 6 VM, I can log into it using the console > just fine, run commands, etc. When I open an ssh session to the VM using > putty I can log in ok, I can run a couple of commands, but when I run a > command such as ?ls ?al /etc? it looks like it?s trying to return the > result, I might get a couple lines or no result, the VM locks up. No > manner of CTRL-C or D or anything can get me out of it. I can still log > into the console but I don?t see the command I ran in a process list so > that leads me to believe that the process finished. > > > > I have created a new image with a fresh install and I didn?t update the > OS thinking that a recent update is causing this? same thing. My > previous cluster with Folsom and Nova-network and using the same image > files works fine. Any logs you can send would be useful. Can you try doing something like from the console trying to ssh somewhere else and seeing if you see the same issue? Or try from the console doing a wget of a semi-large file? Brent, can you take a look at this in more detail? Perry From pmyers at redhat.com Thu May 23 18:58:21 2013 From: pmyers at redhat.com (Perry Myers) Date: Thu, 23 May 2013 14:58:21 -0400 Subject: [rhos-list] Libvirt Error (warning). In-Reply-To: References: Message-ID: <519E66CD.4090304@redhat.com> On 05/23/2013 02:28 PM, Minton, Rich wrote: > Does anyone know what this means and how to fix it? if it needs to be > fixed? These are from ?libvirtd.log? > > > > warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' > uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges > > > > I also get these errors: > > > > error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname > "tap6197909b-f6" not in key map > > error : virNetDevGetIndex:653 : Unable to get index for interface > tap6197909b-f6: No such device > Dan or Dave, can you shed light on this? Perry From beagles at redhat.com Thu May 23 19:46:20 2013 From: beagles at redhat.com (Brent Eagles) Date: Thu, 23 May 2013 17:16:20 -0230 Subject: [rhos-list] Red Hat Linux VM freezes. In-Reply-To: <519E669D.3050704@redhat.com> References: <519E669D.3050704@redhat.com> Message-ID: <519E720C.3070701@redhat.com> Hi all, On 23/05/13 04:27 PM, Perry Myers wrote: > On 05/23/2013 01:11 PM, Minton, Rich wrote: >> I?m seeing a strange occurrence since I moved to Folsom with Quantum >> networking. I launch a red hat 6 VM, I can log into it using the console >> just fine, run commands, etc. When I open an ssh session to the VM using >> putty I can log in ok, I can run a couple of commands, but when I run a >> command such as ?ls ?al /etc? it looks like it?s trying to return the >> result, I might get a couple lines or no result, the VM locks up. No >> manner of CTRL-C or D or anything can get me out of it. I can still log >> into the console but I don?t see the command I ran in a process list so >> that leads me to believe that the process finished. >> >> >> >> I have created a new image with a fresh install and I didn?t update the >> OS thinking that a recent update is causing this? same thing. 
My >> previous cluster with Folsom and Nova-network and using the same image >> files works fine. > > Any logs you can send would be useful. Can you try doing something like > from the console trying to ssh somewhere else and seeing if you see the > same issue? Or try from the console doing a wget of a semi-large file? > > Brent, can you take a look at this in more detail? > > Perry Sure. Rich, were you able to get any log info from the hanging VM? Also can you send along the output of "ifconfig eth0" from your VM (or whatever the network interface is called). Cheers, Brent From rich.minton at lmco.com Thu May 23 20:13:43 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Thu, 23 May 2013 20:13:43 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <519E720C.3070701@redhat.com> References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> Message-ID: Here is the ifconfig: eth0 Link encap:Ethernet HWaddr FA:16:3E:8F:68:5C inet addr:10.0.56.75 Bcast:10.0.63.255 Mask:255.255.248.0 inet6 addr: fe80::f816:3eff:fe8f:685c/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1000 errors:0 dropped:0 overruns:0 frame:0 TX packets:951 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:99377 (97.0 KiB) TX bytes:101606 (99.2 KiB) Also, I tried to "ls" an NFS mount and it hung up and after a while returned: INFO: task cp:1734 blocked for more than 120 seconds. "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Having difficulty getting logs to you. Will keep trying. Any suggestion on what logs would be useful? Rick -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Thursday, May 23, 2013 3:46 PM To: Perry Myers Cc: Minton, Rich; rhos-list at redhat.com Subject: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. Hi all, On 23/05/13 04:27 PM, Perry Myers wrote: > On 05/23/2013 01:11 PM, Minton, Rich wrote: >> I'm seeing a strange occurrence since I moved to Folsom with Quantum >> networking. I launch a red hat 6 VM, I can log into it using the >> console just fine, run commands, etc. When I open an ssh session to >> the VM using putty I can log in ok, I can run a couple of commands, >> but when I run a command such as "ls -al /etc" it looks like it's >> trying to return the result, I might get a couple lines or no result, >> the VM locks up. No manner of CTRL-C or D or anything can get me out >> of it. I can still log into the console but I don't see the command I >> ran in a process list so that leads me to believe that the process finished. >> >> >> >> I have created a new image with a fresh install and I didn't update >> the OS thinking that a recent update is causing this... same thing. My >> previous cluster with Folsom and Nova-network and using the same >> image files works fine. > > Any logs you can send would be useful. Can you try doing something > like from the console trying to ssh somewhere else and seeing if you > see the same issue? Or try from the console doing a wget of a semi-large file? > > Brent, can you take a look at this in more detail? > > Perry Sure. Rich, were you able to get any log info from the hanging VM? Also can you send along the output of "ifconfig eth0" from your VM (or whatever the network interface is called). Cheers, Brent From dallan at redhat.com Thu May 23 20:50:43 2013 From: dallan at redhat.com (Dave Allan) Date: Thu, 23 May 2013 16:50:43 -0400 Subject: [rhos-list] Libvirt Error (warning). 
In-Reply-To: <519E66CD.4090304@redhat.com> References: <519E66CD.4090304@redhat.com> Message-ID: <20130523205043.GU1997@redhat.com> On Thu, May 23, 2013 at 02:58:21PM -0400, Perry Myers wrote: > On 05/23/2013 02:28 PM, Minton, Rich wrote: > > Does anyone know what this means and how to fix it? if it needs to be > > fixed? These are from ?libvirtd.log? > > > > > > > > warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' > > uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges > > > > > > > > I also get these errors: > > > > > > > > error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname > > "tap6197909b-f6" not in key map > > > > error : virNetDevGetIndex:653 : Unable to get index for interface > > tap6197909b-f6: No such device > > > > Dan or Dave, can you shed light on this? > > Perry A quick look at the code suggests it should be harmless. Laine, can you give a deeper answer on what causes it? Dave From beagles at redhat.com Thu May 23 21:49:49 2013 From: beagles at redhat.com (Brent Eagles) Date: Thu, 23 May 2013 19:19:49 -0230 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> Message-ID: <519E8EFD.30402@redhat.com> On 23/05/13 05:43 PM, Minton, Rich wrote: > Here is the ifconfig: > > eth0 Link encap:Ethernet HWaddr FA:16:3E:8F:68:5C > inet addr:10.0.56.75 Bcast:10.0.63.255 Mask:255.255.248.0 > inet6 addr: fe80::f816:3eff:fe8f:685c/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:1000 errors:0 dropped:0 overruns:0 frame:0 > TX packets:951 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:1000 > RX bytes:99377 (97.0 KiB) TX bytes:101606 (99.2 KiB) > > Also, I tried to "ls" an NFS mount and it hung up and after a while returned: > > INFO: task cp:1734 blocked for more than 120 seconds. > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. > > Having difficulty getting logs to you. Will keep trying. Any suggestion on what logs would be useful? > > Rick Actually... two things before we go too far: 1.) Can you try the same thing with an SSH client other than PuTTy? 2.) What version of PuTTy are you using? Cheers, Brent From dallan at redhat.com Thu May 23 23:09:15 2013 From: dallan at redhat.com (Dave Allan) Date: Thu, 23 May 2013 19:09:15 -0400 Subject: [rhos-list] Libvirt Error (warning). In-Reply-To: <20130523205043.GU1997@redhat.com> References: <519E66CD.4090304@redhat.com> <20130523205043.GU1997@redhat.com> Message-ID: <20130523230915.GW1997@redhat.com> On Thu, May 23, 2013 at 04:50:43PM -0400, Dave Allan wrote: > On Thu, May 23, 2013 at 02:58:21PM -0400, Perry Myers wrote: > > On 05/23/2013 02:28 PM, Minton, Rich wrote: > > > Does anyone know what this means and how to fix it? if it needs to be > > > fixed? These are from ?libvirtd.log? > > > > > > > > > > > > warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' > > > uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges > > > > > > > > > > > > I also get these errors: > > > > > > > > > > > > error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname > > > "tap6197909b-f6" not in key map > > > > > > error : virNetDevGetIndex:653 : Unable to get index for interface > > > tap6197909b-f6: No such device > > > > > > > Dan or Dave, can you shed light on this? > > > > Perry > > A quick look at the code suggests it should be harmless. Laine, can > you give a deeper answer on what causes it? 
Ok, confirmed harmless, and Stefan Berger posted patches to remove those messages: https://www.redhat.com/archives/libvir-list/2013-April/msg00953.html Dave From lstump at redhat.com Thu May 23 23:13:25 2013 From: lstump at redhat.com (Laine Stump) Date: Thu, 23 May 2013 19:13:25 -0400 Subject: [rhos-list] Libvirt Error (warning). In-Reply-To: <20130523205043.GU1997@redhat.com> References: <519E66CD.4090304@redhat.com> <20130523205043.GU1997@redhat.com> Message-ID: <519EA295.3050409@redhat.com> On 05/23/2013 04:50 PM, Dave Allan wrote: > On Thu, May 23, 2013 at 02:58:21PM -0400, Perry Myers wrote: >> On 05/23/2013 02:28 PM, Minton, Rich wrote: >>> Does anyone know what this means and how to fix it? if it needs to be >>> fixed? These are from ?libvirtd.log? >>> >>> >>> >>> warning : qemuDomainObjTaint:1377 : Domain id=6 name='instance-000000f8' >>> uuid=d5d6e9a4-10d0-41d1-b9ec-4d331ed70478 is tainted: high-privileges >>> >>> >>> >>> I also get these errors: >>> >>> >>> >>> error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname >>> "tap6197909b-f6" not in key map >>> >>> error : virNetDevGetIndex:653 : Unable to get index for interface >>> tap6197909b-f6: No such device >>> >> Dan or Dave, can you shed light on this? >> >> Perry > A quick look at the code suggests it should be harmless. Laine, can > you give a deeper answer on what causes it? It's a harmless error during teardown of a nwfilter rule - the tap device used for dhcp snooping has already been removed, and part of the teardown tries to use it. However, the code does do the right thing if the tap device no longer exists, it just happens to needlessly complain in the process. There is actually a patch to eliminate this complaint, sent to the upstream libvirt list last month by the main nwfilter author (Stefan Berger from IBM). I ACKed the patch soon after he sent it, but for some reason he didn't push it. I asked him in IRC awhile ago if he wanted me to push it for him, but he logged off before replying, so I'll send him mail about it. From rich.minton at lmco.com Fri May 24 12:56:35 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 24 May 2013 12:56:35 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <519E8EFD.30402@redhat.com> References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> Message-ID: I'm using putty .62. I get the same problem when I SSH from my Linux host to my Linux VM. It happens consistently if I do an "elel /etc" or some other long directory list. One thing is I used to have a similar problem with the automounter and NFS mounts to Isilon storage. I was able to resolve by having my storage name in DNS and using FQDN in my mounts which I now use in all my mounts. I do use the NFS driver with Cinder and all my instances and images reside on Isilon storage. I mount /var/lib/glance/images and /var/lib/nova/instances to NFS exports on the Isilon. Thanks, Rick -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Thursday, May 23, 2013 5:50 PM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. 
On 23/05/13 05:43 PM, Minton, Rich wrote: > Here is the ifconfig: > > eth0 Link encap:Ethernet HWaddr FA:16:3E:8F:68:5C > inet addr:10.0.56.75 Bcast:10.0.63.255 Mask:255.255.248.0 > inet6 addr: fe80::f816:3eff:fe8f:685c/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:1000 errors:0 dropped:0 overruns:0 frame:0 > TX packets:951 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:1000 > RX bytes:99377 (97.0 KiB) TX bytes:101606 (99.2 KiB) > > Also, I tried to "ls" an NFS mount and it hung up and after a while returned: > > INFO: task cp:1734 blocked for more than 120 seconds. > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. > > Having difficulty getting logs to you. Will keep trying. Any suggestion on what logs would be useful? > > Rick Actually... two things before we go too far: 1.) Can you try the same thing with an SSH client other than PuTTy? 2.) What version of PuTTy are you using? Cheers, Brent From rich.minton at lmco.com Fri May 24 13:19:45 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 24 May 2013 13:19:45 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <519E8EFD.30402@redhat.com> References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> Message-ID: One thing to note... I believe all this started when I ran a yum update and it updated libvirt to version libvirt-0.10.2-18.el6_4.5.x86_64 and the kernel to kernel-2.6.32-358.6.2.el6.x86_64. Is there a way to roll these back to the previous version? -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Thursday, May 23, 2013 5:50 PM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. On 23/05/13 05:43 PM, Minton, Rich wrote: > Here is the ifconfig: > > eth0 Link encap:Ethernet HWaddr FA:16:3E:8F:68:5C > inet addr:10.0.56.75 Bcast:10.0.63.255 Mask:255.255.248.0 > inet6 addr: fe80::f816:3eff:fe8f:685c/64 Scope:Link > UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 > RX packets:1000 errors:0 dropped:0 overruns:0 frame:0 > TX packets:951 errors:0 dropped:0 overruns:0 carrier:0 > collisions:0 txqueuelen:1000 > RX bytes:99377 (97.0 KiB) TX bytes:101606 (99.2 KiB) > > Also, I tried to "ls" an NFS mount and it hung up and after a while returned: > > INFO: task cp:1734 blocked for more than 120 seconds. > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. > > Having difficulty getting logs to you. Will keep trying. Any suggestion on what logs would be useful? > > Rick Actually... two things before we go too far: 1.) Can you try the same thing with an SSH client other than PuTTy? 2.) What version of PuTTy are you using? Cheers, Brent From beagles at redhat.com Fri May 24 14:30:21 2013 From: beagles at redhat.com (Brent Eagles) Date: Fri, 24 May 2013 12:00:21 -0230 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> Message-ID: <519F797D.2020006@redhat.com> On 05/24/2013 10:49 AM, Minton, Rich wrote: > One thing to note... I believe all this started when I ran a yum update and it updated libvirt to version libvirt-0.10.2-18.el6_4.5.x86_64 and the kernel to kernel-2.6.32-358.6.2.el6.x86_64. > > Is there a way to roll these back to the previous version? > Interesting. I'm not sure about rolling back the kernel though. 
Is that not the version with the network namespace patches? FWIW, you can rollback with yum. Take a look at "yum history" and related functions (if the man page doesn't cover it, there are some good googl'able examples). You asked earlier about which log files might be relevant. Let's start with: - the contents of /var/log/quantum and /var/log/openvswitch - the output of dmesg - the output of ifconfig (don't specify an interface, let's get them all) Cheers, Brent From rich.minton at lmco.com Fri May 24 21:36:06 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Fri, 24 May 2013 21:36:06 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <519F797D.2020006@redhat.com> References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> Message-ID: Rolling back the kernel and libvirt broke everything and I had reinstall the latest versions. It looks like it might be a routing problem. Our quantum router gateway IP is the same as our physical router IP and we think they are fighting over each other. I'll get my network engineer to look at it on Monday. I'll let you know the outcome. Rick -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Friday, May 24, 2013 10:30 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. On 05/24/2013 10:49 AM, Minton, Rich wrote: > One thing to note... I believe all this started when I ran a yum update and it updated libvirt to version libvirt-0.10.2-18.el6_4.5.x86_64 and the kernel to kernel-2.6.32-358.6.2.el6.x86_64. > > Is there a way to roll these back to the previous version? > Interesting. I'm not sure about rolling back the kernel though. Is that not the version with the network namespace patches? FWIW, you can rollback with yum. Take a look at "yum history" and related functions (if the man page doesn't cover it, there are some good googl'able examples). You asked earlier about which log files might be relevant. Let's start with: - the contents of /var/log/quantum and /var/log/openvswitch - the output of dmesg - the output of ifconfig (don't specify an interface, let's get them all) Cheers, Brent From beagles at redhat.com Fri May 24 21:58:11 2013 From: beagles at redhat.com (Brent Eagles) Date: Fri, 24 May 2013 19:28:11 -0230 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> Message-ID: <519FE273.10300@redhat.com> Hi, On 05/24/2013 07:06 PM, Minton, Rich wrote: > Rolling back the kernel and libvirt broke everything and I had reinstall the latest versions. Yes, that is not surprising. I probably should have been more clear that would not be a good thing to do. "Normal" quantum functionality depends on the network namespaces feature so compromising that would only complicate things. > It looks like it might be a routing problem. Our quantum router gateway IP is the same as our physical router IP and we think they are fighting over each other. I'll get my network engineer to look at it on Monday. > > I'll let you know the outcome. > > Rick I don't know if it will have any affect whatsover, but you might consider changing the sysctl variable "net.ipv4.conf.default.rp_filter" to 0 (it is 1 by default in RHEL). 
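If you want to try that, a minimal sketch (only net.ipv4.conf.default.rp_filter is suggested above; the conf.all key is included here as an extra, since conf.default only affects interfaces created after the change):

sysctl -w net.ipv4.conf.default.rp_filter=0
sysctl -w net.ipv4.conf.all.rp_filter=0
# to persist across reboots, add the same keys to /etc/sysctl.conf and run "sysctl -p"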
I doubt whether it will change things here, but when "routing" comes up in multi-interface environment, I always give it a shot. Cheers, Brent From mwaite at redhat.com Fri May 24 23:03:59 2013 From: mwaite at redhat.com (Michael Waite) Date: Fri, 24 May 2013 19:03:59 -0400 (EDT) Subject: [rhos-list] =?utf-8?q?EXTERNAL=3A_Re=3A__Red_Hat_Linux_VM_freezes?= =?utf-8?q?=2E?= In-Reply-To: References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> Message-ID: <1997916833.6187603.1369436639759.JavaMail.root@zmail14.collab.prod.int.phx2.redhat.com> Just an FYI Rich that we are officially closed on Monday..... sent from my phone. -----Original Message----- From: Minton, Rich [rich.minton at lmco.com] Received: Friday, 24 May 2013, 5:36pm To: Brent Eagles [beagles at redhat.com] CC: rhos-list at redhat.com [rhos-list at redhat.com] Subject: Re: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. Rolling back the kernel and libvirt broke everything and I had reinstall the latest versions. It looks like it might be a routing problem. Our quantum router gateway IP is the same as our physical router IP and we think they are fighting over each other. I'll get my network engineer to look at it on Monday. I'll let you know the outcome. Rick -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Friday, May 24, 2013 10:30 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. On 05/24/2013 10:49 AM, Minton, Rich wrote: > One thing to note... I believe all this started when I ran a yum update and it updated libvirt to version libvirt-0.10.2-18.el6_4.5.x86_64 and the kernel to kernel-2.6.32-358.6.2.el6.x86_64. > > Is there a way to roll these back to the previous version? > Interesting. I'm not sure about rolling back the kernel though. Is that not the version with the network namespace patches? FWIW, you can rollback with yum. Take a look at "yum history" and related functions (if the man page doesn't cover it, there are some good googl'able examples). You asked earlier about which log files might be relevant. Let's start with: - the contents of /var/log/quantum and /var/log/openvswitch - the output of dmesg - the output of ifconfig (don't specify an interface, let's get them all) Cheers, Brent _______________________________________________ rhos-list mailing list rhos-list at redhat.com https://www.redhat.com/mailman/listinfo/rhos-list From ronac07 at gmail.com Sat May 25 02:05:02 2013 From: ronac07 at gmail.com (Ronald Cronenwett) Date: Fri, 24 May 2013 22:05:02 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: <20130509181644.GN4016@x200.localdomain> References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> Message-ID: Chris, I noticed iproute has been updated in the RDO grizzly repositories to the version in your link. There was also a kernel update. Does this mean namespaces can now be used for Quantum? 
I see "ip netns list" does not give an error but "ip netns add test" results in [root at os64-1 ~]# ip netns add test Bind /proc/self/ns/net -> /var/run/netns/test failed: No such file or directory I have the following loaded: iproute-2.6.32-23.el6_4.netns.1.x86_64 kernel-2.6.32-358.6.2.el6.x86_64 Thanks Ron Cronenwett On Thu, May 9, 2013 at 2:16 PM, Chris Wright wrote: > * Paul Robert Marino (prmarino1 at gmail.com) wrote: > > # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 > > CONFIG_NAMESPACES=y > > CONFIG_NET_NS=y > > That is the right config items enabled, however, there are > internal implementation details that are missing. So what > is in 2.6.32-358.6.1.el6.x86_64 is not sufficient. > > > It looks to me as though its already enabled in the kernel compile > > configuration, and I thought supporting it was part of the original plan > > for RHEL 6.4 specifically because OpenStack needs it. > > RHEL 6.4 has already shipped and does not support it. > > If you are interested in helping testing the feature, please let me > know. > > I have packages here for iproute2: > > http://et.redhat.com/~chrisw/rhel6/6.4/bz869004/iproute/netns.1/ > > And will try to push out test kernel asap. > > thanks, > -chris > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From unicell at gmail.com Sat May 25 02:30:42 2013 From: unicell at gmail.com (unicell) Date: Sat, 25 May 2013 10:30:42 +0800 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> Message-ID: Hi Ronald, Some information for your reference, cause I'm also met the same issue. There're two parts of the story. For the iproute part, the version in the RDO grizzly repo has already netns support. However, for the Linux kernel portion, it need namespace file descriptor feature (/proc/[pid]/ns/net stuff) to make 'ip netns xxx' work. And this kernel feature so far is not included in any of RHEL 6.3/6.4 kernel releases. Best Regards, -- Qiu Yu http://www.unicell.info On Sat, May 25, 2013 at 10:05 AM, Ronald Cronenwett wrote: > Chris, > > I noticed iproute has been updated in the RDO grizzly repositories to the > version in your link. There was also a kernel update. Does this mean > namespaces can now be used for Quantum? I see "ip netns list" does not give > an error but "ip netns add test" results in > > [root at os64-1 ~]# ip netns add test > Bind /proc/self/ns/net -> /var/run/netns/test failed: No such file or > directory > > I have the following loaded: > > iproute-2.6.32-23.el6_4.netns.1.x86_64 > kernel-2.6.32-358.6.2.el6.x86_64 > > Thanks > > Ron Cronenwett > > > On Thu, May 9, 2013 at 2:16 PM, Chris Wright wrote: > >> * Paul Robert Marino (prmarino1 at gmail.com) wrote: >> > # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 >> > CONFIG_NAMESPACES=y >> > CONFIG_NET_NS=y >> >> That is the right config items enabled, however, there are >> internal implementation details that are missing. So what >> is in 2.6.32-358.6.1.el6.x86_64 is not sufficient. >> >> > It looks to me as though its already enabled in the kernel compile >> > configuration, and I thought supporting it was part of the original plan >> > for RHEL 6.4 specifically because OpenStack needs it. 
>> >> RHEL 6.4 has already shipped and does not support it. >> >> If you are interested in helping testing the feature, please let me >> know. >> >> I have packages here for iproute2: >> >> http://et.redhat.com/~chrisw/rhel6/6.4/bz869004/iproute/netns.1/ >> >> And will try to push out test kernel asap. >> >> thanks, >> -chris >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > -------------- next part -------------- An HTML attachment was scrubbed... URL: From unicell at gmail.com Sun May 26 09:34:40 2013 From: unicell at gmail.com (Qiu Yu) Date: Sun, 26 May 2013 17:34:40 +0800 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> Message-ID: I've built a kernel version based on RedHat 2.6.32-358.6.2, with necessary patches backported to make iproute2 works with namespace. Please find the information below, and bare in mind that it is for testing purpose only. Patches https://github.com/unicell/redpatch/commits/rhel-2.6.32-358.6.2.ns.el6 Prebuilt binaries http://trilocell.info/rpms/ Best Regards, -- Qiu Yu On Sat, May 25, 2013 at 10:30 AM, unicell wrote: > Hi Ronald, > > Some information for your reference, cause I'm also met the same issue. > There're two parts of the story. > > For the iproute part, the version in the RDO grizzly repo has already > netns support. > > However, for the Linux kernel portion, it need namespace file descriptor > feature (/proc/[pid]/ns/net stuff) to make 'ip netns xxx' work. And this > kernel feature so far is not included in any of RHEL 6.3/6.4 kernel > releases. > > Best Regards, > -- > Qiu Yu > http://www.unicell.info > > > On Sat, May 25, 2013 at 10:05 AM, Ronald Cronenwett wrote: > >> Chris, >> >> I noticed iproute has been updated in the RDO grizzly repositories to the >> version in your link. There was also a kernel update. Does this mean >> namespaces can now be used for Quantum? I see "ip netns list" does not give >> an error but "ip netns add test" results in >> >> [root at os64-1 ~]# ip netns add test >> Bind /proc/self/ns/net -> /var/run/netns/test failed: No such file or >> directory >> >> I have the following loaded: >> >> iproute-2.6.32-23.el6_4.netns.1.x86_64 >> kernel-2.6.32-358.6.2.el6.x86_64 >> >> Thanks >> >> Ron Cronenwett >> >> >> On Thu, May 9, 2013 at 2:16 PM, Chris Wright wrote: >> >>> * Paul Robert Marino (prmarino1 at gmail.com) wrote: >>> > # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 >>> > CONFIG_NAMESPACES=y >>> > CONFIG_NET_NS=y >>> >>> That is the right config items enabled, however, there are >>> internal implementation details that are missing. So what >>> is in 2.6.32-358.6.1.el6.x86_64 is not sufficient. >>> >>> > It looks to me as though its already enabled in the kernel compile >>> > configuration, and I thought supporting it was part of the original >>> plan >>> > for RHEL 6.4 specifically because OpenStack needs it. >>> >>> RHEL 6.4 has already shipped and does not support it. >>> >>> If you are interested in helping testing the feature, please let me >>> know. 
>>> >>> I have packages here for iproute2: >>> >>> http://et.redhat.com/~chrisw/rhel6/6.4/bz869004/iproute/netns.1/ >>> >>> And will try to push out test kernel asap. >>> >>> thanks, >>> -chris >>> >>> _______________________________________________ >>> rhos-list mailing list >>> rhos-list at redhat.com >>> https://www.redhat.com/mailman/listinfo/rhos-list >>> >> >> >> _______________________________________________ >> rhos-list mailing list >> rhos-list at redhat.com >> https://www.redhat.com/mailman/listinfo/rhos-list >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ronac07 at gmail.com Sun May 26 10:52:29 2013 From: ronac07 at gmail.com (ronac07 at gmail.com) Date: Sun, 26 May 2013 06:52:29 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> Message-ID: <15554A73-0676-406D-A66C-8277845025CB@gmail.com> Thanks Qiu. I'll try your patched kernel on a test system. Ron Sent from my iPad On May 26, 2013, at 5:34 AM, Qiu Yu wrote: > I've built a kernel version based on RedHat 2.6.32-358.6.2, with necessary patches backported to make iproute2 works with namespace. > > Please find the information below, and bare in mind that it is for testing purpose only. > > Patches > https://github.com/unicell/redpatch/commits/rhel-2.6.32-358.6.2.ns.el6 > > Prebuilt binaries > http://trilocell.info/rpms/ > > Best Regards, > -- > Qiu Yu > > > On Sat, May 25, 2013 at 10:30 AM, unicell wrote: > Hi Ronald, > > Some information for your reference, cause I'm also met the same issue. There're two parts of the story. > > For the iproute part, the version in the RDO grizzly repo has already netns support. > > However, for the Linux kernel portion, it need namespace file descriptor feature (/proc/[pid]/ns/net stuff) to make 'ip netns xxx' work. And this kernel feature so far is not included in any of RHEL 6.3/6.4 kernel releases. > > Best Regards, > -- > Qiu Yu > http://www.unicell.info > > > On Sat, May 25, 2013 at 10:05 AM, Ronald Cronenwett wrote: > Chris, > > I noticed iproute has been updated in the RDO grizzly repositories to the version in your link. There was also a kernel update. Does this mean namespaces can now be used for Quantum? I see "ip netns list" does not give an error but "ip netns add test" results in > > [root at os64-1 ~]# ip netns add test > Bind /proc/self/ns/net -> /var/run/netns/test failed: No such file or directory > > I have the following loaded: > > iproute-2.6.32-23.el6_4.netns.1.x86_64 > kernel-2.6.32-358.6.2.el6.x86_64 > > Thanks > > Ron Cronenwett > > > On Thu, May 9, 2013 at 2:16 PM, Chris Wright wrote: > * Paul Robert Marino (prmarino1 at gmail.com) wrote: > > # grep -P '(NET_NS|NAMESPACE)' /boot/config-2.6.32-358.6.1.el6.x86_64 > > CONFIG_NAMESPACES=y > > CONFIG_NET_NS=y > > That is the right config items enabled, however, there are > internal implementation details that are missing. So what > is in 2.6.32-358.6.1.el6.x86_64 is not sufficient. > > > It looks to me as though its already enabled in the kernel compile > > configuration, and I thought supporting it was part of the original plan > > for RHEL 6.4 specifically because OpenStack needs it. > > RHEL 6.4 has already shipped and does not support it. > > If you are interested in helping testing the feature, please let me > know. 
> > I have packages here for iproute2: > > http://et.redhat.com/~chrisw/rhel6/6.4/bz869004/iproute/netns.1/ > > And will try to push out test kernel asap. > > thanks, > -chris > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > > _______________________________________________ > rhos-list mailing list > rhos-list at redhat.com > https://www.redhat.com/mailman/listinfo/rhos-list > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Sun May 26 14:13:07 2013 From: pmyers at redhat.com (Perry Myers) Date: Sun, 26 May 2013 10:13:07 -0400 Subject: [rhos-list] I think I found something missing in iproute2 In-Reply-To: <15554A73-0676-406D-A66C-8277845025CB@gmail.com> References: <518BD6F2.6090600@redhat.com> <20130509174502.GM4016@x200.localdomain> <20130509181644.GN4016@x200.localdomain> <15554A73-0676-406D-A66C-8277845025CB@gmail.com> Message-ID: <51A21873.4080509@redhat.com> Just so folks know... We're working to get a kernel out on RDO based on the latest RHEL 6.4.z kernel that contains the netns functionality. Hopefully in the next week or two we should be able to put this on the RDO repos. We are just working out the minimal patch set required to backport the functionality from upstream into the RHEL 6 kernel line, and validating that netns works well enough to satisfy the use cases that OpenStack Networking needs it for. More info as we get it Cheers, Perry From dneary at redhat.com Mon May 27 13:29:41 2013 From: dneary at redhat.com (Dave Neary) Date: Mon, 27 May 2013 15:29:41 +0200 Subject: [rhos-list] Webinar: "RDO: An OpenStack community project" Message-ID: <51A35FC5.2050603@redhat.com> Hi everyone, This Wednesday, May 29th, Keith Basil, OpenStack Product Manager with Red Hat, and myself will be delivering a webinar on OpenStack, the RDO project, and Red Hat's involvement in the OpenStack project. You are all welcome to register to join here: http://www.redhat.com/about/events-webinars/webinars/203-05-29-rdo-openstack-community Thanks! Dave. -- Dave Neary - Community Action and Impact Open Source and Standards, Red Hat - http://community.redhat.com Ph: +33 9 50 71 55 62 / Cell: +33 6 77 01 92 13 From nicolas.vogel at heig-vd.ch Tue May 28 12:21:00 2013 From: nicolas.vogel at heig-vd.ch (Vogel Nicolas) Date: Tue, 28 May 2013 12:21:00 +0000 Subject: [rhos-list] "UnsupportedRpcVersion" on new compute node Message-ID: Hi, I just successfully installed a new compute node like described on the RDO website. The new compute node isn?t recognized by the controller, because the openstack-nova-compute service is unable to start. The ?openstack-status? command shows me the service as Dead. The compute logs say that there is a problem with the RPC version. But why is the version on the compute node different from the controller one? I already installed another compute node two weeks ago and I had no problem. Here are the compute logs: [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, version, method, **args)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 138, in dispatch\n raise rpc_common.UnsupportedRpcVersion(version=version)\n', u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by this endpoint.\n']. 
2013-05-27 11:43:23.247 1775 CRITICAL nova [-] Remote error: UnsupportedRpcVersion Specified RPC version, 1.47, not supported by this endpoint. [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, version, method, **args)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 138, in dispatch\n raise rpc_common.UnsupportedRpcVersion(version=version)\n', u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by this endpoint.\n']. 2013-05-28 13:49:11.518 19840 CRITICAL nova [-] Remote error: UnsupportedRpcVersion Specified RPC version, 1.47, not supported by this endpoint. [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, version, method, **args)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 138, in dispatch\n raise rpc_common.UnsupportedRpcVersion(version=version)\n', u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by this endpoint.\n']. If somebody knows what to do, please answer. Thanks, Nicolas. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rbryant at redhat.com Tue May 28 12:39:54 2013 From: rbryant at redhat.com (Russell Bryant) Date: Tue, 28 May 2013 08:39:54 -0400 Subject: [rhos-list] "UnsupportedRpcVersion" on new compute node In-Reply-To: References: Message-ID: <51A4A59A.5090105@redhat.com> On 05/28/2013 08:21 AM, Vogel Nicolas wrote: > Hi, > > > > I just successfully installed a new compute node like described on the > RDO website. > > The new compute node isn?t recognized by the controller, because the > openstack-nova-compute service is unable to start. > > The ?openstack-status? command shows me the service as Dead. > > The compute logs say that there is a problem with the RPC version. But > why is the version on the compute node different from the controller one? > > I already installed another compute node two weeks ago and I had no problem. > > > > Here are the compute logs: > > [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", > line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, > version, method, **args)\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", > line 138, in dispatch\n raise > rpc_common.UnsupportedRpcVersion(version=version)\n', > u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by > this endpoint.\n']. > > 2013-05-27 11:43:23.247 1775 CRITICAL nova [-] Remote error: > UnsupportedRpcVersion Specified RPC version, 1.47, not supported by this > endpoint. > > [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", > line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, > version, method, **args)\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", > line 138, in dispatch\n raise > rpc_common.UnsupportedRpcVersion(version=version)\n', > u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by > this endpoint.\n']. > > 2013-05-28 13:49:11.518 19840 CRITICAL nova [-] Remote error: > UnsupportedRpcVersion Specified RPC version, 1.47, not supported by this > endpoint. 
> > [u'Traceback (most recent call last):\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", > line 430, in _process_data\n rval = self.proxy.dispatch(ctxt, > version, method, **args)\n', u' File > "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", > line 138, in dispatch\n raise > rpc_common.UnsupportedRpcVersion(version=version)\n', > u'UnsupportedRpcVersion: Specified RPC version, 1.47, not supported by > this endpoint.\n']. > > > > If somebody knows what to do, please answer. This happens when you have a version mismatch between your services. Please double check the versions of nova that you have installed on each node. -- Russell Bryant From rich.minton at lmco.com Tue May 28 15:02:37 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 28 May 2013 15:02:37 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <519FE273.10300@redhat.com> References: <519E669D.3050704@redhat.com> <519E720C.3070701@redhat.com> <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> <519FE273.10300@redhat.com> Message-ID: This is interesting... We were able to resolve (or band aid) our problem by setting the VMs eth0 MTU to 1000. Has anyone else encountered this problem? Any ideas why this is happening? Rick -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Friday, May 24, 2013 5:58 PM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. Hi, On 05/24/2013 07:06 PM, Minton, Rich wrote: > Rolling back the kernel and libvirt broke everything and I had reinstall the latest versions. Yes, that is not surprising. I probably should have been more clear that would not be a good thing to do. "Normal" quantum functionality depends on the network namespaces feature so compromising that would only complicate things. > It looks like it might be a routing problem. Our quantum router gateway IP is the same as our physical router IP and we think they are fighting over each other. I'll get my network engineer to look at it on Monday. > > I'll let you know the outcome. > > Rick I don't know if it will have any affect whatsover, but you might consider changing the sysctl variable "net.ipv4.conf.default.rp_filter" to 0 (it is 1 by default in RHEL). I doubt whether it will change things here, but when "routing" comes up in multi-interface environment, I always give it a shot. Cheers, Brent From beagles at redhat.com Tue May 28 17:14:05 2013 From: beagles at redhat.com (Brent Eagles) Date: Tue, 28 May 2013 13:14:05 -0400 (EDT) Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: References: <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> <519FE273.10300@redhat.com> Message-ID: <1512494469.9163924.1369761245785.JavaMail.root@redhat.com> Hi Rick, ----- Original Message ----- > This is interesting... > > We were able to resolve (or band aid) our problem by setting the VMs eth0 MTU > to 1000. > > Has anyone else encountered this problem? Any ideas why this is happening? > > Rick I suspected that might be the case when I asked for the ifconfig at the beginning. I was having some difficulty reproducing it so I was hesitant to recommend altering it. I've heard of SSL/SSH related issues with MTU size and was wondering about the effect on payloads that network namespaces might have in conjunction with SSH. The hard drive on my test system failed yesterday so I'm a little behind unfortunately. 
I'd like to find out what the failure threshold is and what contributes to the delta between 1500 and that threshold. Cheers, Brent From rich.minton at lmco.com Tue May 28 17:22:41 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 28 May 2013 17:22:41 +0000 Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. In-Reply-To: <1512494469.9163924.1369761245785.JavaMail.root@redhat.com> References: <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com> <519FE273.10300@redhat.com> <1512494469.9163924.1369761245785.JavaMail.root@redhat.com> Message-ID: It starts to work at an MTU of 1468. -----Original Message----- From: Brent Eagles [mailto:beagles at redhat.com] Sent: Tuesday, May 28, 2013 1:14 PM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. Hi Rick, ----- Original Message ----- > This is interesting... > > We were able to resolve (or band aid) our problem by setting the VMs > eth0 MTU to 1000. > > Has anyone else encountered this problem? Any ideas why this is happening? > > Rick I suspected that might be the case when I asked for the ifconfig at the beginning. I was having some difficulty reproducing it so I was hesitant to recommend altering it. I've heard of SSL/SSH related issues with MTU size and was wondering about the effect on payloads that network namespaces might have in conjunction with SSH. The hard drive on my test system failed yesterday so I'm a little behind unfortunately. I'd like to find out what the failure threshold is and what contributes to the delta between 1500 and that threshold. Cheers, Brent From rich.minton at lmco.com Tue May 28 19:16:55 2013 From: rich.minton at lmco.com (Minton, Rich) Date: Tue, 28 May 2013 19:16:55 +0000 Subject: [rhos-list] ovs-vswitchd config Message-ID: Is there a way to set this permanently so I don't have to run it each time my server reboots? "ovs-vsctl add-port br-eth1 eth1" Thank you, Rick Richard Minton LMICC Systems Administrator 4000 Geerdes Blvd, 13D31 King of Prussia, PA 19406 Phone: 610-354-5482 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pmyers at redhat.com Tue May 28 19:24:05 2013 From: pmyers at redhat.com (Perry Myers) Date: Tue, 28 May 2013 15:24:05 -0400 Subject: [rhos-list] ovs-vswitchd config In-Reply-To: References: Message-ID: <51A50455.9010009@redhat.com> On 05/28/2013 03:16 PM, Minton, Rich wrote: > Is there a way to set this permanently so I don?t have to run it each > time my server reboots? > > > > ?ovs-vsctl add-port br-eth1 eth1? I think we discussed having packstack precreate the bridges as part of install, but I'm not sure how the packstack/quantum folks plan to make that persistent (adding folks to cc list to answer) Perry From gkotton at redhat.com Wed May 29 06:35:41 2013 From: gkotton at redhat.com (Gary Kotton) Date: Wed, 29 May 2013 09:35:41 +0300 Subject: [rhos-list] ovs-vswitchd config In-Reply-To: <51A50455.9010009@redhat.com> References: <51A50455.9010009@redhat.com> Message-ID: <51A5A1BD.70803@redhat.com> On 05/28/2013 10:24 PM, Perry Myers wrote: > On 05/28/2013 03:16 PM, Minton, Rich wrote: >> Is there a way to set this permanently so I don?t have to run it each >> time my server reboots? >> >> >> >> ?ovs-vsctl add-port br-eth1 eth1? 
> I think we discussed having packstack precreate the bridges as part of
> install, but I'm not sure how the packstack/quantum folks plan to make
> that persistent (adding folks to cc list to answer)

This operation only needs to be done once and it should then be persistent.
Thanks
Gary

>
> Perry
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From sdake at redhat.com  Wed May 29 23:24:38 2013
From: sdake at redhat.com (Steven Dake)
Date: Wed, 29 May 2013 16:24:38 -0700
Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes.
In-Reply-To: 
References: <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com>
	<519FE273.10300@redhat.com>
	<1512494469.9163924.1369761245785.JavaMail.root@redhat.com>
Message-ID: <51A68E36.80106@redhat.com>

On 05/28/2013 10:22 AM, Minton, Rich wrote:
> It starts to work at an MTU of 1468.
Rich,

IP Header is 20 bytes, TCP header is 20 bytes for a total of 40 bytes.
Not sure where the magic 32 bytes is coming from.  Maybe a VLAN tag?  Is it
possible your switch is configured with a smaller MTU than 1500 or some odd
VLAN setup?

Regards
-steve

> -----Original Message-----
> From: Brent Eagles [mailto:beagles at redhat.com]
> Sent: Tuesday, May 28, 2013 1:14 PM
> To: Minton, Rich
> Cc: rhos-list at redhat.com
> Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes.
>
> Hi Rick,
>
> ----- Original Message -----
>> This is interesting...
>>
>> We were able to resolve (or band aid) our problem by setting the VMs
>> eth0 MTU to 1000.
>>
>> Has anyone else encountered this problem? Any ideas why this is happening?
>>
>> Rick
> I suspected that might be the case when I asked for the ifconfig at the beginning. I was having some difficulty reproducing it so I was hesitant to recommend altering it. I've heard of SSL/SSH related issues with MTU size and was wondering about the effect on payloads that network namespaces might have in conjunction with SSH. The hard drive on my test system failed yesterday so I'm a little behind unfortunately. I'd like to find out what the failure threshold is and what contributes to the delta between 1500 and that threshold.
>
> Cheers,
>
> Brent
>
> _______________________________________________
> rhos-list mailing list
> rhos-list at redhat.com
> https://www.redhat.com/mailman/listinfo/rhos-list

From SR0056850 at TechMahindra.com  Thu May 30 04:51:24 2013
From: SR0056850 at TechMahindra.com (Sudhir R Venkatesalu)
Date: Thu, 30 May 2013 10:21:24 +0530
Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes.
In-Reply-To: <51A68E36.80106@redhat.com>
References: <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com>
	<519FE273.10300@redhat.com>
	<1512494469.9163924.1369761245785.JavaMail.root@redhat.com>
	<51A68E36.80106@redhat.com>
Message-ID: <064950836F47F14DB2E256AD23CAFD501795CA869E@SINNODMBX001.TechMahindra.com>

Hello,

I am copy-pasting this from the OpenStack Operations Guide. Please read it; it might help you solve your issue.

Double VLAN

I was on-site in Kelowna, British Columbia, Canada setting up a new OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS on the bare metal, bootstrapped it, and Puppet took over from there. I had run the deployment scenario so many times in practice and took for granted that everything was working.

On my last day in Kelowna, I was in a conference call from my hotel. In the background, I was fooling around on the new cloud. I launched an instance and logged in.
Everything looked fine. Out of boredom, I ran ps aux and all of the sudden the instance locked up. Thinking it was just a one-off issue, I terminated the instance and launched a new one. By then, the conference call ended and I was off to the data center. At the data center, I was finishing up some tasks and remembered the lock-up. I logged into the new instance and ran ps aux again. It worked. Phew. I decided to run it one more time. It locked up. WTF. After reproducing the problem several times, I came to the unfortunate conclusion that this cloud did indeed have a problem. Even worse, my time was up in Kelowna and I had to return back to Calgary. Where do you even begin troubleshooting something like this? An instance just randomly locks when a command is issued. Is it the image? Nope - it happens on all images. Is it the compute node? Nope - all nodes. Is the instance locked up? No! New SSH connections work just fine! We reached out for help. A networking engineer suggested it was an MTU issue. Great! MTU! Something to go on! What's MTU and why would it cause a problem? MTU is maximum transmission unit. It specifies the maximum number of bytes that the interface accepts for each packet. If two interfaces have two different MTUs, bytes might OpenStack Operations Guide May 15, 2013 133 get chopped off and weird things happen -- such as random session lockups. It's important to note that not all packets have a size of 1500. Running the ls command over SSH might only create a single packets less than 1500 bytes. However, running a command with heavy output, such as ps aux requires several packets of 1500 bytes. OK, so where is the MTU issue coming from? Why haven't we seen this in any other deployment? What's new in this situation? Well, new data center, new uplink, new switches, new model of switches, new servers, first time using this model of servers... so, basically everything was new. Wonderful. We toyed around with raising the MTU at various areas: the switches, the NICs on the compute nodes, the virtual NICs in the instances, we even had the data center raise the MTU for our uplink interface. Some changes worked, some didn't. This line of troubleshooting didn't feel right, though. We shouldn't have to be changing the MTU in these areas. As a last resort, our network admin (Alvaro) and myself sat down with four terminal windows, a pencil, and a piece of paper. In one window, we ran ping. In the second window, we ran tcpdump on the cloud controller. In the third, tcpdump on the compute node. And the forth had tcpdump on the instance. For background, this cloud was a multinode, non-multi-host setup. There was one cloud controller that acted as a gateway to all compute nodes. VlanManager was used for the network config. This means that the cloud controller and all compute nodes had a different VLAN for each OpenStack project. We used the -s option of ping to change the packet size. We watched as sometimes packets would fully return, sometimes they'd only make it out and never back in, and sometimes the packets would stop at a random point. We changed tcpdump to start displaying the hex dump of the packet. We pinged between every combination of outside, controller, compute, and instance. Finally, Alvaro noticed something. When a packet from the outside hits the cloud controller, it should not be configured with a VLAN. We verified this as true. When the packet went from the cloud controller to the compute node, it should only have a VLAN if it was destined for an instance. 
This was still true. When the ping reply was sent from the instance, it should be in a VLAN. True. When it came back to the cloud controller and on its way out to the public internet, it should no longer have a VLAN. False. Uh oh. It looked as though the VLAN part of the packet was not being removed. That made no sense. While bouncing this idea around in our heads, I was randomly typing commands on the compute node: $ ip a ... 10: vlan100 at vlan20: mtu 1500 qdisc noqueue master br100 state UP ... "Hey Alvaro, can you run a VLAN on top of a VLAN?" "If you did, you'd add an extra 4 bytes to the packet..." Then it all made sense... $ grep vlan_interface /etc/nova/nova.conf vlan_interface=vlan20 OpenStack Operations Guide May 15, 2013 134 In nova.conf, vlan_interface specifies what interface OpenStack should attach all VLANs to. The correct setting should have been: vlan_interface=bond0 As this would be the server's bonded NIC. vlan20 is the VLAN that the data center gave us for outgoing public internet access. It's a correct VLAN and is also attached to bond0. By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 instead of bond0 thereby stacking one VLAN on top of another which then added an extra 4 bytes to each packet which cause a packet of 1504 bytes to be sent out which would cause problems when it arrived at an interface that only accepted 1500! As soon as this setting was fixed, everything worked. Regards, Sudhir. -----Original Message----- From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On Behalf Of Steven Dake Sent: Thursday, May 30, 2013 4:55 AM To: Minton, Rich Cc: rhos-list at redhat.com Subject: Re: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes. On 05/28/2013 10:22 AM, Minton, Rich wrote: > It starts to work at an MTU of 1468. Rich, IP Header is 20 bytes, TCP header is 20 bytes for a total of 40 bytes. Not sure where the magic 32 bytes is coming from. Maybe a vlan tag? Is it possible your switch is configured with a smaller mtu then 1500 or some odd VLAN setup? Regards -steve > -----Original Message----- > From: Brent Eagles [mailto:beagles at redhat.com] > Sent: Tuesday, May 28, 2013 1:14 PM > To: Minton, Rich > Cc: rhos-list at redhat.com > Subject: Re: EXTERNAL: Re: [rhos-list] Red Hat Linux VM freezes. > > Hi Rick, > > ----- Original Message ----- >> This is interesting... >> >> We were able to resolve (or band aid) our problem by setting the VMs >> eth0 MTU to 1000. >> >> Has anyone else encountered this problem? Any ideas why this is happening? >> >> Rick > I suspected that might be the case when I asked for the ifconfig at the beginning. I was having some difficulty reproducing it so I was hesitant to recommend altering it. I've heard of SSL/SSH related issues with MTU size and was wondering about the effect on payloads that network namespaces might have in conjunction with SSH. The hard drive on my test system failed yesterday so I'm a little behind unfortunately. I'd like to find out what the failure threshold is and what contributes to the delta between 1500 and that threshold. 
> > Cheers,
> > Brent

_______________________________________________
rhos-list mailing list
rhos-list at redhat.com
https://www.redhat.com/mailman/listinfo/rhos-list

From john.haller at alcatel-lucent.com  Thu May 30 22:47:43 2013
From: john.haller at alcatel-lucent.com (Haller, John H (John))
Date: Thu, 30 May 2013 22:47:43 +0000
Subject: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes.
In-Reply-To: <064950836F47F14DB2E256AD23CAFD501795CA869E@SINNODMBX001.TechMahindra.com>
References: <519E8EFD.30402@redhat.com> <519F797D.2020006@redhat.com>
	<519FE273.10300@redhat.com>
	<1512494469.9163924.1369761245785.JavaMail.root@redhat.com>
	<51A68E36.80106@redhat.com>
	<064950836F47F14DB2E256AD23CAFD501795CA869E@SINNODMBX001.TechMahindra.com>
Message-ID: <7C1824C61EE769448FCE74CD83F0CB4F57C90144@US70TWXCHMBA11.zam.alcatel-lucent.com>

Message content bottom posted, look at the bottom.

> -----Original Message-----
> From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On
> Behalf Of Sudhir R Venkatesalu
> Sent: Wednesday, May 29, 2013 11:51 PM
> To: Steven Dake; Minton, Rich
> Cc: rhos-list at redhat.com
> Subject: Re: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes.
>
> Hello,
>
> I am copy pasting this from openstack operation guide. Please read it, it might
> help you to solve your issue.
>
> Double VLAN
>
> [...]
>
> Regards,
> Sudhir.
>
> -----Original Message-----
> From: rhos-list-bounces at redhat.com [mailto:rhos-list-bounces at redhat.com] On
> Behalf Of Steven Dake
> Sent: Thursday, May 30, 2013 4:55 AM
> To: Minton, Rich
> Cc: rhos-list at redhat.com
> Subject: Re: [rhos-list] EXTERNAL: Re: Red Hat Linux VM freezes.
>
> On 05/28/2013 10:22 AM, Minton, Rich wrote:
> > It starts to work at an MTU of 1468.
> Rich,
>
> IP Header is 20 bytes, TCP header is 20 bytes for a total of 40 bytes.
> Not sure where the magic 32 bytes is coming from.  Maybe a vlan tag?  Is it
> possible your switch is configured with a smaller mtu then 1500 or some odd
> VLAN setup?
>
> Regards
> -steve
>
> > [...]
Also, Linux only supports double-tagged VLANs by accident, and only if they both use the same VLAN Ethernet tag type. Support is not ubiquitous, and in fact, with large VLAN IDs and multiple IP addresses, the interface name won't fit into 15 character limit. For example, eth0.1001.1002:0 is 16 characters. The only 802.3 approved way to support double-tagged VLANs is by using envelope frames, which, as stated above, are not supported in Linux. * 802.3 section 1.4.151 and 3.2.7. Regards, John Haller