[Rdo-list] HA with network isolation on virt howto

Sasha Chuzhoy sasha at redhat.com
Tue Oct 27 14:40:58 UTC 2015


Hi,
IIUC, this can work with a tagged VLAN used for the internalapi network.

The relevant YAML snippet (where, for example, nic1 carries both the provisioning and the internalapi networks):
              type: ovs_bridge
              name: br-nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                -
                  ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                -
                  ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                -
                  default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
              members:
                -
                  type: interface
                  name: nic1
                  primary: true
                -
                  type: vlan
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                  -
                    ip_netmask: {get_param: InternalApiIpSubnet}


The InternalApiNetworkVlanID parameter (if different from 20, which is the default) has to be set accordingly.
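For example, a minimal sketch of overriding it in an environment file passed to the deploy command (the network-environment.yaml file name and the value 201 are only placeholders for your own setup):

    # network-environment.yaml (example name), passed with -e to "openstack overcloud deploy"
    parameter_defaults:
      # Only needed when the InternalApi VLAN differs from the default of 20.
      InternalApiNetworkVlanID: 201

After the deployment, "ovs-vsctl show" on the overcloud nodes should list an internal port on br-nic1 tagged with that VLAN ID.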



Best regards,
Sasha Chuzhoy.

----- Original Message -----
> From: "Pedro Sousa" <pgsousa at gmail.com>
> To: "Marius Cornea" <marius at remote-lab.net>
> Cc: "rdo-list" <rdo-list at redhat.com>
> Sent: Tuesday, October 27, 2015 10:06:44 AM
> Subject: Re: [Rdo-list] HA with network isolation on virt howto
> 
> Hi Marius,
> 
> the reason is that I would like, for example, to use the internalapi network and
> the provisioning network on the same interface, and since provisioning doesn't
> use a bridge, I wondered if this is possible.
> 
> As I said, I was actually able to deploy the overcloud with internalapi without
> the bridge, but I had to specify the physical interface ("device: enp1s0f0") in
> my heat template.
> 
> Thanks
> Pedro Sousa
> 
> On Tue, Oct 27, 2015 at 1:06 PM, Marius Cornea < marius at remote-lab.net >
> wrote:
> 
> 
> Hi Pedro,
> 
> Afaik in order to use a vlan interface you need to set it as part of a
> bridge - it actually gets created as an internal port within the ovs
> bridge with the specified vlan tag. Is there any specific reason you
> don't want to use a bridge for this?
> 
> I believe your understanding relates to the Neutron configuration. With
> regard to network isolation, the Tenant network is the network used for
> setting up the overlay network tunnels (which in turn will carry the
> tenant networks created after deployment).
> 
> On Tue, Oct 27, 2015 at 12:06 PM, Pedro Sousa < pgsousa at gmail.com > wrote:
> > Hi Marius,
> > 
> > I've tried to configure the InternalAPI VLAN on the first interface, which
> > doesn't use a bridge; however, it only seems to work if I define the physical
> > device "enp1s0f0" like this:
> > 
> > network_config:
> >   -
> >     type: interface
> >     name: nic1
> >     use_dhcp: false
> >     addresses:
> >       -
> >         ip_netmask:
> >           list_join:
> >             - '/'
> >             - - {get_param: ControlPlaneIp}
> >               - {get_param: ControlPlaneSubnetCidr}
> >     routes:
> >       -
> >         ip_netmask: 169.254.169.254/32
> >         next_hop: {get_param: EC2MetadataIp}
> >   -
> >     type: vlan
> >     device: enp1s0f0
> >     vlan_id: {get_param: InternalApiNetworkVlanID}
> >     addresses:
> >       -
> >         ip_netmask: {get_param: InternalApiIpSubnet}
> > 
> > 
> > So my question is whether it's possible to create a VLAN attached to an
> > interface without using a bridge and without specifying the physical device?
> > 
> > My understanding is that you only require bridges when you use Tenant or
> > Floating networks, or is that not how it's supposed to work?
> > 
> > Thanks,
> > Pedro Sousa
> > 
> > 
> > 
> > 
> > On Wed, Oct 21, 2015 at 12:32 PM, Marius Cornea < marius at remote-lab.net >
> > wrote:
> >> 
> >> Here's an adjusted controller.yaml which disables DHCP on the first
> >> nic (enp1s0f0) so it doesn't get an IP address:
> >> http://paste.openstack.org/show/476981/
> >> 
> >> Please note that this assumes that your overcloud nodes are PXE
> >> booting on the 2nd NIC (basically disabling the 1st NIC).
> >> 
> >> Given your setup (I'm making some assumptions here, so I might be wrong)
> >> I would use the 1st NIC for PXE booting and the provisioning network, and
> >> the 2nd NIC for running the isolated networks, with this kind of template:
> >> http://paste.openstack.org/show/476986/
> >> 
> >> Let me know if it works for you.
> >> 
> >> Thanks,
> >> Marius
> >> 
> >> On Wed, Oct 21, 2015 at 1:16 PM, Pedro Sousa < pgsousa at gmail.com > wrote:
> >> > Hi,
> >> > 
> >> > here you go.
> >> > 
> >> > Regards,
> >> > Pedro Sousa
> >> > 
> >> > On Wed, Oct 21, 2015 at 12:05 PM, Marius Cornea < marius at remote-lab.net
> >> > >
> >> > wrote:
> >> >> 
> >> >> Hi Pedro,
> >> >> 
> >> >> One issue I can quickly see is that br-ex has the same IP address
> >> >> assigned as enp1s0f0. Can you post the nic templates you used for the
> >> >> deployment?
> >> >> 
> >> >> 2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
> >> >>     link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
> >> >> 9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >>     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >>     inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
> >> >> 
> >> >> Thanks,
> >> >> Marius
> >> >> 
> >> >> On Wed, Oct 21, 2015 at 12:39 PM, Pedro Sousa < pgsousa at gmail.com >
> >> >> wrote:
> >> >> > Hi Marius,
> >> >> > 
> >> >> > I've followed your howto and managed to get the overcloud deployed in HA,
> >> >> > thanks. However, I cannot log in to it (via CLI or Horizon):
> >> >> > 
> >> >> > ERROR (Unauthorized): The request you have made requires
> >> >> > authentication.
> >> >> > (HTTP 401) (Request-ID: req-96310dfa-3d64-4f05-966f-f4d92702e2b1)
> >> >> > 
> >> >> > So I rebooted the controllers and now I cannot log in through the
> >> >> > Provisioning network; it seems to be some openvswitch bridge configuration
> >> >> > problem. Here's my config:
> >> >> > 
> >> >> > # ip a
> >> >> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> >> >> >     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >> >> >     inet 127.0.0.1/8 scope host lo
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 ::1/128 scope host
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 2: enp1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
> >> >> >     link/ether 7c:a2:3e:fb:25:55 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.21.60/24 brd 192.168.21.255 scope global dynamic enp1s0f0
> >> >> >        valid_lft 84562sec preferred_lft 84562sec
> >> >> >     inet6 fe80::7ea2:3eff:fefb:2555/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 3: enp1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000
> >> >> >     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >     link/ether c2:15:45:c8:b3:04 brd ff:ff:ff:ff:ff:ff
> >> >> > 5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >     link/ether e6:df:8e:fb:f0:42 brd ff:ff:ff:ff:ff:ff
> >> >> > 6: vlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether e6:79:56:5d:07:f2 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.100.12/24 brd 192.168.100.255 scope global vlan20
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet 192.168.100.10/32 brd 192.168.100.255 scope global vlan20
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::e479:56ff:fe5d:7f2/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 7: vlan40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether ea:43:69:c3:bf:a2 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.102.11/24 brd 192.168.102.255 scope global vlan40
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::e843:69ff:fec3:bfa2/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 8: vlan174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether 16:bf:9e:e0:9c:e0 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.174.36/24 brd 192.168.174.255 scope global vlan174
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet 192.168.174.35/32 brd 192.168.174.255 scope global vlan174
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::14bf:9eff:fee0:9ce0/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether 7c:a2:3e:fb:25:56 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.21.60/24 brd 192.168.21.255 scope global br-ex
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::7ea2:3eff:fefb:2556/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 10: vlan50: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether da:15:7f:b9:72:4b brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 10.0.20.10/24 brd 10.0.20.255 scope global vlan50
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::d815:7fff:feb9:724b/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 11: vlan30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
> >> >> >     link/ether 7a:b3:4d:ad:f1:72 brd ff:ff:ff:ff:ff:ff
> >> >> >     inet 192.168.101.11/24 brd 192.168.101.255 scope global vlan30
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet 192.168.101.10/32 brd 192.168.101.255 scope global vlan30
> >> >> >        valid_lft forever preferred_lft forever
> >> >> >     inet6 fe80::78b3:4dff:fead:f172/64 scope link
> >> >> >        valid_lft forever preferred_lft forever
> >> >> > 12: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> >> >> >     link/ether b6:88:6b:d7:3a:4c brd ff:ff:ff:ff:ff:ff
> >> >> > 
> >> >> > 
> >> >> > # ovs-vsctl show
> >> >> > 3ee4adeb-4a5a-49a6-a16e-1e5f6e22f101
> >> >> >     Bridge br-ex
> >> >> >         Port br-ex
> >> >> >             Interface br-ex
> >> >> >                 type: internal
> >> >> >         Port "enp1s0f1"
> >> >> >             Interface "enp1s0f1"
> >> >> >         Port "vlan40"
> >> >> >             tag: 40
> >> >> >             Interface "vlan40"
> >> >> >                 type: internal
> >> >> >         Port "vlan20"
> >> >> >             tag: 20
> >> >> >             Interface "vlan20"
> >> >> >                 type: internal
> >> >> >         Port phy-br-ex
> >> >> >             Interface phy-br-ex
> >> >> >                 type: patch
> >> >> >                 options: {peer=int-br-ex}
> >> >> >         Port "vlan50"
> >> >> >             tag: 50
> >> >> >             Interface "vlan50"
> >> >> >                 type: internal
> >> >> >         Port "vlan30"
> >> >> >             tag: 30
> >> >> >             Interface "vlan30"
> >> >> >                 type: internal
> >> >> >         Port "vlan174"
> >> >> >             tag: 174
> >> >> >             Interface "vlan174"
> >> >> >                 type: internal
> >> >> >     Bridge br-int
> >> >> >         fail_mode: secure
> >> >> >         Port br-int
> >> >> >             Interface br-int
> >> >> >                 type: internal
> >> >> >         Port patch-tun
> >> >> >             Interface patch-tun
> >> >> >                 type: patch
> >> >> >                 options: {peer=patch-int}
> >> >> >         Port int-br-ex
> >> >> >             Interface int-br-ex
> >> >> >                 type: patch
> >> >> >                 options: {peer=phy-br-ex}
> >> >> >     Bridge br-tun
> >> >> >         fail_mode: secure
> >> >> >         Port "gre-0a00140b"
> >> >> >             Interface "gre-0a00140b"
> >> >> >                 type: gre
> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.11"}
> >> >> >         Port patch-int
> >> >> >             Interface patch-int
> >> >> >                 type: patch
> >> >> >                 options: {peer=patch-tun}
> >> >> >         Port "gre-0a00140d"
> >> >> >             Interface "gre-0a00140d"
> >> >> >                 type: gre
> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.13"}
> >> >> >         Port "gre-0a00140c"
> >> >> >             Interface "gre-0a00140c"
> >> >> >                 type: gre
> >> >> >                 options: {df_default="true", in_key=flow, local_ip="10.0.20.10", out_key=flow, remote_ip="10.0.20.12"}
> >> >> >         Port br-tun
> >> >> >             Interface br-tun
> >> >> >                 type: internal
> >> >> >     ovs_version: "2.4.0"
> >> >> > 
> >> >> > Regards,
> >> >> > Pedro Sousa
> >> >> > 
> >> >> > 
> >> >> > On Sun, Oct 18, 2015 at 11:13 AM, Marius Cornea
> >> >> > < marius at remote-lab.net >
> >> >> > wrote:
> >> >> >> 
> >> >> >> Hi everyone,
> >> >> >> 
> >> >> >> I wrote a blog post about how to deploy an HA overcloud with network
> >> >> >> isolation on top of the virtual environment. I tried to provide some
> >> >> >> insight into what instack-virt-setup creates and how to use the
> >> >> >> network isolation templates in the virtual environment. I hope you
> >> >> >> find it useful.
> >> >> >> 
> >> >> >> https://remote-lab.net/rdo-manager-ha-openstack-deployment/
> >> >> >> 
> >> >> >> Thanks,
> >> >> >> Marius
> >> >> >> 
> >> >> > 
> >> >> > 
> >> > 
> >> > 
> > 
> > 
> 
> 



