Description: After defining a bond1 Linux bond interface in the Heat templates for the Compute and Controller Overcloud nodes, a bond0 interface with no slaves is also configured in the Overcloud.

Details: I have configured a bond1 Linux bond interface for the Overcloud Compute and Controller nodes using the following Heat templates:

[stack@osp8bdr2 ~(UC)]$ cat ~/templates/network-environment.yaml
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/my-overcloud/network/config/bond-with-vlans/controller.yaml

parameter_defaults:
  InternalApiNetCidr: 192.168.124.0/24
  TenantNetCidr: 192.168.123.0/24
  StorageNetCidr: 10.111.0.0/16
  StorageMgmtNetCidr: 192.168.128.0/24
  ExternalNetCidr: 192.168.126.0/24
  InternalApiAllocationPools: [{'start': '192.168.124.20', 'end': '192.168.124.200'}]
  TenantAllocationPools: [{'start': '192.168.123.20', 'end': '192.168.123.200'}]
  StorageAllocationPools: [{'start': '10.111.150.1', 'end': '10.111.150.254'}]
  StorageMgmtAllocationPools: [{'start': '192.168.128.30', 'end': '192.168.128.60'}]
  # Leave room for floating IPs in the External allocation pool
  ExternalAllocationPools: [{'start': '192.168.126.20', 'end': '192.168.126.50'}]
  # Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 192.168.126.1
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.122.1
  # The IP address of the EC2 metadata server.
  # Generally the IP of the Undercloud
  EC2MetadataIp: 192.168.122.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["172.17.75.103"]
  InternalApiNetworkVlanID: 124
  StorageNetworkVlanID: 111
  StorageMgmtNetworkVlanID: 128
  TenantNetworkVlanID: 123
  ExternalNetworkVlanID: 126
  # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
  NeutronExternalNetworkBridge: "''"
  # Customize bonding options if required
  #BondInterfaceOvsOptions:
  #  "bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true"
  BondInterfaceOvsOptions: "mode=4 miimon=100 xmit_hash_policy=layer2+3"

[stack@osp8bdr2 ~(UC)]$ cat /home/stack/templates/my-overcloud/network/config/bond-with-vlans/controller.yaml
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge
  with VLANs attached for the controller role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  BondInterfaceOvsOptions:
    default: 'bond_mode=balance-tcp lacp=active other-config:lacp-fallback-ab=true'
    description: The ovs_options string for the bond interface. Set things like
                 lacp=active and/or bond_mode=balance-slb using this option.
                 Default will attempt LACP, but will fall back to active-backup.
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ExternalInterfaceDefaultRoute:
    default: '10.0.0.1'
    description: default route for the external network
    type: string
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: ens2f0
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens2f1
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens4f1
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens4f0
              use_dhcp: false
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: ovs_bridge
              name: {get_input: bridge_name}
              dns_servers: {get_param: DnsServers}
              members:
                - type: linux_bond
                  name: bond1
                  bonding_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    - type: interface
                      name: eno1
                      primary: true
                    - type: interface
                      name: eno2
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: ExternalNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: ExternalIpSubnet}
                  routes:
                    - ip_netmask: 0.0.0.0/0
                      next_hop: {get_param: ExternalInterfaceDefaultRoute}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: InternalApiIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: StorageIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageMgmtNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: StorageMgmtIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

[stack@osp8bdr2 ~(UC)]$ cat /home/stack/templates/my-overcloud/network/config/bond-with-vlans/compute.yaml
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config with 2 bonded nics on a bridge
  with VLANs attached for the compute role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  BondInterfaceOvsOptions:
    default: ''
    description: The ovs_options string for the bond interface. Set things like
                 lacp=active and/or bond_mode=balance-slb using this option.
    type: string
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: ens2f0
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens2f1
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens4f1
              use_dhcp: false
              defroute: false
            - type: interface
              name: ens4f0
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
                - default: true
                  next_hop: {get_param: ControlPlaneDefaultRoute}
            - type: ovs_bridge
              name: {get_input: bridge_name}
              members:
                - type: linux_bond
                  name: bond1
                  bonding_options: {get_param: BondInterfaceOvsOptions}
                  members:
                    - type: interface
                      name: eno1
                      primary: true
                    - type: interface
                      name: eno2
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: InternalApiNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: InternalApiIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: StorageNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: StorageIpSubnet}
                - type: vlan
                  device: bond1
                  vlan_id: {get_param: TenantNetworkVlanID}
                  addresses:
                    - ip_netmask: {get_param: TenantIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}

I used "bond1" as the name for the Linux bond interface.

I deployed the Overcloud using the following command:

openstack overcloud deploy --stack osp8br2 --templates ~/templates/my-overcloud \
  -e ~/templates/my-overcloud/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/my-overcloud/environments/puppet-pacemaker.yaml \
  -e ~/templates/storage-environment.yaml \
  -e ~/templates/pre-deployment.yaml \
  -e ~/templates/post-deployment.yaml \
  --control-scale 3 --compute-scale 1 \
  --control-flavor control --compute-flavor compute \
  --ntp-server time.pdb.fsc.net \
  --neutron-network-type vxlan --neutron-tunnel-types vxlan

On the resulting Overcloud nodes (Controller and Compute) I see the following network configuration:

[root@osp8br2-controller-0 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens4f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether a0:36:9f:7b:89:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.77/24 brd 192.168.122.255 scope global ens4f0
       valid_lft forever preferred_lft forever
    inet6 fe80::a236:9fff:fe7b:892c/64 scope link
       valid_lft forever preferred_lft forever
3: ens4f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether a0:36:9f:7b:89:2d brd ff:ff:ff:ff:ff:ff
4: ens2f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:1b:0e:62:5c:93 brd ff:ff:ff:ff:ff:ff
5: ens2f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 90:1b:0e:62:5c:94 brd ff:ff:ff:ff:ff:ff
6: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
    link/ether 90:1b:0e:55:ff:80 brd
ff:ff:ff:ff:ff:ff
7: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
    link/ether 90:1b:0e:55:ff:80 brd ff:ff:ff:ff:ff:ff
8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether f6:d8:ce:74:fa:8b brd ff:ff:ff:ff:ff:ff
9: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 90:1b:0e:55:ff:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d0ea:63ff:fe52:c747/64 scope link
       valid_lft forever preferred_lft forever
10: vlan124: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether ee:ac:0f:81:31:d7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.24/24 brd 192.168.124.255 scope global vlan124
       valid_lft forever preferred_lft forever
    inet 192.168.124.21/32 brd 192.168.124.255 scope global vlan124
       valid_lft forever preferred_lft forever
    inet6 fe80::ecac:fff:fe81:31d7/64 scope link
       valid_lft forever preferred_lft forever
11: vlan126: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 5e:40:f6:fd:89:2a brd ff:ff:ff:ff:ff:ff
    inet 192.168.126.22/24 brd 192.168.126.255 scope global vlan126
       valid_lft forever preferred_lft forever
    inet6 fe80::5c40:f6ff:fefd:892a/64 scope link
       valid_lft forever preferred_lft forever
12: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 96:be:22:a6:e3:ab brd ff:ff:ff:ff:ff:ff
13: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP
    link/ether 90:1b:0e:55:ff:80 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::921b:eff:fe55:ff80/64 scope link
       valid_lft forever preferred_lft forever
14: vlan123: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 96:7c:f1:cd:61:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.123.22/24 brd 192.168.123.255 scope global vlan123
       valid_lft forever preferred_lft forever
    inet6 fe80::947c:f1ff:fecd:618a/64 scope link
       valid_lft forever preferred_lft forever
15: vlan128: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
qdisc noqueue state UNKNOWN
    link/ether 2a:ec:10:d7:3f:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.128.33/24 brd 192.168.128.255 scope global vlan128
       valid_lft forever preferred_lft forever
    inet6 fe80::28ec:10ff:fed7:3fde/64 scope link
       valid_lft forever preferred_lft forever
16: vlan111: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether a6:62:b1:e5:38:c7 brd ff:ff:ff:ff:ff:ff
    inet 10.111.150.4/16 brd 10.111.255.255 scope global vlan111
       valid_lft forever preferred_lft forever
    inet 10.111.150.1/32 brd 10.111.255.255 scope global vlan111
       valid_lft forever preferred_lft forever
    inet6 fe80::a462:b1ff:fee5:38c7/64 scope link
       valid_lft forever preferred_lft forever
17: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether e2:91:4a:86:53:44 brd ff:ff:ff:ff:ff:ff
18: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether c2:3d:05:71:f9:4d brd ff:ff:ff:ff:ff:ff

As you can see, there is also a "bond0" interface with no slaves:

[root@osp8br2-controller-0 ~]# ip a | grep bond
6: eno1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
7: eno2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP qlen 1000
12: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
13: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP

[root@osp8br2-controller-0 ~]# ls -al /proc/net/bonding/
total 0
dr-xr-xr-x. 2 root root 0 Jan 11 11:30 .
dr-xr-xr-x. 8 root root 0 Jan 11 11:30 ..
-r--r--r--. 1 root root 0 Jan 11 11:30 bond0
-r--r--r--. 1 root root 0 Jan 11 11:30 bond1

[root@osp8br2-controller-0 ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

There should be no bond0 interface configured on the Overcloud nodes.
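As a possible manual cleanup (my suggestion, not something tested in this report): the bonding driver exposes a sysfs control file through which a bond device can be deleted at runtime, so the slave-less bond0 could be removed without touching bond1. This is only a workaround sketch; whatever creates bond0 (e.g. loading the bonding module) may bring it back on the next boot.

# Run as root on the affected node. Writing "-<name>" to bonding_masters
# asks the bonding driver to delete that bond device.
echo -bond0 > /sys/class/net/bonding_masters

# bond1 should remain listed; bond0 should be gone.
cat /sys/class/net/bonding_masters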
Information: If I specify "bond0" as the name in the Heat templates, only bond0 is configured in the Overcloud.

Bugzilla dependencies (if any): N/A
Hardware dependencies (if any): N/A
Upstream information
  Date it will be upstream:
  Version: 8.0 Beta 3
  External links:
Severity (U/H/M/L): L
Business Priority: Must/High Want/Want: Want
Business Justification:
  Why is this needed:
  What hardware is required (if any):
  Business impact:
Primary Red Hat contact:
  Name: Martin Tessun <mtessun>
  Christian Horn (chorn)
  Daniel Messer <dmesser>
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
This bug did not make the OSP 10.0 release. It is being deferred to OSP 11. Initial triage didn't reveal a way to stop bond0 from being created, and it isn't clear that there is a simple change to os-net-config that would prevent this.
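For context (my explanation, not from the triage itself): the kernel bonding driver creates a default bond0 device when it is loaded with its default max_bonds=1 setting, so a template that only defines bond1 leaves that driver-created bond0 behind with no slaves. If suppressing it at the OS level were acceptable, one hedged sketch is a modprobe option (the file name is hypothetical, and the deployment tooling may load the module in a way that bypasses it):

# /etc/modprobe.d/bonding.conf (hypothetical file name)
# Ask the bonding driver not to create any default bondN devices on load;
# bonds defined explicitly (e.g. bond1) are still created by the tooling.
options bonding max_bonds=0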
Recommend naming bonds sequentially, e.g. bond0, bond1, etc. This should be documented.
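A minimal illustration of that recommendation (a sketch only, not the exact template from this report): name the first Linux bond bond0 in the os-net-config network_config, so the configured bond coincides with the bonding driver's default device and no orphaned bond is left behind.

            - type: linux_bond
              name: bond0            # first bond: bond0, then bond1, bond2, ...
              bonding_options: "mode=4 miimon=100 xmit_hash_policy=layer2+3"
              members:
                - type: interface
                  name: eno1
                  primary: true
                - type: interface
                  name: eno2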
Dear Red Hat,

Since Ralf left the OpenStack-related project, I'll comment on this ticket on his behalf.

As Bob commented, naming bonds sequentially does not cause the issue. It also looks like sequential naming is Red Hat's policy and will be documented. So I think we can close this ticket.

Best regards,
Yasuyuki KOBAYASHI
The last comment recommended closing, so will do so.