Bug 1804303 - os-net-config does not run ifup bond<x> when both slave interfaces are ifdown'ed and ifup'ed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: os-net-config
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z12
Target Release: 13.0 (Queens)
Assignee: Dan Sneddon
QA Contact: nlevinki
URL:
Whiteboard:
Duplicates: 1814355
Depends On:
Blocks: epmosp13bugs
 
Reported: 2020-02-18 16:18 UTC by Andreas Karis
Modified: 2024-03-25 15:42 UTC
CC List: 12 users

Fixed In Version: os-net-config-8.5.1-5.el7ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-24 11:33:20 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links:
- Launchpad bug 1870608 (last updated 2020-04-03 19:41:50 UTC)
- OpenStack gerrit 718579 (MERGED): Run ifup on a bond when a slave interface is restarted (last updated 2021-02-10 09:43:19 UTC)
- Red Hat Issue Tracker OSP-28345 (last updated 2023-09-07 21:58:50 UTC)
- Red Hat Product Errata RHBA-2020:2718 (last updated 2020-06-24 11:33:56 UTC)

Description Andreas Karis 2020-02-18 16:18:07 UTC
Description of problem:
os-net-config does not run ifup bond<x> when both slave interfaces are ifdown'ed and ifup'ed


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

A customer pushes a network configuration change that affects only the bond slaves, and therefore enabled the following (see https://access.redhat.com/solutions/2213711):
~~~
parameter_defaults:
(...)
  NetworkDeploymentActions: ['CREATE','UPDATE']
~~~

Additionally, they change the slave configuration (not the bond configuration) from:
~~~
              # Interface eno5,eno6 - NUMA0, do not support sriov
              - type: ovs_bridge
                name: br-s1
                members:
                  - type: linux_bond
                    name: bond1
                    bonding_options: 'mode=4 lacp=passive lacp_rate=fast miimon=50'
                    members:
                      - type: interface
                        name: eno5
                        primary: true
                      - type: interface
                        name: eno6
~~~

To:
~~~
              - type: ovs_bridge
                name: br-s1
                members:
                  - type: linux_bond
                    name: bond1
                    mtu: 9000
                    bonding_options: 'mode=4 lacp=passive lacp_rate=fast miimon=50'
                    members:
                      - type: interface
                        name: eno5
                        mtu: 9000
                        primary: true
                        ethtool_opts: "-L ${DEVICE} combined 30; -G ${DEVICE} rx 8192 tx 8192"
                      - type: interface
                        name: eno6
                        mtu: 9000
                        ethtool_opts: "-L ${DEVICE} combined 30; -G ${DEVICE} rx 8192 tx 8192"
~~~

Due to the above settings, when os-net-config is executed, it flaps only the bond slaves (because the ethtool_opts changed). It does not flap the bonds themselves:
~~~
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] adding bridge: br-s4
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] adding linux bond: bond4
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] adding interface: ens1f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] adding interface: ens6f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] applying network configs...
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for interface: eno1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for interface: eno2
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for interface: eno3
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for bridge: br-s4
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for bridge: br-s1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for bridge: br-s3
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for bridge: br-s2
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for linux bond: bond4
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for linux bond: bond0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for linux bond: bond1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for linux bond: bond2
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for linux bond: bond3
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for vlan interface: vlan293
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for vlan interface: vlan292
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] No changes required for vlan interface: vlan295
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] running ifdown on interface: ens6f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:14 AM] [INFO] running ifdown on interface: ens6f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:15 AM] [INFO] running ifdown on interface: eno5
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:15 AM] [INFO] running ifdown on interface: eno6
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:15 AM] [INFO] running ifdown on interface: ens1f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:16 AM] [INFO] running ifdown on interface: ens1f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:16 AM] [INFO] running ifdown on interface: ens3f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:16 AM] [INFO] running ifdown on interface: ens3f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eno5
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-eno6
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens6f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens6f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens1f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens1f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens3f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-ens3f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:17 AM] [INFO] running ifup on interface: ens6f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:19 AM] [INFO] running ifup on interface: ens6f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:21 AM] [INFO] running ifup on interface: eno5
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:23 AM] [INFO] running ifup on interface: eno6
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:24 AM] [INFO] running ifup on interface: ens1f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:26 AM] [INFO] running ifup on interface: ens1f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:28 AM] [INFO] running ifup on interface: ens3f1
Feb 13 09:49:33 compute-0 os-collect-config[36435]: [2020/02/13 09:49:30 AM] [INFO] running ifup on interface: ens3f0
Feb 13 09:49:33 compute-0 os-collect-config[36435]: + RETVAL=2
Feb 13 09:49:33 compute-0 os-collect-config[36435]: + set -e
Feb 13 09:49:33 compute-0 os-collect-config[36435]: + [[ 2 == 2 ]]
Feb 13 09:49:33 compute-0 os-collect-config[36435]: + ping_metadata_ip
Feb 13 09:49:33 compute-0 os-collect-config[36435]: ++ get_metadata_ip
Feb 13 09:49:33 compute-0 os-collect-config[36435]: ++ local METADATA_IP
~~~

We see from the above that ens6f0, ens6f1, eno5, eno6, ens1f1, ens1f0, ens3f1 and ens3f0 were flapped, but ifup was never run on any of the bonds.

However, there is a misconception here about how bonds work in RHEL. If both slave interfaces are shut down with ifdown, then the way to bring the bond back up is: ifup <slave 1>; ifup <slave 2>; ifup <master>. Or even simply: ifup <master>.
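As a concrete illustration, using the interface names from this report (a manual recovery sketch, not something os-net-config runs today):
~~~
# After both slaves have been ifdown'ed, ifup on the slaves alone
# does not restore the bond; the master must be ifup'ed as well:
ifup eno5
ifup eno6
ifup bond1   # without this step, bond1 stays DOWN
~~~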

Just running ifup <slave> for all slaves will leave the master shut down:
~~~
=== proc/net/bonding/bond1 ===
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: x:x:x:x:X:x
bond bond1 has no active aggregator

Slave Interface: eno5
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: N/A

Slave Interface: ens3f1
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: N/A
~~~

This can be reproduced easily:
~~~
[root@compute-0 ~]# cat test1.sh 
#!/bin/bash -x

member1=eno5
member2=eno6
master=bond1

echo "Baseline"
ifup $master
sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Flapping ports individually"
ifdown $member1
ifup $member1
ifdown $member2
ifup $member2

sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Flapping both ports at the  same time"
ifdown $member1
ifdown $member2
ifup $member1
ifup $member2

sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Restoring baseline"
ifup $master
~~~

~~~
[root@compute-0 ~]# bash -x test1.sh
+ member1=eno5
+ member2=eno6
+ master=bond1
+ echo Baseline
Baseline
+ ifup bond1
RTNETLINK answers: File exists
+ sleep 2
+ ip link ls dev eno5
10: eno5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev eno6
11: eno6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond1
149: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: x:x:x:x:X:x
Active Aggregator Info:
        Aggregator ID: 14
        Number of ports: 2
        Actor Key: 21
        Partner Key: 2011
        Partner Mac Address: x:x:x:x:X:x

Slave Interface: eno5
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: 14
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: x:x:x:x:X:x
    port key: 21
    port priority: 255
    port number: 1
    port state: 63
details partner lacp pdu:
    system priority: 127
    system mac address: x:x:x:x:X:x
    oper key: 2011
    port priority: 127
    port number: 32
    port state: 63

Slave Interface: eno6
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: 14
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: x:x:x:x:X:x
    port key: 21
    port priority: 255
    port number: 2
    port state: 63
details partner lacp pdu:
    system priority: 127
    system mac address: x:x:x:x:X:x
    oper key: 2011
    port priority: 127
    port number: 32800
    port state: 63
+ echo 'Flapping ports individually'
Flapping ports individually
+ ifdown eno5
+ ifup eno5
+ ifdown eno6
+ ifup eno6
+ sleep 2
+ ip link ls dev eno5
10: eno5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev eno6
11: eno6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond1
149: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: x:x:x:x:X:x
Active Aggregator Info:
        Aggregator ID: 15
        Number of ports: 2
        Actor Key: 21
        Partner Key: 2011
        Partner Mac Address: x:x:x:x:X:x

Slave Interface: eno5
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: 15
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: x:x:x:x:X:x
    port key: 21
    port priority: 255
    port number: 3
    port state: 7
details partner lacp pdu:
    system priority: 127
    system mac address: x:x:x:x:X:x
    oper key: 2011
    port priority: 127
    port number: 32
    port state: 15

Slave Interface: eno6
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: 15
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: x:x:x:x:X:x
    port key: 21
    port priority: 255
    port number: 4
    port state: 7
details partner lacp pdu:
    system priority: 127
    system mac address: x:x:x:x:X:x
    oper key: 2011
    port priority: 127
    port number: 32800
    port state: 15
+ echo 'Flapping both ports at the  same time'
Flapping both ports at the  same time
+ ifdown eno5
+ ifdown eno6
+ ifup eno5
+ ifup eno6
+ sleep 2
+ ip link ls dev eno5
10: eno5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev eno6
11: eno6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond1 state UP mode DEFAULT group default qlen 10000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond1
149: bond1: <BROADCAST,MULTICAST,MASTER> mtu 9000 qdisc noqueue master ovs-system state DOWN mode DEFAULT group default qlen 1000
    link/ether x:x:x:x:X:x brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: x:x:x:x:X:x
bond bond1 has no active aggregator

Slave Interface: eno5
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: N/A

Slave Interface: eno6
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: x:x:x:x:X:x
Slave queue ID: 0
Aggregator ID: N/A
+ echo 'Restoring baseline'
Restoring baseline
+ ifup bond1
~~~
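
The merged upstream fix (OpenStack gerrit 718579, "Run ifup on a bond when a slave interface is restarted") follows exactly this logic: when a restarted interface is a bond slave, ifup must also be run on the bond. A minimal bash sketch of the idea, assuming the usual sysfs layout for bonding slaves (the loop below is an illustration, not the actual os-net-config code):
~~~
#!/bin/bash
# For each restarted slave, remember its bond master (if any),
# then ifup each master once so the bond does not stay DOWN.
declare -A masters
for iface in eno5 eno6; do          # the restarted slave interfaces
    ifup "$iface"
    master_link="/sys/class/net/$iface/master"
    if [ -L "$master_link" ] && [ -d "$master_link/bonding" ]; then
        masters["$(basename "$(readlink "$master_link")")"]=1
    fi
done
for master in "${!masters[@]}"; do
    ifup "$master"                  # bring the bond itself back up
done
~~~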

Comment 1 Andreas Karis 2020-02-18 16:56:32 UTC
I get the same result when I run this in my lab:
~~~
[root@overcloud-compute-0 ~]# cat test1.sh 
#!/bin/bash -x

member1=p2p1
member2=p2p2
master=bond_tenant

echo "Baseline"
ifup $master
sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Flapping ports individually"
ifdown $member1
ifup $member1
ifdown $member2
ifup $member2

sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Flapping both ports at the  same time"
ifdown $member1
ifdown $member2
ifup $member1
ifup $member2

sleep 2

ip link ls dev $member1
ip link ls dev $member2
ip link ls dev $master
cat /proc/net/bonding/$master

echo "Restoring baseline"
ifup $master
[root@overcloud-compute-0 ~]# ./test1.sh 
+ member1=p2p1
+ member2=p2p2
+ master=bond_tenant
+ echo Baseline
Baseline
+ ifup bond_tenant
+ sleep 2
+ ip link ls dev p2p1
8: p2p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev p2p2
9: p2p2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond_tenant
14: bond_tenant: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond_tenant
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 1000
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:e5:df:c0
Active Aggregator Info:
	Aggregator ID: 8
	Number of ports: 2
	Actor Key: 15
	Partner Key: 10
	Partner Mac Address: 14:18:77:89:9a:8a

Slave Interface: p2p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c0
Slave queue ID: 0
Aggregator ID: 8
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: a0:36:9f:e5:df:c0
    port key: 15
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 14:18:77:89:9a:8a
    oper key: 10
    port priority: 32768
    port number: 227
    port state: 63

Slave Interface: p2p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c2
Slave queue ID: 0
Aggregator ID: 8
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: a0:36:9f:e5:df:c0
    port key: 15
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 14:18:77:89:9a:8a
    oper key: 10
    port priority: 32768
    port number: 226
    port state: 63
+ echo 'Flapping ports individually'
Flapping ports individually
+ ifdown p2p1
+ ifup p2p1
+ ifdown p2p2
+ ifup p2p2
+ sleep 2
+ ip link ls dev p2p1
8: p2p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev p2p2
9: p2p2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond_tenant
14: bond_tenant: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond_tenant
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 1000
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:e5:df:c0
Active Aggregator Info:
	Aggregator ID: 10
	Number of ports: 2
	Actor Key: 15
	Partner Key: 10
	Partner Mac Address: 14:18:77:89:9a:8a

Slave Interface: p2p1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c0
Slave queue ID: 0
Aggregator ID: 10
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: a0:36:9f:e5:df:c0
    port key: 15
    port priority: 255
    port number: 3
    port state: 5
details partner lacp pdu:
    system priority: 32768
    system mac address: 14:18:77:89:9a:8a
    oper key: 10
    port priority: 32768
    port number: 227
    port state: 7

Slave Interface: p2p2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c2
Slave queue ID: 0
Aggregator ID: 10
Actor Churn State: monitoring
Partner Churn State: monitoring
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: a0:36:9f:e5:df:c0
    port key: 15
    port priority: 255
    port number: 4
    port state: 5
details partner lacp pdu:
    system priority: 32768
    system mac address: 14:18:77:89:9a:8a
    oper key: 10
    port priority: 32768
    port number: 226
    port state: 7
+ echo 'Flapping both ports at the  same time'
Flapping both ports at the  same time
+ ifdown p2p1
+ ifdown p2p2
+ ifup p2p1
+ ifup p2p2
+ sleep 2
+ ip link ls dev p2p1
8: p2p1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev p2p2
9: p2p2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond_tenant state UP mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ ip link ls dev bond_tenant
14: bond_tenant: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noqueue master ovs-system state DOWN mode DEFAULT group default qlen 1000
    link/ether a0:36:9f:e5:df:c0 brd ff:ff:ff:ff:ff:ff
+ cat /proc/net/bonding/bond_tenant
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 1000
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:e5:df:c0
bond bond_tenant has no active aggregator

Slave Interface: p2p1
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c0
Slave queue ID: 0
Aggregator ID: N/A

Slave Interface: p2p2
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c2
Slave queue ID: 0
Aggregator ID: N/A
+ echo 'Restoring baseline'
Restoring baseline
+ ifup bond_tenant
~~~

Comment 2 Andreas Karis 2020-02-18 17:27:36 UTC
I can also reproduce this with a template change, the same way the customer did:
~~~
[root@overcloud-compute-0 ~]# cat /proc/net/bonding/bond_tenant 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 1000
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:e5:df:c2
bond bond_tenant has no active aggregator

Slave Interface: p2p2
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c2
Slave queue ID: 0
Aggregator ID: N/A

Slave Interface: p2p1
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:df:c0
Slave queue ID: 0
Aggregator ID: N/A
~~~

~~~
[root@overcloud-compute-1 ~]# cat /proc/net/bonding/bond_tenant
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 1000
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: a0:36:9f:e5:e2:c2
bond bond_tenant has no active aggregator

Slave Interface: p2p2
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:e2:c2
Slave queue ID: 0
Aggregator ID: N/A

Slave Interface: p2p1
MII Status: down
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:36:9f:e5:e2:c0
Slave queue ID: 0
Aggregator ID: N/A
~~~

~~~
[root@overcloud-compute-1 ~]# journalctl --since today | grep os-collect-config
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:43,718] (os-refresh-config) [INFO] Starting phase pre-configure
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 ----------------------- PROFILING -----------------------
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 Target: pre-configure.d
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 Script                                     Seconds
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 ---------------------------------------  ----------
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 --------------------- END PROFILING ---------------------
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:43,742] (os-refresh-config) [INFO] Completed phase pre-configure
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:43,742] (os-refresh-config) [INFO] Starting phase configure
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 Running /usr/libexec/os-refresh-config/configure.d/20-os-apply-config
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:43 PM] [INFO] writing /var/run/heat-config/heat-config
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:43 PM] [INFO] writing /etc/os-collect-config.conf
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:43 PM] [INFO] success
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 20-os-apply-config completed
Feb 18 17:20:43 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:43 UTC 2020 Running /usr/libexec/os-refresh-config/configure.d/50-heat-config-docker-cmd
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:44 UTC 2020 50-heat-config-docker-cmd completed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:44 UTC 2020 Running /usr/libexec/os-refresh-config/configure.d/55-heat-config
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] Skipping config c7cae773-ef10-4ad0-a2b1-0df6be19d06a, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/c7cae773-ef10-4ad0-a2b1-0df6be19d06a.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] Skipping config bf68b093-e7a7-46c1-a9d2-cda8577af6cb, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/bf68b093-e7a7-46c1-a9d2-cda8577af6cb.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] Skipping config 9b329ef8-eca9-4e38-8e77-ee7893058c00, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/9b329ef8-eca9-4e38-8e77-ee7893058c00.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] Skipping config 76d347fb-57c3-40b1-b938-a13effd9b588, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/76d347fb-57c3-40b1-b938-a13effd9b588.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,230] (heat-config) [WARNING] Skipping config 035ef5c3-a177-4753-a677-b6dcc20246cc, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/035ef5c3-a177-4753-a677-b6dcc20246cc.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config 5678da32-7388-4de8-90b0-e918c81d9030, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/5678da32-7388-4de8-90b0-e918c81d9030.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config bb0f6a7b-b96f-4748-a699-3519b8781493, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/bb0f6a7b-b96f-4748-a699-3519b8781493.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config fc843624-39d1-4548-ba16-c57de85f280b, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/fc843624-39d1-4548-ba16-c57de85f280b.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config 90368f4a-76be-4f35-92af-f4fa1f21ddd9, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/90368f4a-76be-4f35-92af-f4fa1f21ddd9.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config b583a712-8de4-4b14-b0ba-38a6b648f943, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/b583a712-8de4-4b14-b0ba-38a6b648f943.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] Skipping config 7cefc2bd-3044-4996-abfb-171371318815, already deployed
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,231] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/7cefc2bd-3044-4996-abfb-171371318815.json
Feb 18 17:20:44 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,232] (heat-config) [DEBUG] Running /usr/libexec/heat-config/hooks/script < /var/lib/heat-config/deployed/e01ba4ba-0e6a-42f5-bdd2-7ee1a11fef32.json
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,469] (heat-config) [INFO] {"deploy_stdout": "Trying to ping metadata IP 192.168.24.1...SUCCESS\n", "deploy_stderr": "+ '[' -n '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.11.5.4\", \"10.11.5.3\"], \"name\": \"em4\", \"routes\": [{\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}, {\"default\": true, \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"bonding_options\": \"mode=802.3ad updelay=1000 miimon=100\", \"dns_servers\": [\"10.11.5.4\", \"10.11.5.3\"], \"members\": [{\"name\": \"p1p1\", \"primary\": true, \"type\": \"interface\"}, {\"name\": \"p1p2\", \"type\": \"interface\"}], \"name\": \"bond_api\", \"type\": \"linux_bond\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"172.17.0.27/24\"}], \"device\": \"bond_api\", \"type\": \"vlan\", \"vlan_id\": 201}, {\"addresses\": [{\"ip_netmask\": \"172.20.0.25/24\"}], \"device\": \"bond_api\", \"type\": \"vlan\", \"vlan_id\": 204}, {\"defroute\": false, \"name\": \"em1\", \"type\": \"interface\", \"use_dhcp\": true}, {\"members\": [{\"bonding_options\": \"mode=802.3ad updelay=1000 miimon=100\", \"members\": [{\"ethtool_opts\": \"-L ${DEVICE} combined 30\", \"name\": \"p2p1\", \"type\": \"interface\"}, {\"ethtool_opts\": \"-L ${DEVICE} combined 30\", \"name\": \"p2p2\", \"type\": \"interface\"}], \"name\": \"bond_tenant\", \"type\": \"linux_bond\"}, {\"addresses\": [{\"ip_netmask\": \"172.18.0.23/24\"}], \"type\": \"vlan\", \"vlan_id\": 202}], \"mtu\": 9000, \"name\": \"br-tenant\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}' ']'\n+ '[' -z '' ']'\n+ trap configure_safe_defaults EXIT\n+ mkdir -p /etc/os-net-config\n+ echo '{\"network_config\": [{\"addresses\": [{\"ip_netmask\": \"192.168.24.10/24\"}], \"dns_servers\": [\"10.11.5.4\", \"10.11.5.3\"], \"name\": \"em4\", \"routes\": [{\"ip_netmask\": \"169.254.169.254/32\", \"next_hop\": \"192.168.24.1\"}, {\"default\": true, \"next_hop\": \"192.168.24.1\"}], \"type\": \"interface\", \"use_dhcp\": false}, {\"bonding_options\": \"mode=802.3ad updelay=1000 miimon=100\", \"dns_servers\": [\"10.11.5.4\", \"10.11.5.3\"], \"members\": [{\"name\": \"p1p1\", \"primary\": true, \"type\": \"interface\"}, {\"name\": \"p1p2\", \"type\": \"interface\"}], \"name\": \"bond_api\", \"type\": \"linux_bond\", \"use_dhcp\": false}, {\"addresses\": [{\"ip_netmask\": \"172.17.0.27/24\"}], \"device\": \"bond_api\", \"type\": \"vlan\", \"vlan_id\": 201}, {\"addresses\": [{\"ip_netmask\": \"172.20.0.25/24\"}], \"device\": \"bond_api\", \"type\": \"vlan\", \"vlan_id\": 204}, {\"defroute\": false, \"name\": \"em1\", \"type\": \"interface\", \"use_dhcp\": true}, {\"members\": [{\"bonding_options\": \"mode=802.3ad updelay=1000 miimon=100\", \"members\": [{\"ethtool_opts\": \"-L ${DEVICE} combined 30\", \"name\": \"p2p1\", \"type\": \"interface\"}, {\"ethtool_opts\": \"-L ${DEVICE} combined 30\", \"name\": \"p2p2\", \"type\": \"interface\"}], \"name\": \"bond_tenant\", \"type\": \"linux_bond\"}, {\"addresses\": [{\"ip_netmask\": \"172.18.0.23/24\"}], \"type\": \"vlan\", \"vlan_id\": 202}], \"mtu\": 9000, \"name\": \"br-tenant\", \"type\": \"ovs_bridge\", \"use_dhcp\": false}]}'\n++ type -t network_config_hook\n+ '[' '' = function ']'\n+ sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json\n+ sed -i s/interface_name/nic1/ /etc/os-net-config/config.json\n+ set +e\n+ os-net-config -c /etc/os-net-config/config.json 
-v --detailed-exit-codes\n[2020/02/18 05:20:44 PM] [INFO] Using config file at: /etc/os-net-config/config.json\n[2020/02/18 05:20:44 PM] [INFO] Ifcfg net config provider created.\n[2020/02/18 05:20:44 PM] [INFO] Not using any mapping file.\n[2020/02/18 05:20:45 PM] [INFO] Finding active nics\n[2020/02/18 05:20:45 PM] [INFO] bonding_masters is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] docker0 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] br-external is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] bond0 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] qvo8aaccc7a-da is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] p2p2 is an active nic\n[2020/02/18 05:20:45 PM] [INFO] p2p1 is an active nic\n[2020/02/18 05:20:45 PM] [INFO] p1p2 is an active nic\n[2020/02/18 05:20:45 PM] [INFO] p1p1 is an active nic\n[2020/02/18 05:20:45 PM] [INFO] lo is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] em3 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] em2 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] em1 is an embedded active nic\n[2020/02/18 05:20:45 PM] [INFO] em4 is an embedded active nic\n[2020/02/18 05:20:45 PM] [INFO] br-int is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] br-tun is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] bond_tenant is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] ovs-system is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] br-tenant is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] tap8aaccc7a-da is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] qvb8aaccc7a-da is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] qbr8aaccc7a-da is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] bond_api is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] vxlan_sys_4789 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] vlan201 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] vlan204 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] vlan202 is not an active nic\n[2020/02/18 05:20:45 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)\n[2020/02/18 05:20:45 PM] [INFO] Active nics are ['em1', 'em4', 'p1p1', 'p1p2', 'p2p1', 'p2p2']\n[2020/02/18 05:20:45 PM] [INFO] nic3 mapped to: p1p1\n[2020/02/18 05:20:45 PM] [INFO] nic4 mapped to: p1p2\n[2020/02/18 05:20:45 PM] [INFO] nic2 mapped to: em4\n[2020/02/18 05:20:45 PM] [INFO] nic6 mapped to: p2p2\n[2020/02/18 05:20:45 PM] [INFO] nic5 mapped to: p2p1\n[2020/02/18 05:20:45 PM] [INFO] nic1 mapped to: em1\n[2020/02/18 05:20:45 PM] [INFO] adding interface: em4\n[2020/02/18 05:20:45 PM] [INFO] adding custom route for interface: em4\n[2020/02/18 05:20:45 PM] [INFO] adding linux bond: bond_api\n[2020/02/18 05:20:45 PM] [INFO] adding interface: p1p1\n[2020/02/18 05:20:45 PM] [INFO] adding interface: p1p2\n[2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan201\n[2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan204\n[2020/02/18 05:20:45 PM] [INFO] adding interface: em1\n[2020/02/18 05:20:45 PM] [INFO] adding bridge: br-tenant\n[2020/02/18 05:20:45 PM] [INFO] adding linux bond: bond_tenant\n[2020/02/18 05:20:45 PM] [INFO] adding interface: p2p1\n[2020/02/18 05:20:45 PM] [INFO] adding interface: p2p2\n[2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan202\n[2020/02/18 05:20:45 PM] [INFO] applying network configs...\n[2020/02/18 05:20:45 PM] [INFO] No changes required for interface: p1p1\n[2020/02/18 05:20:45 PM] [INFO] No changes required for interface: p1p2\n[2020/02/18 05:20:45 PM] [INFO] No changes required for interface: em4\n[2020/02/18 
05:20:45 PM] [INFO] No changes required for interface: em1\n[2020/02/18 05:20:45 PM] [INFO] No changes required for bridge: br-tenant\n[2020/02/18 05:20:45 PM] [INFO] No changes required for linux bond: bond_api\n[2020/02/18 05:20:45 PM] [INFO] No changes required for linux bond: bond_tenant\n[2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan201\n[2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan202\n[2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan204\n[2020/02/18 05:20:45 PM] [INFO] running ifdown on interface: p2p2\n[2020/02/18 05:20:45 PM] [INFO] running ifdown on interface: p2p1\n[2020/02/18 05:20:45 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-p2p2\n[2020/02/18 05:20:45 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-p2p1\n[2020/02/18 05:20:45 PM] [INFO] running ifup on interface: p2p2\n[2020/02/18 05:20:46 PM] [INFO] running ifup on interface: p2p1\n+ RETVAL=2\n+ set -e\n+ [[ 2 == 2 ]]\n+ ping_metadata_ip\n++ get_metadata_ip\n++ local METADATA_IP\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=\n++ '[' -n '' ']'\n++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url\n+++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw\n+++ sed -e 's|http.*://\\[\\?\\([^]]*\\)]\\?:.*|\\1|'\n++ METADATA_IP=192.168.24.1\n++ '[' -n 192.168.24.1 ']'\n++ break\n++ echo 192.168.24.1\n+ local METADATA_IP=192.168.24.1\n+ '[' -n 192.168.24.1 ']'\n+ is_local_ip 192.168.24.1\n+ local IP_TO_CHECK=192.168.24.1\n+ ip -o a\n+ grep 'inet6\\? 192.168.24.1/'\n+ return 1\n+ echo -n 'Trying to ping metadata IP 192.168.24.1...'\n++ getent hosts 192.168.24.1\n++ awk '{ print $1 }'\n+ _IP=192.168.24.1\n+ _ping=ping\n+ [[ 192.168.24.1 =~ : ]]\n+ local COUNT=0\n+ ping -c 1 192.168.24.1\n+ echo SUCCESS\n+ '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'\n+ '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'\n+ configure_safe_defaults\n+ [[ 0 == 0 ]]\n+ return 0\n", "deploy_status_code": 0}
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,469] (heat-config) [DEBUG] [2020-02-18 17:20:44,262] (heat-config) [INFO] interface_name=nic1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] bridge_name=br-ex
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_server_id=9ed34587-43f3-4001-8200-47f5b6c0c4a6
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_action=UPDATE
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_stack_id=overcloud-Compute-3icucweglm2x-1-crcpacdn4mw4/1631dafa-9d6b-4638-8317-fda75dd3a343
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_resource_name=NetworkDeployment
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_signal_transport=CFN_SIGNAL
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_signal_id=http://192.168.24.1:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A1fd77015bc854ceb8f6a9e558da3b3de%3Astacks/overcloud-Compute-3icucweglm2x-1-crcpacdn4mw4/1631dafa-9d6b-4638-8317-fda75dd3a343/resources/NetworkDeployment?Timestamp=2020-02-13T14%3A25%3A21Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=eb7a15e3692e44bf98de76aa482dca24&SignatureVersion=2&Signature=EwP2%2F5J9lnEEJnrVRuYQ9E%2Fy0b5yn4ne05feJNFJ7J4%3D
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,262] (heat-config) [INFO] deploy_signal_verb=POST
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:44,263] (heat-config) [DEBUG] Running /var/lib/heat-config/heat-config-script/e01ba4ba-0e6a-42f5-bdd2-7ee1a11fef32
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,464] (heat-config) [INFO] Trying to ping metadata IP 192.168.24.1...SUCCESS
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,465] (heat-config) [DEBUG] + '[' -n '{"network_config": [{"addresses": [{"ip_netmask": "192.168.24.10/24"}], "dns_servers": ["10.11.5.4", "10.11.5.3"], "name": "em4", "routes": [{"ip_netmask": "169.254.169.254/32", "next_hop": "192.168.24.1"}, {"default": true, "next_hop": "192.168.24.1"}], "type": "interface", "use_dhcp": false}, {"bonding_options": "mode=802.3ad updelay=1000 miimon=100", "dns_servers": ["10.11.5.4", "10.11.5.3"], "members": [{"name": "p1p1", "primary": true, "type": "interface"}, {"name": "p1p2", "type": "interface"}], "name": "bond_api", "type": "linux_bond", "use_dhcp": false}, {"addresses": [{"ip_netmask": "172.17.0.27/24"}], "device": "bond_api", "type": "vlan", "vlan_id": 201}, {"addresses": [{"ip_netmask": "172.20.0.25/24"}], "device": "bond_api", "type": "vlan", "vlan_id": 204}, {"defroute": false, "name": "em1", "type": "interface", "use_dhcp": true}, {"members": [{"bonding_options": "mode=802.3ad updelay=1000 miimon=100", "members": [{"ethtool_opts": "-L ${DEVICE} combined 30", "name": "p2p1", "type": "interface"}, {"ethtool_opts": "-L ${DEVICE} combined 30", "name": "p2p2", "type": "interface"}], "name": "bond_tenant", "type": "linux_bond"}, {"addresses": [{"ip_netmask": "172.18.0.23/24"}], "type": "vlan", "vlan_id": 202}], "mtu": 9000, "name": "br-tenant", "type": "ovs_bridge", "use_dhcp": false}]}' ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' -z '' ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + trap configure_safe_defaults EXIT
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + mkdir -p /etc/os-net-config
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + echo '{"network_config": [{"addresses": [{"ip_netmask": "192.168.24.10/24"}], "dns_servers": ["10.11.5.4", "10.11.5.3"], "name": "em4", "routes": [{"ip_netmask": "169.254.169.254/32", "next_hop": "192.168.24.1"}, {"default": true, "next_hop": "192.168.24.1"}], "type": "interface", "use_dhcp": false}, {"bonding_options": "mode=802.3ad updelay=1000 miimon=100", "dns_servers": ["10.11.5.4", "10.11.5.3"], "members": [{"name": "p1p1", "primary": true, "type": "interface"}, {"name": "p1p2", "type": "interface"}], "name": "bond_api", "type": "linux_bond", "use_dhcp": false}, {"addresses": [{"ip_netmask": "172.17.0.27/24"}], "device": "bond_api", "type": "vlan", "vlan_id": 201}, {"addresses": [{"ip_netmask": "172.20.0.25/24"}], "device": "bond_api", "type": "vlan", "vlan_id": 204}, {"defroute": false, "name": "em1", "type": "interface", "use_dhcp": true}, {"members": [{"bonding_options": "mode=802.3ad updelay=1000 miimon=100", "members": [{"ethtool_opts": "-L ${DEVICE} combined 30", "name": "p2p1", "type": "interface"}, {"ethtool_opts": "-L ${DEVICE} combined 30", "name": "p2p2", "type": "interface"}], "name": "bond_tenant", "type": "linux_bond"}, {"addresses": [{"ip_netmask": "172.18.0.23/24"}], "type": "vlan", "vlan_id": 202}], "mtu": 9000, "name": "br-tenant", "type": "ovs_bridge", "use_dhcp": false}]}'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ type -t network_config_hook
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' '' = function ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + sed -i s/bridge_name/br-ex/ /etc/os-net-config/config.json
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + sed -i s/interface_name/nic1/ /etc/os-net-config/config.json
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + set +e
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + os-net-config -c /etc/os-net-config/config.json -v --detailed-exit-codes
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:44 PM] [INFO] Using config file at: /etc/os-net-config/config.json
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:44 PM] [INFO] Ifcfg net config provider created.
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:44 PM] [INFO] Not using any mapping file.
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] Finding active nics
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] bonding_masters is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] docker0 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] br-external is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] bond0 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] qvo8aaccc7a-da is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] p2p2 is an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] p2p1 is an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] p1p2 is an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] p1p1 is an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] lo is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] em3 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] em2 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] em1 is an embedded active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] em4 is an embedded active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] br-int is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] br-tun is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] bond_tenant is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] ovs-system is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] br-tenant is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] tap8aaccc7a-da is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] qvb8aaccc7a-da is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] qbr8aaccc7a-da is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] bond_api is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] vxlan_sys_4789 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] vlan201 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] vlan204 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] vlan202 is not an active nic
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No DPDK mapping available in path (/var/lib/os-net-config/dpdk_mapping.yaml)
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] Active nics are ['em1', 'em4', 'p1p1', 'p1p2', 'p2p1', 'p2p2']
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic3 mapped to: p1p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic4 mapped to: p1p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic2 mapped to: em4
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic6 mapped to: p2p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic5 mapped to: p2p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] nic1 mapped to: em1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: em4
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding custom route for interface: em4
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding linux bond: bond_api
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: p1p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: p1p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan201
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan204
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: em1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding bridge: br-tenant
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding linux bond: bond_tenant
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: p2p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding interface: p2p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] adding vlan: vlan202
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] applying network configs...
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for interface: p1p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for interface: p1p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for interface: em4
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for interface: em1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for bridge: br-tenant
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for linux bond: bond_api
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for linux bond: bond_tenant
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan201
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan202
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] No changes required for vlan interface: vlan204
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] running ifdown on interface: p2p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] running ifdown on interface: p2p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-p2p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] Writing config /etc/sysconfig/network-scripts/ifcfg-p2p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:45 PM] [INFO] running ifup on interface: p2p2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020/02/18 05:20:46 PM] [INFO] running ifup on interface: p2p1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + RETVAL=2
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + set -e
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + [[ 2 == 2 ]]
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + ping_metadata_ip
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ get_metadata_ip
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ local METADATA_IP
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ os-apply-config --key os-collect-config.cfn.metadata_url --key-default '' --type raw
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ sed -e 's|http.*://\[\?\([^]]*\)]\?:.*|\1|'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ METADATA_IP=
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ '[' -n '' ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ os-apply-config --key os-collect-config.heat.auth_url --key-default '' --type raw
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ sed -e 's|http.*://\[\?\([^]]*\)]\?:.*|\1|'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ METADATA_IP=
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ '[' -n '' ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ for URL in os-collect-config.cfn.metadata_url os-collect-config.heat.auth_url os-collect-config.request.metadata_url os-collect-config.zaqar.auth_url
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ os-apply-config --key os-collect-config.request.metadata_url --key-default '' --type raw
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: +++ sed -e 's|http.*://\[\?\([^]]*\)]\?:.*|\1|'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ METADATA_IP=192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ '[' -n 192.168.24.1 ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ break
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ echo 192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + local METADATA_IP=192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' -n 192.168.24.1 ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + is_local_ip 192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + local IP_TO_CHECK=192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + ip -o a
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + grep 'inet6\? 192.168.24.1/'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + return 1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + echo -n 'Trying to ping metadata IP 192.168.24.1...'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ getent hosts 192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: ++ awk '{ print $1 }'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + _IP=192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + _ping=ping
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + [[ 192.168.24.1 =~ : ]]
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + local COUNT=0
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + ping -c 1 192.168.24.1
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + echo SUCCESS
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' -f /etc/udev/rules.d/99-dhcp-all-interfaces.rules ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/config.json ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + '[' -f /usr/libexec/os-apply-config/templates/etc/os-net-config/element_config.json ']'
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + configure_safe_defaults
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + [[ 0 == 0 ]]
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: + return 0
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,465] (heat-config) [INFO] Completed /var/lib/heat-config/heat-config-script/e01ba4ba-0e6a-42f5-bdd2-7ee1a11fef32
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,469] (heat-config) [INFO] Completed /usr/libexec/heat-config/hooks/script
Feb 18 17:20:47 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:47,470] (heat-config) [DEBUG] Running heat-config-notify /var/lib/heat-config/deployed/e01ba4ba-0e6a-42f5-bdd2-7ee1a11fef32.json < /var/lib/heat-config/deployed/e01ba4ba-0e6a-42f5-bdd2-7ee1a11fef32.notify.json
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,692] (heat-config) [INFO]
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,692] (heat-config) [DEBUG] [2020-02-18 17:20:48,002] (heat-config-notify) [DEBUG] Signaling to http://192.168.24.1:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A1fd77015bc854ceb8f6a9e558da3b3de%3Astacks/overcloud-Compute-3icucweglm2x-1-crcpacdn4mw4/1631dafa-9d6b-4638-8317-fda75dd3a343/resources/NetworkDeployment?Timestamp=2020-02-13T14%3A25%3A21Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=eb7a15e3692e44bf98de76aa482dca24&SignatureVersion=2&Signature=EwP2%2F5J9lnEEJnrVRuYQ9E%2Fy0b5yn4ne05feJNFJ7J4%3D via POST
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,660] (heat-config-notify) [DEBUG] Response <Response [200]>
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,692] (heat-config) [WARNING] Skipping config d82b2b8f-3620-44c2-b5d6-02e79fb722dc, already deployed
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,692] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/d82b2b8f-3620-44c2-b5d6-02e79fb722dc.json
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,693] (heat-config) [WARNING] Skipping config 4f45bc43-fc9d-453e-89e9-5cc2c67df112, already deployed
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,693] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/4f45bc43-fc9d-453e-89e9-5cc2c67df112.json
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,693] (heat-config) [WARNING] Skipping config 3be61913-a888-4425-8ba3-e0d1b2b619ec, already deployed
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,693] (heat-config) [WARNING] To force-deploy, rm /var/lib/heat-config/deployed/3be61913-a888-4425-8ba3-e0d1b2b619ec.json
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 55-heat-config completed
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ----------------------- PROFILING -----------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Target: configure.d
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Script                                     Seconds
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ---------------------------------------  ----------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 20-os-apply-config                            0.194
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 50-heat-config-docker-cmd                     0.237
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 55-heat-config                                7.509
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 --------------------- END PROFILING ---------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,732] (os-refresh-config) [INFO] Completed phase configure
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,732] (os-refresh-config) [INFO] Starting phase post-configure
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ----------------------- PROFILING -----------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Target: post-configure.d
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Script                                     Seconds
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ---------------------------------------  ----------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 --------------------- END PROFILING ---------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,755] (os-refresh-config) [INFO] Completed phase post-configure
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,756] (os-refresh-config) [INFO] Starting phase migration
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ----------------------- PROFILING -----------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Target: migration.d
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 Script                                     Seconds
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 ---------------------------------------  ----------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: dib-run-parts Tue Feb 18 17:20:51 UTC 2020 --------------------- END PROFILING ---------------------
Feb 18 17:20:51 overcloud-compute-1 os-collect-config[8229]: [2020-02-18 17:20:51,778] (os-refresh-config) [INFO] Completed phase migration
[root@overcloud-compute-1 ~]# 

~~~

Comment 3 Andreas Karis 2020-02-18 17:28:24 UTC
This happens after updating the templates with:
~~~
(undercloud) [stack@undercloud-0 ~]$ tail octavia/network-environment.yaml -n 2

  NetworkDeploymentActions: ['CREATE','UPDATE']
(undercloud) [stack@undercloud-0 ~]$ grep ethtool octavia/nic-configs/compute.yaml 
                      ethtool_opts: "-L ${DEVICE} combined 30"
                      ethtool_opts: "-L ${DEVICE} combined 30"
~~~

Comment 4 Dan Sneddon 2020-02-19 02:13:05 UTC
Andreas,

This looks like a legitimate bug. os-net-config is supposed to restart a parent interface whenever its member interfaces are restarted. The Linux bond should be flapped if its slaves were restarted, and if the Linux bond is flapped, then the OVS bridge on top of it should be flapped as well.

Additionally, the MTU should match between members and parents: a bridge or bond should have the same MTU as its slave interfaces. With the original setup, the OVS bridge should have its MTU changed to match the MTU on the bond's slaves. I'm not sure this will fix your issue, however, because interfaces are not restarted simply for MTU changes; instead, the MTU is changed on the running interface using 'ip' commands.

              - type: ovs_bridge
                name: br-s1
                mtu: 9000    # <-- MTU changed
                members:
                  - type: linux_bond
                    name: bond1
                    mtu: 9000
                    bonding_options: 'mode=4 lacp=passive lacp_rate=fast miimon=50'
                    members:
                      - type: interface
                        name: eno5
                        mtu: 9000
                        primary: true
                        ethtool_opts: "-L ${DEVICE} combined 30; -G ${DEVICE} rx 8192 tx 8192"
                      - type: interface
                        name: eno6
                        mtu: 9000
                        ethtool_opts: "-L ${DEVICE} combined 30; -G ${DEVICE} rx 8192 tx 8192"

I suspect that because os-net-config sees the MTU change as the only change to the Linux bond, it decides that the bond does not have to be restarted. This probably overrides the behavior where the parent interface should be restarted because its slave interfaces were restarted.
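
As a minimal sketch of this suspicion (the function and parameter names below are hypothetical illustrations, not the actual os-net-config code): an MTU-only change is applied live and records no restart, so the bond's restart check never fires even though its slaves were flapped.
~~~
# Hypothetical sketch of the suspected logic; apply_mtu_live and
# bond_needs_restart are illustrative names, not the real os-net-config API.
import subprocess

def apply_mtu_live(device, mtu):
    # MTU-only changes are applied to the running interface with 'ip',
    # so no ifdown/ifup (and therefore no restart) is recorded for the device.
    subprocess.check_call(['ip', 'link', 'set', 'dev', device, 'mtu', str(mtu)])

def bond_needs_restart(bond_config_changed, mtu_only_change, members_restarted):
    if bond_config_changed and not mtu_only_change:
        return True
    # Suspected bug: when the bond's own change is MTU-only, this returns
    # False even when members_restarted is True, so ifup never runs on the
    # bond after its slaves come back up.
    return False  # arguably should be: return members_restarted
~~~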

Comment 5 Andreas Karis 2020-02-19 10:21:45 UTC
Hi,

If we leave aside the customer example (which also contains an incorrect lacp=passive in the bonding_options), I can reproduce the issue in my lab simply by changing the ethtool options:
~~~
[stack@undercloud-0 ~]$ diff -r  -U 15 -N octavia.orig/ octavia
diff -r -U 15 -N octavia.orig/network-environment.yaml octavia/network-environment.yaml
--- octavia.orig/network-environment.yaml	2020-02-19 05:19:18.628920134 -0500
+++ octavia/network-environment.yaml	2020-02-18 12:01:04.178744309 -0500
@@ -35,15 +35,16 @@
   NeutronDhcpAgentDnsmasqDnsServers: ["10.11.5.4","10.11.5.3"]
   NtpServer: "10.5.26.10"
   
   # Set to "br-ex" if using floating IPs on native VLAN on bridge br-ex
   NeutronExternalNetworkBridge: "''"
   # The OVS logical->physical bridge mappings to use.
   NeutronBridgeMappings: 'tenant:br-tenant,external:br-external'
   # The Neutron ML2 and OpenVSwitch vlan mapping range to support.
   NeutronNetworkVLANRanges: 'tenant:205:209' 
   NeutronFlatNetworks: 'external'
 
   NeutronEnableIsolatedMetadata: 'True'
   NeutronTunnelTypes: 'vxlan'
   NeutronNetworkType: 'vxlan'
 
+  NetworkDeploymentActions: ['CREATE','UPDATE']
diff -r -U 15 -N octavia.orig/nic-configs/compute.yaml octavia/nic-configs/compute.yaml
--- octavia.orig/nic-configs/compute.yaml	2020-02-19 05:19:38.451773653 -0500
+++ octavia/nic-configs/compute.yaml	2020-02-18 12:00:44.369886064 -0500
@@ -203,28 +203,30 @@
                 name: em1
                 use_dhcp: true
                 defroute: false
               -
                 type: ovs_bridge
                 name: br-tenant
                 mtu: 9000
                 use_dhcp: false
                 members:
                 - type: linux_bond
                   name: bond_tenant
                   bonding_options: "mode=802.3ad updelay=1000 miimon=100"
                   members:
                     - type: interface
                       name: p2p1
+                      ethtool_opts: "-L ${DEVICE} combined 30"
                     - type: interface
                       name: p2p2
+                      ethtool_opts: "-L ${DEVICE} combined 30"
                 -
                   type: vlan
                   vlan_id: {get_param: TenantNetworkVlanID}
                   addresses:
                     -
                       ip_netmask: {get_param: TenantIpSubnet}
 
 outputs:
   OS::stack_id:
     description: The OsNetConfigImpl resource.
     value: {get_resource: OsNetConfigImpl}
~~~

~~~
[stack@undercloud-0 ~]$ cat /etc/rhosp-release 
Red Hat OpenStack Platform release 13.0.10 (Queens)
[root@overcloud-compute-0 ~]# rpm -qa | grep os-net-config
os-net-config-8.5.1-1.el7ost.noarch
~~~
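
To confirm the failure mode after the update, the bond state can be checked directly; a minimal sketch using standard Linux sysfs paths (this check is illustrative and was not part of the original reproduction steps):
~~~
# Quick check (illustrative, not from this reproduction) of whether the bond
# came back up after its slaves were flapped; uses standard Linux sysfs paths.
from pathlib import Path

def bond_state(bond='bond_tenant'):
    base = Path('/sys/class/net') / bond
    operstate = (base / 'operstate').read_text().strip()        # 'up'/'down'
    slaves = (base / 'bonding' / 'slaves').read_text().split()  # enslaved NICs
    return operstate, slaves

# With this bug, the bond can be left down because ifup was never run on it.
print(bond_state())
~~~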

I can provide further info if needed.

Comment 6 Andreas Karis 2020-02-19 10:23:59 UTC
I wish I hadn't been so quick on the "send" button.

This here is the configuration change in my lab:
https://bugzilla.redhat.com/show_bug.cgi?id=1804303#c5

And this here is the result in my lab: 
https://bugzilla.redhat.com/show_bug.cgi?id=1804303#c2

So I can easily trigger and reproduce this bug in my lab.

Comment 9 Bob Fournier 2020-04-09 16:02:25 UTC
Ken - yes, the backport has been proposed upstream to Queens and will be backported downstream once it merges, in order to get it into OSP-13 z12.

Comment 10 Bob Fournier 2020-04-13 17:22:49 UTC
*** Bug 1814355 has been marked as a duplicate of this bug. ***

Comment 17 Bob Fournier 2020-06-04 19:02:13 UTC
Verified fix is in puddle 2020-05-28.2.

Comment 21 errata-xmlrpc 2020-06-24 11:33:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2718

Comment 22 Red Hat Bugzilla 2024-01-06 04:28:04 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

