Description of problem:

There is a known issue (ref https://github.com/metal3-io/baremetal-operator/issues/458) where inspection data isn't handled correctly in the dual-stack case; we now need to fix that for dual-stack to work correctly.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Deploy in a dual-stack configuration (IPv4 and IPv6 enabled on the machineNetwork)
2. Observe that there is only an IPv4 address in the BareMetalHost status field

Actual results:

In the BMH resources we see something similar to:

  hostname: worker-0
  nics:
  - ip: fd00:1101::1d38:f2bd:f996:72ce
    mac: 00:13:a6:60:73:93
    model: 0x1af4 0x0001
    name: enp1s0
    pxe: true
    speedGbps: 0
    vlanId: 0
  - ip: 192.168.111.23
    mac: 00:13:a6:60:73:95
    model: 0x1af4 0x0001
    name: enp2s0

enp1s0 is the provisioning NIC; enp2s0 is the control-plane NIC, which should be configured on the machine network with both IPv4 and IPv6 in a dual-stack scenario.

In the inspection data we see that enp2s0 on worker-0 has both IPv4 and IPv6 addresses:

  2021-01-04T01:07:45.319967318Z 2021-01-04 01:07:45.318 1 DEBUG ironic_inspector.main [req-4893e33b-8eaa-4159-98cb-97ca75b85e3b - - - - -] [node: MAC 00:13:a6:60:73:93] Received data from the ramdisk: {'inventory': {'interfaces': [{'name': 'enp1s0', 'mac_address': '00:13:a6:60:73:93', 'ipv4_address': None, 'ipv6_address': 'fd00:1101::1d38:f2bd:f996:72ce', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}, {'name': 'enp2s0', 'mac_address': '00:13:a6:60:73:95', 'ipv4_address': '192.168.111.23', 'ipv6_address': 'fd2e:6f44:5dd8:c956::17', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}]

Expected results:

The nics section of the BMH status should include both IPs for enp2s0.

Additional info:

As discussed on the upstream issue, we need to decide on the appropriate interface for this: either add another nic entry with the same name but a different IP, or allow "ip" to contain a list. The former is probably simpler and less impactful to existing code/users of the BMH API.
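The duplicate-entry approach could be sketched roughly as below. This is an illustrative sketch only, not the actual baremetal-operator code: the struct names (introspectionInterface, nic) and the helper expandNICs are hypothetical stand-ins for the real inspector inventory and BMH status types.

```go
package main

import "fmt"

// introspectionInterface mirrors the shape of the inspector interface
// data shown above (hypothetical type, not the real BMO one).
type introspectionInterface struct {
	Name        string
	MAC         string
	IPv4Address string // empty when the inspector reports None
	IPv6Address string
}

// nic is a simplified stand-in for a BareMetalHost status NIC entry.
type nic struct {
	Name string
	MAC  string
	IP   string
}

// expandNICs emits one NIC entry per address family, so an interface
// with both an IPv4 and an IPv6 address yields two entries with the
// same name (the "former" option discussed above).
func expandNICs(ifaces []introspectionInterface) []nic {
	var nics []nic
	for _, iface := range ifaces {
		for _, ip := range []string{iface.IPv4Address, iface.IPv6Address} {
			if ip == "" {
				continue
			}
			nics = append(nics, nic{Name: iface.Name, MAC: iface.MAC, IP: ip})
		}
	}
	return nics
}

func main() {
	// The two interfaces from the worker-0 inspection data above.
	ifaces := []introspectionInterface{
		{Name: "enp1s0", MAC: "00:13:a6:60:73:93", IPv6Address: "fd00:1101::1d38:f2bd:f996:72ce"},
		{Name: "enp2s0", MAC: "00:13:a6:60:73:95", IPv4Address: "192.168.111.23", IPv6Address: "fd2e:6f44:5dd8:c956::17"},
	}
	for _, n := range expandNICs(ifaces) {
		fmt.Printf("%s %s %s\n", n.Name, n.IP, n.MAC)
	}
}
```

With this shape, enp2s0 produces two status entries (one per address family) while existing single-stack consumers of the NIC list keep working unchanged.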
Upstream PR pushed: https://github.com/metal3-io/baremetal-operator/pull/758. I'll set this to POST when the downstream backport is available.
Verified with the following steps, on a dual-stack machine with the version:

  [kni@provisionhost-0-0 ~]$ oc version
  Client Version: 4.7.0-0.nightly-2021-01-09-021054
  Server Version: 4.7.0-0.nightly-2021-01-09-021054
  Kubernetes Version: v1.20.0+6313d1d

Steps:
------
1. Deployed a dual-stack env
2. Observed worker-0-1 with:

  $ oc describe bmh openshift-worker-0-1 -n openshift-machine-api
  Hostname: worker-0-1.ocp-edge-cluster-0.qe.lab.redhat.com
  Nics:
    Ip:          192.168.123.107
    Mac:         52:54:00:83:da:cf
    Model:       0x1af4 0x0001
    Name:        enp5s0
    Pxe:         false
    Speed Gbps:  0
    Vlan Id:     0
    Ip:          fd2e:6f44:5dd8::10f
    Mac:         52:54:00:83:da:cf
    Model:       0x1af4 0x0001
    Name:        enp5s0
    Pxe:         false
    Speed Gbps:  0
    Vlan Id:     0
    Ip:          fd00:1101::ac0e:daa2:1d94:5906
    Mac:         52:54:00:55:75:eb
    Model:       0x1af4 0x0001
    Name:        enp4s0
    Pxe:         true
    Speed Gbps:  0
    Vlan Id:     0

We can see now that the control-plane NIC (i.e. enp5s0) is configured with both IPv4 and IPv6 addresses.

For the inspection part of the worker-0-1 node:

  $ oc logs metal3-5c59cc865-j6zd5 -n openshift-machine-api -c metal3-ironic-inspector | grep 192.168.123.107
  2021-01-10 09:30:51.274 1 DEBUG ironic_inspector.main [req-698fb2df-eb7e-418f-a239-29d17e15e9dd - - - - -] [node: MAC 52:54:00:55:75:eb] Received data from the ramdisk: {'inventory': {'interfaces': [{'name': 'enp5s0', 'mac_address': '52:54:00:83:da:cf', 'ipv4_address': '192.168.123.107', 'ipv6_address': 'fd2e:6f44:5dd8::10f', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}, {'name': 'enp4s0', 'mac_address': '52:54:00:55:75:eb', 'ipv4_address': None, 'ipv6_address': 'fd00:1101::ac0e:daa2:1d94:5906', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}]
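The manual check above (does the control-plane NIC now show both address families?) can be expressed programmatically. This is a hypothetical helper, not part of the operator or the verification tooling; it simply classifies the IP strings from the BMH status entries using the standard library:

```go
package main

import (
	"fmt"
	"net"
)

// hasDualStack reports whether the given IP strings (the per-NIC
// entries from a BareMetalHost status) include both an IPv4 and an
// IPv6 address, as expected for the control-plane NIC in dual-stack.
func hasDualStack(ips []string) bool {
	var v4, v6 bool
	for _, s := range ips {
		ip := net.ParseIP(s)
		if ip == nil {
			continue // skip anything that isn't a valid address
		}
		if ip.To4() != nil {
			v4 = true
		} else {
			v6 = true
		}
	}
	return v4 && v6
}

func main() {
	// The addresses reported for enp5s0 in the verification output above.
	fmt.Println(hasDualStack([]string{"192.168.123.107", "fd2e:6f44:5dd8::10f"})) // prints "true"
}
```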
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633