Bug 1912701 - Handle dual-stack configuration for NIC IP
Summary: Handle dual-stack configuration for NIC IP
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Bare Metal Hardware Provisioning
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Steven Hardy
QA Contact: Shelly Miron
URL:
Whiteboard:
Depends On:
Blocks: 1897336 1907639
 
Reported: 2021-01-05 09:23 UTC by Steven Hardy
Modified: 2021-03-23 20:20 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:49:43 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github metal3-io baremetal-operator pull 758 0 None closed Handle dual-stack configuration in inspection data 2021-01-19 14:36:06 UTC
Github openshift baremetal-operator pull 118 0 None closed Bug 1912701: Handle dual-stack configuration in inspection data 2021-01-19 14:36:07 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:50:10 UTC

Description Steven Hardy 2021-01-05 09:23:59 UTC
Description of problem:

There is a known issue (ref https://github.com/metal3-io/baremetal-operator/issues/458) where inspection data is not handled correctly in the dual-stack case; we now need to fix that for dual-stack to work correctly.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Deploy in a dual-stack configuration (IPv4 and IPv6 enabled on the machineNetwork)
2. Observe that there is only an IPv4 address in the baremetalhost status field


Actual results:

In the bmh resources we see something similar to:

    hostname: worker-0
    nics:
    - ip: fd00:1101::1d38:f2bd:f996:72ce
      mac: 00:13:a6:60:73:93
      model: 0x1af4 0x0001
      name: enp1s0
      pxe: true
      speedGbps: 0
      vlanId: 0
    - ip: 192.168.111.23
      mac: 00:13:a6:60:73:95
      model: 0x1af4 0x0001
      name: enp2s0


enp1s0 is the provisioning NIC; enp2s0 is the control-plane NIC, which should be configured on the machine network with both IPv4 and IPv6 in a dual-stack scenario.

In the inspection data we see that enp2s0 on worker-0 has both IPv4 and IPv6 addresses:

 2021-01-04T01:07:45.319967318Z 2021-01-04 01:07:45.318 1 DEBUG ironic_inspector.main [req-4893e33b-8eaa-4159-98cb-97ca75b85e3b - - - - -] [node: MAC 00:13:a6:60:73:93] Received data from the ramdisk: {'inventory': {'interfaces': [{'name': 'enp1s0', 'mac_address': '00:13:a6:60:73:93', 'ipv4_address': None, 'ipv6_address': 'fd00:1101::1d38:f2bd:f996:72ce', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}, {'name': 'enp2s0', 'mac_address': '00:13:a6:60:73:95', 'ipv4_address': '192.168.111.23', 'ipv6_address': 'fd2e:6f44:5dd8:c956::17', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}]
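To make the payload above easier to read, here is a short illustrative sketch (not part of the operator code) that extracts the per-interface addresses from a trimmed copy of the inventory data in the log, showing that enp2s0 carries both address families:

```python
# Trimmed inventory payload, as reported by the ramdisk for worker-0
inventory = {
    "interfaces": [
        {"name": "enp1s0", "mac_address": "00:13:a6:60:73:93",
         "ipv4_address": None,
         "ipv6_address": "fd00:1101::1d38:f2bd:f996:72ce"},
        {"name": "enp2s0", "mac_address": "00:13:a6:60:73:95",
         "ipv4_address": "192.168.111.23",
         "ipv6_address": "fd2e:6f44:5dd8:c956::17"},
    ]
}

# Collect every non-empty address per interface name
addrs = {
    iface["name"]: [a for a in (iface["ipv4_address"], iface["ipv6_address"]) if a]
    for iface in inventory["interfaces"]
}
print(addrs["enp2s0"])  # ['192.168.111.23', 'fd2e:6f44:5dd8:c956::17']
```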

Expected results:

The nics section of the BMH status should include both IPs for enp2s0.

Additional info:

As discussed on the upstream issue, we need to decide on the appropriate interface for this: either add another NIC entry with the same name but a different IP, or allow "ip" to contain a list. The former is probably simpler and less impactful to existing code/users of the BMH API.
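The duplicated-entry approach can be sketched roughly as follows. This is a hypothetical Python stand-in for the operator's Go logic (the real types live in the baremetal-operator repo); the dict fields simply mirror the nics status fields shown above:

```python
def nics_from_inventory(interfaces):
    """Build BMH-style NIC status entries from inspection interface data,
    emitting one entry per discovered address and duplicating the
    name/MAC for dual-stack interfaces."""
    nics = []
    for iface in interfaces:
        addrs = [a for a in (iface.get("ipv4_address"),
                             iface.get("ipv6_address")) if a]
        for ip in addrs or [""]:  # keep one entry even with no address found
            nics.append({"name": iface["name"],
                         "mac": iface["mac_address"],
                         "ip": ip})
    return nics

# Dual-stack enp2s0 from the inspection data yields two entries, same name/MAC
entries = nics_from_inventory([{
    "name": "enp2s0", "mac_address": "00:13:a6:60:73:95",
    "ipv4_address": "192.168.111.23",
    "ipv6_address": "fd2e:6f44:5dd8:c956::17",
}])
```

Existing consumers that index NICs by name would need to tolerate duplicate names under this scheme, which is the main trade-off versus changing "ip" into a list.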

Comment 1 Steven Hardy 2021-01-05 15:34:43 UTC
Upstream PR pushed https://github.com/metal3-io/baremetal-operator/pull/758 - I'll set this to POST when the downstream backport is available.

Comment 3 Shelly Miron 2021-01-10 10:25:59 UTC
Verified with the following steps, on a dual-stack machine with the version:

[kni@provisionhost-0-0 ~]$ oc version

Client Version: 4.7.0-0.nightly-2021-01-09-021054
Server Version: 4.7.0-0.nightly-2021-01-09-021054
Kubernetes Version: v1.20.0+6313d1d


steps:
------

1. Deployed a dual-stack env
2. Observed worker-0-1 with: $ oc describe bmh openshift-worker-0-1 -n openshift-machine-api

Hostname:     worker-0-1.ocp-edge-cluster-0.qe.lab.redhat.com
    Nics:
      Ip:           192.168.123.107
      Mac:          52:54:00:83:da:cf
      Model:        0x1af4 0x0001
      Name:         enp5s0
      Pxe:          false
      Speed Gbps:   0
      Vlan Id:      0
      Ip:           fd2e:6f44:5dd8::10f
      Mac:          52:54:00:83:da:cf
      Model:        0x1af4 0x0001
      Name:         enp5s0
      Pxe:          false
      Speed Gbps:   0
      Vlan Id:      0
      Ip:           fd00:1101::ac0e:daa2:1d94:5906
      Mac:          52:54:00:55:75:eb
      Model:        0x1af4 0x0001
      Name:         enp4s0
      Pxe:          true
      Speed Gbps:   0
      Vlan Id:      0

We can now see that the control-plane NIC (i.e. enp5s0) is configured with both IPv4 and IPv6 addresses.
For the inspection data of the worker-0-1 node:

$ oc logs metal3-5c59cc865-j6zd5 -n openshift-machine-api -c metal3-ironic-inspector | grep 192.168.123.107

2021-01-10 09:30:51.274 1 DEBUG ironic_inspector.main [req-698fb2df-eb7e-418f-a239-29d17e15e9dd - - - - -] [node: MAC 52:54:00:55:75:eb] Received data from the ramdisk: {'inventory': {'interfaces': [{'name': 'enp5s0', 'mac_address': '52:54:00:83:da:cf', 'ipv4_address': '192.168.123.107', 'ipv6_address': 'fd2e:6f44:5dd8::10f', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}, {'name': 'enp4s0', 'mac_address': '52:54:00:55:75:eb', 'ipv4_address': None, 'ipv6_address': 'fd00:1101::ac0e:daa2:1d94:5906', 'has_carrier': True, 'lldp': [], 'vendor': '0x1af4', 'product': '0x0001', 'client_id': None, 'biosdevname': None}]

Comment 6 errata-xmlrpc 2021-02-24 15:49:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

