Bug 1927068 - Workers fail to PXE boot when IPv6 provisioning network has subnet other than /64
Summary: Workers fail to PXE boot when IPv6 provisioning network has subnet other than /64
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Stephen Benjamin
QA Contact: Aleksandra Malykhin
URL:
Whiteboard:
Depends On: 1925291 1966525
Blocks: 1905233 1933726
 
Reported: 2021-02-10 00:49 UTC by Stephen Benjamin
Modified: 2021-07-27 22:43 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: dnsmasq requires the prefix length to be specified when an IPv6 network is other than a /64. Consequence: Hosts failed to PXE boot when using a non-/64 network. Fix: Include the prefix length in the dnsmasq configuration. Result: Hosts now DHCP and PXE boot on IPv6 networks of any prefix length.
Clone Of: 1925291
Environment:
Last Closed: 2021-07-27 22:43:10 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-baremetal-operator pull 104 0 None open Bug 1927068: provisioning: configure DHCP range with netmask 2021-02-16 16:37:26 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 22:43:32 UTC

Description Stephen Benjamin 2021-02-10 00:49:23 UTC
+++ This bug was initially created as a clone of Bug #1925291 +++

Version:

$ openshift-install version
4.7.0-0.nightly-2021-02-04-075559

Platform:
baremetal IPI

What happened?
masters fail to PXE boot

from openshift-install.log:
time="2021-02-04T18:39:07Z" level=error msg="Error: could not inspect: could not inspect node, node is currently 'inspect failed', last error was 'timeout reached while inspecting the node'"
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=error msg=" on ../../tmp/openshift-install-069654167/masters/main.tf line 1, in resource \"ironic_node_v1\" \"openshift-master-host\":"
time="2021-02-04T18:39:07Z" level=error msg="  1: resource \"ironic_node_v1\" \"openshift-master-host\" {"
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=fatal msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failed to create cluster: failed to apply Terraform: error(BaremetalIronicInspectTimeout) from Infrastructure Provider: Timed out waiting for node inspection to complete. Please check the console on the host for more details."

What did you expect to happen?
I expect the installer to handle any subnet and configure dnsmasq accordingly.

How to reproduce it (as minimally and precisely as possible)?

Provide in install-config.yaml:
machineCIDR: fd2e:6f44:5dd8:face:b001::/80              # control plane network
provisioningNetworkCIDR: fd2e:6f44:5dd8:face:b00c::/80  # provisioning network

Per the dnsmasq man page (http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html), when dnsmasq.conf does not specify the subnet prefix length in the dhcp-range option, it defaults to /64, which does not match the interface (/80).
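For illustration, a minimal sketch of a corrected dnsmasq stanza for the /80 provisioning network above. The range bounds are placeholders, not values taken from the actual installer output:

```
# IPv6 dhcp-range accepts an optional prefix length after the end address.
# Without the trailing 80, dnsmasq assumes /64 and will not serve the /80 interface.
dhcp-range=fd2e:6f44:5dd8:face:b00c::10,fd2e:6f44:5dd8:face:b00c::ff,80
```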

Comment 2 Victor Voronkov 2021-03-01 07:48:36 UTC
@stbenjam Please fix the target version of the clone; it should probably be 4.7.z, no?

Comment 3 Stephen Benjamin 2021-03-01 15:06:32 UTC
There are 2 places this needed to be fixed: installation and day-2. Both BZs need to be against 4.8; you can see the linked fixes point to different GitHub repos.


4.7.z is covered by BZ#1933726 and BZ#1933728.
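The linked cluster-baremetal-operator fix ("configure DHCP range with netmask") boils down to emitting the prefix length (for IPv6) or netmask (for IPv4) alongside the range, rather than letting dnsmasq assume /64. A minimal sketch in Go under that assumption; the function name and signature are illustrative, not the operator's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// dhcpRangeWithPrefix builds a dnsmasq dhcp-range value that carries the
// network size explicitly: a prefix length for IPv6, a dotted netmask for
// IPv4. This is a hypothetical helper, not the operator's API.
func dhcpRangeWithPrefix(start, end, cidr string) (string, error) {
	_, network, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	if network.IP.To4() != nil {
		// IPv4 dhcp-range takes a netmask rather than a prefix length.
		return fmt.Sprintf("%s,%s,%s", start, end, net.IP(network.Mask).String()), nil
	}
	prefixLen, _ := network.Mask.Size()
	return fmt.Sprintf("%s,%s,%d", start, end, prefixLen), nil
}

func main() {
	// Addresses from the verification comment below (a /118 network).
	r, err := dhcpRangeWithPrefix(
		"fd00:1101:face:b00c:1::a",
		"fd00:1101:face:b00c:1::3ff",
		"fd00:1101:face:b00c:1::/118",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("dhcp-range=" + r)
}
```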

Comment 7 Aleksandra Malykhin 2021-07-21 06:55:17 UTC
Verified on the Build: 4.8.0 (GA)
Installation method: baremetal IPI

Jenkins job parameters:
      baremetal_net_ipv6: true
      baremetal_net_ipv4: false
      provisioning_net_ipv6: true
      fips_mode: false
      use_bond_interface: false
      provisioning_network_state: 'Disabled'
      management_interface_type: 'RedFish-VirtualMedia'
      network_type: 'OVNKubernetes'
      enable_ipsec: true
      disconnected_install: true
      provision_worker_count: "4"
      deploy_worker_count: "2"
      openshift_release_image: "quay.io/openshift-release-dev/ocp-release:4.8.0-x86_64"
      PROVISION_NETWORK_IPV6: 'fd00:1101:face:b00c:1::/118'

Flow:
1. Build the cluster
2. Change the provisioning yaml from Disabled to Managed with a non-default subnet
[kni@provisionhost-0-0 ~]$ oc get provisioning -o yaml
......
  spec:
    provisioningDHCPRange: fd00:1101:face:b00c:1::a,fd00:1101:face:b00c:1::03ff
    provisioningIP: fd00:1101:face:b00c:1::3
    provisioningInterface: enp0s3
    provisioningNetwork: Managed
    provisioningNetworkCIDR: fd00:1101:face:b00c:1::/118
    provisioningOSDownloadURL: http://registry.ocp-edge-cluster-0.qe.lab.redhat.com:8080/images/rhcos-48.84.202106091622-0-openstack.x86_64.qcow2.gz?sha256=2efc7539f200ffea150272523a9526ba393a9a0b8312b40031b13bfdeda36fde
.....

3. Switch to managed mode
[kni@provisionhost-0-0 ~]$ oc apply -f managed.yaml 
[kni@provisionhost-0-0 ~]$ oc get pods

Wait for metal3-54b4cdf656-l2tpl                        10/10     Running

4. Scale up: create the node yaml
[kni@provisionhost-0-0 ~]$ oc create -f new-nodeX2107.yaml
 
[kni@provisionhost-0-0 ~]$ oc get bmh
...
openshift-worker-0-2   ready

[kni@provisionhost-0-0 ~]$ oc get machineset
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
ocp-edge-cluster-0-7hpsq-worker-0   2         2         2       2           19h
[kni@provisionhost-0-0 ~]$ oc scale machineset ocp-edge-cluster-0-7hpsq-worker-0 --replicas=3

5. Verify that the node successfully scaled up

[kni@provisionhost-0-0 ~]$ oc get machine
...
ocp-edge-cluster-0-7hpsq-worker-0-lzcb2   Running

[kni@provisionhost-0-0 ~]$ oc get bmh
...    
openshift-worker-0-2   provisioned              ocp-edge-cluster-0-7hpsq-worker-0-lzcb2   true


[kni@provisionhost-0-0 ~]$ oc get node
...
worker-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   Ready    worker   10m   v1.21.1+f36aa36

Comment 10 errata-xmlrpc 2021-07-27 22:43:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

