Bug 1927068

Summary: Workers fail to PXE boot when IPv6 provisioning network has a subnet other than /64
Product: OpenShift Container Platform
Component: Installer
Installer sub component: OpenShift on Bare Metal IPI
Status: CLOSED ERRATA
Severity: medium
Priority: high
Version: 4.7
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Reporter: Stephen Benjamin <stbenjam>
Assignee: Stephen Benjamin <stbenjam>
QA Contact: Aleksandra Malykhin <amalykhi>
CC: amalykhi, augol, bfournie, rbartal, rpittau, stbenjam, tsedovic, vvoronko, yprokule
Keywords: Triaged
Doc Type: Bug Fix
Doc Text:
Cause: dnsmasq requires the prefix length to be specified when an IPv6 network is other than a /64.
Consequence: Hosts failed to PXE boot when using a non-/64 network.
Fix: Include the prefix length in the dnsmasq configuration.
Result: Hosts now DHCP and PXE boot on IPv6 networks of any prefix length.
Clone Of: 1925291
Last Closed: 2021-07-27 22:43:10 UTC
Bug Depends On: 1925291, 1966525
Bug Blocks: 1905233, 1933726

Description Stephen Benjamin 2021-02-10 00:49:23 UTC
+++ This bug was initially created as a clone of Bug #1925291 +++

Version:

$ openshift-install version
4.7.0-0.nightly-2021-02-04-075559

Platform:
baremetal IPI

What happened?
masters fail to PXE boot

from openshift-install.log:
time="2021-02-04T18:39:07Z" level=error msg="Error: could not inspect: could not inspect node, node is currently 'inspect failed', last error was 'timeout reached while inspecting the node'"
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=error msg=" on ../../tmp/openshift-install-069654167/masters/main.tf line 1, in resource \"ironic_node_v1\" \"openshift-master-host\":"
time="2021-02-04T18:39:07Z" level=error msg="  1: resource \"ironic_node_v1\" \"openshift-master-host\" {"
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=error
time="2021-02-04T18:39:07Z" level=fatal msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failed to create cluster: failed to apply Terraform: error(BaremetalIronicInspectTimeout) from Infrastructure Provider: Timed out waiting for node inspection to complete. Please check the console on the host for more details."

What did you expect to happen?
I expect the installer to handle any subnet and configure dnsmasq accordingly.

How to reproduce it (as minimally and precisely as possible)?

Provide in install-config.yaml:
machineCIDR: fd2e:6f44:5dd8:face:b001::/80            # control plane network
provisioningNetworkCIDR: fd2e:6f44:5dd8:face:b00c::/80  # provisioning network
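The key detail is that both CIDRs use a prefix length other than /64. A minimal sketch with Python's `ipaddress` module (the CIDR values are copied from the install-config above; the check itself is illustrative, not the installer's actual validation) shows how a non-default IPv6 prefix can be detected:

```python
import ipaddress

# CIDRs from the install-config.yaml above
machine_cidr = ipaddress.ip_network("fd2e:6f44:5dd8:face:b001::/80")
provisioning_cidr = ipaddress.ip_network("fd2e:6f44:5dd8:face:b00c::/80")

# dnsmasq assumes /64 for an IPv6 dhcp-range unless told otherwise,
# so any other prefix length must be passed through explicitly.
for net in (machine_cidr, provisioning_cidr):
    if net.version == 6 and net.prefixlen != 64:
        print(f"{net} needs an explicit prefix length ({net.prefixlen}) in dhcp-range")
```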

The dnsmasq man page (http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html)
states that when the dhcp-range option omits the subnet prefix length, it defaults to /64, which does not match the interface's /80 prefix.
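The fix is therefore to append the prefix length to the generated dhcp-range line. A before/after sketch (the range addresses here are illustrative, not the actual generated configuration):

```conf
# Before: no prefix length, so dnsmasq assumes /64, which does not match the /80 interface
dhcp-range=fd2e:6f44:5dd8:face:b00c::a,fd2e:6f44:5dd8:face:b00c::ff

# After: explicit prefix length matching the provisioning network
dhcp-range=fd2e:6f44:5dd8:face:b00c::a,fd2e:6f44:5dd8:face:b00c::ff,80
```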

Comment 2 Victor Voronkov 2021-03-01 07:48:36 UTC
@stbenjam Please fix the target version of the clone; it probably should be 4.7.z, no?

Comment 3 Stephen Benjamin 2021-03-01 15:06:32 UTC
This needed to be fixed in two places: installation and day 2. Both BZs need to be against 4.8; the linked fixes point to different GitHub repos.


4.7.z is covered by BZ#1933726 and BZ#1933728.

Comment 7 Aleksandra Malykhin 2021-07-21 06:55:17 UTC
Verified on build 4.8.0 (GA)
Installation method: baremetal IPI

Jenkins job parameters:
      baremetal_net_ipv6: true
      baremetal_net_ipv4: false
      provisioning_net_ipv6: true
      fips_mode: false
      use_bond_interface: false
      provisioning_network_state: 'Disabled'
      management_interface_type: 'RedFish-VirtualMedia'
      network_type: 'OVNKubernetes'
      enable_ipsec: true
      disconnected_install: true
      provision_worker_count: "4"
      deploy_worker_count: "2"
      openshift_release_image: "quay.io/openshift-release-dev/ocp-release:4.8.0-x86_64"
      PROVISION_NETWORK_IPV6: 'fd00:1101:face:b00c:1::/118'

Flow:
1. Build the cluster
2. Change the provisioning YAML from Disabled to Managed with a non-default subnet
[kni@provisionhost-0-0 ~]$ oc get provisioning -o yaml
......
  spec:
    provisioningDHCPRange: fd00:1101:face:b00c:1::a,fd00:1101:face:b00c:1::03ff
    provisioningIP: fd00:1101:face:b00c:1::3
    provisioningInterface: enp0s3
    provisioningNetwork: Managed
    provisioningNetworkCIDR: fd00:1101:face:b00c:1::/118
    provisioningOSDownloadURL: http://registry.ocp-edge-cluster-0.qe.lab.redhat.com:8080/images/rhcos-48.84.202106091622-0-openstack.x86_64.qcow2.gz?sha256=2efc7539f200ffea150272523a9526ba393a9a0b8312b40031b13bfdeda36fde
.....

3. Switch to Managed mode
[kni@provisionhost-0-0 ~]$ oc apply -f managed.yaml 
[kni@provisionhost-0-0 ~]$ oc get pods

Wait for metal3-54b4cdf656-l2tpl                        10/10     Running

4. Scale up: create the node YAML
[kni@provisionhost-0-0 ~]$ oc create -f new-nodeX2107.yaml
 
[kni@provisionhost-0-0 ~]$ oc get bmh
...
openshift-worker-0-2   ready

[kni@provisionhost-0-0 ~]$ oc get machineset
NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
ocp-edge-cluster-0-7hpsq-worker-0   2         2         2       2           19h
[kni@provisionhost-0-0 ~]$ oc scale machineset ocp-edge-cluster-0-7hpsq-worker-0 --replicas=3

5. Verify that the node successfully scaled up

[kni@provisionhost-0-0 ~]$ oc get machine
...
ocp-edge-cluster-0-7hpsq-worker-0-lzcb2   Running

[kni@provisionhost-0-0 ~]$ oc get bmh
...    
openshift-worker-0-2   provisioned              ocp-edge-cluster-0-7hpsq-worker-0-lzcb2   true


[kni@provisionhost-0-0 ~]$ oc get node
...
worker-0-2.ocp-edge-cluster-0.qe.lab.redhat.com   Ready    worker   10m   v1.21.1+f36aa36

Comment 10 errata-xmlrpc 2021-07-27 22:43:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438