Bug 1792467 - [ipi on baremetal] [4.4] DHCPv6 addresses break IP subnet check
Summary: [ipi on baremetal] [4.4] DHCPv6 addresses break IP subnet check
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 4.4.0
Assignee: Stephen Benjamin
QA Contact: Victor Voronkov
URL:
Whiteboard:
Depends On:
Blocks: 1792493
 
Reported: 2020-01-17 16:55 UTC by Ben Nemec
Modified: 2020-05-04 11:25 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1792493
Environment:
Last Closed: 2020-05-04 11:24:53 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift baremetal-runtimecfg pull 42 0 None closed Convert /128 addresses to /64 again 2020-04-28 10:53:35 UTC
Github openshift baremetal-runtimecfg pull 43 0 None closed [release-4.3] Bug 1792493: Convert /128 addresses to /64 again 2020-04-28 10:53:35 UTC
Red Hat Product Errata RHBA-2020:0581 0 None None None 2020-05-04 11:25:35 UTC

Description Ben Nemec 2020-01-17 16:55:43 UTC
Description of problem: DHCPv6 addresses all carry a /128 netmask, which breaks the check in baremetal-runtimecfg that determines whether an IP is in the same subnet as the VIP. As a result, IPv6 deployments fail when starting mdns-publisher because it cannot determine which address to listen on.

In the short term we will just assume /64 for IPv6 addresses, but eventually we need to look up the correct netmask and use that.

Comment 2 Victor Voronkov 2020-03-12 09:15:31 UTC
Verified on 4.4.0-0.ci-2020-03-11-095511, where IPv6 cluster deployment finished successfully:

[root@titan35 ~]# virsh net-dhcp-leases baremetal
 Expiry Time           MAC address         Protocol   IP address                    Hostname          Client ID or DUID
-------------------------------------------------------------------------------------------------------------------------------------
 2020-03-12 11:59:00   52:54:00:02:4d:ae   ipv6       fd2e:6f44:5dd8:c956::112/64   master-2          00:03:00:01:52:54:00:02:4d:ae
 2020-03-12 12:09:30   52:54:00:13:42:eb   ipv6       fd2e:6f44:5dd8:c956::14f/64   provisionhost-0   00:03:00:01:52:54:00:13:42:eb
 2020-03-12 11:51:42   52:54:00:44:d1:fc   ipv6       fd2e:6f44:5dd8:c956::145/64   worker-0          00:03:00:01:52:54:00:44:d1:fc
 2020-03-12 11:58:56   52:54:00:5a:16:6d   ipv6       fd2e:6f44:5dd8:c956::11c/64   master-1          00:03:00:01:52:54:00:5a:16:6d
 2020-03-12 12:01:23   52:54:00:f6:53:13   ipv6       fd2e:6f44:5dd8:c956::150/64   master-0          00:03:00:01:52:54:00:f6:53:13
 2020-03-12 11:50:25   52:54:00:fb:3e:67   ipv6       fd2e:6f44:5dd8:c956::11d/64   worker-1          00:03:00:01:52:54:00:fb:3e:67

Comment 4 errata-xmlrpc 2020-05-04 11:24:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

