Description of problem:
PXE boot with `ipv6.disable=1` does not disable IPv6; the loader still requests or generates IPv6 addresses for the NICs.

Version-Release number of selected component (if applicable):
openshift-install-linux-4.1.0-rc.5
rhcos-410.8.20190516.0-metal-bios.raw.gz

How reproducible:
Always

Expected results:
IPv6 should be disabled on the node.
If you are adding `ipv6.disable=1` during the bare metal install, it is not currently one of the arguments we carry forward to the firstboot: https://github.com/coreos/coreos-installer/blob/0e6979c426f26676ac0ca88a4175f53c2a9d7683/dracut/30coreos-installer/parse-coreos.sh#L27-L31 . We'd need to add it to the list.
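The linked parse hook scans the live kernel command line and copies a fixed set of arguments into the list appended to the installed system's boot entry. A minimal sketch of that pattern, with `ipv6.disable` added to the allow-list (the variable names and sample cmdline below are illustrative, not the actual coreos-installer code, which reads `/proc/cmdline` directly):

```shell
# Stand-in for the contents of /proc/cmdline on the live PXE system.
cmdline="console=tty0 rd.neednet=1 ipv6.disable=1 ip=dhcp root=UUID=abcd"

# Collect only the arguments we want to carry forward to firstboot.
carry=""
for arg in $cmdline; do
    case "$arg" in
        ipv6.disable=*|ip=*|rd.neednet=*)
            carry="$carry $arg"
            ;;
    esac
done

echo "carried forward:$carry"
```

Anything not matched by a pattern (e.g. `console=`, `root=`) is deliberately dropped, which is why `ipv6.disable=1` vanished after firstboot before the fix.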
Created an upstream issue at https://github.com/coreos/coreos-installer/issues/44
This has been fixed in the installer by Micah Abbott: https://github.com/coreos/coreos-installer/pull/41
This is technically not ON_QA yet, as the latest RHCOS builds don't have an up-to-date `coreos-installer` package included. A rebuild is underway and new RHCOS builds should have the updated package Real Soon Now.
Verified in 42.80.20190730.1 with coreos-installer-0-9.rhaos4.2.git2fcf441.el8.noarch

```
### Default status is IPv6 enabled
$ rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
● ostree://rhcos:8ed5c484767406553db9d2da6a4f68e2ed5754523e55468ebd029b73104cfda8
    OstreeRemoteStatus: Remote "rhcos" not found
    Version: 42.80.20190730.1 (2019-07-30T12:53:02Z)
    Commit: 8ed5c484767406553db9d2da6a4f68e2ed5754523e55468ebd029b73104cfda8

$ cat /proc/cmdline
BOOT_IMAGE=/ostree/rhcos-349706babeb6637b76b92af29cfb933602c0f3bb6d41b9f462eb0b21b87b3058/vmlinuz-4.18.0-80.7.1.el8_0.x86_64 console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw ignition.firstboot rd.neednet=1 ip=dhcp BOOTIF=01-52-54-00-18-17-b2 root=UUID=80ea8171-3c99-4a90-ac99-db085253372f ostree=/ostree/boot.0/rhcos/349706babeb6637b76b92af29cfb933602c0f3bb6d41b9f462eb0b21b87b3058/0 coreos.oem.id=metal ignition.platform.id=metal

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:18:17:b2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.109/24 brd 192.168.124.255 scope global dynamic noprefixroute ens2
       valid_lft 3520sec preferred_lft 3520sec
    inet6 fe80::5054:ff:fe18:17b2/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

### Passing ipv6.disable=1 as part of the PXE config
$ rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
● ostree://rhcos:8ed5c484767406553db9d2da6a4f68e2ed5754523e55468ebd029b73104cfda8
    OstreeRemoteStatus: Remote "rhcos" not found
    Version: 42.80.20190730.1 (2019-07-30T12:53:02Z)
    Commit: 8ed5c484767406553db9d2da6a4f68e2ed5754523e55468ebd029b73104cfda8

$ cat /proc/cmdline
BOOT_IMAGE=/ostree/rhcos-349706babeb6637b76b92af29cfb933602c0f3bb6d41b9f462eb0b21b87b3058/vmlinuz-4.18.0-80.7.1.el8_0.x86_64 console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw ignition.firstboot rd.neednet=1 ipv6.disable=1 ip=dhcp BOOTIF=01-52-54-00-cc-54-8c root=UUID=80ea8171-3c99-4a90-ac99-db085253372f ostree=/ostree/boot.0/rhcos/349706babeb6637b76b92af29cfb933602c0f3bb6d41b9f462eb0b21b87b3058/0 coreos.oem.id=metal ignition.platform.id=metal

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:cc:54:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.124.59/24 brd 192.168.124.255 scope global dynamic noprefixroute ens2
       valid_lft 3567sec preferred_lft 3567sec
```
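The key evidence in the verification above is that `ipv6.disable=1` now appears in the booted kernel's command line and no `inet6` lines remain in `ip a`. A hedged sketch of that first check as a script (the sample string stands in for real `/proc/cmdline` output; on an actual node you would read the file directly):

```shell
# Hypothetical post-boot sanity check: did ipv6.disable=1 survive the
# firstboot karg handling? The sample below mirrors the cmdline shown
# in the verification; it is not read from a live system here.
cmdline="rw ignition.firstboot rd.neednet=1 ipv6.disable=1 ip=dhcp"

# Pad with spaces so the pattern only matches a whole argument,
# not a substring of another karg.
case " $cmdline " in
    *" ipv6.disable=1 "*) result="carried forward" ;;
    *)                    result="missing" ;;
esac
echo "ipv6.disable=1: $result"
```

On a live node, `ip -6 addr show` printing nothing is the complementary check that the module-level disable actually took effect.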
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922