`IPaddr2` resource agent now finds the NIC for IPv6 addresses with a /128 netmask

Previously, the `IPaddr2` resource agent failed to find the NIC for IPv6 addresses with a /128 netmask. With this fix, the agent detects the NIC for such addresses correctly.
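The detection can be sketched in shell. The following is a simplified illustration, not the actual `findif.sh` code: the function name `find_nic_for_ipv6`, the textual prefix match, and the embedded sample dump are all invented for the example. It scans `ip -o -6 addr show`-style output for the first interface whose configured address covers the target prefix, which is the behavior the fix restores for /128 requests.

```shell
#!/bin/bash
# Hypothetical sketch (NOT the real findif.sh logic): find the NIC whose
# global IPv6 address falls under a given prefix. A captured sample of
# `ip -o -6 addr show` output is embedded so the example is self-contained;
# on a live node you would pipe the command output in instead.
find_nic_for_ipv6() {
    local target_prefix="$1"   # simplified textual prefix, e.g. "fd00:fd00:fd00:2000:"
    local addr_dump="$2"       # one interface address per line, `ip -o` style
    # Fields in `ip -o -6 addr` output: idx, ifname, "inet6", addr/prefixlen, ...
    awk -v p="$target_prefix" '$4 ~ "^"p { print $2; exit }' <<<"$addr_dump"
}

sample='11: vlan200 inet6 fd00:fd00:fd00:2000::7/64 scope global
12: vlan301 inet6 fd00:fd00:fd00:4000::5/64 scope global'

find_nic_for_ipv6 "fd00:fd00:fd00:2000:" "$sample"   # prints: vlan200
```

This ignores real prefix-length arithmetic for brevity; the actual agent compares binary prefixes, not text.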
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0757
Description of problem:

Given the following script:

```shell
[root@ctrl-r02-02 heartbeat]# more test.sh
#!/bin/bash
source ocf-shellfuncs
source findif.sh

OCF_RESKEY_ip="fd00:fd00:fd00:2000::7"
unset OCF_RESKEY_cidr_netmask
echo "IP: $OCF_RESKEY_ip - MASK: $OCF_RESKEY_cidr_netmask"
findif

OCF_RESKEY_ip="fd00:fd00:fd00:2000::7"
OCF_RESKEY_cidr_netmask="128"
echo "IP: $OCF_RESKEY_ip - MASK: $OCF_RESKEY_cidr_netmask"
findif

OCF_RESKEY_ip="172.16.17.222"
unset OCF_RESKEY_cidr_netmask
echo "IP: $OCF_RESKEY_ip - MASK: $OCF_RESKEY_cidr_netmask"
findif

OCF_RESKEY_ip="172.16.17.222"
OCF_RESKEY_cidr_netmask="32"
echo "IP: $OCF_RESKEY_ip - MASK: $OCF_RESKEY_cidr_netmask"
findif
```

Here is the output:

```
[root@ctrl-r02-02 heartbeat]# ./test.sh
IP: fd00:fd00:fd00:2000::7 - MASK:
vlan200 netmask 64 broadcast
IP: fd00:fd00:fd00:2000::7 - MASK: 128
ocf-exit-reason:Unable to find nic, or netmask mismatch.
IP: 172.16.17.222 - MASK:
eth4 netmask 25 broadcast 172.16.17.255
IP: 172.16.17.222 - MASK: 32
eth4 netmask 32 broadcast 172.16.17.255
```

Ideally, when we specify a mask of 128, the RA should be able to find the NIC by itself, just like it does for IPv4.
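The asymmetry above can be condensed into a small sketch. This is a simplified illustration, not the actual `findif.sh` code; `old_check`/`new_check` and the hard-coded masks are invented for the example, but they capture why the /128 lookup failed: vlan200 carries a /64, and a requested host mask of 128 can never equal it.

```shell
#!/bin/bash
# Simplified illustration (NOT the real findif.sh code) of the /128 failure.
nic_netmask=64          # prefix length configured on vlan200 (fd00:fd00:fd00:2000::/64)
requested_netmask=128   # cidr_netmask passed to the resource agent

old_check() {
    # Old behavior: require the requested mask to match the interface mask,
    # which can never hold for a /128 request against a /64 interface.
    [ "$requested_netmask" -eq "$nic_netmask" ]
}

new_check() {
    # Fixed behavior (sketch): treat a /128 request as "host address"; any
    # interface whose on-link prefix covers the address is acceptable.
    [ "$requested_netmask" -eq 128 ] || [ "$requested_netmask" -eq "$nic_netmask" ]
}

old_check && echo "old: nic found" || echo "old: netmask mismatch"
new_check && echo "new: nic found" || echo "new: netmask mismatch"
```

Running it prints `old: netmask mismatch` and `new: nic found`, mirroring the `ocf-exit-reason` in the output above.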
The reason for this is that:
a) In some situations, knowing the NIC beforehand is not ideal (*cough*)
b) The NIC name does not necessarily have to be the same on all nodes

Version-Release number of selected component (if applicable): resource-agents-3.9.5-82.el7_3.9.x86_64

Additional info:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:8a:2e:e6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.12/25 brd 192.168.0.127 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe8a:2ee6/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:81:b4:d7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe81:b4d7/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:3f:02:f2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe3f:2f2/64 scope link
       valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 52:54:00:ae:df:59 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:feae:df59/64 scope link
       valid_lft forever preferred_lft forever
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ae:80:c0 brd ff:ff:ff:ff:ff:ff
    inet 172.16.17.222/25 brd 172.16.17.255 scope global eth4
       valid_lft forever preferred_lft forever
    inet6 2001:db8:ca2:3:5054:ff:feae:80c0/64 scope global mngtmpaddr dynamic
       valid_lft 3598sec preferred_lft 3598sec
    inet6 fe80::5054:ff:feae:80c0/64 scope link
       valid_lft forever preferred_lft forever
7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 26:1f:22:80:b3:92 brd ff:ff:ff:ff:ff:ff
8: br-infra: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 52:54:00:3f:02:f2 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.132/25 brd 10.0.1.255 scope global br-infra
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe3f:2f2/64 scope link
       valid_lft forever preferred_lft forever
9: br-storage: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 52:54:00:ae:df:59 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:feae:df59/64 scope link
       valid_lft forever preferred_lft forever
10: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 52:54:00:81:b4:d7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:ff:fe81:b4d7/64 scope link
       valid_lft forever preferred_lft forever
11: vlan200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether aa:e1:e7:32:da:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fd00:fd00:fd00:2000::7/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fd00:fd00:fd00:2000::22/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a8e1:e7ff:fe32:daa9/64 scope link
       valid_lft forever preferred_lft forever
12: vlan301: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 36:aa:2b:fc:16:91 brd ff:ff:ff:ff:ff:ff
    inet6 fd00:fd00:fd00:4000::5/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fd00:fd00:fd00:4000::22/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::34aa:2bff:fefc:1691/64 scope link
       valid_lft forever preferred_lft forever
13: vlan300: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 9a:16:13:f8:55:63 brd ff:ff:ff:ff:ff:ff
    inet6 fd00:fd00:fd00:3000::22/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::9816:13ff:fef8:5563/64 scope link
       valid_lft forever preferred_lft forever
14: vlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 8e:e2:a7:b4:cd:3c brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:ca2:4::22/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::8ce2:a7ff:feb4:cd3c/64 scope link
       valid_lft forever preferred_lft forever
```
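The practical upshot of reasons a) and b) is a resource definition that omits the `nic=` parameter entirely and relies on automatic detection. A hypothetical example using `pcs` (the resource name `vip6` is a placeholder; this is not from the bug report):

```shell
# Create an IPv6 virtual IP without pinning a NIC; the agent picks the
# interface whose on-link prefix covers the address on each node.
pcs resource create vip6 ocf:heartbeat:IPaddr2 \
    ip=fd00:fd00:fd00:2000::7 cidr_netmask=128 \
    op monitor interval=10s
```

Because each node resolves the NIC locally, the same definition works even when the interface is named `vlan200` on one node and something else on another.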