Bug 1276699
| Summary: | [RFE] IPaddr2: Use IPv6 DAD for collision detection | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Oyvind Albrigtsen <oalbrigt> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.3 | CC: | abeekhof, agk, cbuissar, cfeist, cluster-maint, cluster-qe, fdinitto, jruemker, mnovacek, royoung, sauchter, tbowling, tlavigne |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | resource-agents-3.9.5-63.el7 | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1276698 | Environment: | |
| Last Closed: | 2016-11-03 23:58:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1191247, 1276698 | | |
| Bug Blocks: | 1172231 | | |
| Attachments: | | | |
Set up an IPv6 address resource that collides with one on the network:
# pcs resource create IPv6 IPaddr2 ip=XXXX::10
Before:
# rpm -q resource-agents
resource-agents-3.9.5-54.el7_2.6.x86_64
# pcs resource enable IPv6
# tail -f /var/log/pacemaker.log
...
IPaddr2(IPv6)[5109]: 2016/03/01_13:02:28 WARNING: XXXX::10 still has 'tentative' status. (ignored)
After:
# rpm -q resource-agents
resource-agents-3.9.5-63.el7.x86_64
# pcs resource enable IPv6
# tail -f /var/log/pacemaker.log
...
IPaddr2(IPv6)[7572]: 2016/03/01_13:08:24 ERROR: IPv6 address collision XXXX::10 [DAD]
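The 'tentative' vs. collision distinction in the logs above can be checked by hand: the kernel leaves a still-probing address flagged `tentative` and a colliding one flagged `dadfailed` in `ip -6 addr` output. A minimal sketch of such a check (the `dad_state` helper and the sample lines are illustrative, not part of the agent):

```shell
# dad_state: classify the DAD flags for an address in `ip -6 addr` output
# (illustrative helper, not IPaddr2 code). The kernel marks a still-probing
# address "tentative" and a colliding one "dadfailed".
dad_state() {
    # $1: output of `ip -6 addr show`, $2: the address to look for
    line=$(printf '%s\n' "$1" | grep -F "$2" || true)
    case "$line" in
        *dadfailed*) echo collision ;;
        *tentative*) echo tentative ;;
        *)           echo ok ;;
    esac
}
```

On a live host the input would come from something like `ip -6 addr show dev eth0` (interface name assumed).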
I have verified that DAD is used for IPv6 address collision detection in
resource-agents-3.9.5-73.el7.x86_64
-----
setup:
An IPv6 address is active on the network, and a stopped cluster resource is configured with the same address.
[root@virt-140 ~]# ping6 -c 2 2620:52:0:2246:1800:ff:fe00:8c
PING 2620:52:0:2246:1800:ff:fe00:8c(2620:52:0:2246:1800:ff:fe00:8c) 56 data bytes
64 bytes from 2620:52:0:2246:1800:ff:fe00:8c: icmp_seq=1 ttl=64 time=0.057 ms
64 bytes from 2620:52:0:2246:1800:ff:fe00:8c: icmp_seq=2 ttl=64 time=0.064 ms
--- 2620:52:0:2246:1800:ff:fe00:8c ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.057/0.060/0.064/0.008 ms
[root@virt-140 ~]# pcs resource show ClusterIP
Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=2620:52:0:2246:1800:ff:fe00:8c cidr_netmask=64
Meta Attrs: target-role=Stopped
Operations: start interval=0s timeout=20s (ClusterIP-start-interval-0s)
stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
monitor interval=10s timeout=20s (ClusterIP-monitor-interval-10s)
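A setup like the one above can be reproduced roughly as follows (a sketch, not the exact commands used in this verification; the interface name `eth0` is an assumption, and the address would normally already be live on another host):

```shell
# Reproduction sketch (interface eth0 assumed): put the address on the
# network outside the cluster, then define the resource in stopped state.
ip -6 addr add 2620:52:0:2246:1800:ff:fe00:8c/64 dev eth0
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 \
    ip=2620:52:0:2246:1800:ff:fe00:8c cidr_netmask=64 --disabled
```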
[root@virt-140 ~]# pcs status
Cluster name: STSRHTS24626
Last updated: Wed Jul 27 11:32:59 2016 Last change: Wed Jul 27 11:28:54 2016 by root via crm_resource on virt-140
Stack: corosync
Current DC: virt-149 (version 1.1.13-10.el7-44eb2dd) - partition with quorum
4 nodes and 13 resources configured
Online: [ virt-140 virt-148 virt-149 virt-150 ]
Full list of resources:
fence-virt-140 (stonith:fence_xvm): Started virt-140
fence-virt-148 (stonith:fence_xvm): Started virt-148
fence-virt-149 (stonith:fence_xvm): Started virt-149
fence-virt-150 (stonith:fence_xvm): Started virt-150
Clone Set: dlm-clone [dlm]
Started: [ virt-140 virt-148 virt-149 virt-150 ]
Clone Set: clvmd-clone [clvmd]
Started: [ virt-140 virt-148 virt-149 virt-150 ]
ClusterIP (ocf::heartbeat:IPaddr2): (target-role:Stopped) Stopped
PCSD Status:
virt-140: Online
virt-148: Online
virt-149: Online
virt-150: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
before the fix (resource-agents-3.9.5-54.el7.x86_64)
====================================================
# pcs resource enable ClusterIP
# pcs status | grep ClusterIP
ClusterIP (ocf::heartbeat:IPaddr2): Stopped
# tail /var/log/messages
...
Jul 30 11:39:07 virt-140 IPaddr2(ClusterIP)[4523]: INFO: Adding inet6 address 2620:52:0:2246:1800:ff:fe00:8c/64 to device eth0
Jul 30 11:39:07 virt-140 IPaddr2(ClusterIP)[4523]: INFO: Bringing device eth0 up
>>> Jul 30 11:39:11 virt-140 IPaddr2(ClusterIP)[4523]: WARNING: 2620:52:0:2246:1800:ff:fe00:90 still has 'tentative' status. (ignored)
Jul 30 11:39:11 virt-140 IPaddr2(ClusterIP)[4523]: INFO: /usr/libexec/heartbeat/send_ua -i 200 -c 5 2620:52:0:2246:1800:ff:fe00:8c 64 eth0
after the fix (resource-agents-3.9.5-73.el7.x86_64)
===================================================
# pcs resource enable ClusterIP
# pcs status | grep ClusterIP
ClusterIP (ocf::heartbeat:IPaddr2): Stopped
# tail /var/log/messages
...
Jul 27 11:36:41 virt-150 IPaddr2(ClusterIP)[18826]: INFO: Adding inet6 address 2620:52:0:2246:1800:ff:fe00:8c/64 to device eth0
Jul 27 11:36:41 virt-150 IPaddr2(ClusterIP)[18826]: INFO: Bringing device eth0 up
Jul 27 11:36:41 virt-150 kernel: IPv6: eth0: IPv6 duplicate address 2620:52:0:2246:1800:ff:fe00:8c detected!
>>> Jul 27 11:36:42 virt-150 IPaddr2(ClusterIP)[18826]: ERROR: IPv6 address collision 2620:52:0:2246:1800:ff:fe00:8c [DAD]
Jul 27 11:36:42 virt-150 IPaddr2(ClusterIP)[18826]: ERROR: run_send_ua failed.
Jul 27 11:36:42 virt-150 lrmd[24755]: notice: ClusterIP_start_0:18826:stderr [ ocf-exit-reason:run_send_ua failed. ]
Jul 27 11:36:42 virt-150 crmd[24758]: notice: Operation ClusterIP_start_0: unknown error (node=virt-150, call=74, rc=1, cib-update=58, confirmed=true)
Jul 27 11:36:42 virt-150 crmd[24758]: notice: virt-150-ClusterIP_start_0:74 [ ocf-exit-reason:run_send_ua failed.\n ]
Jul 27 11:36:42 virt-150 IPaddr2(ClusterIP)[18911]: INFO: IP status = no, IP_CIP=
Jul 27 11:36:42 virt-150 crmd[24758]: notice: Operation ClusterIP_stop_0: ok (node=virt-150, call=75, rc=0, cib-update=59, confirmed=true)
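The fixed behaviour shown in this log can be sketched as a polling loop over the kernel's DAD result (a hedged illustration, not the agent's literal code; the `wait_dad` name, the interface argument, and the retry count are assumptions):

```shell
# wait_dad: after adding an address, poll the kernel's Duplicate Address
# Detection outcome and fail the start operation on a collision (sketch only).
wait_dad() {
    # $1: interface, $2: IPv6 address
    for _ in 1 2 3 4 5; do
        flags=$(ip -6 addr show dev "$1" | grep -F "$2" || true)
        case "$flags" in
            *dadfailed*)
                echo "ERROR: IPv6 address collision $2 [DAD]" >&2
                return 1 ;;   # collision detected by DAD
            *tentative*)
                sleep 1 ;;    # DAD still running: poll again
            *)
                return 0 ;;   # DAD passed: address is usable
        esac
    done
    return 1                  # still tentative after retries: give up
}
```

A non-zero return here is what surfaces as the `ocf-exit-reason` start failure in the log above.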
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2016-2174.html
Created attachment 1087945 [details]
Working patch

Tested and working patch.