Bug 1445861
| Summary: | IPaddr2 resource agent: add option for specifying IPv6's preferred_lft | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Damien Ciabrini <dciabrin> |
| Component: | resource-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | Udi Shkalim <ushkalim> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 7.4 | CC: | agk, cfeist, cluster-maint, fdinitto, jeckersb, michele, mkrcmari, mnovacek, royoung |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | resource-agents-3.9.5-96.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| | 1445889 (view as bug list) | Environment: | |
| Last Closed: | 2017-08-01 14:57:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1445889, 1445905 | | |
Description
Damien Ciabrini
2017-04-26 16:16:46 UTC
So this is what we did to test it for OSP:
A) On an OSP deployment with IPv6 *without* the patch:
Created a bunch of VIPs with a /128 netmask, setting only ip, nic and netmask, and observed that they were all created and show up in ip -6 addr like this (note the "preferred_lft forever"):
inet6 fd00:fd00:fd00:2000::9/128 scope global
valid_lft forever preferred_lft forever
B) On an OSP deployment with IPv6 *with* the patch:
Created a bunch of VIPs with a /128 netmask, setting only ip, nic and netmask, and observed that they were all created and show up in ip -6 addr like this (note the "preferred_lft forever"):
inet6 fd00:fd00:fd00:2000::9/128 scope global
valid_lft forever preferred_lft forever
A) and B) are identical but we wanted to verify that the patch did not introduce any regression.
C) On an OSP deployment with IPv6 *with* the patch:
Created a bunch of VIPs with a /128 netmask, this time setting ip, nic, netmask *and* preferred_lft=0, and observed that they were all created and show up in ip -6 addr like this (note the "preferred_lft 0sec" and the "deprecated" flag):
inet6 fd00:fd00:fd00:4000::9/128 scope global deprecated
valid_lft forever preferred_lft 0sec
Then we tried moving the VIPs around a few times and also observed no regression there. We could also confirm that no service (we checked both galera and rabbit) would bind to the VIP as a source address when creating inter-cluster communication sockets.
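For reference, the VIPs in scenario C could be created with a command along these lines; this is a minimal sketch, with the resource name (vip-example) and nic=eth0 made up for illustration, and it assumes the netmask is passed via the agent's cidr_netmask parameter:
# Hypothetical example: create a /128 IPv6 VIP that is deprecated from the
# start (preferred_lft=0), so the kernel never picks it as a source address.
pcs resource create vip-example ocf:heartbeat:IPaddr2 \
    ip=fd00:fd00:fd00:4000::9 nic=eth0 cidr_netmask=128 preferred_lft=0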
I have verified that the new option 'preferred_lft' changes the appropriate value
on the Ethernet interface with resource-agents-3.9.5-97.el7.x86_64.
---
1/ configure a cluster with fencing (1) and an IPaddr2 IPv6 resource
2/ disable the IPv6 address resource
[root@virt-249 ~]# pcs resource
...
ipv6 (ocf::heartbeat:IPaddr2): Stopped (disabled)
before the patch (resource-agents-3.9.5-95.el7.x86_64)
======================================================
[root@virt-251 ~]# pcs resource update ipv6 preferred_lft=1
Error: resource option(s): 'preferred_lft', are not recognized for resource \
type: 'ocf:heartbeat:IPaddr2' (use --force to override)
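As an additional cross-check (not part of the run above), the patched agent should also advertise the new parameter in its metadata; one hedged way to confirm this, assuming the patched package is installed:
# Assumption: after the update, preferred_lft appears in the agent's
# parameter list reported by pcs.
pcs resource describe ocf:heartbeat:IPaddr2 | grep -i preferred_lft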
patched version (resource-agents-3.9.5-97.el7.x86_64)
=====================================================
[root@virt-251 ~]# pcs resource enable ipv6
[root@virt-251 ~]# pcs resource
ipv6 (ocf::heartbeat:IPaddr2): Started virt-251
[root@virt-251 ~]# ip -6 a show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 2620:52:0:2246:1800:ff:fe00:fb/64 scope global noprefixroute dynamic
valid_lft 2591955sec preferred_lft 604755sec
inet6 fe80::5054:ff:fe8a:37b2/64 scope link
>>> valid_lft forever preferred_lft forever
inet6 fe80::1800:ff:fe00:fb/64 scope link
valid_lft forever preferred_lft forever
[root@virt-251 ~]# pcs resource update ipv6 preferred_lft=1
[root@virt-251 ~]# pcs resource show ipv6
Resource: ipv6 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fe80::5054:ff:fe8a:37b2 nic=eth0 preferred_lft=1
Operations: monitor interval=30s (ipv6-monitor-interval-30s)
start interval=0s timeout=20s (ipv6-start-interval-0s)
stop interval=0s timeout=20s (ipv6-stop-interval-0s)
[root@virt-251 ~]# pcs resource show
ipv6 (ocf::heartbeat:IPaddr2): Started virt-251
[root@virt-251 ~]# ip -6 a show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP qlen 1000
inet6 2620:52:0:2246:1800:ff:fe00:fb/64 scope global noprefixroute dynamic
valid_lft 2591989sec preferred_lft 604789sec
>>> inet6 fe80::5054:ff:fe8a:37b2/64 scope link deprecated
>>> valid_lft forever preferred_lft 0sec
inet6 fe80::1800:ff:fe00:fb/64 scope link
valid_lft forever preferred_lft forever
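With the VIP now flagged deprecated (preferred_lft 0sec), the kernel's source address selection (RFC 6724) should avoid it for new outbound connections. A quick sketch of how this could be checked, with the fe80::1 destination made up for illustration:
# ip -6 route get reports the source address the kernel would choose; the
# "src" field should show a non-deprecated address such as
# fe80::1800:ff:fe00:fb, not the deprecated VIP fe80::5054:ff:fe8a:37b2.
ip -6 route get fe80::1 dev eth0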
(1) pcs config:
[root@virt-249 ~]# pcs config
Cluster Name: STSRHTS27446
Corosync Nodes:
virt-247 virt-249 virt-251
Pacemaker Nodes:
virt-247 virt-249 virt-251
Resources:
Clone: dlm-clone
Meta Attrs: interleave=true ordered=true
Resource: dlm (class=ocf provider=pacemaker type=controld)
Operations: monitor interval=30s on-fail=fence (dlm-monitor-interval-30s)
start interval=0s timeout=90 (dlm-start-interval-0s)
stop interval=0s timeout=100 (dlm-stop-interval-0s)
Clone: clvmd-clone
Meta Attrs: interleave=true ordered=true
Resource: clvmd (class=ocf provider=heartbeat type=clvm)
Attributes: with_cmirrord=1
Operations: monitor interval=30s on-fail=fence (clvmd-monitor-interval-30s)
start interval=0s timeout=90 (clvmd-start-interval-0s)
stop interval=0s timeout=90 (clvmd-stop-interval-0s)
Resource: ipv6 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fe80::5054:ff:fe8a:37b2 nic=eth0
Operations: monitor interval=30s (ipv6-monitor-interval-30s)
start interval=0s timeout=20s (ipv6-start-interval-0s)
stop interval=0s timeout=20s (ipv6-stop-interval-0s)
Stonith Devices:
Resource: fence-virt-247 (class=stonith type=fence_xvm)
Attributes: pcmk_host_check=static-list pcmk_host_list=virt-247 pcmk_host_map=virt-247:virt-247.cluster-qe.lab.eng.brq.redhat.com
Operations: monitor interval=60s (fence-virt-247-monitor-interval-60s)
Resource: fence-virt-249 (class=stonith type=fence_xvm)
Attributes: pcmk_host_check=static-list pcmk_host_list=virt-249 pcmk_host_map=virt-249:virt-249.cluster-qe.lab.eng.brq.redhat.com
Operations: monitor interval=60s (fence-virt-249-monitor-interval-60s)
Resource: fence-virt-251 (class=stonith type=fence_xvm)
Attributes: pcmk_host_check=static-list pcmk_host_list=virt-251 pcmk_host_map=virt-251:virt-251.cluster-qe.lab.eng.brq.redhat.com
Operations: monitor interval=60s (fence-virt-251-monitor-interval-60s)
Fencing Levels:
Location Constraints:
Ordering Constraints:
start dlm-clone then start clvmd-clone (kind:Mandatory)
Colocation Constraints:
clvmd-clone with dlm-clone (score:INFINITY)
Ticket Constraints:
Alerts:
No alerts defined
Resources Defaults:
No defaults set
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: STSRHTS27446
dc-version: 1.1.16-8.el7-94ff4df
have-watchdog: false
last-lrm-refresh: 1494502157
no-quorum-policy: freeze
Quorum:
Options:
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2017:1844