Bug 1224619 - nfs-ganesha: delete node throws error and pcs status also notifies about failures; in fact, I/O also doesn't resume post grace period
Summary: nfs-ganesha:delete node throws error and pcs status also notifies about failu...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
high
urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Kaleb KEITHLEY
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1234474 1234584
 
Reported: 2015-05-25 07:04 UTC by Saurabh
Modified: 2016-01-19 06:14 UTC
CC List: 11 users

Fixed In Version: glusterfs-3.7.1-6
Doc Type: Bug Fix
Doc Text:
Previously, deleting a node was intentionally made disruptive. It removed the node from the Highly Available (HA) cluster and deleted the virtual IP address (VIP). Due to this, any clients that have NFS mounts on the deleted node(s) experienced I/O errors. With this release, when a node is deleted from the HA cluster, clients must remount using one of remaining valid VIPs. For a less disruptive experience, a fail-over can be initiated by administratively killing the ganesha.nfsd process on a node. The VIP will move to another node and clients will seamlessly switch.
Clone Of:
: 1234474 (view as bug list)
Environment:
Last Closed: 2015-07-29 04:52:40 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Saurabh 2015-05-25 07:04:22 UTC
Description of problem:
I tried to delete a node using the script ganesha-ha.sh. By "delete node" I mean deleting a node that is part of the nfs-ganesha HA cluster.

Here is the cluster information:
I have four nodes, viz. nfs1, nfs2, nfs3, and nfs4.
I mounted the volume using the virtual IP (VIP) of node nfs4. While I/O was in progress I tried to delete node nfs4; that is when I saw the error, pcs status reporting failures, and I/O not resuming after the grace period.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
The issue is seen on the first attempt.

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Bring up nfs-ganesha after completing the prerequisites.
3. Mount the volume on a client.
4. Delete a node (see the command sketch below).
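
A minimal command sketch of these steps, assuming a volume named testvol with placeholder brick paths and mount options; only the delete invocation in step 4 is the one actually captured in this report (the later verification in comment 6 passes the /etc/ganesha/ directory rather than the ganesha-ha.conf file path):

# 1. Create and start a 6x2 (distribute-replicate) volume; volume name and brick paths are placeholders
gluster volume create testvol replica 2 \
    nfs1:/bricks/b1 nfs2:/bricks/b1 nfs3:/bricks/b1 nfs4:/bricks/b1 \
    nfs1:/bricks/b2 nfs2:/bricks/b2 nfs3:/bricks/b2 nfs4:/bricks/b2 \
    nfs1:/bricks/b3 nfs2:/bricks/b3 nfs3:/bricks/b3 nfs4:/bricks/b3
gluster volume start testvol

# 2. Bring up nfs-ganesha once the HA prerequisites (ganesha-ha.conf, shared storage, pcs/pcsd setup) are in place
gluster nfs-ganesha enable

# 3. Mount the volume on a client through the VIP of nfs4 (address is a placeholder)
mount -t nfs -o vers=4 <VIP-of-nfs4>:/testvol /mnt/testvol

# 4. Delete node nfs4 from the HA cluster (invocation as captured below)
bash /usr/libexec/ganesha/ganesha-ha.sh --delete /etc/ganesha/ganesha-ha.conf nfs4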

Actual results:
Output of the delete node command:
[root@nfs1 ~]# time bash /usr/libexec/ganesha/ganesha-ha.sh --delete /etc/ganesha/ganesha-ha.conf nfs4
/usr/libexec/ganesha/ganesha-ha.sh: line 734: /etc/ganesha/ganesha-ha.conf/ganesha-ha.conf: Not a directory
Removing Constraint - colocation-nfs1-cluster_ip-1-nfs1-trigger_ip-1-INFINITY
Removing Constraint - location-nfs1-cluster_ip-1
Removing Constraint - location-nfs1-cluster_ip-1-nfs2-1000
Removing Constraint - location-nfs1-cluster_ip-1-nfs3-2000
Removing Constraint - location-nfs1-cluster_ip-1-nfs4-3000
Removing Constraint - location-nfs1-cluster_ip-1-nfs1-4000
Removing Constraint - order-nfs-grace-clone-nfs1-cluster_ip-1-mandatory
Deleting Resource - nfs1-cluster_ip-1
Removing Constraint - order-nfs1-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs1-trigger_ip-1
Removing Constraint - colocation-nfs2-cluster_ip-1-nfs2-trigger_ip-1-INFINITY
Removing Constraint - location-nfs2-cluster_ip-1
Removing Constraint - location-nfs2-cluster_ip-1-nfs3-1000
Removing Constraint - location-nfs2-cluster_ip-1-nfs4-2000
Removing Constraint - location-nfs2-cluster_ip-1-nfs1-3000
Removing Constraint - location-nfs2-cluster_ip-1-nfs2-4000
Removing Constraint - order-nfs-grace-clone-nfs2-cluster_ip-1-mandatory
Deleting Resource - nfs2-cluster_ip-1
Removing Constraint - order-nfs2-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs2-trigger_ip-1
Removing Constraint - colocation-nfs3-cluster_ip-1-nfs3-trigger_ip-1-INFINITY
Removing Constraint - location-nfs3-cluster_ip-1
Removing Constraint - location-nfs3-cluster_ip-1-nfs4-1000
Removing Constraint - location-nfs3-cluster_ip-1-nfs1-2000
Removing Constraint - location-nfs3-cluster_ip-1-nfs2-3000
Removing Constraint - location-nfs3-cluster_ip-1-nfs3-4000
Removing Constraint - order-nfs-grace-clone-nfs3-cluster_ip-1-mandatory
Deleting Resource - nfs3-cluster_ip-1
Removing Constraint - order-nfs3-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs3-trigger_ip-1
Removing Constraint - colocation-nfs4-cluster_ip-1-nfs4-trigger_ip-1-INFINITY
Removing Constraint - location-nfs4-cluster_ip-1
Removing Constraint - location-nfs4-cluster_ip-1-nfs1-1000
Removing Constraint - location-nfs4-cluster_ip-1-nfs2-2000
Removing Constraint - location-nfs4-cluster_ip-1-nfs3-3000
Removing Constraint - location-nfs4-cluster_ip-1-nfs4-4000
Removing Constraint - order-nfs-grace-clone-nfs4-cluster_ip-1-mandatory
Deleting Resource - nfs4-cluster_ip-1
Removing Constraint - order-nfs4-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs4-trigger_ip-1
Adding nfs1-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs1-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs2-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs2-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs3-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs3-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs4-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs4-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Error: unable to create resource/fence device 'nfs1-cluster_ip-1', 'nfs1-cluster_ip-1' already exists on this system
Error: unable to create resource/fence device 'nfs1-trigger_ip-1', 'nfs1-trigger_ip-1' already exists on this system
Adding nfs1-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs1-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Error: unable to create resource/fence device 'nfs2-cluster_ip-1', 'nfs2-cluster_ip-1' already exists on this system
Error: unable to create resource/fence device 'nfs2-trigger_ip-1', 'nfs2-trigger_ip-1' already exists on this system
Adding nfs2-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs2-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Error: unable to create resource/fence device 'nfs3-cluster_ip-1', 'nfs3-cluster_ip-1' already exists on this system
Error: unable to create resource/fence device 'nfs3-trigger_ip-1', 'nfs3-trigger_ip-1' already exists on this system
Adding nfs3-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs3-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
CIB updated
CIB updated
Removing Constraint - location-nfs_stop-nfs4-nfs4-INFINITY
Attempting to stop: nfs_stop-nfs4...Stopped
Deleting Resource - nfs_stop-nfs4
nfs4: Successfully destroyed cluster
nfs1: Corosync updated
nfs2: Corosync updated
nfs3: Corosync updated
/usr/libexec/ganesha/ganesha-ha.sh: line 828: manage-service: command not found



pcs status after the delete attempt:
Cluster name: ganesha-ha-360
Last updated: Mon May 25 12:15:09 2015
Last change: Mon May 25 12:13:56 2015
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
3 Nodes configured
14 Resources configured


Online: [ nfs1 nfs2 nfs3 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs1 nfs2 nfs3 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs1 nfs2 nfs3 ]
 nfs1-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs1-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs1 
 nfs2-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs2-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs2 
 nfs3-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs3-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs3 
 nfs4-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs4-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs1 

Failed actions:
    nfs2-cluster_ip-1_start_0 on nfs2 'not configured' (6): call=446, status=complete, last-rc-change='Mon May 25 12:13:43 2015', queued=0ms, exec=21ms
    nfs4-cluster_ip-1_start_0 on nfs1 'not configured' (6): call=471, status=complete, last-rc-change='Mon May 25 12:13:51 2015', queued=0ms, exec=45ms
    nfs1-cluster_ip-1_start_0 on nfs1 'not configured' (6): call=457, status=complete, last-rc-change='Mon May 25 12:13:42 2015', queued=0ms, exec=102ms
    nfs3-cluster_ip-1_start_0 on nfs3 'not configured' (6): call=465, status=complete, last-rc-change='Mon May 25 12:13:42 2015', queued=0ms, exec=34ms


I/O status on the mount point; it has not resumed for the last 5 minutes.

When the node was deleted, the I/O was at this same point:
          131072    2048 1755562 2234854  4580695  5156274 5152166 2628803 4803970  3947840  4589528  1368883  1507267 4530488  4543068
          131072    4096 1696113 2306508  4553793  4608380 4514899 2577627 3885919  3951899  4601283  1131902  1287189 5190547  5198154
          131072    8192 1662315 2128107  4328792  4342126 4366024 2296008 3420989  3556337  5031914  1084429  1464754 4297424  4388223
          131072   16384 1551129 1787865  3379892  3390565 3384512 1964888 2749565  2361871  3612280  1228082  1228428 4196038  3566489
          262144      64  955004 2253588   167354  3705742 4149886 2162155 5074900  4487989  4621703  1404543  1807542 4983152  5577404
          262144     128 1693557 2129450   149411  3719356 4007772 2310025   94174  4090630  4659425  1465146  1769092  147737  3482477
          262144     256 1745636 2150891   153323  3712925 4119042 2360486  102683  3882582  4157984  1487771  1833124  154315  3724963
          262144     512 1686378 2045059  4876903  4993902 4364998 2438710 4756664  3948192  4405557  1543185  1907166 4395887  4415961
          262144    1024 1667359 2180736  5150689  4533642 4499007 2448752 4467202  3967253  4472744  1567978  1984048  151874  3808296
          262144    2048 1533354 2271315  4594338  4886005 5066761 2638383 4473308  3874892  4724859


nfs-ganesha (ganesha.nfsd) process status on all nodes:

nfs1
root     19721     1  0 May22 ?        00:01:14 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid
---
nfs2
root     20620     1  0 May18 ?        01:32:35 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid
---
nfs3
root     30103     1  0 May19 ?        00:07:40 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid
---
nfs4
root     30288     1  3 12:00 ?        00:00:58 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid

Expected results:
The delete node command should not throw any error.
pcs status should be clean after the node is deleted.

I/O should resume after the grace period.

Additional info:

pcs status before starting the test case:
============================================
Cluster name: ganesha-ha-360
Last updated: Mon May 25 12:01:05 2015
Last change: Mon May 25 12:00:57 2015
Stack: cman
Current DC: nfs1 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Online: [ nfs1 nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs1 nfs2 nfs3 nfs4 ]
 nfs1-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs1 
 nfs1-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs1 
 nfs2-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs2 
 nfs2-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs2 
 nfs3-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs3 
 nfs3-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs3 
 nfs4-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs4 
 nfs4-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs4

Comment 2 Saurabh 2015-05-25 09:06:18 UTC
Well, later I tried to disable nfs-ganesha using the command "gluster nfs-ganesha disable", and it failed with this error:
[root@nfs1 ~]# gluster nfs-ganesha disable
nfs-ganesha: failed: Commit failed on 41146369-15cf-466f-9c9f-5264ef8cf6b2. Error: Could not stop NFS-Ganesha.

However, nfs-ganesha did stop on all nodes.

Here is the pcs status from all nodes; note that the output does not match across nodes.
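
A sketch of the per-node checks implied here, assuming ps and pcs were run locally on each node (the exact commands are not recorded in this report):

ps -ef | grep '[g]anesha.nfsd'   # is ganesha.nfsd still running on this node?
pcs status                       # what does this node report as the cluster state?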


nfs1
Error: cluster is not currently running on this node
---
nfs2
Cluster name: 
Last updated: Mon May 25 14:35:43 2015
Last change: Mon May 25 14:30:39 2015
Stack: cman
Current DC: nfs2 - partition WITHOUT quorum
Version: 1.1.11-97629de
4 Nodes configured
2 Resources configured


Online: [ nfs2 nfs3 ]
OFFLINE: [ nfs1 nfs4 ]

Full list of resources:

 nfs4-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs4-trigger_ip-1	(ocf::heartbeat:Dummy):	Stopped 

Failed actions:
    nfs4-cluster_ip-1_start_0 on nfs2 'not configured' (6): call=488, status=complete, last-rc-change='Mon May 25 14:30:16 2015', queued=0ms, exec=36ms


---
nfs3
Cluster name: 
Last updated: Mon May 25 14:35:44 2015
Last change: Mon May 25 14:30:39 2015
Stack: cman
Current DC: nfs2 - partition WITHOUT quorum
Version: 1.1.11-97629de
4 Nodes configured
2 Resources configured


Online: [ nfs2 nfs3 ]
OFFLINE: [ nfs1 nfs4 ]

Full list of resources:

 nfs4-cluster_ip-1	(ocf::heartbeat:IPaddr):	Stopped 
 nfs4-trigger_ip-1	(ocf::heartbeat:Dummy):	Stopped 

Failed actions:
    nfs4-cluster_ip-1_start_0 on nfs2 'not configured' (6): call=488, status=complete, last-rc-change='Mon May 25 14:30:16 2015', queued=0ms, exec=36ms


---
nfs4
Error: cluster is not currently running on this node
---

Comment 3 Kaleb KEITHLEY 2015-06-05 12:12:44 UTC
If we delete a node, we should not delete the VIP.

Comment 4 Kaleb KEITHLEY 2015-06-25 01:19:53 UTC
>>
>> We'd have to decide between these two choices,
>> 1. Delete the node from the cluster, delete the resources and free the VIP (no VIP resource movement)
>>     Clients connected to this node will see an I/O disruption in this case
>> 2. Delete the node from the cluster, but move the VIP resource to another node in the cluster in the background.
>>     Clients connected to this node will not see any I/O disruption in this case.
>>
>> I request the opinions of the rest of the team to decide on the best choice here.
>>
>> Thanks
>> Meghana
>>
>
> The design intent from the start was to go with Option 1 listed above.
>
> Deletion of a node is a disruptive admin operation, something that would typically even be included in a maint window, only that in our case we can manage it online as well. All resources associated with that node need to be stopped first and then deleted. Only then should delete node (as a gluster TSP operation) be done. This needs to be reflected in the docs.
>
> An admin sh/would know that clients are connected to that node, in which case admins typically send out appropriate warning notices prior to such operations being performed. So I/O disruption is expected - only that it is a known one. If some clients don't disconnect and keep using the mount (mounting from that node) then it is the admin's job to take a call on how to proceed with the delete node op. The idea was that he would need to first run the ha-script to do the appropriate stopping and deletion of cluster resources associated with that node, and only then proceed with the actual physical removal of that node from the TSP. The reverse is done during addition, when the physical addition of the node and associated bricks would be done first, followed by invocation of the ha-script, which creates and starts all resources associated with the newly added node. The conf file needs to be appropriately updated as well in both cases.
>
> So keep it simple for this first release and do not allow FO of the VIP etc. It will reduce other headaches.
>
> Anand
>


Please note that the above is from a simplification point of view. If you strongly feel that you *have* to fail over the VIP and there is a technical rationale behind that decision, please explain it, as it is possible I have missed something. I don't feel there is anything wrong with doing that apart from one case, which is:
suppose there are 8 storage nodes with 8 ganesha heads serving them (sized on the basis of workload requirements etc.). If one storage node is deleted, we have 7 nodes, and if VIP(8) is failed over and kept alive, we have 8 VIPs serving the 7 storage nodes. Now, that should not lead to a non-load-balanced case for some user scenarios, because from a load-balancing perspective one node will now keep getting more connections than the rest as it continues to host 2 VIPs (and maybe the admin does not intend to add another node anytime in the near future). We just need to be able to defend this configuration either way (failing over the VIP of a deleted node OR doing away with it altogether).
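
A minimal sketch of the less disruptive alternative mentioned in the doc text, i.e. forcing a fail-over by administratively stopping ganesha.nfsd on the node; the pid file path below is taken from the ps output earlier in this bug, and the commands are illustrative rather than part of the recorded test:

# On the node whose VIP should move: stop ganesha.nfsd; the nfs-mon resource should
# detect the dead process and pacemaker should relocate that node's cluster_ip resource
kill "$(cat /var/run/ganesha.nfsd.pid)"

# From any surviving node, watch the VIP move
pcs status | grep cluster_ip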

Comment 5 Kaleb KEITHLEY 2015-06-27 09:51:41 UTC
merged downstream https://code.engineering.redhat.com/gerrit/51681
merged upstream (master) http://review.gluster.org/11353
merged upstream (release-3.7) http://review.gluster.org/11427

Comment 6 Saurabh 2015-07-07 08:25:05 UTC
Marking this BZ as verified: the node is deleted and the VIP no longer floats.

[root@nfs11 ~]# time bash /usr/libexec/ganesha/ganesha-ha.sh --delete /etc/ganesha/ nfs16
Removing Constraint - colocation-nfs11-cluster_ip-1-nfs11-trigger_ip-1-INFINITY
Removing Constraint - location-nfs11-cluster_ip-1
Removing Constraint - location-nfs11-cluster_ip-1-nfs12-1000
Removing Constraint - location-nfs11-cluster_ip-1-nfs13-2000
Removing Constraint - location-nfs11-cluster_ip-1-nfs14-3000
Removing Constraint - location-nfs11-cluster_ip-1-nfs16-4000
Removing Constraint - location-nfs11-cluster_ip-1-nfs11-5000
Removing Constraint - order-nfs-grace-clone-nfs11-cluster_ip-1-mandatory
Deleting Resource - nfs11-cluster_ip-1
Removing Constraint - order-nfs11-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs11-trigger_ip-1
Removing Constraint - colocation-nfs12-cluster_ip-1-nfs12-trigger_ip-1-INFINITY
Removing Constraint - location-nfs12-cluster_ip-1
Removing Constraint - location-nfs12-cluster_ip-1-nfs13-1000
Removing Constraint - location-nfs12-cluster_ip-1-nfs14-2000
Removing Constraint - location-nfs12-cluster_ip-1-nfs16-3000
Removing Constraint - location-nfs12-cluster_ip-1-nfs11-4000
Removing Constraint - location-nfs12-cluster_ip-1-nfs12-5000
Removing Constraint - order-nfs-grace-clone-nfs12-cluster_ip-1-mandatory
Deleting Resource - nfs12-cluster_ip-1
Removing Constraint - order-nfs12-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs12-trigger_ip-1
Removing Constraint - colocation-nfs13-cluster_ip-1-nfs13-trigger_ip-1-INFINITY
Removing Constraint - location-nfs13-cluster_ip-1
Removing Constraint - location-nfs13-cluster_ip-1-nfs14-1000
Removing Constraint - location-nfs13-cluster_ip-1-nfs16-2000
Removing Constraint - location-nfs13-cluster_ip-1-nfs11-3000
Removing Constraint - location-nfs13-cluster_ip-1-nfs12-4000
Removing Constraint - location-nfs13-cluster_ip-1-nfs13-5000
Removing Constraint - order-nfs-grace-clone-nfs13-cluster_ip-1-mandatory
Deleting Resource - nfs13-cluster_ip-1
Removing Constraint - order-nfs13-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs13-trigger_ip-1
Removing Constraint - colocation-nfs14-cluster_ip-1-nfs14-trigger_ip-1-INFINITY
Removing Constraint - location-nfs14-cluster_ip-1
Removing Constraint - location-nfs14-cluster_ip-1-nfs16-1000
Removing Constraint - location-nfs14-cluster_ip-1-nfs11-2000
Removing Constraint - location-nfs14-cluster_ip-1-nfs12-3000
Removing Constraint - location-nfs14-cluster_ip-1-nfs13-4000
Removing Constraint - location-nfs14-cluster_ip-1-nfs14-5000
Removing Constraint - order-nfs-grace-clone-nfs14-cluster_ip-1-mandatory
Deleting Resource - nfs14-cluster_ip-1
Removing Constraint - order-nfs14-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs14-trigger_ip-1
Removing Constraint - colocation-nfs16-cluster_ip-1-nfs16-trigger_ip-1-INFINITY
Removing Constraint - location-nfs16-cluster_ip-1
Removing Constraint - location-nfs16-cluster_ip-1-nfs11-1000
Removing Constraint - location-nfs16-cluster_ip-1-nfs12-2000
Removing Constraint - location-nfs16-cluster_ip-1-nfs13-3000
Removing Constraint - location-nfs16-cluster_ip-1-nfs14-4000
Removing Constraint - location-nfs16-cluster_ip-1-nfs16-5000
Removing Constraint - order-nfs-grace-clone-nfs16-cluster_ip-1-mandatory
Deleting Resource - nfs16-cluster_ip-1
Removing Constraint - order-nfs16-trigger_ip-1-nfs-grace-clone-mandatory
Deleting Resource - nfs16-trigger_ip-1
Adding nfs11-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs11-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs12-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs12-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs13-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs13-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs14-trigger_ip-1 nfs-grace-clone (kind: Mandatory) (Options: first-action=start then-action=start)
Adding nfs-grace-clone nfs14-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)
CIB updated
CIB updated
Removing Constraint - location-nfs_stop-nfs16-nfs16-INFINITY
Deleting Resource - nfs_stop-nfs16
nfs16: Successfully destroyed cluster
nfs11: Corosync updated
nfs12: Corosync updated
nfs13: Corosync updated
nfs14: Corosync updated
ganesha-ha.conf                                                                                                                                                              100%  916     0.9KB/s   00:00    
ganesha-ha.conf                                                                                                                                                              100%  916     0.9KB/s   00:00    
ganesha-ha.conf                                                                                                                                                              100%  916     0.9KB/s   00:00    
[  OK  ] ganesha.nfsd: [  OK  ]

real	0m46.683s
user	0m16.353s
sys	0m4.389s




On the existing cluster,
[root@nfs11 ~]# pcs status
Cluster name: nozomer
Last updated: Tue Jul  7 19:17:50 2015
Last change: Tue Jul  7 19:15:19 2015
Stack: cman
Current DC: nfs11 - partition with quorum
Version: 1.1.11-97629de
4 Nodes configured
16 Resources configured


Online: [ nfs11 nfs12 nfs13 nfs14 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs11 nfs12 nfs13 nfs14 ]
 nfs11-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs11 
 nfs11-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs11 
 nfs12-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs12 
 nfs12-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs12 
 nfs13-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs13 
 nfs13-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs13 
 nfs14-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs14 
 nfs14-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs14 



on the deleted node,

[root@nfs16 ~]# pcs status
Error: cluster is not currently running on this node

Comment 7 Bhavana 2015-07-26 14:40:01 UTC
Made minor updates to the doc text.

Comment 8 errata-xmlrpc 2015-07-29 04:52:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

