Bug 1461114

Summary: [Ganesha+Gdeploy] While Adding a node to existing ganesha cluster "Unable to communicate with pcsd" messages are being observed
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Manisha Saini <msaini>
Component: gdeploy
Assignee: Devyani Kota <dkota>
Status: CLOSED WONTFIX
QA Contact: Manisha Saini <msaini>
Severity: unspecified
Docs Contact:
Priority: high
Version: rhgs-3.3
CC: amukherj, apaladug, dkota, msaini, rhinduja, rhs-bugs, sankarshan, sheggodu, smohan, storage-qa-internal, surs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: RHGS-3.4.0-to-be-deferred
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: While adding a node to an existing Ganesha cluster, the following messages are observed:
"Error: Some nodes had a newer tokens than the local node. Local node's tokens were updated. Please repeat the authentication if needed."
"Error: Unable to communicate with pcsd"
Consequence: This is intermittent, with no known consequence. The error messages are benign.
Workaround (if any): There is no workaround, but the messages can be safely ignored since there is no known functionality impact.
Result:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-08 07:19:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1503143

Description Manisha Saini 2017-06-13 14:14:35 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Manisha Saini 2017-06-13 14:26:36 UTC
Submitted the bug without steps by mistake.


Description of problem:
While adding a node to an existing Ganesha cluster, "Unable to communicate with pcsd" messages are observed.

Version-Release number of selected component (if applicable):
# rpm -qa | grep gdeploy
gdeploy-2.0.2-10.el7rhgs.noarch


How reproducible:
intermittent

Steps to Reproduce:
1. Create a 4-node Ganesha cluster.
2. Add a new node to the existing Ganesha cluster via gdeploy.

# cat add_node.conf 
[hosts]
dhcp42-125.lab.eng.blr.redhat.com
dhcp42-127.lab.eng.blr.redhat.com
dhcp42-129.lab.eng.blr.redhat.com
dhcp42-119.lab.eng.blr.redhat.com
dhcp42-114.lab.eng.blr.redhat.com

[peer]
action=probe

[clients]
action=mount
volname=dhcp42-114.lab.eng.blr.redhat.com:gluster_shared_storage
hosts=dhcp42-114.lab.eng.blr.redhat.com
fstype=glusterfs
client_mount_points=/var/run/gluster/shared_storage/

[nfs-ganesha]
action=add-node
cluster_nodes=dhcp42-125.lab.eng.blr.redhat.com,dhcp42-127.lab.eng.blr.redhat.com,dhcp42-129.lab.eng.blr.redhat.com,dhcp42-119.lab.eng.blr.redhat.com
nodes=dhcp42-114.lab.eng.blr.redhat.com
vip=10.70.42.44


Actual results:
"Unable to communicate with pcsd" messages are observed while adding a node to cluster via gdeploy

"Error: Some nodes had a newer tokens than the local node. Local node's tokens were updated. Please repeat the authentication if needed.\nError: Unable to communicate with pcsd", "stdout": "", "stdout_lines": [], "warnings": []}
"

Expected results:
No such messages should be observed

Additional info:


# gdeploy -c add_node.conf 

PLAY [master] ******************************************************************

TASK [Creates a Trusted Storage Pool] ******************************************
changed: [dhcp42-125.lab.eng.blr.redhat.com]

TASK [Pause for a few seconds] *************************************************
Pausing for 5 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [dhcp42-125.lab.eng.blr.redhat.com]

PLAY RECAP *********************************************************************
dhcp42-125.lab.eng.blr.redhat.com : ok=2    changed=1    unreachable=0    failed=0   


PLAY [clients] *****************************************************************

TASK [Create the dir to mount the volume, skips if present] ********************
ok: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'})

PLAY RECAP *********************************************************************
dhcp42-114.lab.eng.blr.redhat.com : ok=1    changed=0    unreachable=0    failed=0   


PLAY [clients] *****************************************************************

TASK [Mount the volumes, if fstype is glusterfs] *******************************
ok: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'})

PLAY RECAP *********************************************************************
dhcp42-114.lab.eng.blr.redhat.com : ok=1    changed=0    unreachable=0    failed=0   


PLAY [clients] *****************************************************************

TASK [setup] *******************************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Uncomment STATD_PORT for rpc.statd to listen on] *************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Uncomment LOCKD_TCPPORT for rpc.lockd to listen on] **********************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Uncomment LOCKD_UDPPORT for rpc.lockd to listen on] **********************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Uncomment MOUNTD_PORT for rpc.mountd to listen on] ***********************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Restart nfs service (RHEL 6 only)] ***************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Restart rpc-statd service] ***********************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Restart nfs-config service] **********************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Restart nfs-mountd service] **********************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Restart nfslock service (RHEL 6 & 7)] ************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

TASK [Mount the volumes if fstype is NFS] **************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

PLAY RECAP *********************************************************************
dhcp42-114.lab.eng.blr.redhat.com : ok=1    changed=0    unreachable=0    failed=0   


PLAY [clients] *****************************************************************

TASK [Mount the volumes, if fstype is CIFS] ************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com] => (item={u'mountpoint': u'/var/run/gluster/shared_storage/', u'fstype': u'fuse'}) 

PLAY RECAP *********************************************************************
dhcp42-114.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=0   


PLAY [master_node] *************************************************************

TASK [setup] *******************************************************************
ok: [dhcp42-125.lab.eng.blr.redhat.com]

TASK [Copy the public key to the local] ****************************************
changed: [dhcp42-125.lab.eng.blr.redhat.com]

TASK [Copy the private key to the local] ***************************************
changed: [dhcp42-125.lab.eng.blr.redhat.com]

PLAY RECAP *********************************************************************
dhcp42-125.lab.eng.blr.redhat.com : ok=3    changed=2    unreachable=0    failed=0   


PLAY [new_nodes] ***************************************************************

TASK [setup] *******************************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Check if nfs-ganesha is installed] ***************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]
 [WARNING]: Consider using yum, dnf or zypper module rather than running rpm


TASK [fail] ********************************************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Check if corosync is installed] ******************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [fail] ********************************************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Check if pacemaker is installed] *****************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [fail] ********************************************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Check if libntirpc is installed] *****************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [fail] ********************************************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Check if pcs is installed] ***********************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [fail] ********************************************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Stop kernel NFS] *********************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Stop network manager service] ********************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Disable network manager service] *****************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Start network service] ***************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Enable network service] **************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Start pcsd service] ******************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Enable pcsd service] *****************************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Enable pacemaker service] ************************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Create a user hacluster on new nodes] ************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Set the hacluster user the same password on new nodes] *******************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Copy the public key to remote nodes] *************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-125.lab.eng.blr.redhat.com)

TASK [Copy the private key to remote node] *************************************
ok: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-125.lab.eng.blr.redhat.com)

TASK [Deploy the pubkey ~/root/.ssh/authorized_keys on all nodes] **************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Define service port] *****************************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Restart statd service (RHEL 6 only)] *************************************
skipping: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Restart nfs-config service] **********************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Restart rpc-statd service] ***********************************************
changed: [dhcp42-114.lab.eng.blr.redhat.com]

TASK [Pcs cluster authenticate the hacluster on new nodes] *********************
changed: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-114.lab.eng.blr.redhat.com)

TASK [Pcs cluster authenticate the hacluster on existing nodes] ****************
changed: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-125.lab.eng.blr.redhat.com)
changed: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-127.lab.eng.blr.redhat.com)
changed: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-129.lab.eng.blr.redhat.com)
changed: [dhcp42-114.lab.eng.blr.redhat.com] => (item=dhcp42-119.lab.eng.blr.redhat.com)

TASK [Pause for a few seconds after pcs auth] **********************************
Pausing for 3 seconds
(ctrl+C then 'C' = continue early, ctrl+C then 'A' = abort)
ok: [dhcp42-114.lab.eng.blr.redhat.com]

PLAY RECAP *********************************************************************
dhcp42-114.lab.eng.blr.redhat.com : ok=25   changed=13   unreachable=0    failed=0   


PLAY [cluster_nodes] ***********************************************************

TASK [Pcs cluster authenticate the hacluster on new nodes] *********************
changed: [dhcp42-129.lab.eng.blr.redhat.com] => (item=dhcp42-114.lab.eng.blr.redhat.com)
failed: [dhcp42-125.lab.eng.blr.redhat.com] (item=dhcp42-114.lab.eng.blr.redhat.com) => {"changed": true, "cmd": "pcs cluster auth -u hacluster -p hacluster dhcp42-114.lab.eng.blr.redhat.com", "delta": "0:00:05.453812", "end": "2017-06-13 19:08:11.573044", "failed": true, "item": "dhcp42-114.lab.eng.blr.redhat.com", "rc": 1, "start": "2017-06-13 19:08:06.119232", "stderr": "Error: Some nodes had a newer tokens than the local node. Local node's tokens were updated. Please repeat the authentication if needed.\nError: Unable to communicate with pcsd", "stdout": "", "stdout_lines": [], "warnings": []}
changed: [dhcp42-119.lab.eng.blr.redhat.com] => (item=dhcp42-114.lab.eng.blr.redhat.com)
failed: [dhcp42-127.lab.eng.blr.redhat.com] (item=dhcp42-114.lab.eng.blr.redhat.com) => {"changed": true, "cmd": "pcs cluster auth -u hacluster -p hacluster dhcp42-114.lab.eng.blr.redhat.com", "delta": "0:00:05.669351", "end": "2017-06-13 19:08:12.091837", "failed": true, "item": "dhcp42-114.lab.eng.blr.redhat.com", "rc": 1, "start": "2017-06-13 19:08:06.422486", "stderr": "Error: Some nodes had a newer tokens than the local node. Local node's tokens were updated. Please repeat the authentication if needed.\nError: Unable to communicate with pcsd", "stdout": "", "stdout_lines": [], "warnings": []}
	to retry, use: --limit @/tmp/tmpPU0xKa/ganesha-pcs-auth-new-nodes.retry

PLAY RECAP *********************************************************************
dhcp42-119.lab.eng.blr.redhat.com : ok=1    changed=1    unreachable=0    failed=0   
dhcp42-125.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1   
dhcp42-127.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1   
dhcp42-129.lab.eng.blr.redhat.com : ok=1    changed=1    unreachable=0    failed=0   

Ignoring errors...

PLAY [master] ******************************************************************

TASK [Adds a node to the cluster] **********************************************
changed: [dhcp42-125.lab.eng.blr.redhat.com] => (item={u'host': u'dhcp42-114.lab.eng.blr.redhat.com', u'vip': u'10.70.42.44'})

TASK [Report ganesha add-node status] ******************************************
ok: [dhcp42-125.lab.eng.blr.redhat.com] => {
    "msg": [
        "Disabling SBD service...", 
        "dhcp42-114.lab.eng.blr.redhat.com: sbd disabled", 
        "dhcp42-125.lab.eng.blr.redhat.com: Corosync updated", 
        "dhcp42-127.lab.eng.blr.redhat.com: Corosync updated", 
        "dhcp42-129.lab.eng.blr.redhat.com: Corosync updated", 
        "dhcp42-119.lab.eng.blr.redhat.com: Corosync updated", 
        "Setting up corosync...", 
        "dhcp42-114.lab.eng.blr.redhat.com: Succeeded", 
        "Synchronizing pcsd certificates on nodes dhcp42-114.lab.eng.blr.redhat.com...", 
        "dhcp42-114.lab.eng.blr.redhat.com: Success", 
        "Restarting pcsd on the nodes in order to reload the certificates...", 
        "dhcp42-114.lab.eng.blr.redhat.com: Success", 
        "dhcp42-114.lab.eng.blr.redhat.com: Stopping Cluster (pacemaker)...", 
        "dhcp42-127.lab.eng.blr.redhat.com: Stopping Cluster (pacemaker)...", 
        "dhcp42-125.lab.eng.blr.redhat.com: Stopping Cluster (pacemaker)...", 
        "dhcp42-129.lab.eng.blr.redhat.com: Stopping Cluster (pacemaker)...", 
        "dhcp42-119.lab.eng.blr.redhat.com: Stopping Cluster (pacemaker)...", 
        "dhcp42-114.lab.eng.blr.redhat.com: Stopping Cluster (corosync)...", 
        "dhcp42-127.lab.eng.blr.redhat.com: Stopping Cluster (corosync)...", 
        "dhcp42-125.lab.eng.blr.redhat.com: Stopping Cluster (corosync)...", 
        "dhcp42-129.lab.eng.blr.redhat.com: Stopping Cluster (corosync)...", 
        "dhcp42-119.lab.eng.blr.redhat.com: Stopping Cluster (corosync)...", 
        "dhcp42-129.lab.eng.blr.redhat.com: Starting Cluster...", 
        "dhcp42-127.lab.eng.blr.redhat.com: Starting Cluster...", 
        "dhcp42-119.lab.eng.blr.redhat.com: Starting Cluster...", 
        "dhcp42-125.lab.eng.blr.redhat.com: Starting Cluster...", 
        "dhcp42-114.lab.eng.blr.redhat.com: Starting Cluster...", 
        "Removing group: dhcp42-119.lab.eng.blr.redhat.com-group (and all resources within group)", 
        "Stopping all resources in group: dhcp42-119.lab.eng.blr.redhat.com-group...", 
        "Deleting Resource - dhcp42-119.lab.eng.blr.redhat.com-nfs_block", 
        "Removing Constraint - order-nfs-grace-clone-dhcp42-119.lab.eng.blr.redhat.com-cluster_ip-1-mandatory", 
        "Deleting Resource - dhcp42-119.lab.eng.blr.redhat.com-cluster_ip-1", 
        "Removing Constraint - location-dhcp42-119.lab.eng.blr.redhat.com-group", 
        "Removing Constraint - location-dhcp42-119.lab.eng.blr.redhat.com-group-dhcp42-125.lab.eng.blr.redhat.com-1000", 
        "Removing Constraint - location-dhcp42-119.lab.eng.blr.redhat.com-group-dhcp42-127.lab.eng.blr.redhat.com-2000", 
        "Removing Constraint - location-dhcp42-119.lab.eng.blr.redhat.com-group-dhcp42-129.lab.eng.blr.redhat.com-3000", 
        "Removing Constraint - location-dhcp42-119.lab.eng.blr.redhat.com-group-dhcp42-119.lab.eng.blr.redhat.com-4000", 
        "Deleting Resource (and group) - dhcp42-119.lab.eng.blr.redhat.com-nfs_unblock", 
        "Removing group: dhcp42-125.lab.eng.blr.redhat.com-group (and all resources within group)", 
        "Stopping all resources in group: dhcp42-125.lab.eng.blr.redhat.com-group...", 
        "Deleting Resource - dhcp42-125.lab.eng.blr.redhat.com-nfs_block", 
        "Removing Constraint - order-nfs-grace-clone-dhcp42-125.lab.eng.blr.redhat.com-cluster_ip-1-mandatory", 
        "Deleting Resource - dhcp42-125.lab.eng.blr.redhat.com-cluster_ip-1", 
        "Removing Constraint - location-dhcp42-125.lab.eng.blr.redhat.com-group", 
        "Removing Constraint - location-dhcp42-125.lab.eng.blr.redhat.com-group-dhcp42-127.lab.eng.blr.redhat.com-1000", 
        "Removing Constraint - location-dhcp42-125.lab.eng.blr.redhat.com-group-dhcp42-129.lab.eng.blr.redhat.com-2000", 
        "Removing Constraint - location-dhcp42-125.lab.eng.blr.redhat.com-group-dhcp42-119.lab.eng.blr.redhat.com-3000", 
        "Removing Constraint - location-dhcp42-125.lab.eng.blr.redhat.com-group-dhcp42-125.lab.eng.blr.redhat.com-4000", 
        "Deleting Resource (and group) - dhcp42-125.lab.eng.blr.redhat.com-nfs_unblock", 
        "Removing group: dhcp42-127.lab.eng.blr.redhat.com-group (and all resources within group)", 
        "Stopping all resources in group: dhcp42-127.lab.eng.blr.redhat.com-group...", 
        "Deleting Resource - dhcp42-127.lab.eng.blr.redhat.com-nfs_block", 
        "Removing Constraint - order-nfs-grace-clone-dhcp42-127.lab.eng.blr.redhat.com-cluster_ip-1-mandatory", 
        "Deleting Resource - dhcp42-127.lab.eng.blr.redhat.com-cluster_ip-1", 
        "Removing Constraint - location-dhcp42-127.lab.eng.blr.redhat.com-group", 
        "Removing Constraint - location-dhcp42-127.lab.eng.blr.redhat.com-group-dhcp42-129.lab.eng.blr.redhat.com-1000", 
        "Removing Constraint - location-dhcp42-127.lab.eng.blr.redhat.com-group-dhcp42-119.lab.eng.blr.redhat.com-2000", 
        "Removing Constraint - location-dhcp42-127.lab.eng.blr.redhat.com-group-dhcp42-125.lab.eng.blr.redhat.com-3000", 
        "Removing Constraint - location-dhcp42-127.lab.eng.blr.redhat.com-group-dhcp42-127.lab.eng.blr.redhat.com-4000", 
        "Deleting Resource (and group) - dhcp42-127.lab.eng.blr.redhat.com-nfs_unblock", 
        "Removing group: dhcp42-129.lab.eng.blr.redhat.com-group (and all resources within group)", 
        "Stopping all resources in group: dhcp42-129.lab.eng.blr.redhat.com-group...", 
        "Deleting Resource - dhcp42-129.lab.eng.blr.redhat.com-nfs_block", 
        "Removing Constraint - order-nfs-grace-clone-dhcp42-129.lab.eng.blr.redhat.com-cluster_ip-1-mandatory", 
        "Deleting Resource - dhcp42-129.lab.eng.blr.redhat.com-cluster_ip-1", 
        "Removing Constraint - location-dhcp42-129.lab.eng.blr.redhat.com-group", 
        "Removing Constraint - location-dhcp42-129.lab.eng.blr.redhat.com-group-dhcp42-119.lab.eng.blr.redhat.com-1000", 
        "Removing Constraint - location-dhcp42-129.lab.eng.blr.redhat.com-group-dhcp42-125.lab.eng.blr.redhat.com-2000", 
        "Removing Constraint - location-dhcp42-129.lab.eng.blr.redhat.com-group-dhcp42-127.lab.eng.blr.redhat.com-3000", 
        "Removing Constraint - location-dhcp42-129.lab.eng.blr.redhat.com-group-dhcp42-129.lab.eng.blr.redhat.com-4000", 
        "Deleting Resource (and group) - dhcp42-129.lab.eng.blr.redhat.com-nfs_unblock", 
        "Adding nfs-grace-clone dhcp42-119.lab.eng.blr.redhat.com-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)", 
        "Adding nfs-grace-clone dhcp42-125.lab.eng.blr.redhat.com-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)", 
        "Adding nfs-grace-clone dhcp42-127.lab.eng.blr.redhat.com-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)", 
        "Adding nfs-grace-clone dhcp42-129.lab.eng.blr.redhat.com-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)", 
        "Adding nfs-grace-clone dhcp42-114.lab.eng.blr.redhat.com-cluster_ip-1 (kind: Mandatory) (Options: first-action=start then-action=start)", 
        "CIB updated"
    ]
}

PLAY RECAP *********************************************************************
dhcp42-125.lab.eng.blr.redhat.com : ok=2    changed=1    unreachable=0    failed=0

Comment 5 Manisha Saini 2017-06-19 09:46:47 UTC
Few Points-

The issue is only observed in this sequence: add a node to the cluster -> delete the node from the cluster -> re-add the same node to the existing cluster.

Adding and then deleting a node from the existing cluster does not delete its token from all the nodes, including the new node.

A similar issue was reported against RHEL 7:
https://bugzilla.redhat.com/show_bug.cgi?id=1265925
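
The stale-token theory above can be checked with a small sketch. This is an illustration only: `check_stale_token` is a hypothetical helper, not part of pcs or gdeploy, and it assumes pcsd's default token store on RHEL 7 (`/var/lib/pcsd/tokens`):

```shell
# check_stale_token FILE NODE
# Prints whether NODE still has an entry in the pcsd token file.
# Hypothetical helper; on a real RHEL 7 node the token store is
# typically /var/lib/pcsd/tokens (pass that path as FILE).
check_stale_token() {
  local file="$1" node="$2"
  if grep -q "$node" "$file" 2>/dev/null; then
    echo "stale token entry for $node"
  else
    echo "no token entry for $node"
  fi
}

# On a cluster node one would run, e.g.:
# check_stale_token /var/lib/pcsd/tokens dhcp42-114.lab.eng.blr.redhat.com
```

If a removed node still shows a stale entry, repeating the authentication (as the error message itself suggests) with `pcs cluster auth` should refresh the tokens.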

Comment 10 Devyani Kota 2017-10-09 10:18:12 UTC
Hi all,

The issue was not reproducible on 4-5 nodes.
The issue is intermittent while performing add node/delete node/re-add node operations when scaling up to 7 nodes.
I will try adding more sleep seconds during the pcs auth check to find out whether it is a gdeploy or an nfs-ganesha issue.

Thanks,
Devyani Kota
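
The "add more sleep during pcs auth" idea can be sketched as a retry wrapper around the flaky command. This is a sketch only, not the actual gdeploy change; `retry_auth`, `RETRIES`, and `DELAY` are made-up names for illustration:

```shell
# retry_auth CMD [ARGS...]
# Runs CMD, retrying with a pause when it fails, in the spirit of
# adding sleep around the intermittent "pcs cluster auth" step.
# Hypothetical wrapper, not the gdeploy patch itself.
retry_auth() {
  local tries="${RETRIES:-3}" delay="${DELAY:-5}" i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0       # success: stop retrying
    sleep "$delay"         # pause before the next attempt
  done
  return 1                 # all attempts failed
}

# A real invocation might look like:
# retry_auth pcs cluster auth -u hacluster -p hacluster dhcp42-114.lab.eng.blr.redhat.com
```

A short pause between attempts gives pcsd time to settle after the token files are rewritten, which is why extra sleep can mask the intermittent failure.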

Comment 11 Devyani Kota 2017-10-10 08:40:08 UTC
Hi,

This pull request[1] should resolve the issue.
[1] https://github.com/gluster/gdeploy/pull/449

Thanks,
Devyani Kota

Comment 13 Sachidananda Urs 2017-11-15 15:21:27 UTC
This patch has not been merged and needs testing. Devyani, can you please coordinate with Manisha and take this to closure?

Comment 14 Devyani Kota 2017-11-27 12:52:36 UTC
Hi all,

I had a word with Manisha; she will update the issue once she is done testing.
Manisha, PR link[1].

[1] https://github.com/gluster/gdeploy/pull/449

Thanks,
Devyani.

Comment 15 Manisha Saini 2018-01-24 06:52:43 UTC
I tried reproducing the issue with 5 nodes and am unable to reproduce it. As mentioned in the BZ, the issue is "intermittent". If I hit the same issue again in further testing, I will update the BZ.

Comment 20 Devyani Kota 2018-04-20 07:21:07 UTC
Hi all,
This issue is intermittent, and the reason is unknown.
I spoke with the Ganesha development team, and even they are not sure what is causing this error.
Therefore, moving this to 'known issues'.
Thanks.

Comment 25 Anand Paladugu 2018-05-01 08:38:34 UTC
Based on the above information, providing PM ack to defer this defect from 3.4.0. If we decide to add it to known issues, then I recommend documenting any workaround that we have found.

Comment 29 Sachidananda Urs 2018-11-08 07:19:14 UTC
Since we are moving towards Ansible role based installation, we are deprecating gdeploy. I will be closing this bug; please re-open if this issue is a blocker.