Bug 1436141

Summary: [RFE] Extend Capability of Gluster NFS process Failover with CTDB
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Abhishek Kumar <abhishku>
Component: ctdb
Assignee: Michael Adam <madam>
Status: CLOSED DUPLICATE
QA Contact: Vivek Das <vdas>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: abhishku, amukherj, anoopcs, jarrpa, madam, rhs-smb
Target Milestone: ---
Keywords: FutureFeature, ZStream
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-21 10:46:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1472361

Description Abhishek Kumar 2017-03-27 09:50:14 UTC
Description of problem:
Extend the capability of Gluster NFS (gnfs) process failover with CTDB. Currently CTDB fails over public IPs only when a node goes down; killing the gnfs process alone does not trigger failover.

How reproducible:

Every time


Here are the configuration steps:

RHEL 6 (Gluster nfs with CTDB):
 
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.8 (Santiago)
 
# cat /etc/redhat-storage-release
Red Hat Gluster Storage Server 3.1 Update 3
 
# rpm -qa ctdb
ctdb-4.4.3-8.el6rhs.x86_64
 
Tested with two scenarios:

1). Without adding "CTDB_MANAGES_NFS=yes" and NFS_HOSTNAME="nfs_ctdb" parameter in /etc/sysconfig/nfs file
 
i. Of the two nodes, only one was healthy (OK); the other was UNHEALTHY.

ii. Even though one node was unhealthy, failover took place without any problem when the node went down.

iii. After failover, the status of the node turned to healthy (OK).

iv. When the NFS process was killed manually, failover did not take place.
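
For reference, a minimal sketch of how the node health in steps i-iii could be checked and the manual kill in step iv performed (the pgrep pattern "gluster/nfs" is an assumption; gNFS runs inside a glusterfs process whose command line contains that volfile-id):

# Check per-node health (steps i-iii):
ctdb status

# Manually kill the gNFS process on one node (step iv);
# the process match is an assumed pattern:
kill -9 $(pgrep -f "gluster/nfs")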

-------------------------------------------------------------------------------------------------------------
 
2). Adding the CTDB_MANAGES_NFS=yes and NFS_HOSTNAME="nfs_ctdb" parameters to /etc/sysconfig/nfs (see the sketch after this list)
 
i. Of the two nodes, only one was healthy (OK); the other was UNHEALTHY.

ii. Even though one node was unhealthy, failover took place without any problem when the node went down.

iii. After failover, the status of the node turned to healthy (OK).

iv. When the NFS process was killed manually, failover did not take place.
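
A minimal sketch of the two parameters as they might be added to /etc/sysconfig/nfs for this scenario (values taken from the scenario title above):

# /etc/sysconfig/nfs additions for scenario 2:
CTDB_MANAGES_NFS=yes          # ask CTDB to manage the NFS service
NFS_HOSTNAME="nfs_ctdb"       # hostname CTDB associates with NFS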

==============================================

RHEL 7 (Gluster nfs with CTDB):
 
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 (Maipo)
 
# cat /etc/redhat-storage-release
Red Hat Gluster Storage Server 3.1 Update 3

# rpm -qa ctdb
ctdb-4.4.3-8.el7rhgs.x86_64
 
Tested with two scenarios:

1). Without adding the CTDB_MANAGES_NFS=yes parameter to /etc/sysconfig/nfs

i. All nodes were healthy. A public IP was running on one of the nodes.

ii. On the client side, mounted the volume with NFS (vers=3); see the mount sketch after this list.

iii. When the NFS process was killed, the client went stale.

Even after waiting, it did not recover: since the Gluster NFS service is not monitored by CTDB, no failover takes place.

iv. Even though the NFS service was killed on one of the nodes, ctdb status showed all nodes as HEALTHY.

v. The client started working again only after the glusterd daemon was restarted or the node was rebooted.
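
A minimal sketch of the client mount in step ii and the health check in step iv (the volume name "testvol" and public IP 10.70.37.100 are hypothetical placeholders):

# On the client (step ii); VIP and volume name are placeholders:
mount -t nfs -o vers=3 10.70.37.100:/testvol /mnt

# On any server node after killing gNFS (step iv):
ctdb status    # still reports all nodes HEALTHY, since gNFS is unmonitored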
 
======================================================
 
2). Adding the CTDB_MANAGES_NFS=yes parameter to /etc/sysconfig/nfs
 
i. All nodes were healthy. A public IP was running on one of the nodes.

ii. On the client side, mounted the volume with NFS (vers=3).

iii. When the NFS process was killed, the client went into a hung state.

iv. Failover of the public IP took place within approximately 30 seconds; after that, the client worked fine. One way to observe the failover is sketched below.
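
A sketch of commands that could be used to watch the public IP move in step iv (ctdb ip lists each public address and the node currently hosting it):

# Run on any cluster node while failover is in progress:
ctdb ip          # shows which node now hosts each public address
ip addr show     # on the takeover node, the public IP appears here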
 
Version-Release number of selected component (if applicable):

RHGS 3.1.3
ctdb-4.4.3-8.el6rhs.x86_64


Actual results:

CTDB does not manage or monitor gnfs, so killing the Gluster NFS process does not trigger IP failover on its own.

Expected results:

A CTDB gluster-nfs callout that only monitors gnfs but does not start/stop it, so that a failed gnfs process marks the node UNHEALTHY and triggers IP failover. A sketch of such a hook follows.
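
A minimal sketch of what a monitor-only hook could look like, written as a CTDB legacy event script; the path, script name, and the rpcinfo probe are assumptions for illustration, not the actual fix:

#!/bin/sh
# Hypothetical /etc/ctdb/events.d/60.glusternfs -- monitor-only sketch.
# CTDB invokes legacy event scripts with the event name as $1; a
# non-zero exit from "monitor" marks the node UNHEALTHY, which moves
# its public IPs. The script never starts or stops gnfs itself.

case "$1" in
monitor)
    # Probe the NFSv3 RPC service instead of managing the process.
    if ! rpcinfo -T tcp localhost nfs 3 >/dev/null 2>&1 ; then
        echo "gluster/nfs is not responding"
        exit 1
    fi
    ;;
esac

exit 0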

Additional info:

Comment 6 Anoop C S 2018-11-21 10:46:54 UTC

*** This bug has been marked as a duplicate of bug 1371178 ***