Bug 1370090
Summary: [GSS] - Unable to Failover Gluster NFS with CTDB

| Field | Value | Field | Value |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Mukul Malhotra <mmalhotr> |
| Component: | ctdb | Assignee: | Michael Adam <madam> |
| Status: | CLOSED WONTFIX | QA Contact: | storage-qa-internal <storage-qa-internal> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | amukherj, bkunal, jthottan, madam, pasik, rhs-bugs, rhs-smb, skoduri, storage-qa-internal |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-12-03 12:52:21 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1371178 | | |
| Bug Blocks: | 1408949, 1472361 | | |
Description
Mukul Malhotra
2016-08-25 09:51:09 UTC
Niels/Surabhi,

During my test on RHEL 6, I also observed that after setting the CTDB_MANAGES_NFS=yes parameter in "/etc/sysconfig/nfs" or "/etc/sysconfig/ctdb", the failover state does not change. Failover only works when a node is rebooted, not when the gnfs process itself fails.

Mukul

---

Surabhi,

RHEL 7:

```
# grep -v ^# /etc/sysconfig/nfs
RPCNFSDARGS=""
RPCMOUNTDOPTS=""
STATDARG=""
SMNOTIFYARGS=""
RPCIDMAPDARGS=""
RPCGSSDARGS=""
GSS_USE_PROXY="yes"
RPCSVCGSSDARGS=""
BLKMAPDARGS=""
CTDB_MANAGES_NFS=yes
```

RHEL 6:

```
# grep -v ^# /etc/sysconfig/nfs
CTDB_MANAGES_NFS=yes
```

Thanks
Mukul

---

Thanks Surabhi.

> The node reboot and shutdown cases work fine.

Yes, that has already been tested and was working. So the "CTDB_MANAGES_NFS=yes" parameter does not make any difference, as it is for kernel NFS.

> For ctdb to monitor the gluster-nfs process, there might be additional configuration or settings needed which I am not aware of atm.

OK, this is the primary concern: it requires guidelines or configuration steps, verified by QE, which I can then provide to the customer.

Thanks
Mukul

---

(In reply to Mukul Malhotra from comment #9)
...
> > For ctdb to monitor the gluster-nfs process, there might be additional configuration or settings needed which I am not aware of atm.
>
> OK, this is the primary concern: it requires guidelines or configuration steps,
> verified by QE, which I can then provide to the customer.

I'm still missing a pointer to the script that CTDB uses to monitor the NFS server. It should be sufficient for such a script to check the output of 'showmount -e localhost', as this is handled by the same process as the actual NFSv3 operations.

In general, the CTDB configuration for Gluster/NFS is less mature than the newer NFS-Ganesha integration with Pacemaker. Gluster/NFS is going to be deprecated in favor of NFS-Ganesha and Pacemaker; any problems or questions about that solution need to be reported and addressed separately (file other bugs for them).
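The 'showmount -e localhost' suggestion above can be sketched as a minimal health check. This is a hypothetical illustration, not the actual CTDB event script shipped with any product; the `NFS_CHECK_CMD` variable and `monitor_gnfs` function name are made up here, and the only assumption taken from the discussion is that a healthy gnfs process answers MOUNT-protocol queries, so a successful `showmount -e localhost` counts as "NFS is up".

```shell
#!/bin/sh
# Hypothetical health probe in the spirit of a CTDB "monitor" event.
# Assumption (from the comment above): Gluster/NFS handles the MOUNT
# protocol in the same process as NFSv3, so if `showmount -e localhost`
# succeeds, the NFS server is considered alive.

# Probe command, overridable for testing; defaults to the real check.
NFS_CHECK_CMD="${NFS_CHECK_CMD:-showmount -e localhost}"

monitor_gnfs() {
    if $NFS_CHECK_CMD >/dev/null 2>&1; then
        echo "gnfs: healthy"
        return 0    # exit 0 would tell CTDB the service is OK
    else
        echo "gnfs: unhealthy"
        return 1    # non-zero would mark the node unhealthy
    fi
}

# Example invocation (commented out so the probe is opt-in):
# monitor_gnfs; exit $?
```

A real event script would live in CTDB's event-script directory and be invoked by ctdbd with the event name as an argument; the sketch above covers only the health probe itself, which is the part this bug discusses.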
Could you let us know if all your questions/concerns have been addressed? If there is something missing, please let me know. Otherwise you can close this :)

---

Thanks Michael. I have opened RFE bz#1371178 to extend the gnfs process failover capability with ctdb.

Mukul