Bug 1269443 - nfs-ganesha: the HA setup not done properly
Product: GlusterFS
Classification: Community
Component: ganesha-nfs
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Assigned To: Kaleb KEITHLEY
Keywords: Triaged
Depends On:
Reported: 2015-10-07 07:22 EDT by Saurabh
Modified: 2017-03-08 05:54 EST
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-03-08 05:54:10 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Saurabh 2015-10-07 07:22:32 EDT
Description of problem:
With glusterfs 3.7.4 and the nfs-ganesha-2.2.0-9.el7rhgs.x86_64 RPMs installed, the nfs-ganesha HA setup is not done properly.

Version-Release number of selected component (if applicable):

How reproducible:
Hit the issue on the first attempt.

Steps to Reproduce:
1. install gluster-3.7.4 rpms
2. install nfs-ganesha-2.2.0-9
3. setup the gluster cluster
4. bring up the shared storage volume using the command:
gluster volume set all cluster.enable-shared-storage enable
5. do the pre-requisites for nfs-ganesha HA setup
6. execute gluster nfs-ganesha enable
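
The prerequisites in step 5 include populating /etc/ganesha/ganesha-ha.conf on each node. A minimal sketch, using the vm1-vm4 node names from the pcs output below; the cluster name, HA_VOL_SERVER choice, and VIP addresses are placeholders, not values taken from this report:

```sh
# /etc/ganesha/ganesha-ha.conf -- illustrative only
HA_NAME="ganesha-ha-cluster"
HA_VOL_SERVER="vm1"
HA_CLUSTER_NODES="vm1,vm2,vm3,vm4"
# One virtual IP per node; addresses below are placeholders.
VIP_vm1="10.0.0.1"
VIP_vm2="10.0.0.2"
VIP_vm3="10.0.0.3"
VIP_vm4="10.0.0.4"
```

If the VIP_* entries are missing or malformed, the HA scripts cannot create the virtual IP resources, which matches the symptom reported here.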

Actual results:
pcs status
Cluster name: G1441043605.66
Last updated: Thu Oct  8 00:47:37 2015		Last change: Thu Oct  8 00:38:40 2015 by root via cibadmin on vm1
Stack: corosync
Current DC: vm4 (version 1.1.12-a14efad) - partition with quorum
4 nodes and 8 resources configured

Online: [ vm1 vm2 vm3 vm4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ vm1 vm2 vm3 vm4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ vm1 vm2 vm3 vm4 ]

PCSD Status:
  vm1: Online
  vm2: Online
  vm3: Online
  vm4: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled

Expected results:
Each cluster node should have a virtual IP resource configured and started. Instead, the pcs status output above shows only the nfs-mon and nfs-grace clone sets, and no virtual IP is assigned.
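
A quick way to confirm the failure is to scan the `pcs status` resource listing for virtual IP (cluster_ip) resources. A small sketch in Python; the sample text is the resource listing pasted in this report, and the resource-name pattern checked for is an assumption about how the HA scripts name the VIPs:

```python
def has_vip_resources(pcs_status: str) -> bool:
    """Return True if any virtual-IP (cluster_ip) resource appears
    in the output of `pcs status`."""
    return any("cluster_ip" in line for line in pcs_status.splitlines())

# Resource listing pasted in this report: only the clone sets, no VIPs.
broken = """
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ vm1 vm2 vm3 vm4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ vm1 vm2 vm3 vm4 ]
"""
print(has_vip_resources(broken))  # → False
```

On a correctly set-up cluster, lines such as `vm1-cluster_ip-1 (ocf::heartbeat:IPaddr): Started vm1` would appear (hypothetical resource name), and the check would return True.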

Additional info:
Comment 2 Kaushal 2017-03-08 05:54:10 EST
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
