Bug 1227709

Summary: Not able to export volume using nfs-ganesha
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Apeksha <akhakhar>
Component: nfs-ganesha
Assignee: Meghana <mmadhusu>
Status: CLOSED ERRATA
QA Contact: Apeksha <akhakhar>
Severity: urgent
Priority: high
Version: rhgs-3.1
CC: annair, asrivast, mmadhusu, nlevinki, saujain, skoduri, vagarwal
Target Milestone: ---
Target Release: RHGS 3.1.0
Hardware: x86_64
OS: FreeBSD
Fixed In Version: glusterfs-3.7.1-4
Doc Type: Bug Fix
Last Closed: 2015-07-29 04:55:53 UTC
Type: Bug
Bug Depends On: 1228415    
Bug Blocks: 1202842, 1232155    

Description Apeksha 2015-06-03 11:07:27 UTC
Description of problem:
Not able to export volume using nfs-ganesha
showmount -e localhost does not export the volume 

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-3.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Created a distribute volume and a dist_rep volume on a 4-node ganesha cluster
2. Enabled ganesha on them (a command-level sketch follows below)
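
For reference, a minimal command-level sketch of these steps, assuming the brick path and volume name from the vol info below; the exact create/enable commands used on the setup are not recorded in this report, and only the single-brick distribute volume is shown:

    # create and start a plain distribute volume
    gluster volume create tmp_vol2 10.70.46.180:/rhs/brick1/tmp2
    gluster volume start tmp_vol2

    # enable nfs-ganesha cluster-wide, then export the volume
    gluster nfs-ganesha enable
    gluster volume set tmp_vol2 ganesha.enable on

    # check the export
    showmount -e localhost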

Actual results: showmount -e localhost does not export the volume 
[root@nfs5 ~]# showmount -e localhost
Export list for localhost:

Expected results: showmount -e localhost must export the volume
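
For comparison, the expected output would look roughly like the following (a sketch; the "(everyone)" client list assumes no client restriction is configured on the export):

    [root@nfs5 ~]# showmount -e localhost
    Export list for localhost:
    /tmp_vol2 (everyone)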

Additional info:
Ganesha is running on all the nodes

gluster vol info:
Volume Name: tmp_vol2
Type: Distribute
Volume ID: 4e46b76c-b28e-4a9f-8a7a-3329ed3668b3
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.46.180:/rhs/brick1/tmp2
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
performance.readdir-ahead: on
nfs-ganesha: enable

/etc/ganesha/ganesha.conf
NFS_Core_Param
{
        Rquota_Port = 4501 ;
}
%include "/etc/ganesha/exports/export.ecvol.conf"
%include "/etc/ganesha/exports/export.tmp_vol.conf"
%include "/etc/ganesha/exports/export.tmp_vol2.conf"

[root@nfs5 ~]# pcs status
Cluster name: new-ganesha
Last updated: Wed Jun  3 21:58:54 2015
Last change: Wed Jun  3 21:22:11 2015
Stack: cman
Current DC: nfs5 - partition with quorum
Version: 1.1.11-97629de
8 Nodes configured
24 Resources configured


Online: [ nfs5 nfs6 nfs7 nfs8 ]
OFFLINE: [ nfs1 nfs2 nfs3 nfs4 ]

Full list of resources:

 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ nfs5 nfs6 nfs7 nfs8 ]
     Stopped: [ nfs1 nfs2 nfs3 nfs4 ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ nfs5 nfs6 nfs7 nfs8 ]
     Stopped: [ nfs1 nfs2 nfs3 nfs4 ]
 nfs5-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs5 
 nfs5-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs5 
 nfs6-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs6 
 nfs6-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs6 
 nfs7-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs7 
 nfs7-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs7 
 nfs8-cluster_ip-1	(ocf::heartbeat:IPaddr):	Started nfs8 
 nfs8-trigger_ip-1	(ocf::heartbeat:Dummy):	Started nfs8

Comment 2 Meghana 2015-06-03 15:59:27 UTC
Hi,

I looked into your setup and observed the following:

1. Your ganesha-ha.conf has only 4 nodes, but pcs status shows four more. The setup is inconsistent because of a previous cluster.
2. The export config files are included in a fixed order in /etc/ganesha/ganesha.conf.
The first volume listed in that order was not started. Ganesha fails to export a volume that is not started and exits (see the check below).
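
A quick way to verify this (a sketch; the volume names are taken from the %include lines above):

    # every volume referenced from /etc/ganesha/ganesha.conf must be in "Started" state
    grep '%include' /etc/ganesha/ganesha.conf
    gluster volume info ecvol | grep Status
    gluster volume info tmp_vol | grep Status
    gluster volume info tmp_vol2 | grep Status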

I really think this is a configuration issue and not a bug, since you have exported volumes with the same command in the past.
The cluster is in an inconsistent state. I suggest cleaning up and starting from scratch, and checking "/etc/ganesha/ganesha.conf" once more. If you hit it again, let me know and I'll take another look at your machine.

Start with the shared volume and add just one more volume after that (a rough outline follows below).
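
A rough outline of that cleanup and re-setup, as a sketch only (volume name assumed; the HA node list comes from your ganesha-ha.conf):

    # tear down the stale ganesha cluster configuration
    gluster nfs-ganesha disable

    # make sure the shared storage volume is in place, then bring the cluster up again
    gluster volume set all cluster.enable-shared-storage enable
    gluster nfs-ganesha enable

    # export one volume at a time and verify after each step
    gluster volume set tmp_vol ganesha.enable on
    showmount -e localhost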

Please attach the sos reports so that we have the state saved.

Comment 3 Vivek Agarwal 2015-06-04 07:46:18 UTC
team-nfs

Comment 4 Meghana 2015-06-04 21:33:39 UTC
I am putting up a fix for the issue of old state persisting. Please update the bug summary if you can't reproduce the export issue.

Comment 9 Apeksha 2015-07-08 12:21:33 UTC
Created an 8-node setup, then tore it down.
Then set up a 4-node cluster on 4 servers which were part of the 8-node cluster earlier.
Able to set up the ganesha cluster and export a volume via ganesha.

Comment 10 errata-xmlrpc 2015-07-29 04:55:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html