Bug 1115901 - nfs-ganesha: showmount for nfs-ganesha process having permission for exporting one volume, displays the other volume also, issue seen post reboot [NEEDINFO]
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: nfs-ganesha
Version: 3.0
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Meghana
QA Contact: Saurabh
Depends On:
Blocks: 1087818
Reported: 2014-07-03 05:58 EDT by Saurabh
Modified: 2016-01-19 01:13 EST
CC List: 9 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Multi-head nfs-ganesha is not supported in this release. Workaround (if any): In a multi-node volume setup, perform all CLI commands and steps on one of the nodes only.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-04-21 02:58:23 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
skoduri: needinfo? (saujain)


Attachments: None
Description Saurabh 2014-07-03 05:58:14 EDT
Description of problem:
Say we have a four-node cluster "A B C D"
and two volumes "v1 v2":
v1 has nfs-ganesha.host set to "A"
v2 has nfs-ganesha.host set to "C"
showmount results from a client are as follows:
showmount from node "A"
displays the exported volume as "v1" --- as expected
showmount from node "C"
displays the exported volume as "v2" --- as expected

Now we reboot node A
and bring the nfs-ganesha process back up on node A.
showmount from node "A"
displays the exported volume as "v1" --- as expected
showmount from node "C"
displays the exported volumes as "v1 and v2" --- not as expected
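The per-head mapping above can be written down and checked mechanically. A minimal sketch in shell, where the node and volume names are the placeholders from this description and the observed list is the faulty post-reboot state:

```shell
# Expected exports per nfs-ganesha head, per the volume options:
# v1 -> node A, v2 -> node C (placeholder names from the description).
expected_C="v2"

# Post-reboot observation from the description: node C shows both volumes.
observed_C="v1 v2"

# Any export observed on C other than expected_C indicates the leak.
leaked=""
for vol in $observed_C; do
  [ "$vol" = "$expected_C" ] || leaked="$leaked $vol"
done
echo "unexpected exports on C:$leaked"
```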

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.22-1.el6rhs.x86_64
nfs-ganesha-2.1.0.2-4.el6rhs.x86_64

How reproducible:
Seen once already, on the first trial of the test.


Actual results:
Pre-reboot result:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep  (everyone)
/          (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/          (everyone)


Post-reboot result:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep  (everyone)
/          (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/          (everyone)
[root@rhsauto034 ~]# 

Expected results:
The pre-reboot and post-reboot results should remain the same.
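That invariant can be verified by diffing the captured export lists. A minimal sketch: the post-reboot list below is illustrative of the leaked /dist-rep export described above, not the verbatim showmount output:

```shell
# Export lists for one node, captured before and after the reboot.
# The post-reboot list is a hypothetical bad state showing the leak.
pre_reboot='/dist-rep1 (everyone)
/ (everyone)'
post_reboot='/dist-rep1 (everyone)
/ (everyone)
/dist-rep (everyone)'

# Sort so ordering differences are ignored, then count changed lines;
# a non-zero count means the export list changed across the reboot.
changed=$(diff <(echo "$pre_reboot" | sort) <(echo "$post_reboot" | sort) | grep -c '^[<>]')
echo "exports changed across reboot: $changed line(s)"
```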

Additional info:
Comment 2 Saurabh 2014-07-03 07:09:40 EDT
Even a mount is allowed from node "C",
as can be seen in this example:
[root@rhsauto034 ~]# mount | grep 44
10.70.37.44:/dist-rep1 on /mnt/nfs-test1 type nfs (rw,vers=3,addr=10.70.37.44)
[root@rhsauto034 ~]# 

whereas gluster volume info dist-rep1 says the nfs-ganesha.host is 10.70.37.62

[root@nfs1 ~]# gluster volume info dist-rep1
 
Volume Name: dist-rep1
Type: Distributed-Replicate
Volume ID: d0cc61c1-806d-42b7-8cc2-39559d6f187e
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r11
Brick2: 10.70.37.215:/bricks/d1r21
Brick3: 10.70.37.44:/bricks/d2r11
Brick4: 10.70.37.201:/bricks/dr2r21
Brick5: 10.70.37.62:/bricks/d3r11
Brick6: 10.70.37.215:/bricks/d3r21
Brick7: 10.70.37.44:/bricks/d4r11
Brick8: 10.70.37.201:/bricks/dr4r21
Brick9: 10.70.37.62:/bricks/d5r11
Brick10: 10.70.37.215:/bricks/d5r21
Brick11: 10.70.37.44:/bricks/d6r11
Brick12: 10.70.37.201:/bricks/dr6r21
Options Reconfigured:
performance.readdir-ahead: on
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on
nfs.disable: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
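The node that should be serving this volume can be read straight from the nfs-ganesha.host option. A minimal sketch parsing a captured (abridged) copy of the volume info above:

```shell
# Abridged from the `gluster volume info dist-rep1` output above.
vol_info='Volume Name: dist-rep1
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on'

# The one node expected to answer showmount/mount for this volume:
ganesha_host=$(echo "$vol_info" | awk -F': ' '/^nfs-ganesha\.host/ {print $2}')
echo "dist-rep1 should be exported by $ganesha_host only"
```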
Comment 3 Soumya Koduri 2014-07-07 07:35:27 EDT
Not able to reproduce the issue. Need more info from QA.

In any case, we suspect there could have been a host configuration issue with respect to the DBus service while bringing up ganesha on host 'C', which might have caused "showmount -e localhost" to differ from host 'A'.
Comment 4 Shalaka 2014-09-20 06:03:53 EDT
Please review and sign-off edited doc text.
Comment 5 Shalaka 2014-09-24 03:06:01 EDT
Meghana reviewed the doc text during online review meeting, hence removing need_info.
Comment 6 Meghana 2015-04-21 02:58:23 EDT
This bug doesn't apply to the present release. Will close it now. If QE
hits the issue, they can raise a new bug.
