Bug 1115901 - nfs-ganesha: showmount against a node permitted to export only one volume also displays the other volume; issue seen post reboot
Summary: nfs-ganesha: showmount for nfs-ganesha process having permission for exportin...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Meghana
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1087818
 
Reported: 2014-07-03 09:58 UTC by Saurabh
Modified: 2023-09-14 02:11 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Multi-head nfs-ganesha is not supported in this release. Workaround (if any): In a multi-node volume setup, perform all CLI commands and steps on one of the nodes only.
Clone Of:
Environment:
Last Closed: 2015-04-21 06:58:23 UTC
Embargoed:



Description Saurabh 2014-07-03 09:58:14 UTC
Description of problem:
Say we have a four-node cluster "A B C D"
and two volumes "v1 v2",
with v1 having nfs-ganesha.host set to "A"
and v2 having nfs-ganesha.host set to "C".
showmount results from a client look like this:
showmount against node "A"
displays the exported volume as "v1" --- as expected
showmount against node "C"
displays the exported volume as "v2" --- as expected

Now we reboot node A
and bring the nfs-ganesha process back up on node A.
showmount against node "A"
displays the exported volume as "v1" --- as expected
showmount against node "C"
displays the exported volumes as "v1 and v2" --- not as expected
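For reference, a command-level sketch of the steps above; the node names A/C and volume names v1/v2 are the same placeholders used above, not real hostnames:

gluster volume set v1 nfs-ganesha.host A
gluster volume set v1 nfs-ganesha.enable on
gluster volume set v2 nfs-ganesha.host C
gluster volume set v2 nfs-ganesha.enable on

showmount -e A      # pre-reboot: lists only /v1
showmount -e C      # pre-reboot: lists only /v2

# reboot node A and bring nfs-ganesha back up on it, then:
showmount -e A      # post-reboot: lists only /v1, as expected
showmount -e C      # post-reboot: lists /v1 and /v2, not as expected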

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.22-1.el6rhs.x86_64
nfs-ganesha-2.1.0.2-4.el6rhs.x86_64

How reproducible:
Seen once so far, on the first trial of the test.


Actual results:
Pre-reboot result:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep  (everyone)
/          (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/          (everyone)


Post-reboot result:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep  (everyone)
/          (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/          (everyone)
[root@rhsauto034 ~]# 

Expected results:
The pre-reboot and post-reboot results should remain the same.

Additional info:

Comment 2 Saurabh 2014-07-03 11:09:40 UTC
Even a mount is allowed from node "C",
as can be seen from this example:
[root@rhsauto034 ~]# mount | grep 44
10.70.37.44:/dist-rep1 on /mnt/nfs-test1 type nfs (rw,vers=3,addr=10.70.37.44)
[root@rhsauto034 ~]# 

whereas gluster volume info dist-rep1 says the nfs-ganesha.host is 10.70.37.62:

[root@nfs1 ~]# gluster volume info dist-rep1
 
Volume Name: dist-rep1
Type: Distributed-Replicate
Volume ID: d0cc61c1-806d-42b7-8cc2-39559d6f187e
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r11
Brick2: 10.70.37.215:/bricks/d1r21
Brick3: 10.70.37.44:/bricks/d2r11
Brick4: 10.70.37.201:/bricks/dr2r21
Brick5: 10.70.37.62:/bricks/d3r11
Brick6: 10.70.37.215:/bricks/d3r21
Brick7: 10.70.37.44:/bricks/d4r11
Brick8: 10.70.37.201:/bricks/dr4r21
Brick9: 10.70.37.62:/bricks/d5r11
Brick10: 10.70.37.215:/bricks/d5r21
Brick11: 10.70.37.44:/bricks/d6r11
Brick12: 10.70.37.201:/bricks/dr6r21
Options Reconfigured:
performance.readdir-ahead: on
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on
nfs.disable: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
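
A quick way to cross-check the configured head against what each node actually exports (the IPs are taken from the outputs above):

gluster volume info dist-rep1 | grep nfs-ganesha.host    # expect 10.70.37.62
showmount -e 10.70.37.62 | grep dist-rep1                # the configured exporter
showmount -e 10.70.37.44 | grep dist-rep1                # should print nothing, but does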

Comment 3 Soumya Koduri 2014-07-07 11:35:27 UTC
Not able to reproduce the issue. Need more info from QA.

In any case, we suspect there could have been a host configuration issue with respect to the DBus service while bringing up ganesha on host 'C', which might have caused "showmount -e localhost" output to differ from that of host 'A'.
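
If this recurs, one way to check whether DBus is healthy on a head and what ganesha itself believes it is exporting is the nfs-ganesha DBus admin interface. This is a sketch, assuming the package was built with DBus support enabled; the service and object names below are the upstream defaults and may differ in this build:

service messagebus status        # el6: confirm the system DBus daemon is running
dbus-send --system --print-reply --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports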

Comment 4 Shalaka 2014-09-20 10:03:53 UTC
Please review and sign-off edited doc text.

Comment 5 Shalaka 2014-09-24 07:06:01 UTC
Meghana reviewed the doc text during the online review meeting, hence removing the needinfo flag.

Comment 6 Meghana 2015-04-21 06:58:23 UTC
This bug doesn't apply to the present release. Will close it now. If QE
hits the issue, they can raise a new bug.

Comment 7 Red Hat Bugzilla 2023-09-14 02:11:00 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

