Description of problem:

Say we have a four-node cluster "A B C D" and two volumes "v1 v2": v1 has nfs-ganesha.host set to "A", and v2 has nfs-ganesha.host set to "C".

showmount results from a client are as follows:

showmount from node "A" displays the exported volume as "v1" --- as expected
showmount from node "C" displays the exported volume as "v2" --- as expected

Now we reboot node A and bring the nfs-ganesha process back up on it:

showmount from node "A" displays the exported volume as "v1" --- as expected
showmount from node "C" displays the exported volumes as "v1 and v2" --- not as expected

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.22-1.el6rhs.x86_64
nfs-ganesha-2.1.0.2-4.el6rhs.x86_64

How reproducible:
Seen once, on the first trial of the test.

Actual results:

Pre-reboot:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep (everyone)
/ (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/ (everyone)

Post-reboot:
[root@rhsauto034 ~]# showmount -e 10.70.37.44
Export list for 10.70.37.44:
/dist-rep (everyone)
/ (everyone)
/dist-rep1 (everyone)
[root@rhsauto034 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep1 (everyone)
/ (everyone)
[root@rhsauto034 ~]#

Expected results:
Pre-reboot and post-reboot results should remain the same.

Additional info:
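The pre/post comparison above can be scripted. Below is a minimal sketch that parses `showmount -e` output and flags exports that appear only after the reboot. The `parse_showmount` helper is hypothetical (not part of any tool mentioned here), the pre-reboot sample is taken from node C's output above, and the post-reboot sample is an assumed illustration of the leak described in the summary (v1's export showing up on node C).

```python
def parse_showmount(output):
    """Parse `showmount -e <host>` output into a set of export paths.

    Skips the "Export list for ..." header line; each remaining entry
    looks like "/dist-rep1 (everyone)".
    """
    exports = set()
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith("Export list for"):
            continue
        exports.add(line.split()[0])
    return exports

# Pre-reboot sample from the report (node C, 10.70.37.62).
pre_reboot = """Export list for 10.70.37.62:
/dist-rep1 (everyone)
/ (everyone)"""

# Assumed post-reboot output illustrating the reported bug:
# v1's export (/dist-rep) leaks into node C's export list.
post_reboot = """Export list for 10.70.37.62:
/dist-rep1 (everyone)
/ (everyone)
/dist-rep (everyone)"""

leaked = parse_showmount(post_reboot) - parse_showmount(pre_reboot)
print(sorted(leaked))  # exports present only after the reboot -> ['/dist-rep']
```

An empty `leaked` set would mean the export lists match, i.e. the expected result.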
Even a mount is allowed from node "C", as can be seen from this example:

[root@rhsauto034 ~]# mount | grep 44
10.70.37.44:/dist-rep1 on /mnt/nfs-test1 type nfs (rw,vers=3,addr=10.70.37.44)
[root@rhsauto034 ~]#

whereas `gluster volume info dist-rep1` says the nfs-ganesha.host is 10.70.37.62:

[root@nfs1 ~]# gluster volume info dist-rep1

Volume Name: dist-rep1
Type: Distributed-Replicate
Volume ID: d0cc61c1-806d-42b7-8cc2-39559d6f187e
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r11
Brick2: 10.70.37.215:/bricks/d1r21
Brick3: 10.70.37.44:/bricks/d2r11
Brick4: 10.70.37.201:/bricks/dr2r21
Brick5: 10.70.37.62:/bricks/d3r11
Brick6: 10.70.37.215:/bricks/d3r21
Brick7: 10.70.37.44:/bricks/d4r11
Brick8: 10.70.37.201:/bricks/dr4r21
Brick9: 10.70.37.62:/bricks/d5r11
Brick10: 10.70.37.215:/bricks/d5r21
Brick11: 10.70.37.44:/bricks/d6r11
Brick12: 10.70.37.201:/bricks/dr6r21
Options Reconfigured:
performance.readdir-ahead: on
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on
nfs.disable: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
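The mismatch above (mount served from 10.70.37.44 while the volume's nfs-ganesha.host is 10.70.37.62) can be checked mechanically. This is a sketch only: the `ganesha_host` helper is hypothetical, and the volume-info text is an abridged copy of the `gluster volume info dist-rep1` output in this report.

```python
# Abridged `gluster volume info dist-rep1` output from this report
# (brick list omitted for brevity).
volume_info = """Volume Name: dist-rep1
Type: Distributed-Replicate
Status: Started
Options Reconfigured:
performance.readdir-ahead: on
nfs-ganesha.host: 10.70.37.62
nfs-ganesha.enable: on
nfs.disable: on"""

def ganesha_host(info_text):
    """Extract the nfs-ganesha.host option value from volume-info text."""
    for line in info_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "nfs-ganesha.host":
            return value.strip()
    return None

# Address the client actually mounted from (per the `mount` output above).
mounted_from = "10.70.37.44"
expected = ganesha_host(volume_info)
print(expected)                  # 10.70.37.62
print(mounted_from == expected)  # False: served from a node other than nfs-ganesha.host
```

A `False` here reproduces the observation that the mount was allowed from a node other than the configured nfs-ganesha.host.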
Not able to reproduce the issue; need more info from QA. In any case, we suspect there could have been a host configuration issue with respect to the DBus service while bringing up ganesha on host 'C', which might have caused "showmount -e localhost" to differ from host 'A'.
Please review and sign off on the edited doc text.
Meghana reviewed the doc text during the online review meeting, hence removing the needinfo.
This bug doesn't apply to the present release. Will close it now. If QE hits the issue, they can raise a new bug.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days