Description of problem:
After a volume restart, a volume that was exported via NFS-Ganesha is no longer exported.

Version-Release number of selected component (if applicable):
[root@dhcp43-116 ganesha]# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.3-0.6.git7956718.el7.centos.x86_64
nfs-ganesha-gluster-2.4-0.rc4.el7.centos.x86_64
nfs-ganesha-debuginfo-2.4-0.rc4.el7.centos.x86_64
nfs-ganesha-2.4-0.rc4.el7.centos.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create and start a volume:
[root@dhcp43-116 nfs-ganesha]# gluster volume create testvol replica 2 10.70.43.116:/bricks/brick0/b0 10.70.43.88:/bricks/brick0/b0 10.70.42.47:/bricks/brick0/b0 10.70.42.237:/bricks/brick0/b0 10.70.43.116:/bricks/brick1/b1 10.70.43.88:/bricks/brick1/b1 10.70.42.47:/bricks/brick1/b1 10.70.42.237:/bricks/brick1/b1 10.70.43.116:/bricks/brick2/b2 10.70.43.88:/bricks/brick2/b2 10.70.42.47:/bricks/brick2/b2 10.70.42.237:/bricks/brick2/b2
volume create: testvol: success: please start the volume to access data
[root@dhcp43-116 nfs-ganesha]# gluster vol start testvol
volume start: testvol: success

2. Export the volume:
[root@dhcp43-116 ~]# gluster vol set testvol ganesha.enable on
volume set: success

3. Verify that showmount lists the exported volume:
[root@dhcp43-116 ~]# cd /var/run/gluster/shared_storage/nfs-ganesha/exports/
[root@dhcp43-116 exports]# ls
export.testvol.conf
[root@dhcp43-116 exports]# showmount -e localhost
Export list for localhost:
/testvol (everyone)

4. Stop the volume:
[root@dhcp43-116 exports]# gluster vol stop testvol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: testvol: success

5. Start the volume again:
[root@dhcp43-116 exports]# gluster vol start testvol
volume start: testvol: success

6. Observe that showmount no longer lists the exported volume, even though the export file still exists:
[root@dhcp43-116 exports]# showmount -e localhost
Export list for localhost:
[root@dhcp43-116 exports]# ls
export.testvol.conf

7. Observe the following error messages in ganesha.log:
19/09/2016 17:48:14 : epoch 57dbe5c8 : dhcp43-116.lab.eng.blr.redhat.com : ganesha.nfsd-6039[dbus_heartbeat] gsh_export_addexport :EXPORT :CRIT :Error while parsing /etc/ganesha/exports/export.testvol.conf
19/09/2016 17:48:14 : epoch 57dbe5c8 : dhcp43-116.lab.eng.blr.redhat.com : ganesha.nfsd-6039[dbus_heartbeat] dbus_message_entrypoint :DBUS :MAJ :Method (AddExport) on (org.ganesha.nfsd.exportmgr) failed: name = (org.freedesktop.DBus.Error.InvalidFileContent), message = (Error while parsing /etc/ganesha/exports/export.testvol.conf because of (token scan) errors. Details: Config File (<unknown file>:0): new file (/etc/ganesha/exports/export.testvol.conf) open error (No such file or directory), ignored

Actual results:
After the volume restart, the volume previously exported via NFS-Ganesha is no longer exported.

Expected results:
The volume should be exported again after a volume restart.

Additional info:
Volume restart creates an export entry (/etc/ganesha/exports/export.v3.conf) for the same volume inside /etc/ganesha/ganesha.conf, as below:

#in the global part. Or create a separate file with the export block
#and include in the following block.
NFS_Core_Param {
        #Use supplied name other than IP in NSM operations
        NSM_Use_Caller_Name = true;
        #Copy lock states into "/var/lib/nfs/ganesha" dir
        Clustered = false;
        #By default port number '2049' is used for NFS service.
        #Configure ports for MNT, NLM, RQuota services.
        #The ports chosen here are from '/etc/sysconfig/nfs'
        MNT_Port = 20048;
        NLM_Port = 32803;
        Rquota_Port = 875;
}

CACHEINODE {
        Entries_HWMark = 25000;
}

%include "/etc/ganesha/exports/export.v3.conf"

Is this expected?
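For reference, an export file of the kind %include'd above normally contains a single EXPORT block. The following is only a sketch of what /etc/ganesha/exports/export.v3.conf typically looks like for a Gluster volume; the Export_Id and field values are illustrative, not copied from this setup:

```
EXPORT {
        Export_Id = 2;
        Path = "/v3";
        Pseudo = "/v3";
        Access_Type = RW;
        Squash = "No_root_squash";
        Disable_ACL = true;
        Protocols = "3", "4";
        Transports = "UDP", "TCP";
        SecType = "sys";
        FSAL {
                Name = "GLUSTER";
                Hostname = "localhost";
                Volume = "v3";
        }
}
```

The parsing error in ganesha.log fires when ganesha is asked to AddExport this file but it is missing from /etc/ganesha/exports at that moment.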
Trying this particular scenario also removes the symlink created between /etc/ganesha/ganesha.conf and /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf:

[root@dhcp43-116 ganesha]# ls -ltr
total 12
-rw-r--r--. 1 root root  776 Aug 29 17:04 ganesha-ha.conf.sample
-rw-r--r--. 1 root root 1054 Aug 29 23:12 ganesha-ha.conf
-rw-r--r--. 1 root root 1403 Sep 16 02:20 ganesha.conf.rpmsave
lrwxrwxrwx. 1 root root   56 Sep 19 22:12 ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
[root@dhcp43-116 ganesha]# gluster vol set v1 ganesha.enable on
volume set: success
[root@dhcp43-116 ganesha]# ls -ltr
total 12
-rw-r--r--. 1 root root  776 Aug 29 17:04 ganesha-ha.conf.sample
-rw-r--r--. 1 root root 1054 Aug 29 23:12 ganesha-ha.conf
-rw-r--r--. 1 root root 1403 Sep 16 02:20 ganesha.conf.rpmsave
lrwxrwxrwx. 1 root root   56 Sep 19 22:12 ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
[root@dhcp43-116 ganesha]# showmount -e localhost
Export list for localhost:
/v1 (everyone)
[root@dhcp43-116 ganesha]# ls -ltr
total 12
-rw-r--r--. 1 root root  776 Aug 29 17:04 ganesha-ha.conf.sample
-rw-r--r--. 1 root root 1054 Aug 29 23:12 ganesha-ha.conf
-rw-r--r--. 1 root root 1403 Sep 16 02:20 ganesha.conf.rpmsave
lrwxrwxrwx. 1 root root   56 Sep 19 22:12 ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
[root@dhcp43-116 ganesha]# gluster vol stop v1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: v1: success
[root@dhcp43-116 ganesha]# ls -ltr
total 12
-rw-r--r--. 1 root root  776 Aug 29 17:04 ganesha-ha.conf.sample
-rw-r--r--. 1 root root 1054 Aug 29 23:12 ganesha-ha.conf
-rw-r--r--. 1 root root 1403 Sep 16 02:20 ganesha.conf.rpmsave
lrwxrwxrwx. 1 root root   56 Sep 19 22:12 ganesha.conf -> /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf
[root@dhcp43-116 ganesha]# showmount -e localhost
Export list for localhost:
[root@dhcp43-116 ganesha]# gluster vol start v1
volume start: v1: success
[root@dhcp43-116 ganesha]# showmount -e localhost
Export list for localhost:
[root@dhcp43-116 ganesha]# ls -ltr
total 16
-rw-r--r--. 1 root root  776 Aug 29 17:04 ganesha-ha.conf.sample
-rw-r--r--. 1 root root 1054 Aug 29 23:12 ganesha-ha.conf
-rw-r--r--. 1 root root 1403 Sep 16 02:20 ganesha.conf.rpmsave
-rw-r--r--. 1 root root 1450 Sep 19 22:23 ganesha.conf
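After the final listing above, ganesha.conf is a regular file: the symlink into shared storage has been replaced. A hedged recovery sketch, assuming the shared-storage copy at /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf is still intact, is to re-create the symlink with ln -sf. The snippet below rehearses that step in a scratch directory so nothing under /etc is touched; on a real node the target and link would be the two paths shown in the comment:

```shell
# On a real node the command would be:
#   ln -sf /var/run/gluster/shared_storage/nfs-ganesha/ganesha.conf /etc/ganesha/ganesha.conf
# Rehearsal in a scratch directory:
tmp=$(mktemp -d)
touch "$tmp/shared-ganesha.conf"          # stands in for the shared-storage copy
ln -sf "$tmp/shared-ganesha.conf" "$tmp/ganesha.conf"   # -f replaces any stale regular file
target=$(readlink "$tmp/ganesha.conf")
echo "$target"                            # prints the symlink target
```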
The issue is related to the hook script used by ganesha during volume start. A patch [1] has been posted upstream to address this issue.

[1] http://review.gluster.org/#/c/15535/1
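For context, the failing AddExport call seen in the log is a DBus method on ganesha's export manager. A sketch of the kind of command the start hook issues is below; the interface and argument shape are from nfs-ganesha's org.ganesha.nfsd.exportmgr DBus API, and the command is only constructed and printed here, not executed, since it needs a running ganesha.nfsd:

```shell
# Build the AddExport DBus command for a volume (sketch; not executed here).
VOL=testvol
CONF="/etc/ganesha/exports/export.${VOL}.conf"
CMD="dbus-send --system --print-reply --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:${CONF} string:\"EXPORT(Path=/${VOL})\""
echo "$CMD"
```

This call only succeeds if $CONF exists at the moment it runs, which is exactly what the "open error (No such file or directory)" in step 7 shows was not the case.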
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.