Description of problem:
The current version of mount.glusterfs does not check the exit status of the mount command, so the caller cannot fetch a proper return value: a failed mount still returns as if it had succeeded.

Version-Release number of selected component (if applicable):
GlusterFS-3.6

How reproducible:

[root@node ~]# mount -t glusterfs 10.19.96.13:test_vol2 /mnt/h    [1]
[root@node ~]# mount | grep mnt
[root@node ~]# tail -n 10 /var/log/glusterfs/mnt-h.log
[2014-08-08 09:54:28.054719] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2014-08-08 09:54:28.054920] W [glusterfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (1), shutting down
[2014-08-08 09:54:28.054955] I [fuse-bridge.c:5561:fini] 0-fuse: Unmounting '/mnt/h'.
[2014-08-08 13:08:56.651005] I [MSGID: 100030] [glusterfsd.c:1998:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.0.22 (args: /usr/sbin/glusterfs --volfile-server=10.19.96.13 --volfile-id=test_vol2 /mnt/h)
[2014-08-08 13:08:56.769335] E [socket.c:2169:socket_connect_finish] 0-glusterfs: connection to 10.19.96.13:24007 failed (Connection refused)
[2014-08-08 13:08:56.769409] E [glusterfsd-mgmt.c:1811:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 10.19.96.13 (Transport endpoint is not connected)
[2014-08-08 13:08:56.769426] I [glusterfsd-mgmt.c:1817:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
[2014-08-08 13:08:56.769622] W [glusterfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (1), shutting down
[2014-08-08 13:08:56.769656] I [fuse-bridge.c:5561:fini] 0-fuse: Unmounting '/mnt/h'.
[2014-08-08 13:08:56.779998] W [glusterfsd.c:1182:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[root@node ~]#

Even though the mount failed, there is no way for the user to tell whether it succeeded or not.

[1] No output is returned to the user.

With the patch:

[root@node ~]# mount -t glusterfs 10.19.96.13:test_vol2 /mnt/h
Mount failed. Please check the log file for more details.
[root@node ~]#

Actual results:
No error is returned to the user.

Expected results:
If the mount fails, an error message should be shown to the user.

Additional info:
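For illustration, here is a minimal sketch of the kind of check the fix needs to add to mount.glusterfs. This is not the actual patch; the volfile server, volume id, and mount point are taken from the reproduction steps above, and probing /proc/mounts is just one possible way to detect a failed FUSE mount.

#!/bin/sh
# Sketch: start the glusterfs client, then verify the mount actually
# appeared before reporting success. All values here are illustrative.
volfile_server="10.19.96.13"
volfile_id="test_vol2"
mount_point="/mnt/h"

/usr/sbin/glusterfs --volfile-server="$volfile_server" \
                    --volfile-id="$volfile_id" "$mount_point"

# The glusterfs client daemonizes, so its exit status alone does not
# indicate whether the mount succeeded; check /proc/mounts instead.
if ! grep -q " $mount_point fuse.glusterfs " /proc/mounts; then
    echo "Mount failed. Please check the log file for more details." >&2
    exit 1
fi
exit 0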
Still reproducible with:

# rpm -qa | grep gluster | sort
glusterfs-3.6.1-1.el7.x86_64
glusterfs-api-3.6.1-1.el7.x86_64
glusterfs-cli-3.6.1-1.el7.x86_64
glusterfs-fuse-3.6.1-1.el7.x86_64
glusterfs-libs-3.6.1-1.el7.x86_64
glusterfs-rdma-3.6.1-1.el7.x86_64
glusterfs-server-3.6.1-1.el7.x86_64
http://review.gluster.org/8438 has been committed in the master branch, and should get backported to release-3.6 and 3.5. I'll clone this bug so that we can post the patches.
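Once the fix lands, scripted callers should be able to rely on the exit status of mount, for example (a sketch only; the addresses and paths match the reproduction steps above):

# Sketch: scripted callers can now detect the failure from the exit status.
if ! mount -t glusterfs 10.19.96.13:test_vol2 /mnt/h; then
    echo "glusterfs mount failed; see /var/log/glusterfs/mnt-h.log" >&2
    exit 1
fi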
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user