Bug 1788011 - glusterfs client mount failed but exit code was 0
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: libglusterfsclient
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-06 06:23 UTC by DanielQU
Modified: 2020-03-12 12:19 UTC
CC: 2 users

Fixed In Version:
Doc Type: ---
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:19:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
mount command failed log (34.06 KB, text/plain)
2020-01-06 06:23 UTC, DanielQU
mount command success (33.81 KB, text/plain)
2020-01-06 06:26 UTC, DanielQU

Description DanielQU 2020-01-06 06:23:40 UTC
Created attachment 1650035 [details]
mount command failed log

Description of problem:
  When the gluster volume status is online, the client sometimes fails to mount it; even worse, the mount command sometimes exits with 0 despite the failure.

  We set up the gluster-server cluster with heketi's gk-deploy scripts in Kubernetes. Everything works fine, but occasionally the problem occurs: a pod with a gluster PV configured is created successfully without actually mounting the gluster volume. The pod then writes data to its local directory, which is very dangerous and a high-severity problem.
  The weird thing is that the mount command does not always fail or always succeed; it behaves randomly.
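Until the exit code is reliable, a defensive workaround is to verify the mount against /proc/mounts instead of trusting $?. A minimal sketch (the `is_mounted` helper name is hypothetical, not part of glusterfs):

```shell
# Return 0 only if the given path is currently a mount point,
# judged by /proc/mounts rather than the mount command's exit code.
is_mounted() {
    awk -v t="$1" '$2 == t { found = 1 } END { exit !found }' /proc/mounts
}

# Usage sketch: mount, then double-check before trusting the result.
# mount -t glusterfs 192.168.0.35:qujun-test /home/qujun/mnt
# is_mounted /home/qujun/mnt || echo "mount did not take effect"
```

Note that /proc/mounts octal-escapes special characters in mount paths, so this simple field comparison assumes a plain path without spaces.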


$ mount -t glusterfs -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG 192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:22.682248] E [glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
[root: /home/qujun/mnt] 14:11:22 
$ echo $?        
0
[root: /home/qujun/mnt] 14:11:26 
$ mount -t glusterfs -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG 192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:28.542670] E [glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
[root: /home/qujun/mnt] 14:11:28 
$ echo $?
1
[root: /home/qujun/mnt] 14:11:30 
$ mount -t glusterfs -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG 192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:31.958008] E [glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
[root: /home/qujun/mnt] 14:11:32 
$ echo $?
1
[root: /home/qujun/mnt] 14:11:33 
$ mount -t glusterfs -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG 192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:38.196218] E [glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
[root: /home/qujun/mnt] 14:11:38 
$ echo $?
1
[root: /home/qujun/mnt] 14:11:39 
$ echo $?
0
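The last two commands of the transcript are worth separating from the bug itself (the bug is the first mount at 14:11:22 logging a failure yet returning 0): $? is reset by every command, so the second consecutive `echo $?` reports the status of the first `echo`, not of any mount. A minimal illustration:

```shell
false       # exits with status 1
echo $?     # prints 1 (status of false)
echo $?     # prints 0 (status of the previous echo)
```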


Version-Release number of selected component (if applicable):
4.1.9


How reproducible:
It happens rarely, possibly after a glusterd service restart or an OS reboot.

Actual results:
The client fails to mount the gluster volume, but the exit code is zero.

Expected results:
The client should exit with a non-zero code when mounting the volume fails.

Additional info:

[root@sh-tidu5 glusterfs]# gluster volume status qujun-test
Status of volume: qujun-test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.35:/var/lib/heketi/mounts/v
g_388e881025bddc20831535c6fdcd44e6/brick_43
0caab224369e02d43546e6e578ddfd/brick        49153     0          Y       32287
Brick 192.168.0.36:/var/lib/heketi/mounts/v
g_7b6b6842ebb05301aa01615984ac168c/brick_6b
3f9163535e091793991aad8a0c2e3c/brick        49153     0          Y       13246
Brick 192.168.0.37:/var/lib/heketi/mounts/v
g_7bac8d0737a14ee8d834931052370c55/brick_e6
9d16bd006830124b07d2077e13d529/brick        49154     0          Y       32676
Self-heal Daemon on localhost               N/A       N/A        Y       32311
Self-heal Daemon on 192.168.0.36            N/A       N/A        Y       13269
Self-heal Daemon on 192.168.0.37            N/A       N/A        Y       32720
 
Task Status of Volume qujun-test
------------------------------------------------------------------------------
There are no active volume tasks

Comment 1 DanielQU 2020-01-06 06:26:29 UTC
Created attachment 1650036 [details]
mount command success

Comment 2 Worker Ant 2020-03-12 12:19:33 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/871 and will be tracked there from now on. Visit the GitHub issue URL for further details.

