Red Hat Bugzilla – Bug 763914
Abnormal Gluster shutdown
Last modified: 2015-09-01 19:05:08 EDT
A GlusterFS partition is automatically shut down when unmounting a bind-mounted copy of it with the "-f" option (without "-f" it works).
How to reproduce:
Mount a Gluster partition on /gluster (any config):
df: localhost:/gluster 4.5T 100G 4.4T 3% /gluster
mount: localhost:/gluster on /gluster type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
mount -n --bind /gluster /test
ls /test (verify you see the Gluster contents)
umount -f /test
df: `/gluster': Transport endpoint is not connected
[2010-12-02 14:48:56.38309] I [fuse-bridge.c:3138:fuse_thread_proc] fuse: unmounting /gluster
[2010-12-02 14:48:56.38364] I [glusterfsd.c:672:cleanup_and_exit] glusterfsd: shutting down
Before 3.1.x I did not have this bug.
*** Bug 2188 has been marked as a duplicate of this bug. ***
I was able to re-create the failure.
The bug claims this behavior is not seen before 3.1; however, it is seen in 3.0.x as well. Besides, the behavior itself is a property of the FUSE kernel module, not of GlusterFS, and I personally don't see an issue with it. If you want to preserve the original mount, just use umount instead of umount -f.
I have this bug in 3.1.1 and it is very critical. Let me explain:
I am using OpenVZ containers and I want to put my GlusterFS partition inside a container using this feature: http://wiki.openvz.org/Bind_mounts.
It worked in 3.0.5 without any problem, but now I hit this bug (reported in 2009, though with previous versions I did not see it): http://forum.openvz.org/index.php?t=msg&goto=37899&.
After some research, I found that the problem is in the script /etc/init.d/umountfs, which runs umount -f by default. As a result, since moving to GlusterFS 3.1.x I cannot restart or stop any virtual engine without killing my Gluster partition on the physical node, and so Gluster goes down in all my other virtual engines on the same node.
Of course, I can modify the script /etc/init.d/umountfs and manually add my Gluster partition to: WEAK_MTPTS="" # be gentle, don't use force.
But it is not right that, by default, Gluster is killed by any stop/restart action on each virtual engine.
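The workaround described above amounts to listing the Gluster mount point in the init script's gentle-unmount list. A minimal sketch, assuming the Debian-style WEAK_MTPTS variable from umountfs; the is_weak_mtpt helper and the /gluster path are illustrative assumptions, not part of the actual script:

```shell
# Hypothetical fragment for /etc/init.d/umountfs: list mount points
# that must be unmounted gently (without -f), so the glusterfs client
# process is not killed on container stop/restart.
WEAK_MTPTS="/gluster"   # be gentle, don't use force

# Hypothetical helper: detect a glusterfs mount from /proc/mounts-style
# input ("<device> <mountpoint> <fstype> <options> 0 0" on stdin);
# glusterfs mounts show up with fstype fuse.glusterfs.
is_weak_mtpt() {
    awk -v m="$1" '$2 == m && $3 == "fuse.glusterfs" { found = 1 }
                   END { exit !found }'
}

# Example against a sample mount table line (no root needed):
printf 'localhost:/gluster /gluster fuse.glusterfs rw 0 0\n' |
    is_weak_mtpt /gluster && echo gentle || echo force
```

Run against the sample line, the pipeline reports the glusterfs mount as one to unmount gently; any other fstype would fall through to force.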
Maybe it is not GlusterFS but the FUSE kernel module; in that case, how can I report the problem?
The filesystem process exits even in 3.0.5 if you run umount -f on a mount --bind'ed directory. Are you sure glusterfs was the only change in your system between the working and non-working setup?
I am not sure of that, but if I downgrade to GlusterFS 2.0.9 the "issue" goes away. In any case, it is OK for me now: I changed the umountfs script for all my VEs and in my VE templates.
Thanks for your help. I hope this "issue" will be fixed in the next OpenVZ version or the next FUSE kernel module version.
I am closing this issue.