| Summary: | Abnormal Gluster shutdown | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Samuel Hassine <samuel> |
| Component: | core | Assignee: | Anand Avati <aavati> |
| Status: | CLOSED NOTABUG | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.1.1 | CC: | chrisw, craig, gluster-bugs |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | fuse |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
|
Description
Samuel Hassine
2010-12-02 13:53:36 UTC
*** Bug 2188 has been marked as a duplicate of this bug. ***

I was able to re-create the failure.

Craig

The bug claims this behavior is not seen before 3.1; however, it is seen in 3.0.x as well. Besides, the behavior itself is a feature of the FUSE kernel module, not of GlusterFS, and I personally don't see an issue with it. If you want to preserve the original mount, just umount instead of umount -f.

Avati

Avati, I am hitting this bug in 3.1.1 and it is very critical. Let me explain: I am using OpenVZ containers and I want to expose my GlusterFS partition inside each container using bind mounts (http://wiki.openvz.org/Bind_mounts). This worked in 3.0.5 without any problem, but now I hit this bug (reported in 2009, although I never saw it with previous versions): http://forum.openvz.org/index.php?t=msg&goto=37899&. After some research, I found that the problem is in the script /etc/init.d/umountfs, which runs umount -f by default. So, since I moved to GlusterFS 3.1.x, I cannot restart or stop any virtual environment (VE) without killing my Gluster partition on the physical node, which takes Gluster down in all the other VEs on the same node. Of course, I can modify /etc/init.d/umountfs and manually add my Gluster partition to WEAK_MTPTS="" ("be gentle, don't use force"), but it is not right that, by default, Gluster is killed by any stop/restart action on any VE. Maybe this is not GlusterFS but the FUSE kernel module; in that case, where should I report the problem? Regards.

The filesystem process exits even in 3.0.5 if you umount -f a mount --bind'ed directory. Are you sure glusterfs was the only change in your system between the working and non-working setups?

Avati

I am not sure of that, but if I downgrade to GlusterFS 2.0.9 the "issue" is fixed. Anyway, it is fine for me: I have changed the umountfs script for all my VEs and in my VE templates. Thanks for your help. I hope this "issue" will be fixed in the next OpenVZ release or in the next FUSE kernel module version. I am closing this issue. Regards.
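For anyone hitting the same symptom, here is a minimal sketch of the behavior discussed above, assuming a glusterfs FUSE client mount on the hardware node and an OpenVZ-style bind mount into a container; the server:/volume name and all paths are illustrative, not taken from this report, and the two umount calls at the end are alternatives, not a sequence:

```sh
# FUSE client mount on the hardware node (illustrative names)
mount -t glusterfs server:/volume /mnt/gluster

# expose it inside a container via a bind mount (cf. the OpenVZ wiki link)
mount --bind /mnt/gluster /vz/root/101/mnt/gluster

# safe: a plain umount, as Avati suggests, removes only the bind
# mount and leaves the original /mnt/gluster intact
umount /vz/root/101/mnt/gluster

# unsafe: a forced umount of the bind mount aborts the FUSE
# connection, so the glusterfs client process exits and the
# original /mnt/gluster on the node goes down as well
umount -f /vz/root/101/mnt/gluster
```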
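And a sketch of the workaround the reporter describes, assuming an /etc/init.d/umountfs inside each VE that honors a WEAK_MTPTS list; the variable name and its comment are quoted from this report, while the mount point is illustrative:

```sh
# excerpt of /etc/init.d/umountfs (inside each VE / VE template):
# mount points listed here are unmounted gently during shutdown,
# without umount -f
WEAK_MTPTS="/mnt/gluster"   # be gentle, don't use force.
```

With the Gluster mount point listed there, stopping or restarting a VE no longer forces the unmount, so the glusterfs client on the physical node survives.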