| Summary: | Gluster inaccessible after LVM brick is disrupted | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | jason.yates |
| Component: | core | Assignee: | Amar Tumballi <amarts> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2.2 | CC: | gluster-bugs, jdarcy, vraman |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Want to understand the behavior of the native LVM mount point when you repeat the steps. I did an 'ls' on the LVM mount point and it says "ls: cannot access /mnt/lvm: No such file or directory", and if I go into the mount directory and do 'ls' it says "ls: cannot open directory .: Input/output error". So the LVM itself is inaccessible as well.
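For context, a minimal sketch of these checks; the /mnt/lvm path comes from the report, while the extra LVM and kernel inspection commands are additions I would expect to be useful here, not something the reporter stated they ran:

```sh
# Check the brick filesystem directly, bypassing Gluster.
ls /mnt/lvm          # reported: "ls: cannot access /mnt/lvm: No such file or directory"
cd /mnt/lvm && ls    # reported: "ls: cannot open directory .: Input/output error"

# Additional (assumed) checks on the LVM state after the EBS disk is detached.
pvs                  # should warn about a missing/unknown physical volume
vgs                  # volume group reported with a missing PV
lvs                  # logical volume backing the brick
dmesg | tail         # kernel I/O errors against the vanished device
```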
I was testing worst-case scenarios using Amazon EBS volumes. My setup was a replica 2 volume with no distribution. The two bricks are each LVM logical volumes composed of three 1 TB EBS volumes. If the first disk in one of the LVMs is forcibly detached, the Gluster volume is no longer accessible ('ls' hangs indefinitely on the mount point). I was able to reproduce this after repeating the steps. I am unsure whether this applies to non-Amazon setups, but I imagine it would. As a side note, on a separate attempt I detached the second disk in one of the LVMs and the Gluster volume was still readable, although it listed several file descriptor errors, so the problem seems related to the fact that it is the first disk in the LVM.
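A minimal reproduction sketch under these assumptions: the volume name, brick paths, device names (/dev/xvdf..h), VG/LV names, filesystem choice (XFS), and server hostnames are placeholders, not taken from the original report:

```sh
# On each of the two servers: build one LVM brick from three EBS volumes.
pvcreate /dev/xvdf /dev/xvdg /dev/xvdh
vgcreate vg_brick /dev/xvdf /dev/xvdg /dev/xvdh
lvcreate -l 100%FREE -n lv_brick vg_brick
mkfs.xfs /dev/vg_brick/lv_brick
mkdir -p /mnt/lvm
mount /dev/vg_brick/lv_brick /mnt/lvm

# On one server: create and start the replica 2 volume (no distribution).
gluster volume create testvol replica 2 server1:/mnt/lvm server2:/mnt/lvm
gluster volume start testvol

# On a client: mount the Gluster volume.
mount -t glusterfs server1:/testvol /mnt/gluster

# From outside the instance: force-detach the FIRST EBS volume backing
# one brick's LVM (vol-xxxxxxxx is a placeholder volume ID).
aws ec2 detach-volume --volume-id vol-xxxxxxxx --force

# Per the report, a listing on the client mount now hangs indefinitely.
ls /mnt/gluster
```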