I was testing worst-case scenarios using Amazon EBS volumes. My setup was a replica 2 volume with no distribution. Each of the 2 bricks is an LVM logical volume composed of three 1 TB EBS volumes. If the first disk in one of the LVM volume groups is forcibly detached, the Gluster volume is no longer accessible ('ls' hangs indefinitely on the mount point). I was able to reproduce this by repeating the steps. I am unsure whether this applies to non-Amazon setups, but I imagine it would. As a side note, on a separate attempt I detached the second disk in one of the LVM volume groups and the Gluster volume was still readable, although it reported several file descriptor errors, so the failure seems tied to it being the first disk in the LVM.
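For reference, a minimal sketch of the reproduction steps as I understand them (the device names, volume group/LV names, hostnames, and EBS volume ID below are placeholders I've made up for illustration, not taken from the original setup):

  # On each of two servers (server1, server2), build one LVM logical
  # volume from three 1 TB EBS volumes:
  pvcreate /dev/xvdf /dev/xvdg /dev/xvdh
  vgcreate brickvg /dev/xvdf /dev/xvdg /dev/xvdh
  lvcreate -l 100%FREE -n bricklv brickvg
  mkfs.xfs /dev/brickvg/bricklv
  mkdir -p /mnt/lvm && mount /dev/brickvg/bricklv /mnt/lvm

  # Replica 2 Gluster volume, no distribution (one brick per server):
  gluster volume create testvol replica 2 server1:/mnt/lvm/brick server2:/mnt/lvm/brick
  gluster volume start testvol
  mount -t glusterfs server1:/testvol /mnt/gluster

  # Force-detach the *first* EBS volume backing one brick's volume group
  # (placeholder volume ID):
  aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --force

  # On the client, 'ls /mnt/gluster' then hangs indefinitely.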
Want to understand the behavior of the native LVM mount point when you repeat the steps.
I did an 'ls' on the LVM mount point and it says "ls: cannot access /mnt/lvm: No such file or directory"; if I cd into the mount directory and run 'ls', it says "ls: cannot open directory .: Input/output error". So the LVM itself is inaccessible as well.
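For anyone checking the same state, the brick's LVM health can be confirmed directly on the affected server (a sketch; the volume group name follows the placeholder above):

  # Check whether the kernel still sees all physical volumes; a
  # detached PV is typically reported as missing or "unknown device":
  pvs
  vgs brickvg
  lvs brickvg
  # Inspect the device-mapper table backing the logical volume:
  dmsetup status
  # The kernel log usually shows the underlying I/O errors:
  dmesg | tail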
As per comment#2, since the LVM itself is inaccessible, GlusterFS cannot be expected to function; GlusterFS simply performs the user's syscalls on the backend bricks. Closing this as WONTFIX for now.