Bug 765445 (GLUSTER-3713)

Summary: Gluster inaccessible after LVM brick is disrupted
Product: [Community] GlusterFS
Component: core
Version: 3.2.2
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Hardware: x86_64
OS: Linux
Reporter: jason.yates
Assignee: Amar Tumballi <amarts>
CC: gluster-bugs, jdarcy, vraman

Description jason.yates 2011-10-12 18:01:30 UTC
I was testing worst-case scenarios using Amazon EBS volumes. My setup was a replica 2 volume with no distribution. The two bricks are each LVM logical volumes composed of three 1 TB EBS volumes. If the first disk in one of the volume groups is forcibly detached, the Gluster volume is no longer accessible ('ls' hangs indefinitely on the mount point). I was able to reproduce this after repeating the steps. I am unsure whether this applies to non-Amazon setups, but I imagine it would. As a side note, on a separate attempt I detached the second disk in one of the volume groups and the Gluster volume was still readable, although it listed several file descriptor errors, so the problem seems to be related to the fact that it is the first disk in the LVM volume.
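
For reference, a minimal sketch of the setup and failure injection described above. The hostnames (server1, server2), device names (/dev/xvdf..h), volume/VG/LV names, mount points, and the EBS volume ID are all placeholders, and the detach step uses the current aws CLI rather than the 2011-era ec2-api-tools:

    # On each of the two servers: build one brick from three EBS volumes via LVM.
    pvcreate /dev/xvdf /dev/xvdg /dev/xvdh
    vgcreate brickvg /dev/xvdf /dev/xvdg /dev/xvdh
    lvcreate -l 100%FREE -n bricklv brickvg
    mkfs.xfs /dev/brickvg/bricklv
    mkdir -p /mnt/lvm && mount /dev/brickvg/bricklv /mnt/lvm

    # On one server: create and start a pure replica-2 volume (no distribution).
    gluster volume create testvol replica 2 server1:/mnt/lvm server2:/mnt/lvm
    gluster volume start testvol

    # On a client: mount the volume and confirm it works.
    mkdir -p /mnt/gluster && mount -t glusterfs server1:/testvol /mnt/gluster

    # Failure injection: force-detach the *first* EBS volume backing one brick
    # (hypothetical volume ID).
    aws ec2 detach-volume --volume-id vol-xxxxxxxx --force

    # Reported result: this hangs indefinitely on the client.
    ls /mnt/gluster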

Comment 1 Amar Tumballi 2011-10-13 04:58:51 UTC
I want to understand the behavior of the native LVM mount point when you repeat the steps.

Comment 2 jason.yates 2011-10-13 15:14:01 UTC
I did an 'ls' on the LVM mount point and it said "ls: cannot access /mnt/lvm: No such file or directory", and when I went into the mount directory and did 'ls' it said "ls: cannot open directory .: Input/output error". So the LVM volume itself is inaccessible as well.
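
A few additional checks that would confirm the state of the degraded volume group; this is a sketch assuming the LVM layout from the description, with brickvg/bricklv as placeholder names:

    # The kernel log should show I/O errors from the vanished EBS device.
    dmesg | tail

    # LVM warns about the missing physical volume.
    pvs
    vgs brickvg

    # The device-mapper table for the logical volume still exists, but a
    # read touching extents on the detached disk fails with an I/O error.
    dmsetup status
    dd if=/dev/brickvg/bricklv of=/dev/null bs=4k count=1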

Comment 3 Amar Tumballi 2011-11-10 02:35:01 UTC
As per comment #2, since the LVM volume itself is inaccessible, GlusterFS cannot function on top of it. GlusterFS just performs the user's syscalls on the backend filesystem, so closing this as WONTFIX for now.
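
To illustrate the pass-through point: the brick process ultimately issues ordinary syscalls against the backend path, so the I/O error from comment #2 is reproducible by operating on the brick directory directly, with Gluster out of the picture (reusing the placeholder path from above):

    # Same failure as on the backend mount point, no Gluster involved.
    stat /mnt/lvm
    ls /mnt/lvm
    # -> ls: cannot open directory .: Input/output error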