Set-up:-
1) Create a replicate volume with count 2 on the server
2) Start the replicate volume
3) Create a mount point /vm on the Dom0
4) Mount the replicate volume on /vm of the Dom0

Execution:-

Step1:-
-------
1) Check the md5sum of the VM image file (GuestOS) on each brick
2) Check the extended attributes of the image file on each brick
The md5sum and extended attributes of the VM image file are the same on both bricks (a verification sketch is included after the test steps below).

Step2:-
-------
3) Bring down brick2 (only brick1 is up)
4) Start the VM (guest OS)
5) Perform dd of a 1GB file
6) dd completes successfully
7) Shut down the VM
8) Bring brick2 back up
9) Self-heal is triggered immediately
10) Bring down brick1
11) The volume meets the split-brain condition
12) Start the VM (brick1 is down and brick2 is up)
13) Unable to start the VM (expected, works fine)
14) Force shutdown of the VM
15) Bring down brick2 (both brick1 and brick2 are down)
16) Bring back brick1
17) Start the VM (guest OS starts successfully)
18) Perform IO operations (IO operations successful)
19) Shut down the VM
20) Bring down brick1 completely
21) Bring up only brick2 (no self-heal is triggered)
22) Start the VM
23) The VM starts successfully (UNEXPECTED behaviour, since the split-brain condition was already met)

The details of the test cases and screen snapshots of the errors have been recorded in the following document:
https://docs.google.com/spreadsheet/ccc?key=0AlvBPsMsaL6edF96UmVxY3ZuYUZheWd0bGREcVN3VWc&hl=en_US&pli=1#gid=4
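The checks in Step1 (compare the md5sum and the extended attributes of the VM image on each brick) can be scripted. Below is a minimal sketch of such a check; the brick paths and the image file name are hypothetical placeholders and are not taken from this report. It must run as root on the server so that the trusted.* (e.g. trusted.afr.*) attributes are readable.

#!/usr/bin/env python3
# Sketch: compare md5sum and extended attributes of the VM image on both bricks.
# BRICKS and IMAGE are hypothetical placeholders; adjust to the actual layout.
import hashlib
import os

BRICKS = ["/export/brick1", "/export/brick2"]   # hypothetical brick paths
IMAGE = "guestos.img"                            # hypothetical VM image file

def md5_of(path, chunk=1 << 20):
    """Return the md5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def xattrs_of(path):
    """Return all extended attributes of the file as a name -> value dict."""
    return {name: os.getxattr(path, name) for name in os.listxattr(path)}

for brick in BRICKS:
    path = os.path.join(brick, IMAGE)
    print(brick)
    print("  md5sum:", md5_of(path))
    for name, value in sorted(xattrs_of(path).items()):
        print("  %s = %r" % (name, value))

The same information can be gathered manually with md5sum and getfattr on each brick; the point is only that, before the bricks are taken down, both copies report identical checksums and extended attributes.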
Outcast functionality is needed to fix this issue.
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug. If there has been no update before 9 December 2014, this bug will be closed automatically.