Bug 983145
Summary: | [RHEV-RHS] - remove-brick operation on distribute-replicate RHS 2.0 volume, used as VM image store on RHEV, leads to paused VM | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rejy M Cyriac <rcyriac>
Component: | glusterfs | Assignee: | Bug Updates Notification Mailing List <rhs-bugs>
Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 2.0 | CC: | grajaiya, rcyriac, rhs-bugs, sasundar, vagarwal, vbellur, vraman
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 2.1.2 | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.4.0.44.1u2rhs-1 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: | virt rhev integration
Last Closed: | 2014-02-25 07:33:02 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Rejy M Cyriac
2013-07-10 15:38:56 UTC
Targeting for 2.1.z (Big Bend) U1.

https://code.engineering.redhat.com/gerrit/#/c/16039/ should fix this.

Can we have a run of tests for this with the glusterfs-3.4.0.44.1u2rhs build?

Clearing the needinfo flag since this bug is now ON_QA for verification.

Tested with glusterfs-3.4.0.57rhs-1.el6rhs and RHEV-M IS32.2. All the operations below were performed from the RHEV-M UI, except where noted:

1. Created a GlusterFS Data Center (3.3 compatibility)
2. Created a gluster-enabled cluster (3.3 compatibility)
3. Added 4 RHSS nodes, one by one, to the above cluster
4. Once all the RHSS nodes were up in the UI, created a 6X2 distribute-replicate volume
5. Optimized the volume for virt-store
6. Started the volume
7. Created a data domain using the above volume
8. Created 2 App VMs, each with a 30 GB root disk
9. Installed the App VMs with RHEL 6.5
10. Ran "dd" in a loop inside these VMs, i.e. dd if=/dev/urandom of=/home/file$i bs=1024k count=1000
11. From one of the RHSS nodes (gluster CLI), started remove-brick with data migration, i.e. gluster volume remove-brick <vol-name> <brick1> <brick2> start
12. Checked the status of the remove-brick operation, i.e. gluster volume remove-brick <vol-name> <brick1> <brick2> status; the data migration should complete
13. Committed the bricks once the rebalance completed, i.e. gluster volume remove-brick <vol-name> <brick1> <brick2> commit; the volume then became 5X2
14. Repeated steps 11, 12, and 13 until the volume became 2X2, i.e. ran the remove-brick cycle 3 more times (see the command sketch at the end of this report)

The App VMs are healthy. Rebooted the App VMs multiple times, and they remained healthy.

Since the problem described in this bug report should be resolved by a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
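For reference, a minimal shell sketch of the remove-brick start/status/commit cycle exercised in steps 11-13 is shown below. The volume name and brick paths are hypothetical placeholders, and grepping the status output for "in progress" is an assumption about the CLI's text format on this build; verify both against the actual environment before use.

```bash
#!/bin/bash
# Minimal sketch of the remove-brick cycle from the verification steps.
# VOLNAME and the brick paths are placeholders; substitute the actual
# distribute-replicate volume and the replica pair being removed.
VOLNAME=vmstore
BRICK1=rhss-node1:/rhs/brick1/b1
BRICK2=rhss-node2:/rhs/brick1/b1

# Start data migration off the selected replica pair.
gluster volume remove-brick "$VOLNAME" "$BRICK1" "$BRICK2" start

# Poll the status until migration is no longer reported as in progress.
# (Matching "in progress" in the status text is an assumption; check the
# actual status output on the build under test.)
while gluster volume remove-brick "$VOLNAME" "$BRICK1" "$BRICK2" status \
      | grep -qi "in progress"; do
    sleep 30
done

# Commit only after migration completes; this shrinks the volume by one
# replica pair (e.g. 6X2 -> 5X2). --mode=script suppresses the interactive
# confirmation prompt so the commit can run non-interactively.
gluster --mode=script volume remove-brick "$VOLNAME" "$BRICK1" "$BRICK2" commit
```

In the verification run above, this cycle was repeated until the volume was reduced from 6X2 to 2X2, with the App VMs remaining healthy throughout.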