Bug 1062848
| Summary: | [RHS-RHOS] Root disk corruption on a nova instance booted from a cinder volume after a remove-brick/rebalance | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | shilpa <smanjara> |
| Component: | distribute | Assignee: | Nithya Balachandran <nbalacha> |
| Status: | CLOSED DEFERRED | QA Contact: | storage-qa-internal <storage-qa-internal> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.1 | CC: | nlevinki, spalai, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1286133 (view as bug list) | Environment: | |
| Last Closed: | 2015-11-27 11:43:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1286133 | | |
| Attachments: | Log messages from VM instance (attachment 860851) | | |
Created attachment 860851 [details]
Log messages from VM instance
Cloning this to 3.1. To be fixed in future.
Description of problem:
When a nova instance is rebooted while a rebalance is in progress on the gluster volume backing its cinder volume, the root filesystem is mounted read-only after the instance comes back up and filesystem corruption messages are seen.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.59rhs-1.el6_4.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create two 6x2 distribute-replicate volumes, glance-vol and cinder-vol, for glance images and cinder volumes respectively.
2. Tag the volumes with the virt group:
   # gluster volume set glance-vol group virt
3. Set the storage.owner-uid and storage.owner-gid of glance-vol to 161:
   # gluster volume set glance-vol storage.owner-uid 161
   # gluster volume set glance-vol storage.owner-gid 161
4. On the RHOS machine, mount the RHS glance-vol on /mnt/gluster/glance/images and start the glance-api service. Also configure nova instances to use the gluster-backed glance-vol for images.
5. Mount the RHS cinder-vol on /var/lib/cinder/volumes and configure RHOS to use the RHS volume for cinder storage.
6. Create a glance image, create a cinder volume, and copy the image to the volume:
   # cinder create --display-name vol3 --image-id dfac4c39-7946-4baa-9fb3-444ec6348a88 10
7. Boot a nova instance from the bootable cinder volume and locate the bricks holding the volume file:
   # nova boot --flavor 2 --boot-volume 71973975-7952-4d66-a3d8-3cd38de18431 instance-5
   # getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/cinder/mnt/4db90e5492997091a102ba6ad764dade/volume-71973975-7952-4d66-a3d8-3cd38de18431
   getfattr: Removing leading '/' from absolute path names
   # file: var/lib/cinder/mnt/4db90e5492997091a102ba6ad764dade/volume-71973975-7952-4d66-a3d8-3cd38de18431
   trusted.glusterfs.pathinfo="(<DISTRIBUTE:cinder-vol-dht> (<REPLICATE:cinder-vol-replicate-0> <POSIX(/rhs/brick1/c2):rhs-vm2:/rhs/brick1/c2/volume-71973975-7952-4d66-a3d8-3cd38de18431> <POSIX(/rhs/brick1/c1):rhs-vm1:/rhs/brick1/c1/volume-71973975-7952-4d66-a3d8-3cd38de18431>))"
8. Start a remove-brick on the bricks identified in the pathinfo output above:
   # gluster v remove-brick cinder-vol 10.70.37.180:/rhs/brick1/c1 10.70.37.120:/rhs/brick1/c2 start
9. While the file volume-71973975-7952-4d66-a3d8-3cd38de18431 is being migrated, reboot the instance booted from this volume (instance-5 in step 7); a sketch of how to observe the migration window follows below.
10. Check the instance console once it has rebooted and look for corruption error messages. Once the instance is up, the rootfs /dev/vda is mounted read-only. Running fsck manually to correct the errors did not help; the instance is rendered unusable.

Expected results:
The rootfs should be mounted read-write after the reboot and no corruption messages should be seen.

Additional info:
Sosreports and a VM screenshot are attached.
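For reference, a minimal sketch of how the migration window in steps 8-9 can be observed. It assumes the same volume name, brick paths, and cinder mount path used above, and relies only on the standard remove-brick status subcommand plus the trusted.glusterfs.pathinfo query already shown in this report.

Check whether the decommissioned bricks have finished migrating their files:
   # gluster volume remove-brick cinder-vol 10.70.37.180:/rhs/brick1/c1 10.70.37.120:/rhs/brick1/c2 status

Re-query the brick placement of the cinder volume file; once migration of this file completes, the pathinfo output should no longer reference /rhs/brick1/c1 or /rhs/brick1/c2:
   # getfattr -d -etext -m. -n trusted.glusterfs.pathinfo /var/lib/cinder/mnt/4db90e5492997091a102ba6ad764dade/volume-71973975-7952-4d66-a3d8-3cd38de18431

Rebooting the instance before the status output reports the migration as completed reproduces the window in which the corruption is observed.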