Bug 1004745
Summary: | [RHS-RHOS] Snapshot of instances with cinder boot volumes stuck during self-heal and rebalance. | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anush Shetty <ashetty>
Component: | replicate | Assignee: | Pranith Kumar K <pkarampu>
Status: | CLOSED EOL | QA Contact: | Anush Shetty <ashetty>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | CC: | divya, grajaiya, pkarampu, rhs-bugs, rwheeler, smanjara, ssaha, storage-qa-internal, vagarwal, vbellur
Version: | 2.1 | Target Milestone: | ---
Target Release: | --- | Hardware: | Unspecified
OS: | Unspecified | Whiteboard: |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: | virt rhos cinder rhs integration
Last Closed: | 2015-12-03 17:22:10 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Anush Shetty
2013-09-05 11:57:16 UTC
Sosreports and statedumps are here: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1004745

Amar, this bug has been identified as a known issue for the Big Bend release. Please provide CCFR information in the Doc Text field.

Divya, as of now the RCA for this bug has not been done, hence the summary of the bug itself serves as the CCFR.

I don't see any blocked locks or pending frames in the brick statedumps:

    pk@localhost - ~/sos 12:56:01 :( ⚡ ls *dump* | xargs grep -i complete
    pk@localhost - ~/sos 12:56:10 :( ⚡ ls *dump* | xargs grep -i blocked

Unfortunately, no statedumps for the mount, rebalance, or glustershd processes are attached to the bug report, so we could not find where the fops might have been stuck. Could we try re-creating this issue?

Pranith

Tested on RHOS 4.0 with RHS 2.1 (glusterfs-3.4.0.59rhs-1.el6_4.x86_64). With client-quorum enabled on the latest RHS version, I brought down only the second bricks in the cluster and could not reproduce this issue.

1. Tested with an instance booted from a glance image: works fine, but uploading the snapshot image takes time.
2. Tested with an instance booted from a volume: creates a zero-byte snapshot that is unusable.

Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
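For anyone re-creating this against a current release, the statedump check quoted in the comments above can be sketched as follows. This is a minimal sketch under assumptions: `VOLNAME` and `DUMPDIR` are placeholders, and the `gluster volume statedump` / SIGUSR1 collection steps are standard GlusterFS tooling, not something verified against this exact RHS build.

```shell
# Minimal sketch, assuming standard GlusterFS statedump tooling and the
# default dump directory /var/run/gluster; VOLNAME is a placeholder.
#
# Collect the dumps first (not run here):
#   gluster volume statedump VOLNAME      # brick statedumps on the servers
#   kill -USR1 <pid-of-glusterfs-mount>   # client/glustershd statedump
#
# Then repeat the grep check from the comment above over the dumps:
check_statedumps() {
    # Report whether any statedump in the given directory records blocked locks.
    local dir=$1
    if grep -qi blocked "$dir"/*dump* 2>/dev/null; then
        echo "blocked locks present"
    else
        echo "no blocked locks in statedumps"
    fi
}

check_statedumps "${DUMPDIR:-/var/run/gluster}"
```

Collecting the mount, rebalance, and glustershd dumps this way (rather than only the brick dumps attached to the sosreports) is what would show where the fops were stuck.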