Bug 889996
Summary: Volume is shown as started even if volume start fails on one of the peers in the cluster

| Field | Value | Field | Value |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Shruti Sampat <ssampat> |
| Component: | glusterd | Assignee: | Krutika Dhananjay <kdhananj> |
| Status: | CLOSED ERRATA | QA Contact: | Shruti Sampat <ssampat> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.0 | CC: | kparthas, rhs-bugs, shaines, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.1rhs-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-23 22:39:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Attachments:
Created attachment 668427 [details]
gluster logs from the other server
CHANGE: http://review.gluster.org/4365 (glusterd: harden 'volume start' staging to check for brick dirs' presence) merged in master by Anand Avati (avati)

Per 03/05 email exchange w/ PM, targeting for Big Bend.

Fixed in glusterfs-3.4.0.1rhs by the patch http://review.gluster.org/4365. Hence moving the state of the bug to ON_QA.

Verified as fixed in glusterfs 3.4.0.1rhs. Volume status is now displayed correctly on all nodes.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
Created attachment 668425 [details]
gluster logs from server where start fails

Description of problem:
---------------------------------------
When the volume start command is issued on one of the peers of the cluster and it fails on another peer, the status of the volume is shown as started on the initiator, even though the CLI output says that the start failed.

When the volume stop command is then issued, the following output is seen:

    volume stop: test: failed: Volume test is not in the started state

volume start on both peers now fails with the following output:

    volume start: test: failed: Volume test already started

Version-Release number of selected component (if applicable):
glusterfs 3.4.0qa5

How reproducible:
Always

Steps to Reproduce:
1. Create a distribute volume with 1 brick on each peer in a 2-node cluster.
2. On one of the peers, delete the brick directory.
3. Issue the volume start command from the other peer (where the brick directory is still present).

Actual results:
The volume is seen as started on one of the machines, while it is not started on the other machine.

Expected results:
The status of the volume should reflect the correct state (in this case, not started) on both peers.

Additional info:
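The steps to reproduce above can be sketched as a gluster CLI session. This is a sketch only, not part of the original report: it assumes a live 2-node trusted pool, and the hostnames (`server1`, `server2`) and brick paths (`/bricks/test`) are hypothetical.

```shell
# On server1 (assumes server1 and server2 are already peers in a trusted pool;
# hostnames and brick paths are hypothetical examples).
gluster volume create test server1:/bricks/test server2:/bricks/test

# On server2: delete the brick directory out from under glusterd.
rm -rf /bricks/test

# Back on server1 (the peer whose brick directory is intact):
# the start fails on server2, yet before the fix server1 still
# recorded the volume as Started.
gluster volume start test

# Compare the Status field on both peers; before the fix they disagree.
gluster volume info test
```

With the fix from http://review.gluster.org/4365, the staging phase of 'volume start' checks for the presence of the brick directories, so the start is rejected consistently instead of leaving the peers with divergent volume states.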