Bug 823283
Summary: | Gluster - Backend: Can't stop volume | ||
---|---|---|---|
Product: | [Retired] oVirt | Reporter: | Daniel Paikov <dpaikov> |
Component: | ovirt-engine-core | Assignee: | Kaushal <kaushal> |
Status: | CLOSED WORKSFORME | QA Contact: | Haim <hateya> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | ||
Version: | unspecified | CC: | acathrow, amureini, dyasny, hateya, iheim, kaushal, mgoldboi, vbellur, yeylon, ykaul |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | gluster | ||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2012-06-10 08:06:37 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description

Daniel Paikov 2012-05-20 15:02:28 UTC

Please attach the complete engine log. Is there nothing on the VDSM side?

I suspect the Gluster CLI command for stopping the volume is failing on the host, and the vdsm log file will provide vital clues. Apart from the vdsm log file, can you please execute the gluster CLI manually and see whether it throws any error? The command is: `gluster volume stop <vol_name>`

The stop fails because the volume reportedly doesn't exist, yet it does appear in `gluster volume info`:

```
[root@localhost ~]# gluster volume stop vol1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume vol1 does not exist

[root@localhost ~]# gluster volume info vol1
Volume Name: vol1
Type: Distribute
Volume ID: 72e810f4-5a7f-4f1f-b52b-ced416b30732
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.35.97.158:/fasdfdas
Brick2: 10.35.97.159:/cvcvc
Options Reconfigured:
auth.allow: *
nfs.disable: off
```

Sounds like a bug in GlusterFS. Adding Vijay in CC.

(In reply to comment #4)
> Sounds like a bug in GlusterFS. Adding Vijay in CC.

Can you please provide details of the peers in the cluster and ensure that `gluster volume info` lists vol1 on all peers in the cluster? If it does, please provide glusterd logs from all peers.

Thanks,
Vijay

Waiting for a response to Vijay's comments.

(In reply to comment #5)
> Can you please provide details of the peers in the cluster and ensure that
> gluster volume info lists vol1 on all peers in the cluster?
>
> If it does, please provide glusterd logs from all peers.

Yes, the problem must be that only 2 out of 3 hosts in the cluster can see this volume. It's possible that I added the 3rd host after the volume was already created. Do we support this flow?

A newly added peer is synced with the volumes that were already created. If everything goes well, there shouldn't be any problem; in this case it looks like the sync wasn't successful. Can you provide the glusterd logs for all 3 peers, so that we can take a look?

Please provide the details asked for by Kaushal. In any case, it looks like a combination of a glusterfs environment-related issue and https://bugzilla.redhat.com/823565 (which is fixed).

Haven't been able to reproduce in recent builds. Closing the bug.
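The diagnosis above comes down to peers disagreeing about which volumes exist: a `volume stop` routed to a peer that never synced the volume fails with "Volume does not exist" even though other peers still list it. A minimal sketch of a cross-peer consistency check, assuming you have collected the raw `gluster volume info` text from each peer (the helper functions and peer names below are illustrative, not part of any gluster tooling):

```python
# Sketch: parse `gluster volume info` output gathered from each peer and
# report which peers do not know about a given volume. Assumes the
# plain-text output format shown in the bug report above.

def volumes_in_info_output(text: str) -> dict:
    """Map volume name -> status, parsed from `gluster volume info` text."""
    volumes = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Volume Name:"):
            current = line.split(":", 1)[1].strip()
            volumes[current] = "unknown"
        elif line.startswith("Status:") and current is not None:
            volumes[current] = line.split(":", 1)[1].strip()
    return volumes

def peers_missing_volume(per_peer_output: dict, volume: str) -> list:
    """Return the peers whose output does not list `volume` at all."""
    return [peer for peer, text in per_peer_output.items()
            if volume not in volumes_in_info_output(text)]
```

Any peer this check flags would reproduce the error in the report: running `gluster volume stop vol1` against it fails because its local glusterd has no record of the volume, which is consistent with the 2-out-of-3-hosts observation above.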