Red Hat Bugzilla – Bug 867343
volume sync fails
Last modified: 2015-11-03 18:04:50 EST
+++ This bug was initially created as a clone of Bug #861481 +++
Description of problem:
When calling "gluster volume sync all", it always returns "unsuccessful". All glusterd logs show:
[2012-09-28 12:05:35.830791] I [glusterd-handler.c:497:glusterd_handle_cluster_lock] 0-glusterd: Received LOCK from uuid: fcac92e9-b7c5-440a-bac0-8fb6dfe4b899
[2012-09-28 12:05:35.830849] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by fcac92e9-b7c5-440a-bac0-8fb6dfe4b899
[2012-09-28 12:05:35.830886] I [glusterd-handler.c:1315:glusterd_op_lock_send_resp] 0-glusterd: Responded, ret: 0
[2012-09-28 12:05:35.831385] I [glusterd-handler.c:1359:glusterd_handle_cluster_unlock] 0-glusterd: Received UNLOCK from uuid: fcac92e9-b7c5-440a-bac0-8fb6dfe4b899
[2012-09-28 12:05:35.831432] I [glusterd-handler.c:1335:glusterd_op_unlock_send_resp] 0-glusterd: Responded to unlock, ret: 0
If I specify a server to sync from:
gluster volume sync all ewcs2
Volume … does not exist
ewcs2 does report the same log sequence.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. stop glusterd on a server
2. rm -rf /var/lib/glusterd/vols
3. start glusterd
4. gluster volume sync all
5. gluster volume sync $volname
6. gluster volume sync all $hostname
7. gluster volume sync $volname $hostname
Actual results: none of these commands work.
Expected results: any of them should work.
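The reproduction steps above can be sketched as a script. This is a sketch, not verbatim from the report: the "service" invocations and the DRY_RUN wrapper are assumptions, and $volname/$hostname stand in for a real volume and peer.

```shell
#!/bin/sh
# Sketch of the reproduction steps. DRY_RUN defaults to 1 so commands
# are printed rather than executed; set DRY_RUN=0 on a real test
# cluster. "service glusterd ..." is an assumption about the platform.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

volname="${volname:-testvol}"     # placeholder volume name
hostname="${hostname:-peer1}"     # placeholder peer hostname

run service glusterd stop                       # 1. stop glusterd on a server
run rm -rf /var/lib/glusterd/vols               # 2. wipe the local volume store
run service glusterd start                      # 3. start glusterd
run gluster volume sync all                     # 4.
run gluster volume sync "$volname"              # 5.
run gluster volume sync all "$hostname"         # 6.
run gluster volume sync "$volname" "$hostname"  # 7.
```

With DRY_RUN left at its default, the script only prints the commands, which makes it safe to review before running on a node that actually holds volume data.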
http://review.gluster.org/4188 (fixed for rhs-2.1.0, would need to backport it for 2.0.z).
Marking ON_QA for the rhs-2.1.0 flag. Let us know which update of RHS 2.0.z needs this fix.
As per the description, the 7th step of issuing the volume sync command should be:
gluster volume sync $hostname $volumename
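In other words, the hostname comes before the volume name (or "all"). A minimal illustration of that argument order; build_sync_cmd is a hypothetical helper used only to show the ordering, not part of the gluster CLI:

```shell
# Hypothetical helper showing the expected argument order for
# "gluster volume sync": hostname first, then "all" or a volume name.
build_sync_cmd() {
  host="$1"
  vol="${2:-all}"   # default to syncing all volumes
  echo "gluster volume sync $host $vol"
}

build_sync_cmd ewcs2 myvol   # → gluster volume sync ewcs2 myvol
build_sync_cmd ewcs2         # → gluster volume sync ewcs2 all
```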
Verified this bug with glusterfs 3.4.0qa5, since qa6 is not yet available.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.