Description of problem:
"gluster volume status all" and "gluster volume status" commands fail to print the status information of the volumes.

Error message:
--------------
[04/16/12 - 20:22:52 root@APP-SERVER1 ~]# gluster volume status all
Unable to obtain volume status information.
Failed to get names of volumes

Glusterd log message:
----------------------
[2012-04-16 20:22:52.336241] I [glusterd-handler.c:858:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2012-04-16 20:22:52.337360] I [glusterd-handler.c:858:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2012-04-16 20:23:00.259487] I [glusterd-utils.c:282:glusterd_lock] 0-glusterd: Cluster lock held by 1d65697c-4438-4849-b171-65d808898a22
[2012-04-16 20:23:00.259549] I [glusterd-handler.c:456:glusterd_op_txn_begin] 0-management: Acquired local lock
[2012-04-16 20:23:00.260287] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: c40fcbe8-e43c-4e8c-b5cf-795939529c66
[2012-04-16 20:23:00.260350] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: 2799571a-3673-4696-aef8-dbcaa2fac985
[2012-04-16 20:23:00.260379] C [glusterd-op-sm.c:1854:glusterd_op_build_payload] 0-management: volname is not present in operation ctx
[2012-04-16 20:23:00.260435] I [glusterd-op-sm.c:1958:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 0 peers
[2012-04-16 20:23:00.260780] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 2799571a-3673-4696-aef8-dbcaa2fac985
[2012-04-16 20:23:00.260839] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: c40fcbe8-e43c-4e8c-b5cf-795939529c66
[2012-04-16 20:23:00.260865] I [glusterd-op-sm.c:2544:glusterd_op_txn_complete] 0-glusterd: Cleared local lock

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Often

Steps to Reproduce:
1. Create a distribute-replicate volume.
2. Execute "gluster volume status".

Actual results:
Fails to report the volume status information.

Expected results:
Should report the status of all the volumes.

Additional info:
"gluster volume status <volume_name>" reports the status info correctly.

[04/16/12 - 20:19:43 root@APP-SERVER1 ~]# gluster volume status dstore
Status of volume: dstore
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.2.35:/export1/dstore1                     24009   Y       28249
Brick 192.168.2.36:/export1/dstore1                     24009   Y       9453
Brick 192.168.2.37:/export1/dstore1                     24009   Y       26594
Brick 192.168.2.35:/export2/dstore2                     24010   Y       28254
Brick 192.168.2.36:/export2/dstore2                     24010   Y       9459
Brick 192.168.2.37:/export2/dstore2                     24010   Y       26600
NFS Server on localhost                                 38467   Y       28261
Self-heal Daemon on localhost                           N/A     Y       28266
NFS Server on 192.168.2.37                              38467   Y       26606
Self-heal Daemon on 192.168.2.37                        N/A     Y       26612
NFS Server on 192.168.2.36                              38467   Y       9465
Self-heal Daemon on 192.168.2.36                        N/A     Y       9470
The bug is caused by KP's "glusterd: Added volume-id to 'op' dictionary" change, which does not account for "volume status all". Will make the necessary changes.
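For illustration, here is a minimal, self-contained C sketch of the control-flow idea behind such a fix: per-volume operations need a volume name (and its volume-id) in the op context, but a "status all" request legitimately carries no volname and must be exempted from that check. The op_ctx_t struct and helper names below are hypothetical simplifications for this sketch, not glusterd's actual dict-based API.

#include <stdio.h>

/* Hypothetical, simplified stand-in for glusterd's op context dictionary. */
typedef struct {
        const char *volname;    /* NULL when the CLI did not send a volume name */
        int         status_all; /* non-zero for "gluster volume status all"     */
} op_ctx_t;

/*
 * Sketch of the payload-building check. Before the fix, a missing volname
 * always failed the request; the fix is to skip the volname/volume-id
 * requirement when the status command targets all volumes.
 */
static int
build_status_payload (const op_ctx_t *ctx)
{
        if (!ctx->volname) {
                if (ctx->status_all) {
                        /* "status all": no single volume is addressed, so no
                         * volname (and no volume-id) needs validating here. */
                        printf ("building payload for all volumes\n");
                        return 0;
                }
                /* A per-volume operation without a volname is a real error. */
                fprintf (stderr, "volname is not present in operation ctx\n");
                return -1;
        }

        /* Per-volume path: the volname (and, in glusterd, its volume-id)
         * would be validated here before the payload is sent to peers. */
        printf ("building payload for volume %s\n", ctx->volname);
        return 0;
}

int
main (void)
{
        op_ctx_t status_all = { .volname = NULL,     .status_all = 1 };
        op_ctx_t status_one = { .volname = "dstore", .status_all = 0 };
        op_ctx_t broken     = { .volname = NULL,     .status_all = 0 };

        build_status_payload (&status_all); /* must succeed after the fix  */
        build_status_payload (&status_one); /* unaffected per-volume path  */
        build_status_payload (&broken);     /* still rejected, as before   */
        return 0;
}

The actual change (http://review.gluster.com/3157) lands inside glusterd's payload/validation path; the sketch only shows the "exempt the all-volumes case" idea, not the real code.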
CHANGE: http://review.gluster.com/3157 (glusterd : Fixes for breakages caused by volume-id validation) merged in master by Vijay Bellur (vijay)
*** Bug 812738 has been marked as a duplicate of this bug. ***
Even "gluster volume geo-replication <master> <slave> start" fails with the latest git pull.

root@vostro:~/programming# gluster volume geo-replication doa /root/geo start
geo-replication command failed

All geo-replication commands are broken now.
Sorry, I updated the wrong bug ID.
Verified the bug on 3.3.0qa38.
This also affects "volume sync <hostname> all".