Bug 812801 - "gluster volume status all" fails to print status information of all volumes
"gluster volume status all" fails to print status information of all volumes
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Assigned To: Kaushal
Duplicates: 812738
Depends On:
Blocks: 817967
Reported: 2012-04-16 05:32 EDT by Shwetha Panduranga
Modified: 2013-07-24 13:10 EDT (History)
4 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-07-24 13:10:38 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Shwetha Panduranga 2012-04-16 05:32:05 EDT
Description of problem:
"gluster volume status all" and "gluster volume status" command fails to print the status information of the volumes

Error message:-
--------------
[04/16/12 - 20:22:52 root@APP-SERVER1 ~]# gluster volume status all
Unable to obtain volume status information.
Failed to get names of volumes

Glusterd log message:-
----------------------
[2012-04-16 20:22:52.336241] I [glusterd-handler.c:858:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2012-04-16 20:22:52.337360] I [glusterd-handler.c:858:glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2012-04-16 20:23:00.259487] I [glusterd-utils.c:282:glusterd_lock] 0-glusterd: Cluster lock held by 1d65697c-4438-4849-b171-65d808898a22
[2012-04-16 20:23:00.259549] I [glusterd-handler.c:456:glusterd_op_txn_begin] 0-management: Acquired local lock
[2012-04-16 20:23:00.260287] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: c40fcbe8-e43c-4e8c-b5cf-795939529c66
[2012-04-16 20:23:00.260350] I [glusterd-rpc-ops.c:547:glusterd3_1_cluster_lock_cbk] 0-glusterd: Received ACC from uuid: 2799571a-3673-4696-aef8-dbcaa2fac985
[2012-04-16 20:23:00.260379] C [glusterd-op-sm.c:1854:glusterd_op_build_payload] 0-management: volname is not present in operation ctx
[2012-04-16 20:23:00.260435] I [glusterd-op-sm.c:1958:glusterd_op_ac_send_stage_op] 0-glusterd: Sent op req to 0 peers
[2012-04-16 20:23:00.260780] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 2799571a-3673-4696-aef8-dbcaa2fac985
[2012-04-16 20:23:00.260839] I [glusterd-rpc-ops.c:606:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: c40fcbe8-e43c-4e8c-b5cf-795939529c66
[2012-04-16 20:23:00.260865] I [glusterd-op-sm.c:2544:glusterd_op_txn_complete] 0-glusterd: Cleared local lock

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Often

Steps to Reproduce:
1. Create a distribute-replicate volume.
2. Execute "gluster volume status".

Actual results:
The command fails to report the volume status information.

Expected results:
The command should report the status of all volumes.

Additional info:"gluster volume status <volume_name>" reporting the status info. 

[04/16/12 - 20:19:43 root@APP-SERVER1 ~]# gluster volume status dstore

Status of volume: dstore
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 192.168.2.35:/export1/dstore1			24009	Y	28249
Brick 192.168.2.36:/export1/dstore1			24009	Y	9453
Brick 192.168.2.37:/export1/dstore1			24009	Y	26594
Brick 192.168.2.35:/export2/dstore2			24010	Y	28254
Brick 192.168.2.36:/export2/dstore2			24010	Y	9459
Brick 192.168.2.37:/export2/dstore2			24010	Y	26600
NFS Server on localhost					38467	Y	28261
Self-heal Daemon on localhost				N/A	Y	28266
NFS Server on 192.168.2.37				38467	Y	26606
Self-heal Daemon on 192.168.2.37			N/A	Y	26612
NFS Server on 192.168.2.36				38467	Y	9465
Self-heal Daemon on 192.168.2.36			N/A	Y	9470
Comment 1 Kaushal 2012-04-16 05:52:00 EDT
The bug is caused by KP's "glusterd: Added volume-id to 'op' dictionary" change, which does not account for "volume status all".
Will make the necessary changes.
Comment 2 Anand Avati 2012-04-17 09:26:22 EDT
CHANGE: http://review.gluster.com/3157 (glusterd : Fixes for breakages caused by volume-id validation) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 Csaba Henk 2012-04-23 22:29:32 EDT
*** Bug 812738 has been marked as a duplicate of this bug. ***
Comment 4 Vijaykumar Koppad 2012-04-24 03:29:16 EDT
Even gluster volume geo-replication <master> <slave> start fails in the latest git pull. 


root@vostro:~/programming# gluster volume geo-replication doa /root/geo start

geo-replication command failed.

All geo-replication commands are broken now.
Comment 5 Vijaykumar Koppad 2012-04-24 03:39:18 EDT
Sorry, I updated the wrong bug ID.
Comment 6 Shwetha Panduranga 2012-04-25 05:13:41 EDT
Verified the bug on 3.3.0qa38.
Comment 7 Joe Julian 2012-09-14 04:39:48 EDT
This also affects "volume sync <hostname> all"
