Bug 765415 (GLUSTER-3683) - volume status gives improper results
Summary: volume status gives improper results
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-3683
Product: GlusterFS
Classification: Community
Component: glusterd
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: krishnan parthasarathi
QA Contact:
URL:
Whiteboard:
Duplicates: GLUSTER-3681 GLUSTER-3695
Depends On:
Blocks:
 
Reported: 2011-10-03 06:28 UTC by M S Vishwanath Bhat
Modified: 2016-06-01 01:57 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description M S Vishwanath Bhat 2011-10-03 06:28:44 UTC
I had a 2x2 striped-replicated volume with each brick residing on a separate machine. I brought two of the machines down and then back online. After starting glusterd and restarting all the servers, I issued the 'gluster volume status' command. It gives different output each time it is executed, even though nothing has changed.

[root@ip-10-32-33-12 ~]# gluster volume status hosdu
Brick status for volume: hosdu
Brick                                                   Port    Online  PID
---------------------------------------------------------------------------
ec2-107-20-108-235.compute-1.amazonaws.com:/data/brick1 24009   Y       12477
---------------------------------------------------------------------------
ec2-107-20-94-243.compute-1.amazonaws.com:/data/brick2  24009   Y       1706


[root@ip-10-32-33-12 ~]# gluster volume status hosdu
Brick status for volume: hosdu
Brick                                                   Port    Online  PID
---------------------------------------------------------------------------
ec2-107-20-108-235.compute-1.amazonaws.com:/data/brick1 24009   Y       12477
---------------------------------------------------------------------------
ec2-107-20-94-243.compute-1.amazonaws.com:/data/brick2  24009   Y       1706
---------------------------------------------------------------------------
ec2-50-17-71-95.compute-1.amazonaws.com:/data/brick3    24009   Y       15140
---------------------------------------------------------------------------
ec2-107-20-89-30.compute-1.amazonaws.com:/data/brick4   24009   Y       1710


[root@ip-10-32-33-12 ~]# gluster volume status hosdu
Brick status for volume: hosdu
Brick                                                   Port    Online  PID
---------------------------------------------------------------------------
ec2-107-20-108-235.compute-1.amazonaws.com:/data/brick1 24009   Y       12477

It gave the proper output when I executed the command a second time, but it gave wrong results again the next time.

The glusterd logs say 'Unable to set key in dict':

f4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5f9) [0x2aaaac966d49]))) 0-: Assertion failed: GD_OP_REBALANCE == op
[2011-10-03 01:53:29.26762] I [glusterd-rpc-ops.c:892:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 52a7bd56-2d70-43e2-9269-e5b0e633dddf
[2011-10-03 01:53:29.26784] I [glusterd-op-sm.c:2034:glusterd_op_txn_complete] 0-glusterd: Cleared local lock
[2011-10-03 01:53:29.28653] I [glusterd-rpc-ops.c:1463:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 97697fbc-590d-4fed-a5a1-4f6b67798680
[2011-10-03 01:53:29.28718] E [glusterd-rpc-ops.c:1333:glusterd_volume_status_use_rsp_dict] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x8d) [0x2aaaaaf4fead] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34]))) 0-: Assertion failed: GD_OP_STATUS_VOLUME == op
[2011-10-03 01:53:29.28792] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick2.hostname
[2011-10-03 01:53:29.28813] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick2.hostname in dict
[2011-10-03 01:53:29.28871] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick2.path
[2011-10-03 01:53:29.28889] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick2.path in dict
[2011-10-03 01:53:29.28945] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick2.port
[2011-10-03 01:53:29.28974] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick2.port in dict
[2011-10-03 01:53:29.29032] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick2.pid
[2011-10-03 01:53:29.29050] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick2.pid in dict
[2011-10-03 01:53:29.29107] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick2.status
[2011-10-03 01:53:29.29124] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick2.status in dict
[2011-10-03 01:53:29.29174] W [dict.c:356:dict_del] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xdd) [0x2aaaac963b0d]))) 0-dict: !this || key=count
[2011-10-03 01:53:29.29228] W [dict.c:314:dict_set] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0x106) [0x2aaaac963b36]))) 0-dict: !this || !value for key=count
[2011-10-03 01:53:29.29276] I [glusterd-rpc-ops.c:892:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 97697fbc-590d-4fed-a5a1-4f6b67798680
[2011-10-03 02:15:55.529960] I [glusterd-handler.c:2500:glusterd_handle_status_volume] 0-glusterd: Received status volume req for volume hosdu




[2011-10-03 02:15:55.691964] E [glusterd-rpc-ops.c:1359:glusterd_volume_rebalance_use_rsp_dict] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x8d) [0x2aaaaaf4fead] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5f9) [0x2aaaac966d49]))) 0-: Assertion failed: GD_OP_REBALANCE == op
[2011-10-03 02:15:55.692415] I [glusterd-rpc-ops.c:892:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 97697fbc-590d-4fed-a5a1-4f6b67798680
[2011-10-03 02:15:55.703702] I [glusterd-rpc-ops.c:1463:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 52a7bd56-2d70-43e2-9269-e5b0e633dddf
[2011-10-03 02:15:55.704691] E [glusterd-rpc-ops.c:1359:glusterd_volume_rebalance_use_rsp_dict] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x8d) [0x2aaaaaf4fead] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5f9) [0x2aaaac966d49]))) 0-: Assertion failed: GD_OP_REBALANCE == op
[2011-10-03 02:15:55.704750] I [glusterd-rpc-ops.c:892:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 52a7bd56-2d70-43e2-9269-e5b0e633dddf
[2011-10-03 02:15:55.704773] I [glusterd-op-sm.c:2034:glusterd_op_txn_complete] 0-glusterd: Cleared local lock
[2011-10-03 02:15:55.758949] I [glusterd-rpc-ops.c:1463:glusterd3_1_commit_op_cbk] 0-glusterd: Received ACC from uuid: 6a02ed79-4275-460b-bc8d-ce786b403bb0
[2011-10-03 02:15:55.759033] E [glusterd-rpc-ops.c:1333:glusterd_volume_status_use_rsp_dict] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0x8d) [0x2aaaaaf4fead] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34]))) 0-: Assertion failed: GD_OP_STATUS_VOLUME == op
[2011-10-03 02:15:55.759105] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick1.hostname
[2011-10-03 02:15:55.759125] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick1.hostname in dict
[2011-10-03 02:15:55.759183] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick1.path
[2011-10-03 02:15:55.759200] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick1.path in dict
[2011-10-03 02:15:55.759257] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick1.port
[2011-10-03 02:15:55.759274] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick1.port in dict
[2011-10-03 02:15:55.759330] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick1.pid
[2011-10-03 02:15:55.759364] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick1.pid in dict
[2011-10-03 02:15:55.759423] W [dict.c:314:dict_set] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xce) [0x2aaaac963afe] (-->/usr/local/lib/libglusterfs.so.0(dict_foreach+0x36) [0x2aaaaacd8ec6] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_add_peer_rsp+0x60) [0x2aaaac963da0]))) 0-dict: !this || !value for key=brick1.status
[2011-10-03 02:15:55.759441] E [glusterd-rpc-ops.c:1309:glusterd_volume_status_add_peer_rsp] 0-: Unable to set key: brick1.status in dict
[2011-10-03 02:15:55.759492] W [dict.c:356:dict_del] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0xdd) [0x2aaaac963b0d]))) 0-dict: !this || key=count
[2011-10-03 02:15:55.759546] W [dict.c:314:dict_set] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa2) [0x2aaaaaf4fcc2] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd3_1_commit_op_cbk+0x5e4) [0x2aaaac966d34] (-->/usr/local/lib/glusterfs/3.3.0qa12/xlator/mgmt/glusterd.so(glusterd_volume_status_use_rsp_dict+0x106) [0x2aaaac963b36]))) 0-dict: !this || !value for key=count
[2011-10-03 02:15:55.868129] I [glusterd-rpc-ops.c:892:glusterd3_1_cluster_unlock_cbk] 0-glusterd: Received ACC from uuid: 6a02ed79-4275-460b-bc8d-ce786b403bb0
[2011-10-03 02:18:30.60698] I [glusterd-handler.c:2500:glusterd_handle_status_volume] 0-glusterd: Received status volume req for volume hosdu
[2011-10-03 02:18:30.60789] I [glusterd-utils.c:258:glusterd_lock] 0-glusterd: Cluster lock held by 65408176-c321-4cb3-9710-71699df490a9
[2011-10-03 02:18:30.60807] I [glusterd-handler.c:438:glusterd_op_txn_begin] 0-management: Acquired local lock

Comment 1 Anand Avati 2011-10-05 15:44:07 UTC
CHANGE: http://review.gluster.com/553 (Change-Id: I88b9935f93d9a06e46c3351c2fd37c969396bb0a) merged in master by Vijay Bellur (vijay)

Comment 2 krishnan parthasarathi 2011-10-06 04:16:15 UTC
*** Bug 3681 has been marked as a duplicate of this bug. ***

Comment 3 krishnan parthasarathi 2011-10-06 04:16:29 UTC
*** Bug 3695 has been marked as a duplicate of this bug. ***

