Bug 830106 - gluster volume status reports incorrect status message
Status: CLOSED DEFERRED
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.3-beta
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Kaushal
Depends On:
Blocks:
Reported: 2012-06-08 04:48 EDT by Shwetha Panduranga
Modified: 2014-12-14 14:40 EST (History)
3 users

Doc Type: Bug Fix
Last Closed: 2014-12-14 14:40:28 EST
Type: Bug


Attachments: None
Description Shwetha Panduranga 2012-06-08 04:48:49 EDT
Description of problem:
-----------------------
In a replicate volume with 3 bricks, when 2 bricks are down and we run "gluster volume heal <vol_name> full", the operation fails, and a subsequent "gluster v status" reports the following incorrect message.

[06/08/12 - 04:31:01 root@AFR-Server1 ~]# gluster v heal vol1 full
Operation failed on 10.16.159.196

[06/08/12 - 04:32:15 root@AFR-Server3 ~]# gluster v status
Unable to obtain volume status information.
 
Failed to get names of volumes


Version-Release number of selected component (if applicable):
------------------------------------------------------------
3.3.0qa45

How reproducible:
-----------------
often

Steps to Reproduce:
-------------------
1. Create a replicate volume (1x3) (brick1 on node1, brick2 on node2, brick3 on node3)

[06/08/12 - 03:19:28 root@AFR-Server1 ~]# gluster v info
 
Volume Name: vol1
Type: Replicate
Volume ID: e5ff8b2b-7d44-405e-8266-54e5e68b0241
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.16.159.184:/export_b1/dir1
Brick2: 10.16.159.188:/export_b1/dir1
Brick3: 10.16.159.196:/export_b1/dir1
Options Reconfigured:
cluster.eager-lock: on
performance.write-behind: on

2. Start the volume, then bring down brick1 and brick2

3. Create a FUSE mount

4. Execute "dd if=/dev/urandom of=./file bs=1M count=10" on the FUSE mount

5. On node1, run:
[06/08/12 - 04:19:21 root@AFR-Server1 ~]# gluster v heal vol1 full
Operation failed on 10.16.159.196

6. On node3, run "gluster volume status"
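
The steps above can be condensed into a single reproduction script. This is a sketch only: the IPs and brick paths are taken from the "gluster v info" output earlier in this report, the volume is assumed to already exist and be started, and the pkill pattern for taking bricks offline is an assumption (any method of stopping the brick processes should do).

```shell
#!/bin/sh
# Repro sketch for bug 830106, assuming the 1x3 replicate volume "vol1"
# from step 1 is already created and started.

# On node1 (10.16.159.184) and node2 (10.16.159.188):
# bring down brick1 and brick2 by killing the brick processes.
# (pkill pattern is an assumption based on the brick path in the report.)
pkill -f '/export_b1/dir1'

# On a client: create a FUSE mount and write some data to it.
mount -t glusterfs 10.16.159.196:/vol1 /mnt/vol1
dd if=/dev/urandom of=/mnt/vol1/file bs=1M count=10

# On node1: trigger a full self-heal; with two bricks down this fails
# with "Operation failed on 10.16.159.196".
gluster volume heal vol1 full

# On node3: query volume status; this is where the incorrect
# "Failed to get names of volumes" message is reported.
gluster volume status
```
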

Actual results:
---------------
[06/08/12 - 04:19:26 root@AFR-Server3 ~]# gluster v status
Unable to obtain volume status information.
 
Failed to get names of volumes
[06/08/12 - 04:19:30 root@AFR-Server3 ~]# gluster v status vol1
Unable to obtain volume status information.


Expected results:
------------------
[06/08/12 - 04:30:13 root@AFR-Server3 ~]# gluster v status vol1
Status of volume: vol1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.16.159.184:/export_b1/dir1			24009	N	8935
Brick 10.16.159.188:/export_b1/dir1			24009	N	12896
Brick 10.16.159.196:/export_b1/dir1			24009	Y	28360
NFS Server on localhost					38467	Y	28725
Self-heal Daemon on localhost				N/A	Y	28731
NFS Server on 10.16.159.188				38467	Y	13263
Self-heal Daemon on 10.16.159.188			N/A	Y	13269
NFS Server on 10.16.159.184				38467	Y	8867
Self-heal Daemon on 10.16.159.184			N/A	Y	8873


Additional info:
-----------------
glusterd log messages when "gluster v status" is executed on node3:


[2012-06-08 04:19:21.872793] I [glusterd-handler.c:497:glusterd_handle_cluster_lock] 0-glusterd: Received LOCK from uuid: d5cf85e1-d674-4376-981f-db75f6aeb783
[2012-06-08 04:19:21.872929] I [glusterd-utils.c:285:glusterd_lock] 0-glusterd: Cluster lock held by d5cf85e1-d674-4376-981f-db75f6aeb783
[2012-06-08 04:19:21.873026] I [glusterd-handler.c:1315:glusterd_op_lock_send_resp] 0-glusterd: Responded, ret: 0
[2012-06-08 04:19:21.873618] I [glusterd-handler.c:542:glusterd_req_ctx_create] 0-glusterd: Received op from uuid: d5cf85e1-d674-4376-981f-db75f6aeb783
[2012-06-08 04:19:21.873746] I [glusterd-handler.c:1417:glusterd_op_stage_send_resp] 0-glusterd: Responded to stage, ret: 0
[2012-06-08 04:19:21.874442] I [glusterd-handler.c:542:glusterd_req_ctx_create] 0-glusterd: Received op from uuid: d5cf85e1-d674-4376-981f-db75f6aeb783
[2012-06-08 04:19:21.875522] I [glusterd-handler.c:1458:glusterd_op_commit_send_resp] 0-glusterd: Responded to commit, ret: 0
[2012-06-08 04:19:21.876046] I [glusterd-handler.c:1359:glusterd_handle_cluster_unlock] 0-glusterd: Received UNLOCK from uuid: d5cf85e1-d674-4376-981f-db75f6aeb783
[2012-06-08 04:19:21.876150] I [glusterd-handler.c:1335:glusterd_op_unlock_send_resp] 0-glusterd: Responded to unlock, ret: 0
Comment 1 Amar Tumballi 2012-07-11 01:12:57 EDT
Need to see if this is still the case.
Comment 2 Niels de Vos 2014-11-27 09:53:39 EST
The version that this bug has been reported against does not receive updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will get automatically closed.
