Bug 1260068 - gstatus: when half of the bricks are down in volume, gstatus shows volume as partial
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gstatus
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Sachidananda Urs
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2015-09-04 07:11 EDT by Anil Shah
Modified: 2018-02-07 02:44 EST
CC List: 1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-07 02:44:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anil Shah 2015-09-04 07:11:57 EDT
Description of problem:

When half of the bricks are down in a 2 x (4 + 2) distributed-disperse volume, gstatus still reports the volume status as PARTIAL instead of DOWN.

Version-Release number of selected component (if applicable):

[root@darkknight ~]# rpm -qa | grep glusterfs
glusterfs-server-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-rdma-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64

[root@darkknight ~]# gstatus --version
gstatus 0.65


How reproducible:

100%

Steps to Reproduce:
1. Create a 4+2 disperse volume
2. Add 6 bricks to the volume so that it becomes a 2 x (4+2) distributed-disperse volume
3. Create a snapshot of the volume
4. Restore the snapshot
5. Bring down three bricks in each disperse set
6. Check gstatus -a (a command-level sketch of these steps follows below)
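
A command-level sketch of these steps, assuming six hypothetical hosts n1..n6 with one brick each (the setup in this report used four hosts carrying multiple bricks each, and gluster snapshots additionally require the bricks to sit on thinly provisioned LVM):

# 1-2. create a 4+2 disperse volume, then grow it to 2 x (4+2)
gluster volume create ecvol disperse 6 redundancy 2 n{1..6}:/bricks/ec01
gluster volume start ecvol
gluster volume add-brick ecvol n{1..6}:/bricks/ec02

# 3-4. snapshot the volume, then restore it (the volume must be stopped)
gluster snapshot create snap1 ecvol
gluster volume stop ecvol
gluster snapshot restore snap1
gluster volume start ecvol

# 5. kill the glusterfsd process of three bricks in each disperse set,
#    using the PIDs reported by 'gluster volume status ecvol'

# 6. compare the two views
gluster volume status ecvol
gstatus -a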

Actual results:

The Volumes summary and the Volume Information section both report the volume as UP(PARTIAL).

Expected results:

The volume should be reported as DOWN: with three bricks down in each 4+2 disperse set, the redundancy limit of two is exceeded, so neither disperse subvolume can serve data.
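
For reference, the per-set rule this expectation follows from can be written out in a few lines of shell. The state names mirror gstatus's own UP/UP(DEGRADED)/UP(PARTIAL)/DOWN vocabulary, but the rule itself is a reconstruction of how the tool ought to classify the volume, not its actual code; the per-set down counts of 3 and 3 come from the status output under Additional info:

# A 4+2 disperse set tolerates at most 'redundancy' (2) brick failures.
# partial = some, but not all, sets unusable; down = every set unusable.
redundancy=2
sets_total=0; sets_down=0
for down in 3 3; do                       # bricks down per disperse set
    sets_total=$((sets_total + 1))
    [ "$down" -gt "$redundancy" ] && sets_down=$((sets_down + 1))
done
if [ "$sets_down" -eq 0 ]; then
    echo "expected state: UP (or UP(DEGRADED) if any brick is down)"
elif [ "$sets_down" -lt "$sets_total" ]; then
    echo "expected state: UP(PARTIAL)"
else
    echo "expected state: DOWN"           # the case in this report
fi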

Additional info:

[root@darkknight ~]# gluster v status
Status of volume: ecvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.2:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick1/ec01       N/A       N/A        N       N/A  
Brick 10.70.47.3:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick2/ec02       49155     0          Y       15746
Brick 10.70.47.143:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick3/ec03     N/A       N/A        N       N/A  
Brick 10.70.47.145:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick4/ec04     49155     0          Y       21686
Brick 10.70.47.2:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick5/ec05       N/A       N/A        N       N/A  
Brick 10.70.47.3:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick6/ec06       49156     0          Y       15764
Brick 10.70.47.2:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick7/ec001      N/A       N/A        N       N/A  
Brick 10.70.47.3:/run/gluster/snaps/db7d0cd
317f24296905dd9f955e0454c/brick8/ec002      49157     0          Y       15782
Brick 10.70.47.143:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick9/ec003    N/A       N/A        N       N/A  
Brick 10.70.47.145:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick10/ec004   49156     0          Y       21704
Brick 10.70.47.143:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick11/ec005   N/A       N/A        N       N/A  
Brick 10.70.47.145:/run/gluster/snaps/db7d0
cd317f24296905dd9f955e0454c/brick12/ec006   49157     0          Y       21722
NFS Server on localhost                     2049      0          Y       22019
Self-heal Daemon on localhost               N/A       N/A        Y       22024
NFS Server on 10.70.47.3                    2049      0          Y       15803
Self-heal Daemon on 10.70.47.3              N/A       N/A        Y       15808
NFS Server on 10.70.47.143                  2049      0          Y       16922
Self-heal Daemon on 10.70.47.143            N/A       N/A        Y       16927
NFS Server on 10.70.47.145                  2049      0          Y       21743
Self-heal Daemon on 10.70.47.145            N/A       N/A        Y       21748
 
=====================================================================

[root@darkknight ~]# gluster v info
 
Volume Name: ecvol
Type: Distributed-Disperse
Volume ID: 2725f579-58d3-4c80-bc5e-764be88c9e80
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick1/ec01
Brick2: 10.70.47.3:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick2/ec02
Brick3: 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick3/ec03
Brick4: 10.70.47.145:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick4/ec04
Brick5: 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick5/ec05
Brick6: 10.70.47.3:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick6/ec06
Brick7: 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick7/ec001
Brick8: 10.70.47.3:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick8/ec002
Brick9: 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick9/ec003
Brick10: 10.70.47.145:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick10/ec004
Brick11: 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick11/ec005
Brick12: 10.70.47.145:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick12/ec006
Options Reconfigured:
performance.readdir-ahead: on

==================================================================

[root@darkknightrises ~]# gstatus -a
 
     Product: RHGS Server v3.1   Capacity: 239.00 GiB(raw bricks)
      Status: UNHEALTHY(9)                 395.00 MiB(raw used)
   Glusterfs: 3.7.1                        129.00 GiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  4/  4		  Volumes:   0 Up
   Self Heal   :  4/  4		             0 Up(Degraded)
   Bricks      :  6/ 12		             1 Up(Partial)
   Connections :  4/  48                     0 Down

Volume Information
	ecvol            UP(PARTIAL) - 6/12 bricks up - Distributed-Disperse
	                 Capacity: (0% used) 198.00 MiB/129.00 GiB (used/total)
	                 Snapshots: 0
	                 Self Heal: 12/12
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 4 hosts, 48 tcp connections


Status Messages
  - Cluster is UNHEALTHY
  - Volume 'ecvol' is in a PARTIAL state, some data is inaccessible data, due to missing bricks
  - WARNING -> Write requests may fail against volume 'ecvol'
  - Brick 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick9/ec003 in volume 'ecvol' is down/unavailable
  - Brick 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick1/ec01 in volume 'ecvol' is down/unavailable
  - Brick 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick5/ec05 in volume 'ecvol' is down/unavailable
  - Brick 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick11/ec005 in volume 'ecvol' is down/unavailable
  - Brick 10.70.47.2:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick7/ec001 in volume 'ecvol' is down/unavailable
  - Brick 10.70.47.143:/run/gluster/snaps/db7d0cd317f24296905dd9f955e0454c/brick3/ec03 in volume 'ecvol' is down/unavailable
  - INFO -> Not all bricks are online, so capacity provided is NOT accurate
