Bug 1260739 - gstatus: Capacity field displays wrong usable volume size and total volume size [NEEDINFO]
Status: ASSIGNED
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gstatus
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified   Severity: urgent
Assigned To: Sachidananda Urs
storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks: RHGS-3.4-GSS-proposed-tracker
Reported: 2015-09-07 11:02 EDT by Anil Shah
Modified: 2017-10-27 13:15 EDT (History)
3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amukherj: needinfo? (surs)


Attachments: None
Description Anil Shah 2015-09-07 11:02:22 EDT
Description of problem:

In a distributed-replicate volume, if one replica pair has bricks of different sizes, the Capacity field displays the wrong usable volume size and the volume information displays the wrong total volume size.

Version-Release number of selected component (if applicable):

[root@rhs-client46 ~]# rpm -qa | grep glusterfs
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64
glusterfs-rdma-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64

[root@rhs-client46 ~]# gstatus --version
gstatus 0.65


How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume, such that one replica pair has two bricks of different sizes (see the example commands after this list)
2. Mount the volume as a FUSE/NFS mount
3. Check gstatus -a
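
For example (hypothetical hostnames and brick paths; b004 sits on a smaller filesystem than the other bricks):

[root@rhs-client46 ~]# gluster volume create testvol replica 2 \
    host1:/rhs/brick1/b001 host2:/rhs/brick1/b002 \
    host3:/rhs/brick1/b003 host4:/rhs/brick1/b004
[root@rhs-client46 ~]# gluster volume start testvol
[root@rhs-client46 ~]# mount -t glusterfs host1:/testvol /mnt/testvol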

Actual results:

gstatus picks the largest brick size from the replica pair, unlike df -h on the client mount.

Expected results:

gstatus should pick the smallest brick size when computing the total volume size, as df -h does on the client.
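
For reference, a minimal sketch of the expected computation (illustrative Python, not gstatus code; the function and variable names are made up): the usable size of a distributed-replicate volume is the sum, over replica sets, of the smallest brick in each set, which is what df -h reflects on the client.

def usable_volume_size(replica_sets):
    # replica_sets: one list of brick sizes (in GiB) per replica set.
    # df -h on the client reflects the smallest brick of each replica
    # set, summed across the distribute layer.
    return sum(min(bricks) for bricks in replica_sets)

# Brick sizes from the gstatus output in "Additional info" below:
TIB = 1024  # GiB per TiB
sets = [[1.80 * TIB, 1.80 * TIB],   # Replica Set0
        [1.80 * TIB, 926]]          # Replica Set1
print(usable_volume_size(sets) / TIB)  # ~2.70 TiB, not 3.20 TiB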

Additional info:

[root@rhs-client47 ~]# gstatus -vl
 
     Product: RHGS Server v3.1   Capacity:   6.30 TiB(raw bricks)
      Status: HEALTHY                        4.00 GiB(raw used)
   Glusterfs: 3.7.1                          3.20 TiB(usable from volumes)
  OverCommit: No                Snapshots:   0

Volume Information
	testvol          UP - 4/4 bricks up - Distributed-Replicate
	                 Capacity: (0% used) 2.00 GiB/3.20 TiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  4/ 4
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 5 hosts, 204 tcp connections

	testvol--------- +
	                 |
                Distribute (dht)
                         |
                         +-- Replica Set0 (afr)
                         |     |
                         |     +--10.70.36.70:/rhs/brick1/b001(UP) 1.00 GiB/1.80 TiB 
                         |     |
                         |     +--10.70.36.71:/rhs/brick1/b002(UP) 1.00 GiB/1.80 TiB 
                         |
                         +-- Replica Set1 (afr)
                               |
                               +--10.70.36.46:/rhs/brick1/b003(UP) 866.00 MiB/1.80 TiB 
                               |
                               +--10.70.44.13:/rhs/brick1/b004(UP) 862.00 MiB/926.00 GiB
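
Going by the expected behavior above, df -h on the client mount would report roughly min(1.80, 1.80) + min(1.80, 0.90) ≈ 2.70 TiB total (926 GiB ≈ 0.90 TiB), so the 3.20 TiB usable/total figure reported by gstatus overstates the capacity.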
