Bug 1260739

Summary: gstatus: Capacity field displays wrong usable volume size and total volume size
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: gstatus
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: urgent
Priority: unspecified
Keywords: Reopened, ZStream
Reporter: Anil Shah <ashah>
Assignee: Sachidananda Urs <surs>
QA Contact: storage-qa-internal <storage-qa-internal>
CC: amukherj, bkunal, psony, sankarshan, surs
Doc Type: Bug Fix
Type: Bug
Last Closed: 2018-11-02 04:30:55 UTC
Bug Blocks: 1472361

Description Anil Shah 2015-09-07 15:02:22 UTC
Description of problem:

In a distributed-replicate volume, if one of the replica pairs has bricks of different sizes, the Capacity field displays the wrong usable size for the volume, and the volume information displays the wrong total volume size.

Version-Release number of selected component (if applicable):

[root@rhs-client46 ~]# rpm -qa | grep glusterfs
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64
glusterfs-rdma-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64

[root@rhs-client46 ~]# gstatus --version
gstatus 0.65


How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume, such that one replica pair has two bricks of different sizes (see the example commands after this list).
2. Mount the volume as a FUSE or NFS mount.
3. Check gstatus -a.
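
For illustration, a minimal command sequence covering steps 1-3 (hostnames and brick paths here are placeholders; the fourth brick is assumed to sit on a smaller filesystem so that one replica pair is uneven):

[root@server1 ~]# gluster volume create testvol replica 2 \
    server1:/rhs/brick1/b001 server2:/rhs/brick1/b002 \
    server3:/rhs/brick1/b003 server4:/rhs/brick1/b004
[root@server1 ~]# gluster volume start testvol
[root@server1 ~]# mount -t glusterfs server1:/testvol /mnt/testvol
[root@server1 ~]# gstatus -a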

Actual results:

gstatus picks the largest brick size from each replica pair, unlike df -h on the client mount.

Expected results:

gstatus should pick the smallest brick size in each replica pair when computing the total volume size, matching what df -h reports on the client.
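
In other words, a replica pair can hold only as much data as its smallest brick, so the expected usable total is the sum of the per-pair minimums across the distribute subvolumes. For the layout shown below, that is min(1.80 TiB, 1.80 TiB) + min(1.80 TiB, 926 GiB) ≈ 1.80 + 0.90 ≈ 2.70 TiB.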

Additional info:

[root@rhs-client47 ~]# gstatus -vl
 
     Product: RHGS Server v3.1   Capacity:   6.30 TiB(raw bricks)
      Status: HEALTHY                        4.00 GiB(raw used)
   Glusterfs: 3.7.1                          3.20 TiB(usable from volumes)
  OverCommit: No                Snapshots:   0

Volume Information
	testvol          UP - 4/4 bricks up - Distributed-Replicate
	                 Capacity: (0% used) 2.00 GiB/3.20 TiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  4/ 4
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 5 hosts, 204 tcp connections

	testvol--------- +
	                 |
                Distribute (dht)
                         |
                         +-- Replica Set0 (afr)
                         |     |
                         |     +--10.70.36.70:/rhs/brick1/b001(UP) 1.00 GiB/1.80 TiB 
                         |     |
                         |     +--10.70.36.71:/rhs/brick1/b002(UP) 1.00 GiB/1.80 TiB 
                         |
                         +-- Replica Set1 (afr)
                               |
                               +--10.70.36.46:/rhs/brick1/b003(UP) 866.00 MiB/1.80 TiB 
                               |
                               +--10.70.44.13:/rhs/brick1/b004(UP) 862.00 MiB/926.00 GiB
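
With these bricks, df -h on the client mount would report roughly 2.70 TiB total (1.80 TiB usable from Replica Set0 plus 926 GiB from Replica Set1), whereas the summary above shows 3.20 TiB usable from volumes.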