Bug 1260739 - gstatus: Capacity field displays wrong usable volume size and total volume size
Summary: gstatus: Capacity field displays wrong usable volume size and total volume size
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gstatus
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Sachidananda Urs
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: RHGS-3.4-GSS-proposed-tracker
 
Reported: 2015-09-07 15:02 UTC by Anil Shah
Modified: 2021-12-10 14:31 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-02 04:30:55 UTC
Embargoed:



Description Anil Shah 2015-09-07 15:02:22 UTC
Description of problem:

In a distributed-replicate volume, if one of the replica pairs has bricks of different sizes, the Capacity field displays the wrong usable size for the volume, and the volume information displays the wrong total volume size.

Version-Release number of selected component (if applicable):

[root@rhs-client46 ~]# rpm -qa | grep glusterfs
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64
glusterfs-rdma-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64

[root@rhs-client46 ~]# gstatus --version
gstatus 0.65


How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume, such that one replica pair has two bricks of different sizes
2. Mount the volume as a FUSE/NFS mount
3. Check gstatus -a

Actual results:

gstatus picks the largest brick size from each replica pair, unlike df -h on the client mount.

Expected results:

gstatus should pick the smallest brick size when computing the total volume size, matching what df -h reports on the client.
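
To illustrate the expected computation, here is a minimal Python sketch (hypothetical helpers, not gstatus's actual code), assuming brick sizes are available as plain numbers of bytes:

    # A replica set can only store as much data as its smallest brick,
    # so the usable capacity of a distributed-replicate volume is the
    # sum of each replica set's minimum brick size. This matches what
    # df -h reflects on the client mount.
    def usable_capacity(replica_sets):
        return sum(min(bricks) for bricks in replica_sets)

    # What this report describes gstatus 0.65 doing instead: summing
    # the largest brick of each replica set, which overstates capacity.
    def reported_capacity(replica_sets):
        return sum(max(bricks) for bricks in replica_sets)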

Additional info:

[root@rhs-client47 ~]# gstatus -vl
 
     Product: RHGS Server v3.1   Capacity:   6.30 TiB(raw bricks)
      Status: HEALTHY                        4.00 GiB(raw used)
   Glusterfs: 3.7.1                          3.20 TiB(usable from volumes)
  OverCommit: No                Snapshots:   0

Volume Information
	testvol          UP - 4/4 bricks up - Distributed-Replicate
	                 Capacity: (0% used) 2.00 GiB/3.20 TiB (used/total)
	                 Snapshots: 0
	                 Self Heal:  4/ 4
	                 Tasks Active: None
	                 Protocols: glusterfs:on  NFS:on  SMB:on
	                 Gluster Connectivty: 5 hosts, 204 tcp connections

	testvol--------- +
	                 |
                Distribute (dht)
                         |
                         +-- Replica Set0 (afr)
                         |     |
                         |     +--10.70.36.70:/rhs/brick1/b001(UP) 1.00 GiB/1.80 TiB 
                         |     |
                         |     +--10.70.36.71:/rhs/brick1/b002(UP) 1.00 GiB/1.80 TiB 
                         |
                         +-- Replica Set1 (afr)
                               |
                               +--10.70.36.46:/rhs/brick1/b003(UP) 866.00 MiB/1.80 TiB 
                               |
                               +--10.70.44.13:/rhs/brick1/b004(UP) 862.00 MiB/926.00 GiB
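
Applying the min-based sketch from the expected results to the brick sizes shown above (the displayed sizes are rounded, so this does not reproduce the reported 3.20 TiB figure exactly; it only illustrates the direction of the error):

    TiB = 1024 ** 4
    GiB = 1024 ** 3

    # Brick totals copied from the gstatus -vl output above.
    sets = [[1.80 * TiB, 1.80 * TiB],     # Replica Set0
            [1.80 * TiB, 926.00 * GiB]]   # Replica Set1

    print(usable_capacity(sets) / TiB)    # ~2.70 TiB, what df -h shows
    print(reported_capacity(sets) / TiB)  # ~3.60 TiB, overstated total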

