Bug 1162060

Summary: gstatus: capacity (raw used) field shows wrong data
Product: [Community] GlusterFS
Reporter: Sachidananda Urs <surs>
Component: unclassified
Assignee: Paul Cuzner <pcuzner>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: medium
Docs Contact:
Priority: unspecified
Version: mainline
CC: atumball, bugs
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: gstatus 0.62
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-18 08:31:28 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  gstatus output (no flags)

Description Sachidananda Urs 2014-11-10 07:34:24 UTC
Description of problem:

When gstatus is run without any options, the capacity displayed is wrong.
The (raw used) field constantly displays 12G.

[root@guido ~]# gstatus
 
     Product: RHSS v4            Capacity: 293.00 GiB(raw bricks)
      Status: HEALTHY                       12.00 GiB(raw used)
   Glusterfs: 3.4.0.68rhs                  146.00 GiB(usable from volumes)
  OverCommit: No                

[root@guido ~]# df -h /rhs/brick1/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/RHS_vg1-RHS_lv1
                       81G  3.0G   78G   4% /rhs/brick1
[root@guido ~]# 

On client:

[root@bob-the-minion ~]# df -h /mnt/gluster
Filesystem            Size  Used Avail Use% Mounted on
10.70.34.100:gs       131G  6.0G  125G   5% /mnt/gluster


Version-Release number of selected component (if applicable):

[root@guido ~]# gstatus --version
gstatus 0.62


How reproducible:

Always.

Comment 1 Paul Cuzner 2014-11-11 19:02:20 UTC
(In reply to Sachidananda Urs from comment #0)

I'll take a look. I'm also interested in why your product identification shows RHSS v4.

Comment 2 Paul Cuzner 2014-11-11 19:25:01 UTC
Created attachment 956392 [details]
gstatus output

Comment 3 Paul Cuzner 2014-11-11 19:26:47 UTC
sac, can you provide the output of gstatus -al from your environment, please?

On my test environment the numbers are correct: gstatus -al shows how the bricks are consumed, and a plain gstatus shows the same numbers aggregated. I've attached the output of the two commands for reference.

Comment 4 Sachidananda Urs 2014-11-12 07:41:49 UTC
Paul, looking at -al it makes more sense: the (raw used) figure is the aggregation of the disk used across the bricks.

[root@guido ~]# gstatus -al
 
     Product: RHSS v5            Capacity: 293.00 GiB(raw bricks)
      Status: HEALTHY                       24.00 GiB(raw used)
   Glusterfs: 3.4.0.70rhs                  146.00 GiB(usable from volumes)
  OverCommit: No                

   Nodes    :  4/ 4             Volumes:  1 Up
   Self Heal:  4/ 4                       0 Up(Degraded)
   Bricks   :  4/ 4                       0 Up(Partial)
   Clients  :     1                       0 Down

Volume Information
        gs               UP - 4/4 bricks up - Distributed-Replicate
                         Capacity: (8% used) 12.00 GiB/146.00 GiB (used/total)
                         Self Heal:  4/ 4
                         Tasks Active: None
                         Protocols: glusterfs:on  NFS:on  SMB:on
                         Gluster Clients : 1

        gs-------------- +
                         |
                Distribute (dht)
                         |
                         +-- Repl Set 0 (afr)
                         |     |
                         |     +--10.70.34.100:/rhs/brick1/gs0r0(UP) 6.00 GiB/81.00 GiB 
                         |     |
                         |     +--10.70.34.101:/rhs/brick1/gs0r0(UP) 6.00 GiB/81.00 GiB 
                         |
                         +-- Repl Set 1 (afr)
                               |
                               +--10.70.34.102:/rhs/brick1/gs0r1(UP) 6.00 GiB/50.00 GiB 
                               |
                               +--10.70.34.103:/rhs/brick1/gs0r1(UP) 6.00 GiB/81.00 GiB 



Status Messages
  - Cluster is HEALTHY, all checks successful
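
The aggregation sac describes can be sketched as follows. This is a minimal illustration of the arithmetic using the brick figures from the -al output above; the function and its signature are made up for illustration, not gstatus's actual internals:

```python
# Hypothetical sketch: aggregate per-brick (used, size) figures, in GiB.
# Not gstatus's real code; names and shapes are illustrative only.

def cluster_capacity(bricks, replica_count):
    """bricks is a list of (used_gib, size_gib) tuples, one per brick."""
    raw_size = sum(size for _, size in bricks)
    raw_used = sum(used for used, _ in bricks)
    # With N-way replication, usable space is roughly raw size / N.
    usable = raw_size / replica_count
    return raw_size, raw_used, usable

# Brick figures from the gstatus -al output above (6/81, 6/81, 6/50, 6/81)
bricks = [(6, 81), (6, 81), (6, 50), (6, 81)]
raw_size, raw_used, usable = cluster_capacity(bricks, replica_count=2)
print(raw_size, raw_used, usable)  # 293 24 146.5
```

This reproduces the 293 GiB raw / 24 GiB raw used / ~146 GiB usable figures shown above, which is consistent with (raw used) being a straight sum over bricks.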

Comment 5 Sachidananda Urs 2014-11-12 07:42:56 UTC
However, for the Product field, instead of RHSS v5 I would rather see something like:

RHSS 2.1 v5

based on the output of the command below:

[root@guido ~]# cat /etc/redhat-storage-release 
Red Hat Storage Server 2.1 Update 5

Comment 6 Paul Cuzner 2014-11-13 06:01:05 UTC
+1

I've updated the getVersion function so now you get

[root@rhs3-2 ~]# gstatus 
 
     Product: RHSS v2.1 u4       Capacity:  80.00 GiB(raw bricks)
      Status: HEALTHY                        4.00 GiB(raw used)
   Glusterfs: 3.4.0.68rhs                  200.00 GiB(usable from volumes)
  OverCommit: Yes               
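
A version-string parser along these lines could produce that output from /etc/redhat-storage-release. This is a hypothetical sketch, not the actual getVersion implementation; the function name and fallback value are assumptions:

```python
import re

# Hypothetical sketch of deriving a product string like "RHSS v2.1 u4"
# from /etc/redhat-storage-release; getVersion's real logic may differ.

def product_string(release_line):
    m = re.match(r"Red Hat Storage Server (\d+\.\d+) Update (\d+)", release_line)
    if m:
        return "RHSS v%s u%s" % (m.group(1), m.group(2))
    return "Community"  # assumed fallback when the file/pattern is absent

print(product_string("Red Hat Storage Server 2.1 Update 5"))  # RHSS v2.1 u5
```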

There's a 0.62-1 rpm in the repo now.

Of course, reporting on available capacity is now (v3.x) problematic since thin provisioning is supported and gluster itself doesn't see that layer. All the data in the tool comes from gluster, so if gluster doesn't see it, neither will gstatus.

Let me know if you need anything else - if not feel free to close the BZ.

Comment 7 Sachidananda Urs 2014-11-13 07:02:43 UTC
Paul,

Thanks. Tested on 

# gstatus --version
gstatus 0.62

The version string is fixed. Marking the bug as verified; I'll create a tracker bug to track all the capacity-related issues.