Bug 1744950 - glusterfs reports a wrong size that does not match the total sum of its bricks
Summary: glusterfs reports a wrong size that does not match the total sum of its bricks
Keywords:
Status: CLOSED DUPLICATE of bug 1632889
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-23 10:48 UTC by liuruit
Modified: 2019-10-23 09:33 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-19 10:52:37 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description liuruit 2019-08-23 10:48:32 UTC
Description of problem:
glusterfs reports a wrong size that does not match the total sum of its bricks.

Version-Release number of selected component (if applicable):
3.12.15


Steps to Reproduce:
1. Create a 10G volume with replica 3.
2. Mount the volume on localhost and check the total size with df: 10G.
3. Expand the volume with another 40G; the total should now be 50G, but df shows only 25G (a command sketch follows below).
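
A minimal command sketch of these steps (brick paths and the mount point are hypothetical placeholders; the setup in this report was actually provisioned through heketi, but presumably via similar gluster commands):

# 1. Create and start a 10G, replica-3 volume (one 10G brick per host)
gluster volume create vol_xxx replica 3 host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1
gluster volume start vol_xxx

# 2. Mount it locally and check the reported size (expected: 10G)
mount -t glusterfs host1:/vol_xxx /mnt/vol_xxx
df -h /mnt/vol_xxx

# 3. Add a second replica set of 40G bricks, then recheck
gluster volume add-brick vol_xxx replica 3 host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2
df -h /mnt/vol_xxx   # expected 50G in total, but only 25G is reported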

Actual results:
df shows only 25G.

Expected results:
50G in total.

Additional info:

gluster volume info vol_xxx
Type: Distributed-Replicate
Volume ID: c47089b2-96c2-4ec2-9dfb-988d1e593cdc
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp


gluster volume status vol_xxx detail
Status of volume: vol_xxx
------------------------------------------------------------------------------
Brick                : Brick host1:/var/lib/heketi/mounts/vg_929e45b20519c80a714d7645061e354f/brick_5bd825a22e9511d539d24226a3d937a7/brick
TCP Port             : 49411               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 12284               
File System          : xfs                 
Device               : /dev/mapper/vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 2.1GB               
Total Disk Space     : 10.0GB              
Inode Count          : 4420216             
Free Inodes          : 4322272             
------------------------------------------------------------------------------
Brick                : Brick host2:/var/lib/heketi/mounts/vg_d42ee5516f065e5f10b223bbb0a00d9b/brick_6078cfee3d8e48b50586b539fdfe8d61/brick
TCP Port             : 0                   
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 98870               
File System          : xfs                 
Device               : /dev/mapper/vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 2.1GB               
Total Disk Space     : 10.0GB              
Inode Count          : 4416176             
Free Inodes          : 4318083             
------------------------------------------------------------------------------
Brick                : Brick host3:/var/lib/heketi/mounts/vg_62960212c5851a4f597ee9ccfd6ae6d9/brick_edbd921fc1f3a9431eaa14eb8afff4d3/brick
TCP Port             : 49409               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 84219               
File System          : xfs                 
Device               : /dev/mapper/vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 2.1GB               
Total Disk Space     : 10.0GB              
Inode Count          : 4419920             
Free Inodes          : 4321953             
------------------------------------------------------------------------------
Brick                : Brick host1:/var/lib/heketi/mounts/vg_58768cbf62201deef23eb06ab4161ca8/brick_fd4e796278c127f6a7b0d70d5689a24e/brick
TCP Port             : 49536               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 182649              
File System          : xfs                 
Device               : /dev/mapper/vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 8.1GB               
Total Disk Space     : 40.0GB              
Inode Count          : 17411656            
Free Inodes          : 16920686            
------------------------------------------------------------------------------
Brick                : Brick host2:/var/lib/heketi/mounts/vg_13de35a047bf8fd839f8b5b6c5aa7b20/brick_df9b0d0b41cd17848212a9e2215eba8a/brick
TCP Port             : 49549               
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 11264               
File System          : xfs                 
Device               : /dev/mapper/vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 8.0GB               
Total Disk Space     : 40.0GB              
Inode Count          : 17361064            
Free Inodes          : 16873888            
------------------------------------------------------------------------------
Brick                : Brick host3:/var/lib/heketi/mounts/vg_19d11e2d0689d918b6affd2acfb2bcfe/brick_ebb2523fa96dbfe301c74e16428b04a0/brick
TCP Port             : 0                   
RDMA Port            : 0                   
Online               : Y                   
Pid                  : 15714               
File System          : xfs                 
Device               : /dev/mapper/vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0
Mount Options        : rw,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512                 
Disk Space Free      : 8.1GB               
Total Disk Space     : 40.0GB              
Inode Count          : 17415944            
Free Inodes          : 16925792

Comment 2 liuruit 2019-08-23 10:58:57 UTC
The mount results:
command: df -h|grep vol_xxx
expected: host1:vol_xxx       50G   40G        ...
actual  : host1:vol_xxx       25G   20G        6G  80% /vol_xxx

Comment 3 Nithya Balachandran 2019-08-27 04:23:10 UTC
Are you using an upstream build?

I think you may be running into https://bugzilla.redhat.com/show_bug.cgi?id=1517260. This was fixed in 3.12.7 but some steps might need to be performed to get this working on your setup.


Please provide the following info:

1. Is each brick on its own separate partition that is not shared? It looks like it is, but I would like you to confirm.
2. The value of the option shared-brick-count in the brick volfiles.

On the gluster nodes:
grep shared-brick-count /var/lib/glusterd/vols/<volname>/*

Assigning this to the glusterd team for workaround steps if it turns out to be the same issue.

Comment 4 liuruit 2019-08-27 07:17:34 UTC
First, I installed glusterfs 3.12.2 with yum and created the volume.
Then I upgraded to 3.12.15 and expanded the volume.

volume vol_xxx-posix
    type storage/posix
    option shared-brick-count 1
    option volume-id c47089b2-96c2-4ec2-9dfb-988d1e593cdc
    option directory /var/lib/heketi/mounts/vg_929e45b20519c80a714d7645061e354f/brick_5bd825a22e9511d539d24226a3d937a7/brick
end-volume

df -h|grep brick_5bd825a22e9511d539d24226a3d937a7
/dev/mapper/vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7   10G  8.0G  2.0G  81% /var/lib/heketi/mounts/vg_929e45b20519c80a714d7645061e354f/brick_5bd825a22e9511d539d24226a3d937a7

------
volume vol_xxx-posix
    type storage/posix
    option shared-brick-count 1
    option volume-id c47089b2-96c2-4ec2-9dfb-988d1e593cdc
    option directory /var/lib/heketi/mounts/vg_13de35a047bf8fd839f8b5b6c5aa7b20/brick_df9b0d0b41cd17848212a9e2215eba8a/brick
end-volume

df -h|grep brick_df9b0d0b41cd17848212a9e2215eba8a
/dev/mapper/vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a   40G   33G  7.6G  82% /var/lib/heketi/mounts/vg_13de35a047bf8fd839f8b5b6c5aa7b20/brick_df9b0d0b41cd17848212a9e2215eba8a

Comment 5 Nithya Balachandran 2019-08-27 07:22:14 UTC
3.12.15 is an upstream build. Updating the BZ accordingly.

Comment 7 Nithya Balachandran 2019-08-27 07:54:31 UTC
(In reply to Nithya Balachandran from comment #6)
> (In reply to liuruit from comment #4)
> > First, I installed glusterfs 3.12.2 with yum and created the volume.
> > Then I upgraded to 3.12.15 and expanded the volume.
> > 
> > volume vol_xxx-posix
> >     type storage/posix
> >     option shared-brick-count 1
> >     option volume-id c47089b2-96c2-4ec2-9dfb-988d1e593cdc
> >     option directory

Please check the value of shared-brick-count for the bricks on all 3 nodes.
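
For example, on each of the three nodes (using the glusterd vols directory mentioned above and the volume name from this report):

grep -r shared-brick-count /var/lib/glusterd/vols/vol_xxx/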

Comment 8 liuruit 2019-08-27 08:28:14 UTC
grep shared -r .
./vol_xxx.host1.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option shared-brick-count 1
./vol_xxx.host2.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 0
./vol_xxx.host3.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option shared-brick-count 0
./vol_xxx.host1.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option shared-brick-count 0
./vol_xxx.host2.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option shared-brick-count 1
./vol_xxx.host3.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 0

All bricks on the 3 hosts have the same value.

Comment 9 Nithya Balachandran 2019-08-27 08:58:34 UTC
(In reply to liuruit from comment #8)
> grep shared -r .
> ./vol_xxx.host1.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-
> brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option
> shared-brick-count 1
> ./vol_xxx.host2.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-
> brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option
> shared-brick-count 0
> ./vol_xxx.host3.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-
> brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option
> shared-brick-count 0
> ./vol_xxx.host1.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-
> brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option
> shared-brick-count 0
> ./vol_xxx.host2.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-
> brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option
> shared-brick-count 1
> ./vol_xxx.host3.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-
> brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option
> shared-brick-count 0
> 
> All bricks on the 3 hosts have the same value.

These look like the values from a single host. Is that correct? You need to run the grep on every node (host1, host2, and host3).

Please provide the values for all the nodes - you should see 1 entry for each brick on each node.

Comment 10 liuruit 2019-08-27 09:09:27 UTC
Info for another volume:
gluster volume info vol_320b6dab471a7b810d92ff03e9ef05c6
 
Volume Name: vol_320b6dab471a7b810d92ff03e9ef05c6
Type: Distributed-Replicate
Volume ID: c47089b2-96c2-4ec2-9dfb-988d1e593cdc
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.10.2.20:/var/lib/heketi/mounts/vg_929e45b20519c80a714d7645061e354f/brick_5bd825a22e9511d539d24226a3d937a7/brick
Brick2: 10.10.2.22:/var/lib/heketi/mounts/vg_d42ee5516f065e5f10b223bbb0a00d9b/brick_6078cfee3d8e48b50586b539fdfe8d61/brick
Brick3: 10.10.2.21:/var/lib/heketi/mounts/vg_62960212c5851a4f597ee9ccfd6ae6d9/brick_edbd921fc1f3a9431eaa14eb8afff4d3/brick
Brick4: 10.10.2.19:/var/lib/heketi/mounts/vg_58768cbf62201deef23eb06ab4161ca8/brick_fd4e796278c127f6a7b0d70d5689a24e/brick
Brick5: 10.10.2.20:/var/lib/heketi/mounts/vg_13de35a047bf8fd839f8b5b6c5aa7b20/brick_df9b0d0b41cd17848212a9e2215eba8a/brick
Brick6: 10.10.2.22:/var/lib/heketi/mounts/vg_19d11e2d0689d918b6affd2acfb2bcfe/brick_ebb2523fa96dbfe301c74e16428b04a0/brick
Options Reconfigured:
nfs.disable: on

10.10.2.19:
grep shared -r .
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.21.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.19.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option shared-brick-count 1
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 0

10.10.2.20:
grep shared -r .
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option shared-brick-count 1
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.21.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.19.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option shared-brick-count 1
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 0

10.10.2.21:
grep shared -r .
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.21.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option shared-brick-count 1
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.19.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 0

10.10.2.22:
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_929e45b20519c80a714d7645061e354f-brick_5bd825a22e9511d539d24226a3d937a7-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 2
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.21.var-lib-heketi-mounts-vg_62960212c5851a4f597ee9ccfd6ae6d9-brick_edbd921fc1f3a9431eaa14eb8afff4d3-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.19.var-lib-heketi-mounts-vg_58768cbf62201deef23eb06ab4161ca8-brick_fd4e796278c127f6a7b0d70d5689a24e-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.20.var-lib-heketi-mounts-vg_13de35a047bf8fd839f8b5b6c5aa7b20-brick_df9b0d0b41cd17848212a9e2215eba8a-brick.vol:    option shared-brick-count 0
./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 2

Comment 11 Nithya Balachandran 2019-08-27 09:48:20 UTC
Does this volume have the same problem as the other one? If yes, the problem is with the volfiles for the bricks on 10.10.2.22:

./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_d42ee5516f065e5f10b223bbb0a00d9b-brick_6078cfee3d8e48b50586b539fdfe8d61-brick.vol:    option shared-brick-count 2

./vol_320b6dab471a7b810d92ff03e9ef05c6.10.51.2.22.var-lib-heketi-mounts-vg_19d11e2d0689d918b6affd2acfb2bcfe-brick_ebb2523fa96dbfe301c74e16428b04a0-brick.vol:    option shared-brick-count 2


Both of these have a shared-brick-count value of 2, which causes gluster to internally halve the available disk size for these bricks. Because they are in different replica sets, and the lowest disk space value among the bricks in a replica set is taken as the disk space of that set, the reported disk space is halved for the entire volume.
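
To make the arithmetic concrete with the brick sizes from this report, assuming the reported capacity of each brick is divided by its shared-brick-count:

Replica set 1 (10G bricks): the 10.10.2.22 brick reports 10G / 2 = 5G, so the set is counted as min(10G, 10G, 5G) = 5G.
Replica set 2 (40G bricks): the 10.10.2.22 brick reports 40G / 2 = 20G, so the set is counted as min(40G, 40G, 20G) = 20G.
Total reported by df: 5G + 20G = 25G instead of the expected 10G + 40G = 50G, matching the output in comment 2.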



This is the same problem reported in https://bugzilla.redhat.com/show_bug.cgi?id=1517260.


To recover, please do the following:

1. Restart glusterd on each node
2. For each volume, run the following command from any one gluster node:

gluster v set <volname> cluster.min-free-disk 11%


This should regenerate the volfiles with the correct values. Recheck the shared-brick-count values after doing these steps - the values should be 0 or 1. The df values should also be correct.
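
A minimal recovery sketch, assuming glusterd is managed by systemd and using the volume name and mount point from this report:

# on every gluster node
systemctl restart glusterd

# on any one node, once per affected volume
gluster volume set vol_xxx cluster.min-free-disk 11%

# verify: shared-brick-count should now be 0 or 1, and df should show ~50G
grep -r shared-brick-count /var/lib/glusterd/vols/vol_xxx/
df -h /vol_xxx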

Moving this to the glusterd component.

Comment 12 Nithya Balachandran 2019-08-28 02:53:43 UTC
Please note that 3.12 is EOL, so I have set the version on this BZ to 4.1.

Comment 13 Atin Mukherjee 2019-08-28 04:26:34 UTC
We don't need to keep this bug open, given that the root cause has been provided and it has been confirmed that this is the same as BZ 1517260.

Comment 14 Atin Mukherjee 2019-08-28 04:50:53 UTC
Based on the discussion with Nithya, the fix went into 3.12.7; however, this is reported against 3.12.15, so we need to cross-check whether there is any other code path where this issue still exists and has not been fixed in the latest releases.

Comment 15 Sanju 2019-08-28 09:07:46 UTC
Since the user says they expanded the cluster, I checked the add-brick code path, and I don't see any problem there:

Excerpt from glusterd_op_perform_add_bricks:

                ret = glusterd_resolve_brick (brickinfo);
                if (ret)
                        goto out;

                if (!gf_uuid_compare (brickinfo->uuid, MY_UUID)) {
                        ret = sys_statvfs (brickinfo->path, &brickstat);
                        if (ret) {
                                gf_msg (this->name, GF_LOG_ERROR, errno,
                                        GD_MSG_STATVFS_FAILED,
                                        "Failed to fetch disk utilization "
                                        "from the brick (%s:%s). Please check the health of "
                                        "the brick. Error code was %s",
                                        brickinfo->hostname, brickinfo->path,
                                        strerror (errno));

                                goto out;
                        }
                        brickinfo->statfs_fsid = brickstat.f_fsid;
                }

Did you upgrade from an older gluster release to 3.12.15? If you upgraded from an older version, you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1632889. This bug was fixed in release-6 and backported to the release-4 and release-5 branches.

Thanks,
Sanju

Comment 16 Sanju 2019-09-19 10:52:37 UTC
From comment 4:

First, I installed glusterfs 3.12.2 with yum and created the volume.
Then I upgraded to 3.12.15 and expanded the volume.

With the above, I can confirm that you are hitting https://bugzilla.redhat.com/show_bug.cgi?id=1632889. Closing this bug as a duplicate.

Thanks,
Sanju

*** This bug has been marked as a duplicate of bug 1632889 ***

