Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1492586

Summary: GlusterFS doesn't manage data for distributed volumes.
Product: [Community] GlusterFS
Component: distribute
Version: 3.8
Hardware: x86_64
OS: Linux
Status: CLOSED EOL
Severity: urgent
Priority: unspecified
Reporter: Pavel <pavel.kutishchev>
Assignee: bugs <bugs>
CC: bugs, nbalacha
Last Closed: 2017-11-07 10:41:28 UTC
Type: Bug

Description Pavel 2017-09-18 09:12:05 UTC
Description of problem:
Running touch <mount-path>/test on a distributed volume fails with
"No space left on device" once one of the bricks is full.

Version-Release number of selected component (if applicable):
3.8

How reproducible:
Easily

Steps to Reproduce:
1. Create a distributed volume with two bricks.
2. Once the first brick is full, try to create a new file on the mount with touch (see the sketch below).
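
A minimal reproduction sketch (editor's illustration, not from the original report; host names, brick paths, and the volume name are hypothetical, and glusterd is assumed to be running on both nodes):

  # create and start a pure-distribute volume with two bricks
  gluster volume create testvol host1:/bricks/b1 host2:/bricks/b2
  gluster volume start testvol
  mount -t glusterfs host1:/testvol /mnt/testvol

  # DHT places each file on exactly one brick by hashing its name,
  # so a single growing file fills only the brick it hashed to
  dd if=/dev/zero of=/mnt/testvol/filler bs=1M   # run until that brick hits ENOSPC

  # with one brick full, creating a new file can fail as reported
  touch /mnt/testvol/test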

Actual results:
touch fails with "No space left on device" (ENOSPC), even though the second brick has 298.6GB free.

Expected results:
The new file is created on the second brick automatically, since it still has free space.

Additional info:
Status of volume: vol_5b9d0a6175cb8abcffee37b8e0977213
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_237bd760341b4b966974281fa94bf0a5/brick_19eb8a82e035f2770eea1c1ad53754fa/brick
TCP Port             : 49173
RDMA Port            : 0
Online               : Y
Pid                  : 7922
File System          : xfs
Device               : /dev/mapper/vg_237bd760341b4b966974281fa94bf0a5-brick_19eb8a82e035f2770eea1c1ad53754fa
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 20.0KB
Total Disk Space     : 299.8GB
Inode Count          : 1064
Free Inodes          : 1034
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_558d7a67385720b612ba6ae24c033094/brick_8f9d88312131640b35dbc3f9b083d7bd/brick
TCP Port             : 49174
RDMA Port            : 0
Online               : Y
Pid                  : 7941
File System          : xfs
Device               : /dev/mapper/vg_558d7a67385720b612ba6ae24c033094-brick_8f9d88312131640b35dbc3f9b083d7bd
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 298.6GB
Total Disk Space     : 299.8GB
Inode Count          : 157284352
Free Inodes          : 157284308

Comment 1 Nithya Balachandran 2017-09-20 14:19:01 UTC
New files will be created on the non-full bricks as long as there is sufficient space on the "full" brick to create the linkto file. If there isn't space for that either, file creation will probably fail. We are working on https://review.gluster.org/#/c/18008/ which will reserve some disk space for gluster metadata.
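
As an editor's aside (not part of the original comment): DHT also exposes the cluster.min-free-disk volume option, which steers new data files away from bricks whose free space has fallen below a threshold. A minimal sketch, reusing the volume name from the status output above:

  # avoid placing new data files on bricks with less than 10% free space
  gluster volume set vol_5b9d0a6175cb8abcffee37b8e0977213 cluster.min-free-disk 10%

Note that DHT still needs room on the "full" brick for the small linkto file, which is exactly the failure mode described above.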

Writing to a file that is already on the full brick will fail, as the distribute xlator does not split files across bricks. You would need to enable sharding for that, but sharding is only supported for VM use cases at the moment. Please contact the sharding developers on the gluster-users mailing list to find out how to use it.
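
For completeness, a hedged sketch of enabling sharding on a volume (the volume name is a placeholder; verify suitability for your workload on gluster-users first, per the comment above):

  # enable sharding; only files created after this point are sharded
  gluster volume set <volname> features.shard on
  # optional: set the shard size (64MB is commonly used for VM images)
  gluster volume set <volname> features.shard-block-size 64MB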

Comment 2 Niels de Vos 2017-11-07 10:41:28 UTC
This bug is being closed because version 3.8 is marked End-Of-Life and will receive no further updates. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.