Description of problem:
Running "touch <mount-path>/test" on a distributed volume returns "No space left on device", even though the second brick still has free space.

Version-Release number of selected component (if applicable): 3.8

How reproducible: Easily

Steps to Reproduce:
1. Create a distributed volume with two bricks.
2. Once the first brick is full, try to create a test file with touch.

Actual results:
touch fails with "No space left on device".

Expected results:
The file is expected to be written to the second brick automatically.

Additional info:
Status of volume: vol_5b9d0a6175cb8abcffee37b8e0977213
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_237bd760341b4b966974281fa94bf0a5/brick_19eb8a82e035f2770eea1c1ad53754fa/brick
TCP Port             : 49173
RDMA Port            : 0
Online               : Y
Pid                  : 7922
File System          : xfs
Device               : /dev/mapper/vg_237bd760341b4b966974281fa94bf0a5-brick_19eb8a82e035f2770eea1c1ad53754fa
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 20.0KB
Total Disk Space     : 299.8GB
Inode Count          : 1064
Free Inodes          : 1034
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_558d7a67385720b612ba6ae24c033094/brick_8f9d88312131640b35dbc3f9b083d7bd/brick
TCP Port             : 49174
RDMA Port            : 0
Online               : Y
Pid                  : 7941
File System          : xfs
Device               : /dev/mapper/vg_558d7a67385720b612ba6ae24c033094-brick_8f9d88312131640b35dbc3f9b083d7bd
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 298.6GB
Total Disk Space     : 299.8GB
Inode Count          : 157284352
Free Inodes          : 157284308
New files will be created on the non-full bricks as long as there is sufficient space on the "full" brick to create the linkto file. If there is not enough space even for that, file creation will probably fail. We are working on https://review.gluster.org/#/c/18008/ which will reserve some disk space for gluster metadata. Writing to a file that is already on the full brick will fail, as the distribute (DHT) xlator does not split files across bricks. You would need to enable sharding to do that, but sharding is only supported for VM use cases at the moment. Please contact the sharding developers on the gluster-users mailing list to find out how to use it.
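The placement behavior described in this comment can be illustrated with a small simulation. This is a hypothetical sketch, not Gluster code: the names Brick, hashed_brick, dht_create and LINKTO_SIZE are invented for illustration, and the hash is a stand-in for DHT's real layout-range hashing. It shows why creates keep succeeding on a full brick only while that brick can still hold a linkto file, and why they fail with ENOSPC afterwards.

```python
import hashlib

LINKTO_SIZE = 1  # even a linkto file consumes an inode and a little space

class Brick:
    """Hypothetical model of one brick's free space and files."""
    def __init__(self, name, free_space):
        self.name = name
        self.free_space = free_space
        self.files = {}  # filename -> "data" or "linkto"

    def create(self, filename, kind, size):
        if self.free_space < size:
            raise OSError(28, "No space left on device")  # ENOSPC
        self.free_space -= size
        self.files[filename] = kind

def hashed_brick(filename, bricks):
    # Stand-in for DHT's hash over the directory layout ranges.
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

def dht_create(filename, size, bricks):
    """Sketch of DHT create: data goes to the hashed brick if it fits,
    otherwise to another brick, with a linkto file left on the hashed
    brick so lookups can find the real location."""
    target = hashed_brick(filename, bricks)
    if target.free_space >= size:
        target.create(filename, "data", size)
        return target
    for cached in bricks:
        if cached is not target and cached.free_space >= size:
            # This linkto create is what fails once the full brick
            # cannot hold even LINKTO_SIZE more bytes.
            target.create(filename, "linkto", LINKTO_SIZE)
            cached.create(filename, "data", size)
            return cached
    raise OSError(28, "No space left on device")

full = Brick("brick1", free_space=5)      # nearly full brick
empty = Brick("brick2", free_space=10000) # plenty of space
```

Running dht_create for many files against this pair shows the reported symptom: files hashed to the full brick succeed (as linkto plus remote data) only until the full brick's last few bytes are gone, after which creates start failing even though brick2 has room.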
This bug is being closed because version 3.8 is marked End-Of-Life and will receive no further updates. If you are still facing this issue on a more current release, please open a new bug against a version that still receives bugfixes.