Bug 1492586 - GlusterFS doesn't manage data for distributed volumes.
Summary: GlusterFS doesn't manage data for distributed volumes.
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-09-18 09:12 UTC by Pavel
Modified: 2017-11-07 10:41 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-11-07 10:41:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Pavel 2017-09-18 09:12:05 UTC
Description of problem:
Running touch <mount-path>/test on a distributed volume fails with "No space left on device", even though the volume as a whole still has plenty of free space (see the brick status below).

Version-Release number of selected component (if applicable):
3.8

How reproducible:
Easily

Steps to Reproduce:
1. Create a distributed volume with two bricks.
2. Once the first brick is full, try to create a new file with touch on the mount point (see the command sketch below).
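For reference, a minimal reproduction sketch; the volume name, server names, brick paths, mount point, and file sizes below are placeholders, not taken from this report:

# Create and start a plain distribute (no replica) volume with two bricks
gluster volume create testvol server1:/bricks/b1 server2:/bricks/b2
gluster volume start testvol

# Mount it via FUSE
mount -t glusterfs server1:/testvol /mnt/vol

# DHT places a whole file on a single brick, so one large file can fill one brick
# (the count here is only illustrative; size it to the brick being filled)
dd if=/dev/zero of=/mnt/vol/filler bs=1M count=300000

# Once that brick is full, creating a new file can fail with ENOSPC
touch /mnt/vol/test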

Actual results:
The touch command fails with "No space left on device" (ENOSPC).

Expected results:
The new file is expected to be created automatically on the second brick, which still has free space.

Additional info:
Status of volume: vol_5b9d0a6175cb8abcffee37b8e0977213
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_237bd760341b4b966974281fa94bf0a5/brick_19eb8a82e035f2770eea1c1ad53754fa/brick
TCP Port             : 49173
RDMA Port            : 0
Online               : Y
Pid                  : 7922
File System          : xfs
Device               : /dev/mapper/vg_237bd760341b4b966974281fa94bf0a5-brick_19eb8a82e035f2770eea1c1ad53754fa
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 20.0KB
Total Disk Space     : 299.8GB
Inode Count          : 1064
Free Inodes          : 1034
------------------------------------------------------------------------------
Brick                : Brick glusterfs-sas-server40.sds.default.svc.local:/var/lib/heketi/mounts/vg_558d7a67385720b612ba6ae24c033094/brick_8f9d88312131640b35dbc3f9b083d7bd/brick
TCP Port             : 49174
RDMA Port            : 0
Online               : Y
Pid                  : 7941
File System          : xfs
Device               : /dev/mapper/vg_558d7a67385720b612ba6ae24c033094-brick_8f9d88312131640b35dbc3f9b083d7bd
Mount Options        : rw,seclabel,noatime,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
Inode Size           : 512
Disk Space Free      : 298.6GB
Total Disk Space     : 299.8GB
Inode Count          : 157284352
Free Inodes          : 157284308

Comment 1 Nithya Balachandran 2017-09-20 14:19:01 UTC
New files will be created on the non-full bricks as long as there is sufficient space on the "full" brick to create the linkto file. If there isn't space for that either, file creation will probably fail. We are working on https://review.gluster.org/#/c/18008/ which will reserve some disk space for gluster metadata.
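As a stop-gap until that change lands, one existing DHT option is cluster.min-free-disk, which makes DHT schedule new files onto other bricks once a brick drops below a free-space threshold. This does not reserve space for the linkto files themselves, but it helps keep a brick from filling up completely in the first place. A sketch, with a placeholder volume name and threshold:

# Ask DHT to avoid placing new files on bricks with less than 10% free space
gluster volume set testvol cluster.min-free-disk 10%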

Writing to a file that already resides on the full brick will fail, as the distribute xlator does not split files across bricks. You would need to enable sharding to do that, but sharding is only supported for VM use cases at the moment. Please contact the sharding developers on the gluster-users mailing list to find out how to use it.
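For reference, sharding is enabled per volume through volume options; a sketch with a placeholder volume name, keeping the VM-only support caveat above in mind:

# Enable the shard translator so large files are split into fixed-size pieces
gluster volume set testvol features.shard on
# Optionally set the shard size (64MB is a commonly used value for VM images)
gluster volume set testvol features.shard-block-size 64MB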

Comment 2 Niels de Vos 2017-11-07 10:41:28 UTC
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.

