Bug 764534 (GLUSTER-2802)

Summary: du reporting more than required size
Product: [Community] GlusterFS
Component: core
Reporter: Amar Tumballi <amarts>
Assignee: Raghavendra G <raghavendra>
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Version: mainline
CC: gluster-bugs, vijay, vraman
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Regression: RTNR
Mount Type: All
Documentation: DP

Description Amar Tumballi 2011-04-18 17:55:22 UTC
Notice the size of the volume after rebalance is complete.

-----
root@home:~# cp -a /etc/ /mnt/gfs/
root@home:~# du -sh /etc/ /mnt/gfs/
14M	/etc/
14M	/mnt/gfs/
root@home:~# gluster volume quota test enable
Enabling quota has been successful
root@home:~# du -sh /etc/ /mnt/gfs/
14M	/etc/
25M	/mnt/gfs/
root@home:~# gluster volume add-brick test home:/tmp/export/t2  home:/tmp/export/t3
Add Brick successful
root@home:~# du -sh /mnt/gfs/
27M	/mnt/gfs/
root@home:~# gluster volume rebalance test start
starting rebalance on volume test has been successful
root@home:~# gluster volume rebalance test status
rebalance completed: rebalanced 1554 files of size 48040594 (total files scanned 4009)
root@home:~# du -sh /mnt/gfs/
66M	/mnt/gfs/
root@home:~# find /etc | wc -l
2712
root@home:~# find /mnt/gfs/etc | wc -l
2706
### note: the file counts differ by 6 after add-brick and rebalance
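One contributor to extra backend entries after add-brick and rebalance is DHT's pointer files ("linkfiles"). A sketch for spotting them on a brick, under the assumption that linkfiles are zero-byte regular files whose mode is exactly 1000 (only the sticky bit set, shown by ls as ---------T); the brick path is the one from the transcript and may need adjusting:

```shell
#!/bin/sh
# Sketch: list candidate DHT linkfiles on a brick.
# Assumption: linkfiles are zero-byte regular files whose mode is exactly
# 1000 (sticky bit only, displayed by ls as ---------T).
list_linkfiles() {
    find "$1" -type f -size 0 -perm 1000 -print
}

# Demo on a scratch directory, since no real brick is assumed to exist here;
# on an actual setup call e.g.: list_linkfiles /tmp/export/t2
demo=$(mktemp -d)
: > "$demo/pointer" && chmod 1000 "$demo/pointer"   # fake linkfile
: > "$demo/data"                                    # ordinary empty file
list_linkfiles "$demo"    # prints only the fake linkfile
rm -r "$demo"
```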
-
On a plain volume

root@home:~# gluster volume create test1 home:/tmp/export/tt1
Creation of volume test1 has been successful. Please start the volume to access data.
root@home:~# gluster volume start test1
Starting volume test1 has been successful
root@home:~# mkdir /mnt/gfs1
root@home:~# mount -t glusterfs home:/test1 /mnt/gfs1
root@home:~# df -h /mnt/gfs1
Filesystem            Size  Used Avail Use% Mounted on
home:/test1            19G  3.9G   14G  23% /mnt/gfs1
root@home:~# cp -a /etc/ /mnt/gfs1
root@home:~# du -sh /etc/ /mnt/gfs1
14M	/etc/
14M	/mnt/gfs1
root@home:~# gluster volume add-brick test home:/tmp/export/tt2  home:/tmp/export/tt3
Add Brick successful
### note: the add-brick above was run on volume 'test', not 'test1', which is why the du output below is unchanged
root@home:~# du -sh /mnt/gfs1
14M	/mnt/gfs1
root@home:~# du -sh /mnt/gfs1
14M	/mnt/gfs1
root@home:~# gluster volume add-brick test1 home:/tmp/export/tt12  home:/tmp/export/tt13
Add Brick successful
root@home:~# du -sh /mnt/gfs1
16M	/mnt/gfs1
root@home:~# gluster volume rebalance test1 start
starting rebalance on volume test1 has been successful
root@home:~# gluster volume rebalance test1 status
rebalance completed
root@home:~# du -sh /mnt/gfs1
67M	/mnt/gfs1
root@home:~# gluster volume info test1

Volume Name: test1
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: home:/tmp/export/tt1
Brick2: home:/tmp/export/tt12
Brick3: home:/tmp/export/tt13


---------------------

Investigation shows that the over-reporting is not caused by data blocks alone (though they contribute), but mainly by the size of the directory tree structure present on each backend brick.
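That per-brick directory-tree contribution can be estimated by summing the disk blocks used by directories alone on each brick. A rough sketch, assuming GNU find (whose %b format prints a file's size in 512-byte blocks); the brick paths are the ones from the test1 transcript and are illustrative:

```shell
#!/bin/sh
# Sketch: how many KiB each brick spends on the directory tree itself.
# Every DHT brick carries the full directory hierarchy, so this space is
# paid once per brick and shows up aggregated in du on the mount point.
dir_tree_kb() {
    # assumption: GNU find, where %b = size in 512-byte blocks
    find "$1" -type d -printf '%b\n' \
        | awk '{ s += $1 } END { print int(s * 512 / 1024) }'
}

# Brick paths from the transcript above; adjust for your setup.
for brick in /tmp/export/tt1 /tmp/export/tt12 /tmp/export/tt13; do
    [ -d "$brick" ] && echo "$brick: $(dir_tree_kb "$brick") KiB in directories"
done
```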

Comment 1 Raghavendra G 2011-06-27 03:45:31 UTC
The behavior is known and needs to be mentioned in the FAQ. Hence marking this bug as documentation-pending and moving it to the resolved state.