Bug 764534 (GLUSTER-2802) - du reporting more than required size
Summary: du reporting more than required size
Keywords:
Status: CLOSED WONTFIX
Alias: GLUSTER-2802
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2011-04-18 17:55 UTC by Amar Tumballi
Modified: 2013-12-19 00:06 UTC (History)
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTNR
Mount Type: All
Documentation: DP
CRM:
Verified Versions:



Description Amar Tumballi 2011-04-18 17:55:22 UTC
Notice the size of the volume after rebalance is complete.

-----
root@home:~# cp -a /etc/ /mnt/gfs/
root@home:~# du -sh /etc/ /mnt/gfs/
14M	/etc/
14M	/mnt/gfs/
root@home:~# gluster volume quota test enable
Enabling quota has been successful
root@home:~# du -sh /etc/ /mnt/gfs/
14M	/etc/
25M	/mnt/gfs/
root@home:~# gluster volume add-brick test home:/tmp/export/t2  home:/tmp/export/t3
Add Brick successful
root@home:~# du -sh /mnt/gfs/
27M	/mnt/gfs/
root@home:~# gluster volume rebalance test start
starting rebalance on volume test has been successful
root@home:~# gluster volume rebalance test status
rebalance completed: rebalanced 1554 files of size 48040594 (total files scanned 4009)
root@home:~# du -sh /mnt/gfs/
66M	/mnt/gfs/
root@home:~# find /etc | wc -l
2712
root@home:~# find /mnt/gfs/etc | wc -l
2706
### 6 files fewer after add-brick and rebalance
-
On a plain volume

root@home:~# gluster volume create test1 home:/tmp/export/tt1
Creation of volume test1 has been successful. Please start the volume to access data.
root@home:~# gluster volume start test1
Starting volume test1 has been successful
root@home:~# mkdir /mnt/gfs1
root@home:~# mount -t glusterfs home:/test1 /mnt/gfs1
root@home:~# df -h /mnt/gfs1
Filesystem            Size  Used Avail Use% Mounted on
home:/test1            19G  3.9G   14G  23% /mnt/gfs1
root@home:~# cp -a /etc/ /mnt/gfs1
root@home:~# du -sh /etc/ /mnt/gfs1
14M	/etc/
14M	/mnt/gfs1
root@home:~# gluster volume add-brick test home:/tmp/export/tt2  home:/tmp/export/tt3
Add Brick successful
root@home:~# du -sh /mnt/gfs1
14M	/mnt/gfs1
root@home:~# du -sh /mnt/gfs1
14M	/mnt/gfs1
root@home:~# gluster volume add-brick test1 home:/tmp/export/tt12  home:/tmp/export/tt13
Add Brick successful
root@home:~# du -sh /mnt/gfs1
16M	/mnt/gfs1
root@home:~# gluster volume rebalance test1 start
starting rebalance on volume test1 has been successful
root@home:~# gluster volume rebalance test1 status
rebalance completed
root@home:~# du -sh /mnt/gfs1
67M	/mnt/gfs1
root@home:~# gluster volume info test1

Volume Name: test1
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: home:/tmp/export/tt1
Brick2: home:/tmp/export/tt12
Brick3: home:/tmp/export/tt13


---------------------

Investigation showed that the overuse is not caused primarily by block allocation (although that contributes), but by the size of the directory tree structure replicated on each backend brick.
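This can be reproduced without GlusterFS at all. A minimal sketch, assuming made-up brick paths under /tmp: distribute (DHT) creates the full directory tree on every brick, so the blocks consumed by the directory inodes themselves are counted once per brick when usage is aggregated.

```shell
# Hypothetical stand-in for backend bricks (paths are illustrative only).
# DHT keeps the same directory skeleton on every brick.
for b in /tmp/demo-brick1 /tmp/demo-brick2 /tmp/demo-brick3; do
    mkdir -p "$b/etc/init.d" "$b/etc/network"   # same tree on each brick
done
# On most local filesystems each directory occupies at least one 4K block,
# so aggregating du over N bricks reports roughly N copies of the skeleton.
du -s /tmp/demo-brick1 /tmp/demo-brick2 /tmp/demo-brick3
```

With a tree like /etc (thousands of directories) across three bricks, this skeleton overhead alone accounts for tens of megabytes, consistent with the 14M-to-66M jump seen above.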

Comment 1 Raghavendra G 2011-06-27 03:45:31 UTC
The behavior is known and needs to be mentioned in the FAQ. Hence marking this bug as documentation-pending and moving it to the resolved state.

