Description of problem:

Have this gluster striped volume with EXT4 running on the bricks:

Volume Name: pirstripe
Type: Stripe
Status: Started
Number of Bricks: 5
Transport-type: tcp,rdma
Bricks:
Brick1: gluster1:/export/gluster1/pirstripe
Brick2: gluster2:/export/gluster2/pirstripe
Brick3: gluster3:/export/gluster3/pirstripe
Brick4: gluster4:/export/gluster4/pirstripe
Brick5: gluster5:/export/gluster5/pirstripe
Options Reconfigured:
auth.allow: 10.2.178.*

Create a directory, go into it, and write a 100MB file (my wrapper that does dd if=/dev/zero of=someFile):

[root@gluster1 pirstripe]# mkdir tmp && cd tmp && ~me/nfsSpeedTest/nfsSpeedTest -s 100m -y -r -d
gluster1: Write test (dd): 44.300 MB/s 354.398 mbps 2.257 seconds

[root@gluster1 tmp]# stat nfsSpeedTest-71364644793634600136
  File: `nfsSpeedTest-71364644793634600136'
  Size: 104857600  Blocks: 204840  IO Block: 131072  regular file
Device: 1eh/30d  Inode: 18446744070399556490  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)  Gid: (    0/    root)
Access: 2012-02-24 15:26:29.625841194 -0600
Modify: 2012-02-24 15:26:31.861762336 -0600
Change: 2012-02-24 15:26:31.861762336 -0600

[root@gluster1 tmp]# du -sh nfsSpeedTest-71364644793634600136
101M    nfsSpeedTest-71364644793634600136
[root@gluster1 tmp]# du -sh --apparent-size nfsSpeedTest-71364644793634600136
100M    nfsSpeedTest-71364644793634600136

So far so good.

[root@gluster1 tmp]# cd ..
[root@gluster1 pirstripe]# du -sh tmp/
21M     tmp/

That was unexpected! That's the file size / stripe count (5).

[root@gluster1 pirstripe]# du -sh --apparent-size tmp/
101M    tmp/

du -sh should show me 101M, not the total size of the directory / # of bricks in the stripe.

Version-Release number of selected component (if applicable):
gluster 3.2.5.2

How reproducible:
Always

Steps to Reproduce:
See above

Actual results:
du -sh shows the total size of the directory / # of bricks in the stripe.

Expected results:
du -sh should show the total size of the directory.

Additional info:
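A quick arithmetic sanity check of what the report describes (numbers taken from the transcript above): a file occupying ~101M of blocks, striped across 5 bricks, leaves roughly a fifth of the blocks on each brick, which matches the ~21M that du -sh reported for the directory.

```shell
# 101M of allocated blocks spread over 5 stripe bricks:
# each brick holds ~101/5 = ~20M, consistent with the 21M du reported
# (each brick also carries some local filesystem overhead).
echo $((101 / 5))
```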
Hi,

Can you please try to reproduce the bug after turning off the stat-prefetch performance xlator (gluster volume set <volname> stat-prefetch off)? Please let us know if you still encounter the issue.
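For reference, the suggested workaround as concrete commands — a sketch only, using the reporter's volume name pirstripe and a hypothetical client mount point /mnt/pirstripe:

```shell
# Disable the stat-prefetch performance translator on the striped volume,
# then re-check du from the client mount.
gluster volume set pirstripe stat-prefetch off
gluster volume info pirstripe      # verify the option now shows as off
du -sh /mnt/pirstripe/tmp          # hypothetical client mount point
```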
That fixes it, but I'm leaving stat-prefetch on for now.
CHANGE: http://review.gluster.com/2833 (cluster/stripe: Readdirp - send aggregated block_size in stat) merged in master by Vijay Bellur (vijay)
root@shishirng:/mnt/dht/test# dd if=/dev/zero of=testfile bs=1024 count=100000
100000+0 records in
100000+0 records out
102400000 bytes (102 MB) copied, 45.742 s, 2.2 MB/s

root@shishirng:/mnt/dht/test# stat testfile
  File: `testfile'
  Size: 102400000  Blocks: 200080  IO Block: 131072  regular file
Device: 17h/23d  Inode: 9443469641368063709  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)  Gid: (    0/    root)
Access: 2012-05-29 16:24:01.311725300 +0530
Modify: 2012-05-29 16:24:47.035952033 +0530
Change: 2012-05-29 16:25:10.704069395 +0530
 Birth: -

root@shishirng:/mnt/dht/test# du -sh testfile
98M     testfile
root@shishirng:/mnt/dht/test# cd ..
root@shishirng:/mnt/dht# du -sh test/
98M     test/
root@shishirng:/mnt/dht# du -sh --apparent-size test/
98M     test/
gluster> volume info

Volume Name: new
Type: Stripe
Volume ID: a777917c-0172-4d8d-aa0e-2214cb93d64a
Status: Started
Number of Bricks: 1 x 5 = 5
Transport-type: tcp
Bricks:
Brick1: sng:/exports/dir1
Brick2: sng:/exports/dir2
Brick3: sng:/exports/dir3
Brick4: sng:/exports/dir4
Brick5: sng:/exports/dir5