Bug 848247

Summary: [glusterfs-3.3.0qa19]: replace brick with some tests running increases quota size to more than the limit
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Vidya Sakar <vinaraya>
Component: glusterfs Assignee: Raghavendra G <rgowdapp>
Status: CLOSED WONTFIX QA Contact: Sudhir D <sdharane>
Severity: medium Docs Contact:
Priority: low    
Version: unspecified CC: amarts, gluster-bugs, rabhat, rfortier, vbellur
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 771585 Environment:
Last Closed: 2013-01-15 11:39:19 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 771585    
Bug Blocks:    

Description Vidya Sakar 2012-08-15 01:42:05 UTC
+++ This bug was initially created as a clone of Bug #771585 +++

Created attachment 550626 [details]
fs-perf-test

Description of problem:
A 2x2 distributed-replicate setup with quota enabled, with one FUSE and one NFS client. Both clients were running tests (the FUSE client a multi-threaded application, the NFS client fs-perf-test). While the tests were in progress, a replace-brick was performed. After the replace-brick, the quota size reported for the directory increased to a huge value, far beyond the configured limit.


Version-Release number of selected component (if applicable):


How reproducible:
Reproducible when replace-brick is performed while FUSE and NFS clients are running tests.

Steps to Reproduce:
1.
2.
3.
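The reproduction described above can be sketched with the gluster 3.3-era CLI. The volume, server, brick, and path names below are hypothetical placeholders, not taken from the original report:

```shell
# Hypothetical names: volume "testvol", servers server1..server4,
# replacement brick on server5.

# Create a 2x2 distributed-replicate volume and enable quota on it.
gluster volume create testvol replica 2 \
    server1:/export/brick1 server2:/export/brick2 \
    server3:/export/brick3 server4:/export/brick4
gluster volume start testvol
gluster volume quota testvol enable
gluster volume quota testvol limit-usage /test 20GB

# Mount the volume once over FUSE and once over NFS, then start the
# I/O workloads (multi-threaded application on the FUSE mount,
# fs-perf-test on the NFS mount).

# While both workloads are still running, replace one brick:
gluster volume replace-brick testvol \
    server2:/export/brick2 server5:/export/brick2 start

# After the replace-brick, check the reported usage against the limit:
gluster volume quota testvol list
```

The bug manifests in the final `quota list` output: the size column grows well past the 20GB limit, after which clients receive "Disk quota exceeded" errors.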
  
Actual results:
        path              limit_set          size
----------------------------------------------------------------------------------
/test                      20GB              263.8GB

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb3             6.7G  4.7G  1.8G  73% /
tmpfs                1004M     0 1004M   0% /dev/shm
/dev/sdb1             194M   82M  103M  45% /boot
/dev/sda1              45G   33M   45G   1% /export
10.1.12.136:/opt       17G   14G  1.9G  89% /opt
10.1.11.130:mirror    5.0G  2.9G  2.2G  58% /client
10.1.11.130:new        90G  4.8G   86G   6% /dir
[root@RHEL6 ~]# cd /dir/test/
[root@RHEL6 test]# ls
a.out  new-fs-perf  new-fs-perf.c  playground  sync_field  thread_fops.c  thread_fops.h
[root@RHEL6 test]# touch k
touch: cannot touch `k': Disk quota exceeded
[root@RHEL6 test]# 


Expected results:


Additional info: