| Summary: | quota: hard limit on file creation can be exceeded by up to 100% when files of different sizes are created and the quota limit is less than 2GB | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja> |
| Component: | quota | Assignee: | Susant Kumar Palai <spalai> |
| Status: | CLOSED WONTFIX | QA Contact: | storage-qa-internal <storage-qa-internal> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | ||
| Version: | 2.1 | CC: | asriram, gluster-bugs, grajaiya, mhideo, rhs-bugs, rwheeler, spalai, storage-doc, storage-qa-internal, vagarwal, vbellur, vmallika |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Known Issue | |
| Doc Text: |
Expected behavior of quota:
If the rate of I/O is high relative to the hard-timeout and soft-timeout values, the quota limit may be exceeded.
For example, if the rate of I/O is 1GB/sec, the hard-timeout is set to 5 seconds (the default), and the soft-timeout is set to 60 seconds (the default), then the quota limit may be exceeded by approximately 30GB to 60GB.
To enforce stricter checking of the quota limit, lower the hard-timeout and soft-timeout values.
Commands to set the timeouts:
gluster volume quota <volume-name> soft-timeout 0
gluster volume quota <volume-name> hard-timeout 0
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2015-01-16 08:08:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | 1182890, 1182921 | ||
| Bug Blocks: | 1020127 | ||
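The overshoot range quoted in the Doc Text follows from simple arithmetic: while the cached quota size is stale, writes keep landing, so the worst-case overshoot is roughly write rate multiplied by the staleness window. A back-of-the-envelope sketch (the function name `max_overshoot_bytes` is hypothetical, used only for illustration):

```python
def max_overshoot_bytes(write_rate_bps: float, window_sec: float) -> float:
    """Worst-case bytes written past the hard limit while the cached
    quota usage is stale: roughly rate * staleness window."""
    return write_rate_bps * window_sec

GB = 1024 ** 3
rate = 1 * GB  # 1 GB/sec, as in the Doc Text example

# The Doc Text's ~30GB - 60GB range corresponds to staleness windows
# of roughly 30 to 60 seconds at this write rate.
print(max_overshoot_bytes(rate, 30) / GB)  # 30.0
print(max_overshoot_bytes(rate, 60) / GB)  # 60.0
```

Setting both timeouts to 0 shrinks the window to near zero, which is why that is the recommended setting for strict enforcement.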
Elaborating the Description:
============================
While trying to create 100 files of different sizes, the quota limit was exceeded by more than 20% of its hard limit. The files were created using dd as follows:

for i in `seq 1 100` ; do dd if=/dev/input_file of=file.$i bs=128K count=$i ; done

where file.1 is 128K (128K*1) in size, file.2 is 256K (128K*2), and so on. The cumulative size of these files (file.1 to file.100) was ~631M, while the quota on root was set to 512M. File creation still succeeded, exceeding the hard limit by more than 20%.

The issue was seen again with a deviation of 100%: creating 100 files of 10M each succeeded when the quota limit was set to .5GB.
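The ~631M figure quoted above follows directly from the arithmetic series 128K × (1 + 2 + … + 100); a quick check:

```python
# Cumulative size of file.1 .. file.100, where file.i is i * 128 KiB,
# as produced by the dd loop in the description.
KIB = 1024
total_bytes = sum(i * 128 * KIB for i in range(1, 101))  # 128K * 5050
print(total_bytes / (1024 * 1024))  # 631.25 MiB, matching the ~631M observed
```

So the data set itself is ~23% larger than the 512M hard limit, and every file still landed.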
Following is the output:
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 311.9MB 200.1MB
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 346.1MB 165.9MB
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 411.9MB 100.1MB
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 448.6MB 63.4MB
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 491.9MB 20.1MB
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 530.8MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 562.0MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 615.1MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 681.9MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 713.4MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 751.9MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 805.1MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 861.9MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 900.4MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 941.2MB 0Bytes
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 991.9MB 0Bytes
[root@dj ~]#
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 1002.0MB 0Bytes
[root@dj ~]#
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 1005.1MB 0Bytes
[root@dj ~]#
Steps:
======
1. Remove the existing data
2. Set the quota limit to .5GB
3. Create 100 files of 10M each using
cd test_diff_self_heal ; for i in `seq 1 100` ; do dd if=/dev/input_file of=file.$i bs=1M count=10; done; cd ../
[root@tia nfs]# cd test_diff_self_heal ; for i in `seq 1 100` ; do dd if=/dev/input_file of=file.$i bs=1M count=10; done; cd ../
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.478284 s, 21.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.397927 s, 26.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.138282 s, 75.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.372745 s, 28.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.248541 s, 42.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.30912 s, 33.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.356158 s, 29.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.305913 s, 34.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.326317 s, 32.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.32235 s, 32.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.300528 s, 34.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.608505 s, 17.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.233327 s, 44.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.339192 s, 30.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.366841 s, 28.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.235743 s, 44.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.471405 s, 22.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.108849 s, 96.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.429191 s, 24.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.154658 s, 67.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.186134 s, 56.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.350582 s, 29.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.323779 s, 32.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.188185 s, 55.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.406448 s, 25.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.304928 s, 34.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.100148 s, 105 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.277924 s, 37.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.166217 s, 63.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.415125 s, 25.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.282366 s, 37.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.516671 s, 20.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.325585 s, 32.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.223323 s, 47.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.120486 s, 87.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0821032 s, 128 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.132735 s, 79.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.520773 s, 20.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.137194 s, 76.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.158009 s, 66.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.274628 s, 38.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.389174 s, 26.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.401151 s, 26.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.195828 s, 53.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.428232 s, 24.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.262744 s, 39.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0675795 s, 155 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.24617 s, 42.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.390574 s, 26.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.105067 s, 99.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.353513 s, 29.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.215755 s, 48.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.234896 s, 44.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.236103 s, 44.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.184625 s, 56.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.330443 s, 31.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.241653 s, 43.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0867481 s, 121 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.174007 s, 60.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.175426 s, 59.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.268207 s, 39.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.21381 s, 49.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.20176 s, 52.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0752825 s, 139 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.119677 s, 87.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0871849 s, 120 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.208306 s, 50.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.407977 s, 25.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0989158 s, 106 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.353649 s, 29.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.349369 s, 30.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.244494 s, 42.9 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.351433 s, 29.8 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.252303 s, 41.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.290426 s, 36.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.267357 s, 39.2 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0705348 s, 149 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0783194 s, 134 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.238101 s, 44.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.236016 s, 44.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0825079 s, 127 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.120369 s, 87.1 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.302048 s, 34.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.516093 s, 20.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.341474 s, 30.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.268969 s, 39.0 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.447075 s, 23.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0978758 s, 107 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.224753 s, 46.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.465626 s, 22.5 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.0649222 s, 162 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.32437 s, 32.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.257982 s, 40.6 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.370989 s, 28.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.226439 s, 46.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.204088 s, 51.4 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.345574 s, 30.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.254169 s, 41.3 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.293833 s, 35.7 MB/s
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.481502 s, 21.8 MB/s
[root@tia nfs]#
[root@tia nfs]#
[root@tia nfs]# du -sh *
1001M test_diff_self_heal
[root@tia nfs]#
[root@tia nfs]# touch a
touch: cannot touch `a': Disk quota exceeded
[root@tia nfs]#
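The "deviation is 100%" claim can be checked against the du output above: 100 files of 10 MiB each were written against a 512 MiB (.5GB) hard limit before the subsequent touch finally failed.

```python
# Overshoot of the .5GB hard limit by the 100 * 10 MiB writes above.
written = 100 * 10   # MiB, matching the ~1001M reported by du -sh
hard_limit = 512     # MiB
overshoot_pct = (written - hard_limit) / hard_limit * 100
print(round(overshoot_pct, 1))  # 95.3, i.e. roughly 100% past the limit
```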
Tried the same use case with different quota limits under build 3.4.0.36:
CLI used: cd test_data_self_heal ; for i in `seq 1 1000` ; do dd if=/dev/input_file of=file.$i bs=128K count=$i ; done ; cd ../
Following are the results:
1) When limit was set to .5GB, issue is reproducible.
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 631.2MB 0Bytes
[root@dj ~]#
2) When limit was set to 1GB, issue is reproducible.
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 1.0GB 80% 2.1GB 0Bytes
[root@dj ~]#
3) When limit was set to 2GB, issue is not reproducible.
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 2.0GB 80% 2.1GB 0Bytes
4) When limit was set to 10GB, issue is not reproducible.
[root@dj ~]# gluster volume quota vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 10.0GB 80% 10.0GB 0Bytes
[root@dj ~]#
I tried the above steps on a slightly different setup. I created a 180GB 2*2 volume and set a hard limit of 15GB. I used a 15G limit because I thought a smaller hard limit might inflate the overshoot in percentage terms.
For data creation I increased the block size to 1MB:
"for i in `seq 1 200` ; do dd if=/dev/input_file of=file.$i bs=1MB count=$i ; done ;"
Here is what I found: the write exceeded the hard limit by about 0.6%.
[root@localhost mnt]# gluster volume quota test list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 15.0GB 80% 15.1GB 0Bytes
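The ~0.6% figure follows from the display-rounded values in the quota list output above (the CLI rounds to one decimal, so the true overshoot may differ slightly):

```python
# Overshoot computed from the display-rounded `quota list` values above.
used, limit = 15.1, 15.0  # GB
overshoot_pct = (used - limit) / limit * 100
print(round(overshoot_pct, 2))  # 0.67 with rounded inputs; reported as ~0.6%
```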
I used a binary compiled from the latest glusterfs-U1 source on 9th November.
Moving the known issues to the Doc team, to be documented in the release notes for U1.

I've documented this as a known issue in the Big Bend Update 1 Release Notes. Here is the link:
http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html

Documentation update required.
From: Creating files of different sizes leads to the violation of the quota hard limit.
To: When the quota hard-timeout is set to its default value of 30, the quota limit is checked once every 30 seconds, and during that 30-second window there is a possibility of the quota hard limit being exceeded. To enforce stricter checking of the quota limit, it is recommended to set the quota soft-timeout and hard-timeout to lower values so that the limit is checked more frequently and the possibility of exceeding the hard limit is reduced. |
Description of problem:
=======================
Quota exceeding more than 20% of the hard limit

[root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 631.2MB 0Bytes
[root@dj ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.4.0.35rhs-1.el6rhs.x86_64

How reproducible:
=================
Hit twice with the same steps

Steps to Reproduce:
===================
1. Created a new 6*2 volume (new-vol-dr)
2. Enabled quota
3. Set the quota limit to .5GB
4. Mounted on client (NFS)
5. Created some data from the mount point:
[root@tia nfs1]# cp -rf /etc .
[root@tia nfs1]# ls
etc
[root@tia nfs1]#
6. gluster volume quota new-vol-dr list
[root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 22.2MB 489.8MB
7. Removed the etc directory from the mount point:
[root@tia nfs1]# rm -rf *
[root@tia nfs1]# ls
[root@tia nfs1]#
8. [root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 0Bytes 512.0MB
[root@dj ~]#
9. Created a directory from the mount point: mkdir test_data_self_heal
[root@tia nfs1]# mkdir test_data_self_heal
10. Created files of different sizes in the test_data_self_heal directory:
[root@tia nfs1]# cd test_data_self_heal ; for i in `seq 1 100` ; do dd if=/dev/input_file of=file.$i bs=128K count=$i ; done ; cd ../
Note: /dev/input_file was created using dd if=/dev/urandom of=/dev/input_file bs=1M count=1024
11. File creation was successful
12. [root@tia nfs1]# du -sh *
632M test_data_self_heal
13.
The list command displays a deviation from the hard limit of more than 20%:
[root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 631.2MB 0Bytes
[root@dj ~]#
[root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 631.2MB 0Bytes
[root@dj ~]#

Actual results:
===============
[root@dj ~]# gluster volume quota new-vol-dr list
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 512.0MB 80% 631.2MB 0Bytes
[root@dj ~]#

Expected results:
=================
Used size should not deviate from the hard limit by 20%

Additional info:
================
[root@dj ~]# getfattr -d -m . -e hex /rhs/brick1/r1
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000006d00000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@dj ~]# getfattr -d -m . -e hex /rhs/brick1/r3
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r3
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000007a80000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@dj ~]# getfattr -d -m . -e hex /rhs/brick1/r5
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r5
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000008400000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@fan ~]# getfattr -d -m . -e hex /rhs/brick1/r2
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000555555547ffffffd
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000006d00000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@fan ~]# getfattr -d -m . -e hex /rhs/brick1/r4
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r4
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000007a80000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@fan ~]# getfattr -d -m . -e hex /rhs/brick1/r6
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r6
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaaaaaa8d5555551
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000008400000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@mia ~]# getfattr -d -m . -e hex /rhs/brick1/r7
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r7
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000061e0000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@mia ~]# getfattr -d -m . -e hex /rhs/brick1/r9
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r9
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000005920000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@mia ~]# getfattr -d -m . -e hex /rhs/brick1/r11
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r11
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000050c0000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@wingo ~]# getfattr -d -m . -e hex /rhs/brick1/r8
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r8
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000d5555552ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000061e0000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@wingo ~]# getfattr -d -m . -e hex /rhs/brick1/r10
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r10
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000002aaaaaa9
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000005920000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
[root@wingo ~]# getfattr -d -m . -e hex /rhs/brick1/r12
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/r12
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000002aaaaaaa55555553
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000020000000ffffffffffffffff
trusted.glusterfs.quota.size=0x00000000050c0000
trusted.glusterfs.volume-id=0xcc43bc700fd74114ae0864c9adae6b5e
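The hex xattrs in the Additional info section can be decoded to sanity-check the per-brick accounting. A minimal sketch, assuming (as the values suggest) that the first 8 bytes of trusted.glusterfs.quota.limit-set encode the hard limit and trusted.glusterfs.quota.size encodes the bytes consumed on that brick:

```python
def decode(hexstr: str) -> int:
    """Interpret a hex xattr dump as a big-endian unsigned integer."""
    return int(hexstr, 16)

# Hard limit: first 8 bytes of trusted.glusterfs.quota.limit-set (same on
# every brick in the getfattr output above).
hard_limit = decode("0x0000000020000000")
print(hard_limit / 2**20)  # 512.0 MiB, matching the configured .5GB limit

# trusted.glusterfs.quota.size, one value per replica pair: r1/r2, r3/r4,
# r5/r6 (dj/fan) and r7/r8, r9/r10, r11/r12 (mia/wingo).
sizes = ["0x06d00000", "0x07a80000", "0x08400000",
         "0x061e0000", "0x05920000", "0x050c0000"]
total = sum(decode(s) for s in sizes)
print(total / 2**20)  # 631.25 MiB, matching the 631.2MB shown by quota list
```

The per-brick sizes sum to ~631 MiB against a 512 MiB limit, confirming that the overshoot is already committed on disk rather than being a display artifact.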