| Summary: | quota: list shows different result on different nodes |
|---|---|
| Product: | Red Hat Gluster Storage |
| Reporter: | Saurabh <saujain> |
| Component: | glusterd |
| Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA |
| QA Contact: | Saurabh <saujain> |
| Severity: | high |
| Priority: | medium |
| Version: | 2.1 |
| CC: | grajaiya, kparthas, mzywusko, pkarampu, rgowdapp, rhs-bugs, saujain, spandura, vagarwal, vbellur |
| Keywords: | ZStream |
| Hardware: | x86_64 |
| OS: | Linux |
| Fixed In Version: | glusterfs-3.4.0.34rhs |
| Doc Type: | Bug Fix |
| Cloned To: | 1035576 (view as bug list) |
| Last Closed: | 2013-11-27 15:31:49 UTC |
| Type: | Bug |
| Bug Blocks: | 1035576 |
The next time you hit this issue, could you provide the xattrs (and perhaps the logs as well) on each node? It would be difficult to root-cause the problem without them.

Saurabh, I am not able to get the sos reports; I am getting permission-denied errors: "You don't have permission to access /sosreports/998914/sosreport-rhsauto032-20130821013955-eca0.tar.xz on this server." Can you please tell me the volume configuration? Regards, Raghavendra.

The bug is not in the CLI, hence reassigning the bug to Raghavendra G.

After talking to Saurabh, updating this with steps to re-create the issue. The directory on which the quota limit is set should be partially filled to observe this issue. The full steps to re-create the issue are:
1) Create a 3x2 distributed-replicate volume
2) Enable quota
3) Set a volume quota limit of 1GB
4) Partially fill the directory
5) Check the quota list output on all the peers in the cluster
Expected result: the output should be the same on all of the nodes.

I tried the steps mentioned in comment 6. It is working fine.
On all the machines I see the following output:
root@pranith-vm2 - ~
04:52:54 :) ⚡ watch gluster volume quota r2 list
Every 2.0s: gluster volume quota r2 list Mon Sep 30 04:57:48 2013
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/ 1.0GB 80% 512.1MB 511.9MB
Since the issue is not re-creatable with the given steps, I am moving the bug to ON_QA.
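Step 5 of the reproduction recipe — comparing the quota list output collected from every peer — can be sketched as a small script. The `parse_quota_line` helper and the node names here are hypothetical; the sample row is copied from the output above.

```python
# Hypothetical check that every peer reports the same `quota list` row.
def parse_quota_line(line):
    # A data row has five whitespace-separated columns.
    path, hard, soft, used, avail = line.split()
    return {"path": path, "hard": hard, "soft": soft,
            "used": used, "avail": avail}

# Sample rows as collected from each peer (values from the report above).
node_outputs = {
    "node1": "/ 1.0GB 80% 512.1MB 511.9MB",
    "node2": "/ 1.0GB 80% 512.1MB 511.9MB",
}

rows = {node: parse_quota_line(out) for node, out in node_outputs.items()}
# The outputs are consistent when all parsed rows are identical.
consistent = len({tuple(sorted(r.items())) for r in rows.values()}) == 1
print(consistent)  # True when every peer reports the same row
```

With the buggy behaviour described in this report, the `Used` column would differ between peers and `consistent` would be `False`.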
I found a case where the quota sizes are shown differently, not just on two different nodes but also on the same node across multiple invocations of the "quota list" command.
Here are the steps:
1) Create a pure replicate volume r2
2) Start the volume and immediately kill one of the bricks
3) Enable quota and set limit-usage on / to, say, 1GB
4) Create a file of size 1M using "dd of=/mnt/r2/h if=/dev/zero bs=1M count=1"
5) Bring the brick back up; the quota size xattrs now differ between the two bricks, so quota list also gives different output on the two nodes
The xattrs from my run after performing the steps above are shown below.
Please note the quota.size xattrs before and after self-heal.
On bad brick:
root@pranithk-vm1 - /mnt/r2
16:06:35 :) ⚡ getfattr -d -m. -e hex /brick/r2_0/{,h}
getfattr: Removing leading '/' from absolute path names
# file: brick/r2_0/
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000000000000
trusted.glusterfs.volume-id=0x241701a0e2e3468f946a56149312cfc5
# file: brick/r2_0/h
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0xd3692e0073304d04b6ef62e80f347302
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000100000
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001
On good brick:
root@pranith-vm2 - ~
04:02:28 :) ⚡ getfattr -d -m. -e hex /brick/r2_1/{,h}
getfattr: Removing leading '/' from absolute path names
# file: brick/r2_1/
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000000100000
trusted.glusterfs.volume-id=0x241701a0e2e3468f946a56149312cfc5
# file: brick/r2_1/h
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0xd3692e0073304d04b6ef62e80f347302
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000100000
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001
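The mismatch is easiest to see by decoding the two quota.size values. A minimal sketch, assuming the xattr is a big-endian unsigned 64-bit byte count (the hex values are copied from the dumps above):

```python
import struct

def decode_quota_size(hexstr):
    # Interpret the xattr value as a big-endian unsigned 64-bit byte count.
    return struct.unpack(">Q", bytes.fromhex(hexstr[2:]))[0]

bad = decode_quota_size("0x0000000000000000")   # bad brick, before self-heal
good = decode_quota_size("0x0000000000100000")  # good brick
print(bad, good)  # 0 and 1048576 -- only the good brick accounts the 1MB write
```

The good brick's 0x100000 is exactly 1 MiB, matching the `dd ... bs=1M count=1` write, while the brick that was down still reports zero.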
After the fix:
root@pranithk-vm1 - /mnt/r2
15:46:40 :( ⚡ getfattr -d -m. -e hex /brick/r2_0/
getfattr: Removing leading '/' from absolute path names
# file: brick/r2_0/
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000000100000
trusted.glusterfs.volume-id=0x3a1dc83af3cf4a28a3257bf3f9364992
root@pranithk-vm1 - /mnt/r2
15:46:42 :) ⚡ getfattr -d -m. -e hex /brick/r2_0/h
getfattr: Removing leading '/' from absolute path names
# file: brick/r2_0/h
trusted.afr.r2-client-0=0x000000000000000000000000
trusted.afr.r2-client-1=0x000000000000000000000000
trusted.gfid=0x37b7ebcd4ad3493ab0d5dec4a5ff7ebb
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000100000
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001
Patch is at https://code.engineering.redhat.com/gerrit/13718
Patch on u1: https://code.engineering.redhat.com/gerrit/13720

This issue still exists. Re-created the issue on build "glusterfs 3.4.0.35rhs built on Oct 15 2013 14:06:04".
Output from the quota list command:
==================================
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Source node:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
root@rhs-client13 [Oct-17-2013-12:05:19] >gluster v quota vol_dis_rep list /user2
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/user2 1.0GB 80% 2.3GB 0Bytes
root@rhs-client13 [Oct-17-2013-12:05:05] >getfattr -d -e hex -m . /rhs/bricks/brick7/user2
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/brick7/user2
trusted.afr.vol_dis_rep-client-6=0x000000000000000000000000
trusted.afr.vol_dis_rep-client-7=0x000000000000000000000000
trusted.gfid=0xb4dba3d0d212441e8a3f1f7c57e56cda
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000019800000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000019800000
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Sink node:
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
root@rhs-client14 [Oct-17-2013-12:06:14] >gluster v quota vol_dis_rep list /user2
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/user2 1.0GB 80% 1.9GB 0Bytes
root@rhs-client14 [Oct-17-2013-12:04:59] >getfattr -d -e hex -m . /rhs/bricks/brick8/user2
getfattr: Removing leading '/' from absolute path names
# file: rhs/bricks/brick8/user2
trusted.afr.vol_dis_rep-client-6=0x000000000000000000000000
trusted.afr.vol_dis_rep-client-7=0x000000000000000000000000
trusted.gfid=0xb4dba3d0d212441e8a3f1f7c57e56cda
trusted.glusterfs.dht=0x00000001000000007ffffffeaaaaaaa7
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri=0x0000000000000000
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.limit-set=0x0000000040000000ffffffffffffffff
trusted.glusterfs.quota.size=0x0000000000000000
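The limit-set xattr shared by both dumps can be decoded the same way. This sketch assumes the 16-byte value packs two big-endian 64-bit fields — the hard limit in bytes, then a field that reads as -1 when unset — an interpretation inferred from the dumps, not from the GlusterFS source:

```python
import struct

def decode_limit_set(hexstr):
    # Assumed layout: big-endian unsigned hard limit (bytes), then a
    # signed field that is -1 (all 0xff) when not set.
    hard, soft = struct.unpack(">Qq", bytes.fromhex(hexstr[2:]))
    return hard, soft

hard, soft = decode_limit_set("0x0000000040000000ffffffffffffffff")
print(hard, soft)  # 1073741824 (the 1GB hard limit) and -1 (unset)
```

Both nodes agree on the 1GB limit; it is only quota.size (0x19800000 on the source versus 0x0 on the sink) that diverges, which is why `Used` differs in the two `quota list` outputs above.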
Moving the bug back to assigned state.
hi Shwetha,
Steps to re-create the bug seem to differ according to the quota sizes of the directory. Could you specify the steps you used to re-create the bug above?
Looking at the quota sizes in the comment above, this seems more like bug 1001556, for which a build is yet to be provided.
Pranith.
Executed the case mentioned in comment 8 on build "glusterfs 3.4.0.36rhs built on Oct 22 2013 10:56:18". The issue no longer exists; moving the bug to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
Description of problem:
The list command displays different output on different nodes of the cluster.

Version-Release number of selected component (if applicable):
glusterfs-server-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-fuse-3.4.0.20rhsquota1-1.el6.x86_64
glusterfs-3.4.0.20rhsquota1-1.el6.x86_64

How reproducible:
Happening on this build.

Actual results:
On node2, node3, node4:
[root@rhsauto033 ~]# gluster volume quota dist-rep2 list /dir1/dir3
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir1/dir3 1.0GB 80% 1023.6MB 384.0KB

On node1 (from this node the NFS mount is done on a separate client):
[root@rhsauto032 ~]# gluster volume quota dist-rep2 list /dir1/dir3
Path Hard-limit Soft-limit Used Available
--------------------------------------------------------------------------------
/dir1/dir3 1.0GB 80% 1023.8MB 256.0KB

Expected results:
The information should be the same on all nodes.

Additional info: