Bug 1318093
| Summary: | [GSS] Client's App is having issues retrieving files from share 1002976973 | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Oonkwee Lim <olim> |
| Component: | quota | Assignee: | Raghavendra G <rgowdapp> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | storage-qa-internal <storage-qa-internal> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | rhgs-3.1 | CC: | amukherj, bkunal, hchen, olim, rgowdapp, rhs-bugs, rnalakka, skoduri, smohan, vbellur |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1318158 (view as bug list) | Environment: | |
| Last Closed: | 2017-11-06 22:34:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1318170, 1320925, 1320926, 1324040 | | |
| Bug Blocks: | 1318158, 1320024 | | |
Comment 3
Vijaikumar Mallikarjuna
2016-03-16 07:56:51 UTC
For the quota error "ctx for the node ... is NULL", we have submitted a patch upstream: http://review.gluster.org/#/c/13748/2

Bug# 1318170 has been filed to track the quota error "ctx for the node ... is NULL".

Hi Oonkwee,

There are 3 different issues mentioned in this bug:

Issue-1) Error in the brick logs: "ctx for the node ... is NULL". This is actually not an error and has no impact on quota accounting, so it can be ignored. Patch http://review.gluster.org/#/c/13748/ will fix this error message.

Issue-2) Quota list shows wrong usage. Somehow the quota accounting is miscalculated; I suspect this is the same issue as bug# 1240991.

Issue-3) You mentioned in comment# 6:
>> The client suspects there is some data corruption, hence, their application is
>> facing issues while retrieving files.
Quota only sets/gets metadata (xattrs). Could you please provide more information on what kind of data corruption is seen? I suspect that, because the usage is miscalculated as "16384.0PB", the quota enforcer may not be allowing writes to happen.

Thanks,
Vijay

Hi Oonkwee,

Regarding the quota usage shown, we have a couple of issues related to quota miscalculation fixed in 3.1.2. I will find the list of patches submitted for this and update the bug with them. Please let us know if you need a workaround to correct the quota usage.

Thanks,
Vijay

Hi Oonkwee,

We have a couple of fixes related to quota accounting:

Fixed in 3.1.1:
http://review.gluster.org/#/c/11863/
http://review.gluster.org/#/c/11995/
http://review.gluster.org/#/c/12032/
http://review.gluster.org/#/c/11403/

Fixed in 3.1.2:
http://review.gluster.org/#/c/11578/

Thanks,
Vijay

Hi Vijay,

They are open to a workaround for the quota issue if you can provide one.

Hi Oonkwee Lim,

What gluster version is the customer running? If they are running 3.1.0, multiple fixes for quota size miscalculation went into 3.1.1. Is there any plan for the customer to upgrade Gluster? If yes, it is best to apply the workaround after the upgrade.

Here is the workaround to correct the quota size (a consolidated sketch of steps 2-5 follows this comment):

1) Find a directory whose size is calculated incorrectly. In the description there is one directory showing an incorrect value:

| Path | Hard-limit | Soft-limit | Used | Available | Soft-limit exceeded? | Hard-limit exceeded? |
|---|---|---|---|---|---|---|
| /1002976973 | 3.0TB | 80% | 16384.0PB | 3.0TB | No | No |

2) Execute the command below on all the nodes and for all the bricks:

    #find /brickpath/1002976973/ -type d | xargs /usr/bin/setfattr -n trusted.glusterfs.quota.dirty -v 0x3100

3) Mount the volume with FUSE and the no-readdirp option:

    #mount -t glusterfs -o use-readdirp=no localhost:/volname /mnt

4) Send a lookup on all sub-directories of the affected directory:

    #find /mnt/1002976973 -type d -exec stat {} \;

5) After the lookup completes, verify that quota usage shows the correct values:

    #gluster volume quota volname list /1002976973

Thanks,
Vijay

Hi Bipin,

The workaround provided is for correcting the quota size. Regarding the data corruption, we are working on the RCA.

Thanks,
Vijay
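For convenience, the workaround steps above can be collected into one small script. This is only a rough sketch of the procedure described in this comment, under assumed placeholder values: the brick path /bricks/brick1, the temporary mount point /mnt/quota-fix and the volume name volname must be replaced with the real ones. The dirty-flag step still has to be run on every node, for every brick of the volume; the mount/lookup/list steps only need to run once from a single client.

```bash
#!/bin/bash
# Rough sketch of the quota-size correction workaround from the comment above.
# All three values below are placeholders; adjust them for the actual setup.
BRICK_PATH=/bricks/brick1   # brick mount point on this node (hypothetical)
VOLNAME=volname             # gluster volume name (hypothetical)
DIR=1002976973              # directory whose quota usage is miscalculated

# Step 2: mark every directory under the affected path as dirty so the quota
# crawler recalculates its size. Run this on every node, for every brick.
find "${BRICK_PATH}/${DIR}" -type d | xargs /usr/bin/setfattr -n trusted.glusterfs.quota.dirty -v 0x3100

# Step 3: mount the volume over FUSE with readdirp disabled.
mkdir -p /mnt/quota-fix
mount -t glusterfs -o use-readdirp=no "localhost:/${VOLNAME}" /mnt/quota-fix

# Step 4: send a lookup (stat) on every sub-directory of the affected directory.
find "/mnt/quota-fix/${DIR}" -type d -exec stat {} \; > /dev/null

# Step 5: verify that the reported usage is now correct, then clean up.
gluster volume quota "${VOLNAME}" list "/${DIR}"
umount /mnt/quota-fix
```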
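As an optional sanity check (not part of the workaround given in the comment), the on-brick quota xattrs of the affected directory can be dumped before and after the crawl to confirm that the dirty flag was set and that the accounted size changes. This again assumes the hypothetical brick path /bricks/brick1; the exact set of trusted.glusterfs.quota.* xattrs present depends on the gluster version.

```bash
# Dump all trusted.* xattrs (hex-encoded) of the affected directory on one brick.
# Look for trusted.glusterfs.quota.dirty and trusted.glusterfs.quota.size.
getfattr -d -m . -e hex /bricks/brick1/1002976973
```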