Bug 845330
| Summary: | RHS volume mounted as NFS causing a lot of readdir loop messages | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Veda Shankar <veshanka> |
| Component: | glusterfs | Assignee: | Vivek Agarwal <vagarwal> |
| Status: | CLOSED CANTFIX | QA Contact: | Sudhir D <sdharane> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.0 | CC: | bcompton, gluster-bugs, sankarshan, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| Clones: | 858436 (view as bug list) | Environment: | |
| Last Closed: | 2013-07-12 16:44:29 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 858436 | | |
Description (Veda Shankar, 2012-08-02 16:56:21 UTC)
want to double check if the backend is XFS. If it's ext4, there is a possibility of running into the 64-bit d_off issue. Kris, can you have a look at this one and help Veda with a solution?

The backend is XFS.

Could be related to https://bugzilla.redhat.com/show_bug.cgi?id=770250

Veda, we had hit this too in our testing: https://bugzilla.redhat.com/show_bug.cgi?id=814052, pointing to bug 790729. Can you see if he is seeing this on RHEL 6.2 clients only, or on other clients too?

1) The RHEL bug report says that this error is fixed in kernel-2.6.32-235.el6. The customer is seeing these messages even on clients running a newer kernel, 2.6.32-279.1.1.

2) The customer sees these errors on both Red Hat 6.2 and 6.3 clients.

3) On clients running Red Hat 5.5, we see the following error messages:

lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1
lockd: unexpected unlock status: 1

(In reply to comment #6)
> 1) The RHEL bug report says that this error is fixed in kernel-2.6.32-235.el6.
> The customer is seeing these messages even on clients running a newer kernel,
> 2.6.32-279.1.1.
>
> 2) The customer sees these errors on both Red Hat 6.2 and 6.3 clients.
>
> 3) On clients running Red Hat 5.5, we see the following error messages:
> lockd: unexpected unlock status: 1
> [repeated eight times]

So the customer has not seen the readdir loop error on Red Hat 5.5?
Can you check the NFS log on the server side for more info on the lockd/unlock error messages?
Do you know of a way to reproduce the readdir loop error in-house?
Do you know what application runs on the mount point to cause this error?
On the mount point, does doing "ls" in the affected directory (/simon) result in an error? If so, can you do "echo 3 > /proc/sys/vm/drop_caches" and check whether "ls" still sees the problem?

Hi Krishna, could you please respond to item #1 as to why this happens even with the newer version of the kernel?

Veda, because it might be a bug in glusterfs and not in the kernel. We need to dig deeper to find out (hence my questions in the previous comment).

We need more info on my previous questions:
------------
So the customer has not seen the readdir loop error on Red Hat 5.5?
Can you check the NFS log on the server side for more info on the lockd/unlock error messages?
Do you know of a way to reproduce the readdir loop error in-house?
Do you know what application runs on the mount point to cause this error?
On the mount point, does doing "ls" in the affected directory (/simon) result in an error? If so, can you do "echo 3 > /proc/sys/vm/drop_caches" and check whether "ls" still sees the problem?
------------

If this is really a bug in glusterfs, there is one situation that might cause it: suppose a readdirp is in progress, and in the middle of it we delete and create a file. If the new file gets the same offset as the deleted file (which was already read during the readdirp), we might see this behavior. But I am not sure; we need a way to reproduce this bug to be sure.

Based on discussion with Sayan and Scott, closing this.
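A minimal sketch of the checks asked for in the thread, assuming placeholder paths /bricks/brick1 for the server-side brick and /mnt/simon for the client NFS mount (neither path comes from this report):

```bash
#!/bin/bash
# Diagnostic sketch for the readdir-loop report.
# BRICK and MNT are assumed placeholders, not taken from this bug.
BRICK=/bricks/brick1   # backend brick directory on the RHS server (assumed)
MNT=/mnt/simon         # NFS mount point on the client (assumed)

# Confirm the brick backend really is XFS; ext4 would point at the
# 64-bit d_off issue mentioned in the first comment.
df -T "$BRICK"

# Compare the client kernel against the fix cited in the RHEL bug
# (kernel-2.6.32-235.el6 or newer should already carry the readdir fix).
uname -r

# List the affected directory, drop caches, list again, and check
# whether the readdir-loop message reappears in the kernel log.
ls "$MNT" > /dev/null
echo 3 > /proc/sys/vm/drop_caches
ls "$MNT" > /dev/null
dmesg | grep -i readdir | tail -n 5
```

If the message survives the cache drop, that points away from stale client-side caching and toward the server's readdir replies.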
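The d_off-reuse hypothesis in the closing comment also suggests a possible in-house reproducer: keep readdir traffic running over the NFS mount while files in the same directory are deleted and recreated, hoping a new file lands on an offset already consumed by the reader. A sketch under that assumption, again using the placeholder mount path:

```bash
#!/bin/bash
# Hypothetical reproducer for the d_off-reuse theory; MNT is an
# assumed placeholder, not a path from this report.
MNT=/mnt/simon
DIR="$MNT/readdir-test"

mkdir -p "$DIR"
for i in $(seq 1 1000); do touch "$DIR/f$i"; done

# Reader: list the directory in a loop (ls -f skips sorting, so
# entries stream back in raw readdir order).
( while true; do ls -f "$DIR" > /dev/null; done ) &
reader=$!

# Writer: delete and immediately recreate random files while the
# reader is mid-stream.
for round in $(seq 1 200); do
    n=$((RANDOM % 1000 + 1))
    rm -f "$DIR/f$n"
    touch "$DIR/f$n"
done

kill "$reader"
dmesg | grep -i readdir | tail -n 5
```

If the "readdir loop" message only appears while the writer loop runs, that would support the offset-reuse theory rather than a client kernel regression.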