Bug 1575553
| Field | Value |
|---|---|
| Summary | [geo-rep]: [Errno 39] Directory not empty |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Rochelle <rallan> |
| Component | distribute |
| Assignee | Nithya Balachandran <nbalacha> |
| Status | CLOSED INSUFFICIENT_DATA |
| QA Contact | Prasad Desala <tdesala> |
| Severity | high |
| Docs Contact | |
| Priority | unspecified |
| Version | rhgs-3.4 |
| CC | csaba, khiremat, rallan, rgowdapp, rhs-bugs, sankarshan, sheggodu, storage-qa-internal, vdas |
| Target Milestone | --- |
| Keywords | Reopened |
| Target Release | --- |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | If docs needed, set a value |
| Doc Text | |
| Story Points | --- |
| Clone Of | |
| | 1579615 (view as bug list) |
| Environment | |
| Last Closed | 2019-04-03 04:40:33 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1661258 |
| Bug Blocks | 1579615 |
Description (Rochelle, 2018-05-07 09:44:01 UTC)
Can we try to reproduce this issue with the following two options set to the values specified? Since the issue is seen on the slave, please set these options on the slave volume (a command sketch is appended at the end of this report):

* diagnostics.client-log-level set to TRACE
* diagnostics.brick-log-level set to TRACE

Please attach the brick and client logs to the bug.

Rochelle, is it possible to provide the debug data asked for in the previous email?

regards,
Raghavendra

Since it is a race and not much can be found from the sos reports, there is no way to debug this issue other than code analysis. I need the following information whenever the issue is hit (a collection sketch is appended at the end of this report):

1. ls -l of the problematic directory on the mount point
2. ls -l of the problematic directory on all bricks
3. all extended attributes of the problematic directory on all bricks
4. all extended attributes of any children of the problematic directory on all bricks

Since the automation run clears everything, there is no way to get this data after the fact. It would therefore be of great help if we could capture the above information through instrumentation either in the automation framework or in gsyncd.

Though I am planning to spend some cycles analysing the related code in DHT (my hypothesis is that a deleted subdirectory is recreated due to a race and is then not visible in a readdir on the parent directory issued from the mount), I am not very hopeful that this will yield any positive results. We have recently fixed such races, and my previous attempts at finding loopholes in the synchronization algorithm did not turn up anything.

*** This bug has been marked as a duplicate of bug 1661258 ***

Also see bz 1458215
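A minimal sketch of the log-level change requested above, assuming the slave volume is named `slave-vol` (the volume name is a placeholder; the two option names are the ones quoted in the comments):

```
# Run on a node of the slave cluster; "slave-vol" is a placeholder volume name.
# Raise client (mount side) and brick log verbosity to TRACE on the slave volume.
gluster volume set slave-vol diagnostics.client-log-level TRACE
gluster volume set slave-vol diagnostics.brick-log-level TRACE

# After the failure has been captured, reset both options to their defaults.
gluster volume reset slave-vol diagnostics.client-log-level
gluster volume reset slave-vol diagnostics.brick-log-level
```

TRACE logging is very verbose, so it is worth resetting the options as soon as the reproduction run is finished.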
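And a minimal sketch of collecting the four items of debug data listed above, assuming the slave volume is mounted at `/mnt/slave`, the bricks live under `/rhgs/brick*/slave-vol`, and the problematic directory is `dir1`; all of these paths are placeholders for the actual layout:

```
# 1. Listing of the problematic directory as seen through the mount point.
ls -l /mnt/slave/dir1

# Run the following on every node hosting a brick of the slave volume.
# 2. Listing of the directory directly on each brick.
ls -l /rhgs/brick*/slave-vol/dir1

# 3. All extended attributes of the directory on each brick.
getfattr -d -m . -e hex /rhgs/brick*/slave-vol/dir1

# 4. All extended attributes of the directory's children on each brick.
getfattr -d -m . -e hex /rhgs/brick*/slave-vol/dir1/*
```

Capturing these in the automation framework's failure handler, before cleanup runs, would preserve the state the comments above are asking for.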