| Summary: | Add-brick unable to self-heal entry/data of files on new brick | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Shwetha Panduranga <shwetha.h.panduranga> |
| Component: | replicate | Assignee: | Kaushal <kaushal> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | mainline | CC: | gluster-bugs, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-02-28 08:08:55 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | | | |
This is expected behaviour with the stat-prefetch/md-cache xlator enabled. `ls -lR` does a readdir() on the given directory and then a stat() on each file to get its attributes. With stat-prefetch/md-cache in the graph, the readdir() call is converted to a readdirp() call, which returns the stats of the files along with the directory entries, and those stats are cached by the stat-prefetch/md-cache xlator. The converted call reaches the afr xlator, which detects that self-heal is needed on the given directory and performs the entry self-heal on it. But when `ls` subsequently calls stat() on the files, the stats are served from the stat-prefetch/md-cache cache, so the stat() calls never reach the afr xlator. afr therefore never learns that the files are out of sync and does not perform the data self-heal.

Turning off stat-prefetch/md-cache solves this problem of `ls -lR` not triggering data self-heal.

Closing, as this is a known issue. The reasoning and a workaround are given in the comment above.
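The workaround described above can be sketched as the following CLI session (the volume option name `performance.stat-prefetch` and the mount point `/mnt/test_vol` are assumptions for illustration; check the option name against `gluster volume set help` on your version):

```shell
# Disable the stat-prefetch/md-cache xlator so that stat() calls
# are no longer served from the cache and reach the afr xlator.
# (Option name assumed; verify with `gluster volume set help`.)
gluster volume set test_vol performance.stat-prefetch off

# Re-run the recursive lookup from the client mount point
# (assumed to be /mnt/test_vol); with the cache off, the stat()
# calls reach afr, which can now trigger data self-heal.
ls -lR /mnt/test_vol
```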
Created attachment 562196 [details]
Attaching client log

Description:
*) The volume was created with one brick.
*) Performed file/dir operations from the mount point.
*) Added one more brick to make it a replicate volume.
*) Performed a lookup from the mount point.
*) The lookup did not self-heal the files on the new brick.

Version-Release number of selected component (if applicable): mainline

How reproducible: often

Steps to Reproduce:

Server1:
--------
1. gluster volume create test_vol <brick1>
2. gluster volume start test_vol
3. gluster volume set test_vol self-heal-daemon off

Client:
--------
4. Mount the volume.
5. Create files/directories.

Server1:
--------
6. gluster volume add-brick test_vol replica 2 <brick2>

Client:
--------
7. ls -lR (on the mount point)

Actual results: Files are not self-healed (neither entry nor data self-heal happens); only directories are self-healed.

Expected results: Both directories and file data should be self-healed.

Additional info: With self-heal-daemon on, the entry and data self-heal of files completes.
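The steps above can be sketched as a single shell session. The brick paths, host names, and mount point below are placeholders invented for illustration (the original report leaves `<brick1>` and `<brick2>` unspecified), and the `self-heal-daemon` option name is assumed for this gluster version:

```shell
# Server1: create and start a single-brick volume, then disable the
# self-heal daemon so healing can only happen via client-side lookups.
# (Brick paths and hostnames are placeholders.)
gluster volume create test_vol server1:/export/brick1
gluster volume start test_vol
gluster volume set test_vol self-heal-daemon off

# Client: mount the volume (mount point assumed) and create data.
mount -t glusterfs server1:/test_vol /mnt/test_vol
mkdir -p /mnt/test_vol/dir1
dd if=/dev/urandom of=/mnt/test_vol/dir1/file1 bs=1M count=10

# Server1: convert to a 1x2 replicate volume by adding a second brick.
gluster volume add-brick test_vol replica 2 server2:/export/brick2

# Client: trigger lookups. Per the report, directories are healed onto
# the new brick, but entry/data self-heal of files does not happen.
ls -lR /mnt/test_vol
```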