Bug 1117283
| Field | Value |
|---|---|
| Summary | DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Rachana Patel <racpatel> |
| Component | distribute |
| Assignee | Susant Kumar Palai <spalai> |
| Status | CLOSED ERRATA |
| QA Contact | amainkar |
| Severity | medium |
| Docs Contact | |
| Priority | medium |
| Version | rhgs-3.0 |
| CC | nsathyan, spalai, ssamanta |
| Target Milestone | --- |
| Target Release | RHGS 3.0.0 |
| Hardware | x86_64 |
| OS | Linux |
| Whiteboard | |
| Fixed In Version | glusterfs-3.6.0.26-1.el6rhs |
| Doc Type | Bug Fix |
| Doc Text | Cause: If a file is not found on its cached subvolume, DHT proceeds with dht_lookup_everywhere. But because the linkto xattr is not added to the request dictionary, any linkto file encountered cannot be identified as such; it is treated as a regular file and the fop proceeds on it. Fix: Add the linkto xattr to the dictionary so that a linkto file is no longer identified as a regular file and, if it is stale, it is unlinked. |
| Story Points | --- |
| Clone Of | |
| | 1117923 (view as bug list) |
| Environment | |
| Last Closed | 2014-09-22 19:44:07 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1117923, 1138389, 1139170, 1139992 |
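For context, a DHT linkto (link) file is a zero-byte placeholder whose sticky-bit-only mode and trusted.glusterfs.dht.linkto xattr point at the subvolume that holds the data; it becomes stale when that data copy is gone. It can be inspected on a brick roughly like this (brick path and file name are illustrative):

```sh
# A linkto file shows up on the brick as a zero-byte entry with only the
# sticky bit set (---------T in ls output)
ls -l /brick/1/zile2

# Its trusted.glusterfs.dht.linkto xattr names the subvolume that holds
# (or, if the file is stale, used to hold) the data
getfattr -n trusted.glusterfs.dht.linkto -e text /brick/1/zile2
```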
Description (Rachana Patel, 2014-07-08 12:47:52 UTC)
Tried on glusterfs 3.6.0.27 to verify the issue.
Here are the steps:
1) Created a 2x2 (distributed-replicate) volume:
Volume Name: test1
Type: Distributed-Replicate
Volume ID: 5e206611-f6a3-4f88-8a4b-e4854264e805
Status: Started
Snap Volume: no
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.122.11:/brick/1
Brick2: 192.168.122.11:/brick/2
Brick3: 192.168.122.11:/brick/3
Brick4: 192.168.122.11:/brick/4
Options Reconfigured:
performance.readdir-ahead: on
auto-delete: disable
snap-max-soft-limit: 90
snap-max-hard-limit: 256
[root@vm11 brick]#
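The exact commands are not recorded in the report, but a volume like the one above could be created and mounted roughly as follows (brick paths and the /mnt1 mount point are taken from the listings in this report; "force" is assumed because all replica bricks sit on one host):

```sh
# Create a 2x2 distributed-replicate volume from four bricks on one host,
# then start it and mount it with the native client
gluster volume create test1 replica 2 \
    192.168.122.11:/brick/1 192.168.122.11:/brick/2 \
    192.168.122.11:/brick/3 192.168.122.11:/brick/4 force
gluster volume start test1
mount -t glusterfs 192.168.122.11:/test1 /mnt1
```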
2) Created a few files and renamed them so that linkto files would be created, and then explicitly unlinked the data files from the bricks (a sketch of these steps follows the brick listing below):
[root@vm11 brick]# ll *
1:
total 4
---------T. 2 root root 0 Aug 27 03:04 zile2
2:
total 4
---------T. 2 root root 0 Aug 27 03:04 zile2
3:
total 8
---------T. 2 root root 0 Aug 27 03:04 zile3
---------T. 2 root root 0 Aug 27 03:04 zile7
4:
total 8
---------T. 2 root root 0 Aug 27 03:04 zile3
---------T. 2 root root 0 Aug 27 03:04 zile7
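A sketch of how such stale linkto files can be produced (file names, the hashing outcome, and brick paths are illustrative assumptions; the exact commands are not recorded in the report):

```sh
# On the glusterfs mount: rename a file so that its new name hashes to a
# different replica pair than the one holding the data; DHT then leaves the
# data where it is and creates a ---------T linkto file on the new name's
# hashed pair (here assumed to be bricks 1/2)
cd /mnt1
touch file2
mv file2 zile2

# On the brick backend: remove the data copies directly, so only the linkto
# file remains and now points at nothing, i.e. it is stale
rm -f /brick/3/zile2 /brick/4/zile2
```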
3) Now, from the mount point, issued "touch zile{1..10}":
[root@vm11 mnt1]# ll
total 0
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile1
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile10
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile2
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile3
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile4
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile5
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile6
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile7
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile8
-rw-r--r--. 1 root root 0 Aug 27 03:07 zile9
[root@vm11 mnt1]#
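A backend check (not part of the recorded output) to confirm that the touch from the mount unlinked the stale linkto entries and created regular files:

```sh
# No ---------T entries should remain for these names; the bricks that now
# hold zile2 should show an ordinary zero-byte file instead
ls -l /brick/*/zile2
getfattr -d -m trusted.glusterfs.dht.linkto /brick/*/zile2
```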
Verified with glusterfs-3.6.0.28-1.el6rhs.x86_64; working as expected, hence moving to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html