Bug 877988
Summary: | creating hard link fails with 'File exists' message even though the hard link file doesn't exist. | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | spandura |
Component: | glusterfs | Assignee: | Anjana Suparna Sriram <asriram> |
Status: | CLOSED WONTFIX | QA Contact: | spandura |
Severity: | unspecified | Docs Contact: | |
Priority: | medium | ||
Version: | 2.0 | CC: | aavati, psriniva, rhs-bugs, sdharane, shaines, vbellur |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Known Issue
Doc Text: |
Entry operations on replicated volumes may misbehave when the md-cache module is enabled in the
volume graph.
For example: when one brick of a replica pair is down while the other is up, an application
performing a hardlink call (link()) may receive an EEXIST error even though the link target does
not exist.
Workaround: Execute this command to avoid the issue:
gluster volume set VOLNAME stat-prefetch off
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2012-12-18 10:01:14 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
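For reference, the Doc Text's failure mode can be contrasted with a *genuine* EEXIST. The sketch below is a local illustration only (plain coreutils `ln` on a temporary directory, no Gluster involved): it shows the error string an application sees when the target really exists. The bug is that Gluster returns the same error even when the target does not exist.

```shell
# Local illustration only: a genuine EEXIST from ln.
# In bug 877988 the same error appears even when "$tmp/b" does NOT exist.
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"                    # first hardlink succeeds
if ! ln "$tmp/a" "$tmp/b" 2>"$tmp/err"; then
    cat "$tmp/err"                      # prints ln's 'File exists' error
fi
rm -rf "$tmp"
```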
Description
spandura
2012-11-19 11:29:00 UTC
Seems to be a meta-data cache (md-cache) translator issue. If we disable stat-prefetch, which in turn disables md-cache, ln succeeds. Remove the commented-out stat-prefetch volume set line to see the issue. I automated a script for ease of re-creating the issue:

HOSTNAME=`hostname`
mkdir /mnt/r2
mkdir /gfs
gluster --mode=script volume create r2 replica 2 $HOSTNAME:/gfs/r2_0 $HOSTNAME:/gfs/r2_1
gluster --mode=script volume start r2
gluster --mode=script volume set r2 client-log-level DEBUG
gluster --mode=script volume set r2 brick-log-level DEBUG
#gluster --mode=script volume set r2 stat-prefetch off
mount -t glusterfs $HOSTNAME:/r2 /mnt/r2
cd /mnt/r2
mkdir /mnt/r2/testdir
dd if=/dev/urandom of=/mnt/r2/testdir/a bs=1k count=1024
kill -9 $(cat /var/lib/glusterd/vols/r2/run/$HOSTNAME-gfs-r2_0.pid)
ln /mnt/r2/testdir/a /mnt/r2/testdir/b
gluster --mode=script volume start r2 force && kill -9 $(cat /var/lib/glusterd/vols/r2/run/$HOSTNAME-gfs-r2_1.pid)
sleep 1
ln /mnt/r2/testdir/a /mnt/r2/testdir/b

Here is the run with stat-prefetch disabled:

[root@pranithk-laptop ~]# glusterd && bash -x /home/pranithk/workspace/rhs-glusterfs/877988.sh
++ hostname
+ HOSTNAME=pranithk-laptop
+ mkdir /mnt/r2
mkdir: cannot create directory `/mnt/r2': File exists
+ mkdir /gfs
mkdir: cannot create directory `/gfs': File exists
+ gluster --mode=script volume create r2 replica 2 pranithk-laptop:/gfs/r2_0 pranithk-laptop:/gfs/r2_1
Creation of volume r2 has been successful. Please start the volume to access data.
+ gluster --mode=script volume start r2
Starting volume r2 has been successful
+ gluster --mode=script volume set r2 client-log-level DEBUG
Set volume successful
+ gluster --mode=script volume set r2 brick-log-level DEBUG
Set volume successful
+ gluster --mode=script volume set r2 stat-prefetch off
Set volume successful
+ mount -t glusterfs pranithk-laptop:/r2 /mnt/r2
+ cd /mnt/r2
+ mkdir /mnt/r2/testdir
+ dd if=/dev/urandom of=/mnt/r2/testdir/a bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.13735 s, 7.6 MB/s
++ cat /var/lib/glusterd/vols/r2/run/pranithk-laptop-gfs-r2_0.pid
+ kill -9 31244
+ ln /mnt/r2/testdir/a /mnt/r2/testdir/b
+ gluster --mode=script volume start r2 force
Starting volume r2 has been successful
++ cat /var/lib/glusterd/vols/r2/run/pranithk-laptop-gfs-r2_1.pid
+ kill -9 31250
+ sleep 1
+ ln /mnt/r2/testdir/a /mnt/r2/testdir/b
[root@pranithk-laptop ~]# ls -l /mnt/r2/testdir/
total 2046
-rw-r--r-- 2 root root 1047552 Nov 20 12:26 a
-rw-r--r-- 2 root root 1047552 Nov 20 12:26 b

Here is the run with stat-prefetch enabled:

[root@pranithk-laptop ~]# glusterd && bash -x /home/pranithk/workspace/rhs-glusterfs/877988.sh
++ hostname
+ HOSTNAME=pranithk-laptop
+ mkdir /mnt/r2
mkdir: cannot create directory `/mnt/r2': File exists
+ mkdir /gfs
mkdir: cannot create directory `/gfs': File exists
+ gluster --mode=script volume create r2 replica 2 pranithk-laptop:/gfs/r2_0 pranithk-laptop:/gfs/r2_1
Creation of volume r2 has been successful. Please start the volume to access data.
+ gluster --mode=script volume start r2
Starting volume r2 has been successful
+ gluster --mode=script volume set r2 client-log-level DEBUG
Set volume successful
+ gluster --mode=script volume set r2 brick-log-level DEBUG
Set volume successful
+ mount -t glusterfs pranithk-laptop:/r2 /mnt/r2
+ cd /mnt/r2
+ mkdir /mnt/r2/testdir
+ dd if=/dev/urandom of=/mnt/r2/testdir/a bs=1k count=1024
1024+0 records in
1024+0 records out
1048576 bytes (1.0 MB) copied, 0.136642 s, 7.7 MB/s
++ cat /var/lib/glusterd/vols/r2/run/pranithk-laptop-gfs-r2_0.pid
+ kill -9 31531
+ ln /mnt/r2/testdir/a /mnt/r2/testdir/b
+ gluster --mode=script volume start r2 force
Starting volume r2 has been successful
++ cat /var/lib/glusterd/vols/r2/run/pranithk-laptop-gfs-r2_1.pid
+ kill -9 31538
+ sleep 1
+ ln /mnt/r2/testdir/a /mnt/r2/testdir/b
ln: failed to create hard link `/mnt/r2/testdir/b': File exists
[root@pranithk-laptop ~]# ls -l /mnt/r2/testdir
total 1023
-rw-r--r-- 1 root root 1047552 Nov 20 12:28 a
[root@pranithk-laptop ~]#

Pranith, thanks for the quick turn-around time on this.

Avati, do you think this should be marked as a KnownIssue and closed as WONTFIX, since a workaround exists? Let me know what you think. I don't see any particular dev effort needed for this bug.

Updated the doc text. Avati, need your input on this.
--------
Cause: With the 'md-cache' translator in the graph, entry operations may misbehave when the replicate bricks are not all connected (i.e., one brick is down while the other is up).
Consequence: A link() (hardlink) call returns an EEXIST error.
Workaround (if any): 'gluster volume set <VOL> stat-prefetch off'
Result: The issue is not seen.
--------
Please re-open if the workaround is not working :-)

This is documented as a known issue for the Big Bend Update 1 Release Notes.
Here is the link: http://documentation-devel.engineering.redhat.com/docs/en-US/Red_Hat_Storage/2.1/html/2.1_Update_1_Release_Notes/chap-Documentation-2.1_Update_1_Release_Notes-Known_Issues.html
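As a closing aside (not from the bug report): until the workaround is applied, an application can defend itself by double-checking a 'File exists' failure from link(), since this bug returns EEXIST even when the target is absent. A hypothetical sketch — `safe_link` is an illustrative name, not an existing helper:

```shell
# Hypothetical wrapper (illustration only, not part of the bug report):
# on a hardlink failure, distinguish a real EEXIST from a spurious one
# by checking whether the destination actually exists.
safe_link() {
    src=$1 dst=$2
    if ln "$src" "$dst" 2>/dev/null; then
        return 0
    fi
    if [ -e "$dst" ]; then
        echo "target genuinely exists: $dst"
    else
        echo "spurious EEXIST: $dst is absent (stale md-cache?)"
    fi
    return 1
}
```

On a healthy local filesystem only the "genuinely exists" branch can fire; the "spurious" branch corresponds to the md-cache behaviour described in this bug.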