Bug 996200 - AFR : stat on a file from fuse mount reports "No such file or directory" when a brick goes offline and comes back online [NEEDINFO]
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Ravishankar N
QA Contact: spandura
Depends On:
Blocks:
 
Reported: 2013-08-12 11:59 EDT by spandura
Modified: 2016-09-17 08:11 EDT
CC: 8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-03 12:13:29 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ravishankar: needinfo? (spandura)


Attachments: None
Description spandura 2013-08-12 11:59:57 EDT
Description of problem:
========================
When a brick goes offline and comes back online, "stat <file_name>" and "ls <file_name>" from a fuse mount report "No such file or directory" for a file that exists on the volume.

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.18rhs built on Aug  7 2013 08:02:45


How reproducible:
================
Often

Steps to Reproduce:
===================
1. Create a replicate volume (1 x 2) and start it.

2. Create 2 fuse mounts.

3. From fuse_mount1, create a file: "dd if=/dev/urandom of=test_file bs=1M count=1"

4. From fuse_mount2, ls/stat the file: "stat test_file"

5. Capture the brick1 process information: "ps -ef | grep <brick1>"

6. Kill brick1.

7. From fuse_mount1, remove the file and recreate it with the same file name.

8. Restart the brick process:

Example: "/usr/sbin/glusterfsd -s king --volfile-id vol_rep_2.king.rhs-bricks-vol_rep_2_b0 -p /var/lib/glusterd/vols/vol_rep_2/run/king-rhs-bricks-vol_rep_2_b0.pid -S /var/run/490b794d8ab69336c9c23eed09b4f1d8.socket --brick-name /rhs/bricks/vol_rep_2_b0 -l /var/log/glusterfs/bricks/rhs-bricks-vol_rep_2_b0.log --xlator-option *-posix.glusterd-uuid=8abd3f8f-1776-425c-b602-77a56726b804 --brick-port 49155 --xlator-option vol_rep_2-server.listen-port=49155"

9. From fuse_mount2, ls/stat the file again: "stat test_file"
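The steps above can be consolidated into one script. This is a minimal sketch, not part of the original report: the volume name, brick paths, and mount points are assumptions taken loosely from the glusterfsd example line, so adjust them for your setup before running.

```shell
#!/usr/bin/env bash
# Hypothetical reproduction script for the steps above.
# VOLNAME, HOST, BRICK1/BRICK2, MNT1/MNT2 are placeholders, not values from the bug.
VOLNAME=vol_rep_2
HOST=king
BRICK1=/rhs/bricks/vol_rep_2_b0
BRICK2=/rhs/bricks/vol_rep_2_b1
MNT1=/mnt/fuse_mount1
MNT2=/mnt/fuse_mount2

reproduce() {
    # 1-2. Create and start a 1x2 replicate volume, then mount it twice over fuse.
    gluster volume create "$VOLNAME" replica 2 "$HOST:$BRICK1" "$HOST:$BRICK2" force
    gluster volume start "$VOLNAME"
    mount -t glusterfs "$HOST:/$VOLNAME" "$MNT1"
    mount -t glusterfs "$HOST:/$VOLNAME" "$MNT2"

    # 3-4. Create the file from mount1 and stat it from mount2.
    dd if=/dev/urandom of="$MNT1/test_file" bs=1M count=1
    stat "$MNT2/test_file"

    # 5-6. Find the brick1 process and kill it (pgrep lookup is an assumption;
    # the report captures the PID with "ps -ef | grep <brick1>").
    kill "$(pgrep -f "glusterfsd.*$BRICK1")"

    # 7. Remove and recreate the file while brick1 is down.
    rm -f "$MNT1/test_file"
    dd if=/dev/urandom of="$MNT1/test_file" bs=1M count=1

    # 8. Restart the brick. The report relaunches glusterfsd by hand with the
    # captured command line; "volume start force" is a common shortcut.
    gluster volume start "$VOLNAME" force

    # 9. stat again from mount2 -- this is where ENOENT is reported.
    stat "$MNT2/test_file"
}

# Only attempt the reproduction when the gluster CLI is actually present.
if command -v gluster >/dev/null 2>&1; then
    reproduce
fi
```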

Actual results:
=================
root@darrel [Aug-12-2013-21:15:42] >stat test_file
stat: cannot stat `test_file': No such file or directory
root@darrel [Aug-12-2013-21:15:43] >ls test_file
ls: cannot access test_file: No such file or directory
root@darrel [Aug-12-2013-21:28:09] >

Expected results:
==================
stat/ls should be successful. 

Additional info:
================
Tested the case with "stat-prefetch" set to "off"; the test case still fails.
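For reference, stat-prefetch (the md-cache translator) is toggled per volume with "gluster volume set". A minimal sketch of the toggle used in the note above; the volume name is a placeholder:

```shell
# Hypothetical helper: set the stat-prefetch option on a volume, as was done
# for the "Additional info" test. The volume name below is a placeholder.
set_stat_prefetch() {
    local volname=$1 state=$2   # state is "on" or "off"
    gluster volume set "$volname" performance.stat-prefetch "$state"
}

# Usage, guarded so it only runs where the gluster CLI exists:
if command -v gluster >/dev/null 2>&1; then
    set_stat_prefetch vol_rep_2 off
fi
```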
Comment 2 Ravishankar N 2013-10-22 02:49:13 EDT
Hi Shwetha, I tried this on 3.4.0.35.1u2rhs and was not able to reproduce the issue. Could you please see if the issue still occurs with the latest release?
Comment 3 spandura 2013-11-11 07:05:05 EST
Hi Ravi,

I am able to recreate this issue on the build "glusterfs 3.4.0.35.1u2rhs built on Oct 21 2013 14:00:58".
Comment 4 spandura 2013-11-11 07:20:18 EST
Mount 1 output:-
++++++++++++++++

root@rhs-client14 [Nov-11-2013-12:15:27] >dd if=/dev/urandom of=test_file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.192468 s, 5.4 MB/s
root@rhs-client14 [Nov-11-2013-12:15:35] >rm -rf *
root@rhs-client14 [Nov-11-2013-12:16:18] >dd if=/dev/urandom of=test_file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.200894 s, 5.2 MB/s


Mount 2 Output:-
++++++++++++++++
root@rhs-client14 [Nov-11-2013-12:15:43] >ls 
test_file
root@rhs-client14 [Nov-11-2013-12:15:44] >stat test_file
  File: `test_file'
  Size: 1048576         Blocks: 2048       IO Block: 131072 regular file
Device: 1eh/30d Inode: 11399896548514473629  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-11-11 12:15:34.886859477 +0000
Modify: 2013-11-11 12:15:35.079853562 +0000
Change: 2013-11-11 12:15:37.147791075 +0000
root@rhs-client14 [Nov-11-2013-12:15:46] >
root@rhs-client14 [Nov-11-2013-12:15:49] >
root@rhs-client14 [Nov-11-2013-12:16:44] >stat test_file
stat: cannot stat `test_file': No such file or directory
Comment 6 Vivek Agarwal 2014-02-18 03:58:09 EST
Marking it to test with Denali
Comment 9 Ravishankar N 2014-06-23 06:28:26 EDT
Hi Shwetha, could you please check if this issue is still happening in RHS 3.0?
Comment 12 Vivek Agarwal 2015-12-03 12:13:29 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
