Description of problem:
========================
When a brick goes offline and comes back online, stat/ls on <file_name> reports "No such file or directory".

Version-Release number of selected component (if applicable):
===============================================================
glusterfs 3.4.0.18rhs built on Aug 7 2013 08:02:45

How reproducible:
================
Often

Steps to Reproduce:
===================
1. Create a replicate volume (1 x 2) and start it.
2. Create 2 FUSE mounts.
3. From fuse_mount1, create a file: "dd if=/dev/urandom of=test_file bs=1M count=1"
4. From fuse_mount2, ls/stat the file: "stat test_file"
5. Capture the brick1 process information: "ps -ef | grep <brick1>"
6. Kill brick1.
7. From fuse_mount1, remove the file and recreate it with the same file name.
8. Restart the brick process. Example:
"/usr/sbin/glusterfsd -s king --volfile-id vol_rep_2.king.rhs-bricks-vol_rep_2_b0 -p /var/lib/glusterd/vols/vol_rep_2/run/king-rhs-bricks-vol_rep_2_b0.pid -S /var/run/490b794d8ab69336c9c23eed09b4f1d8.socket --brick-name /rhs/bricks/vol_rep_2_b0 -l /var/log/glusterfs/bricks/rhs-bricks-vol_rep_2_b0.log --xlator-option *-posix.glusterd-uuid=8abd3f8f-1776-425c-b602-77a56726b804 --brick-port 49155 --xlator-option vol_rep_2-server.listen-port=49155"
9. From fuse_mount2, ls/stat the file again: "stat test_file"
(A consolidated shell sketch of these steps appears after this report.)

Actual results:
=================
root@darrel [Aug-12-2013-21:15:42] >stat test_file
stat: cannot stat `test_file': No such file or directory
root@darrel [Aug-12-2013-21:15:43] >ls test_file
ls: cannot access test_file: No such file or directory
root@darrel [Aug-12-2013-21:28:09] >

Expected results:
==================
stat/ls should succeed.

Additional info:
================
Tested the case with "stat-prefetch" set to "off"; the test case still fails.
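For convenience, a minimal shell sketch of steps 1-9 follows. The first host/brick (king:/rhs/bricks/vol_rep_2_b0) and the volume name come from the report; the second host/brick (queen:/rhs/bricks/vol_rep_2_b1) and the mount points /mnt/fuse1 and /mnt/fuse2 are hypothetical placeholders for the actual test bed.

# Steps 1-2: create/start a 1x2 replicate volume and mount it twice
gluster volume create vol_rep_2 replica 2 king:/rhs/bricks/vol_rep_2_b0 queen:/rhs/bricks/vol_rep_2_b1
gluster volume start vol_rep_2
mount -t glusterfs king:/vol_rep_2 /mnt/fuse1
mount -t glusterfs king:/vol_rep_2 /mnt/fuse2

# Step 3: create the file from the first mount
dd if=/dev/urandom of=/mnt/fuse1/test_file bs=1M count=1

# Step 4: stat from the second mount (succeeds at this point)
stat /mnt/fuse2/test_file

# Steps 5-6: note brick1's full command line and PID, then kill it
ps -ef | grep glusterfsd | grep vol_rep_2_b0
kill <brick1_pid>

# Step 7: delete and recreate the file while brick1 is down
rm -f /mnt/fuse1/test_file
dd if=/dev/urandom of=/mnt/fuse1/test_file bs=1M count=1

# Step 8: restart the brick using the exact glusterfsd command line
# captured in step 5 (see the full example in the report above)

# Step 9: stat from the second mount again (fails with ENOENT)
stat /mnt/fuse2/test_file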
Hi Shwetha, I tried this on 3.4.0.35.1u2rhs and was not able to reproduce the issue. Could you please see if the issue still occurs with the latest release?
Hi Ravi, I am able to recreate this issue on the build "glusterfs 3.4.0.35.1u2rhs built on Oct 21 2013 14:00:58".
Mount 1 output:-
++++++++++++++++
root@rhs-client14 [Nov-11-2013-12:15:27] >dd if=/dev/urandom of=test_file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.192468 s, 5.4 MB/s
root@rhs-client14 [Nov-11-2013-12:15:35] >rm -rf *
root@rhs-client14 [Nov-11-2013-12:16:18] >dd if=/dev/urandom of=test_file bs=1M count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.200894 s, 5.2 MB/s

Mount 2 output:-
++++++++++++++++
root@rhs-client14 [Nov-11-2013-12:15:43] >ls
test_file
root@rhs-client14 [Nov-11-2013-12:15:44] >stat test_file
  File: `test_file'
  Size: 1048576   Blocks: 2048   IO Block: 131072   regular file
Device: 1eh/30d   Inode: 11399896548514473629   Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2013-11-11 12:15:34.886859477 +0000
Modify: 2013-11-11 12:15:35.079853562 +0000
Change: 2013-11-11 12:15:37.147791075 +0000
root@rhs-client14 [Nov-11-2013-12:15:46] >
root@rhs-client14 [Nov-11-2013-12:15:49] >
root@rhs-client14 [Nov-11-2013-12:16:44] >stat test_file
stat: cannot stat `test_file': No such file or directory
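The symptom is consistent with the two bricks holding different copies of test_file after the delete-and-recreate performed while one brick was down. A quick way to check this from the servers, as a sketch, is to compare the trusted.gfid xattr on the backend files directly (getfattr ships with the attr package); the first brick path is taken from the report, while the second (vol_rep_2_b1) is an assumed placeholder:

# On the server hosting the brick that was offline during the recreate
getfattr -d -m . -e hex /rhs/bricks/vol_rep_2_b0/test_file

# On the server hosting the brick that stayed online (path assumed)
getfattr -d -m . -e hex /rhs/bricks/vol_rep_2_b1/test_file

If the trusted.gfid values differ, the client may fail to resolve the file once the stale brick returns, which would explain the ENOENT from stat/ls on the second mount.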
SOS Reports:- http://rhsqe-repo.lab.eng.blr.redhat.com/bugs_necessary_info/996200/
Marking this to be tested with Denali.
Hi Shwetha, could you please check if this issue is still happening in RHS 3.0?
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/ If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
The needinfo request[s] on this closed bug have been removed, as they had been unresolved for 1000 days.