Bug 961668

Summary: gfid links inside .glusterfs are not recreated when missing, even after a heal
Product: [Community] GlusterFS
Reporter: Xavi Hernandez <jahernan>
Component: core
Assignee: bugs <bugs>
Status: CLOSED EOL
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: gluster-bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-08-07 11:11:07 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Xavi Hernandez 2013-05-10 08:45:50 UTC
Description of problem:

When a file or directory has its trusted.gfid xattr correctly set but is missing the associated gfid entry (hard link) inside the .glusterfs directory of a brick, no healing operation recreates it.

An ls -l shows the file, but with a hard-link count of 0. The first attempt to read the file contents fails with "Operation not permitted"; a second attempt succeeds.


Version-Release number of selected component (if applicable): mainline


How reproducible:

Steps to Reproduce:
1. Create a file or directory in the gluster mount
2. Delete the gfid hard link corresponding to the new file from the .glusterfs directory of one of the bricks
3. Do any operation (ls -l, cat, ...) and check that the gfid link is still missing
  
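For step 2, the brick-side path of the gfid link can be derived from the file's trusted.gfid xattr (as shown by `getfattr -n trusted.gfid -e hex`). The sketch below assumes the standard two-level .glusterfs layout used by the posix xlator, where the first two pairs of hex digits of the gfid name the subdirectories; the gfid used in the example is the one that appears in the brick log of this report.

```python
import uuid

def gfid_path(gfid_hex: str) -> str:
    """Map a trusted.gfid xattr value (hex form, as printed by
    `getfattr -e hex`) to the hard-link path under .glusterfs.

    Assumes the usual layout: .glusterfs/<first 2 hex chars>/<next 2>/<uuid>.
    """
    g = uuid.UUID(gfid_hex.removeprefix("0x"))  # parse the 16-byte gfid as a UUID
    s = str(g)  # canonical dashed form, e.g. 7d3e47e5-8809-409f-ade7-64a0fb488caa
    return f".glusterfs/{s[0:2]}/{s[2:4]}/{s}"

# The gfid from the brick log of this report:
print(gfid_path("0x7d3e47e58809409fade764a0fb488caa"))
# → .glusterfs/7d/3e/7d3e47e5-8809-409f-ade7-64a0fb488caa
```

Removing that path on one brick (e.g. `rm <brick>/.glusterfs/7d/3e/7d3e47e5-...`) while leaving the regular file in place reproduces the state described above.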
Actual results:
The missing gfid link is not recreated.

Expected results:
The missing gfid link should be recreated by the storage/posix xlator when it detects that it is missing.

Additional info:

The only relevant things in the logs are the following (produced by the 'cat' command; an ls -l does not report anything special):

Client log:
[2013-05-10 08:31:58.450202] W [client-rpc-fops.c:2677:client3_3_readv_cbk] 0-test-client-0: remote operation failed: Operation not permitted
[2013-05-10 08:31:58.450331] W [page.c:991:__ioc_page_error] 0-test-io-cache: page error for page = 0x7fdb30004ff0 & waitq = 0x7fdb30003a90
[2013-05-10 08:31:58.450428] W [fuse-bridge.c:2049:fuse_readv_cbk] 0-glusterfs-fuse: 55: READ => -1 (Operation not permitted)
[2013-05-10 08:31:58.451234] W [client-rpc-fops.c:2677:client3_3_readv_cbk] 0-test-client-0: remote operation failed: Operation not permitted
[2013-05-10 08:31:58.451289] W [page.c:991:__ioc_page_error] 0-test-io-cache: page error for page = 0x7fdb30004490 & waitq = 0x7fdb30002640
[2013-05-10 08:31:58.451416] W [fuse-bridge.c:2049:fuse_readv_cbk] 0-glusterfs-fuse: 56: READ => -1 (Operation not permitted)

Brick log:
[2013-05-10 08:31:58.449846] W [posix.c:1918:posix_readv] 0-test-posix: pfd is NULL from fd=0xca8fac
[2013-05-10 08:31:58.449975] I [server-rpc-fops.c:1489:server_readv_cbk] 0-test-server: 56: READV -2 (7d3e47e5-8809-409f-ade7-64a0fb488caa) ==> (Operation not permitted)
[2013-05-10 08:31:58.450978] W [posix.c:1918:posix_readv] 0-test-posix: pfd is NULL from fd=0xca8fac
[2013-05-10 08:31:58.451056] I [server-rpc-fops.c:1489:server_readv_cbk] 0-test-server: 57: READV -2 (7d3e47e5-8809-409f-ade7-64a0fb488caa) ==> (Operation not permitted)

Comment 1 Anand Avati 2013-05-14 09:46:21 UTC
REVIEW: http://review.gluster.org/5003 (storage/posix: recreate lost gfids inside .glusterfs) posted (#1) for review on master by Xavier Hernandez (xhernandez)

Comment 2 Anand Avati 2013-05-15 09:45:02 UTC
REVIEW: http://review.gluster.org/5003 (storage/posix: recreate lost gfids inside .glusterfs) posted (#2) for review on master by Xavier Hernandez (xhernandez)

Comment 3 Anand Avati 2013-05-17 09:11:10 UTC
REVIEW: http://review.gluster.org/5003 (storage/posix: recreate lost gfids inside .glusterfs) posted (#3) for review on master by Xavier Hernandez (xhernandez)

Comment 4 Anand Avati 2013-05-23 10:05:39 UTC
REVIEW: http://review.gluster.org/5003 (storage/posix: recreate lost gfids inside .glusterfs) posted (#4) for review on master by Xavier Hernandez (xhernandez)

Comment 5 Xavi Hernandez 2015-08-07 11:11:07 UTC
This problem was detected on version 3.4. That version is no longer maintained, and the currently maintained releases do not seem to have this problem.