Bug 1288228

Summary: Unlink on open file results in file being deleted before file is closed
Product: [Community] GlusterFS
Reporter: Neil Van Lysel <nvanlysel>
Component: posix
Assignee: Ashish Pandey <aspandey>
Status: CLOSED EOL
Severity: high
Priority: high
Version: 3.5.5
CC: aspandey, bugs, nvanlysel
Keywords: Triaged
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-06-17 16:23:25 UTC
Attachments:
C program that demonstrates unlink issue

Description Neil Van Lysel 2015-12-03 22:12:17 UTC
Created attachment 1101991 [details]
C program that demonstrates unlink issue

Description of problem:
Calling unlink() on an open file removes the underlying file immediately, so subsequent I/O through the still-open stream fails. POSIX semantics require the file data to remain accessible until the last open descriptor is closed.

Version-Release number of selected component (if applicable):
glusterfs-3.5.5-2.el6
How reproducible:
always

Steps to Reproduce:
1. Create 8x2 distributed-replicate volume
2. Mount volume on client via fuse
3. Compile test.c (attached)
4. Run a.out

Actual results:
The read after rewind() returns no data, and fclose() fails with "No such file or directory" (ENOENT).

Expected results:
No errors.

Additional info:
Output from running test.c on an ext4 filesystem and on the GlusterFS mount is below.
EXT4 FILESYSTEM:
[user@client-1 tmp]$ ./a.out 
fopen returned: : Success
unlink returned: : Success
fprintf returned: : Success
rewind returned: : Success
Reading from file... Hello!
fclose returned: : Success

GLUSTER FILESYSTEM:
[user@client-1 ~]$ ./a.out
fopen returned: : Success
unlink returned: : Success
fprintf returned: : Success
rewind returned: : Success
Reading from file... 
fclose returned: : No such file or directory

Log entries seen during execution of test.c are below.
GLUSTER CLIENT LOG:
[2015-12-03 21:37:00.967186] W [client-rpc-fops.c:867:client3_3_writev_cbk] 0-home-client-12: remote operation failed: No such file or directory
[2015-12-03 21:37:00.967220] W [client-rpc-fops.c:867:client3_3_writev_cbk] 0-home-client-13: remote operation failed: No such file or directory
[2015-12-03 21:37:00.968374] W [fuse-bridge.c:1236:fuse_err_cbk] 0-glusterfs-fuse: 208061996: FLUSH() ERR => -1 (No such file or directory)


GLUSTER SERVER BRICK LOG (home-client-12):
[2015-12-03 21:37:00.966292] E [posix.c:2432:posix_open] 0-home-posix: open on /brick1/home/.glusterfs/bc/be/bcbe9fb3-094b-4fa5-98ae-ec81109ade0a: No such file or directory
[2015-12-03 21:37:00.966383] W [dict.c:480:dict_unref] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/access-control.so(posix_acl_open_cbk+0xc2) [0x7f0ea57e9ee2] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/locks.so(pl_open_cbk+0xee) [0x7f0ea55d7a2e] (-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_open_cbk+0xc2) [0x7f0ea53ba0d2]))) 0-dict: dict is NULL
[2015-12-03 21:37:00.966399] I [server-rpc-fops.c:1377:server_writev_cbk] 0-home-server: 16402045: WRITEV 36 (bcbe9fb3-094b-4fa5-98ae-ec81109ade0a) ==> (No such file or directory)


GLUSTER SERVER BRICK LOG (home-client-13):
[2015-12-03 21:37:00.966248] E [posix.c:2432:posix_open] 0-home-posix: open on /brick1/home/.glusterfs/bc/be/bcbe9fb3-094b-4fa5-98ae-ec81109ade0a: No such file or directory
[2015-12-03 21:37:00.966335] W [dict.c:480:dict_unref] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/access-control.so(posix_acl_open_cbk+0xc2) [0x7f4b381d3ee2] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/locks.so(pl_open_cbk+0xee) [0x7f4b33df1a2e] (-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_open_cbk+0xc2) [0x7f4b33bd40d2]))) 0-dict: dict is NULL
[2015-12-03 21:37:00.966350] I [server-rpc-fops.c:1377:server_writev_cbk] 0-home-server: 16367639: WRITEV 36 (bcbe9fb3-094b-4fa5-98ae-ec81109ade0a) ==> (No such file or directory)

Gluster setup and package information is below.
[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%
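
The 8x2 volume described above could be created and mounted with commands along these lines. This is a sketch, assuming the bricks already exist, the peers have been probed, and a client mount point of `/mnt/home` (the mount point is not stated in the report); with `replica 2`, consecutive bricks on the command line form the replica pairs, matching the brick order shown.

```shell
# Server side: create and start an 8x2 distributed-replicate volume.
gluster volume create home replica 2 \
    storage-7:/brick1/home  storage-8:/brick1/home \
    storage-9:/brick1/home  storage-10:/brick1/home \
    storage-1:/brick1/home  storage-2:/brick1/home \
    storage-3:/brick1/home  storage-4:/brick1/home \
    storage-5:/brick1/home  storage-6:/brick1/home \
    storage-11:/brick1/home storage-12:/brick1/home \
    storage-13:/brick1/home storage-14:/brick1/home \
    storage-15:/brick1/home storage-16:/brick1/home
gluster volume start home

# Client side: mount the volume over FUSE (step 2 of the reproduction).
mount -t glusterfs storage-1:/home /mnt/home
```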


GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


GLUSTER CLIENT PACKAGES:
[root@client-1 ~]# rpm -qa |grep gluster
glusterfs-api-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64

Comment 1 Krutika Dhananjay 2016-06-15 07:00:20 UTC
This is most probably fixed by the .unlink change in posix made by Ashish. Assigning it to him.

Ashish,

Could you confirm that the test case given above passes with your patch?

-Krutika

Comment 2 Niels de Vos 2016-06-17 16:23:25 UTC
This bug is being closed because GlusterFS 3.5 is marked End-Of-Life. There will be no further updates to this version. If you still face this issue on a more current release, please open a new bug against a version that still receives bugfixes.