Bug 762740 (GLUSTER-1008)

Summary: deleting file in backend
Product: [Community] GlusterFS
Reporter: Lakshmipathi G <lakshmipathi>
Component: nfs
Assignee: Shehjar Tikoo <shehjart>
Status: CLOSED WONTFIX
Severity: low
Priority: medium
Version: mainline
CC: gluster-bugs, vijay
Hardware: All
OS: All
Doc Type: Bug Fix
Regression: RTP
Mount Type: nfs

Description Lakshmipathi G 2010-06-18 05:25:10 UTC
Found with nfs-beta-rc7 in a simple posix + io-threads setup.
Steps to reproduce:
1. On the NFS client, create a large (300GB) file with dd.
2. Check disk usage with df on the NFS server.
3. Now delete this large file directly from the backend export directory on the NFS server.
4. Check disk usage with df again: the space occupied by the file is not freed, and it is reclaimed only after the NFS server process is killed.

Comment 1 Shehjar Tikoo 2010-07-04 05:48:36 UTC
This happens because the NFS server caches the fd_t to avoid the network latency of a complete open-read/write-close sequence; we need to avoid that sequence in order to map a single NFS read/write operation to a single read/write fop. Because NFSv3 is stateless, it has no equivalent of open/close.
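
The observed df behavior then follows from ordinary POSIX semantics: unlinking a file does not free its blocks while any process still holds an open descriptor on it, so the cached fd in the NFS server pins the deleted file's space until the fd is closed (for example, when the server is killed). A minimal standalone sketch of that underlying behavior, assuming a local /tmp filesystem with a few hundred MB free (the file name and sizes are arbitrary):

/* Illustrative only (not GlusterFS code): blocks of an unlinked file are
 * not released while a descriptor on it is still open, which is why df on
 * the backend only drops once the process holding the cached fd lets go. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/statvfs.h>
#include <unistd.h>

static unsigned long long free_mb(const char *path)
{
    struct statvfs sv;
    if (statvfs(path, &sv) != 0) {
        perror("statvfs");
        exit(EXIT_FAILURE);
    }
    return ((unsigned long long)sv.f_bfree * sv.f_frsize) >> 20;
}

int main(void)
{
    static char buf[1 << 20];                     /* 1 MB of filler */
    const char *path = "/tmp/unlink-demo.dat";    /* arbitrary test file */
    int fd, i;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }
    for (i = 0; i < 256; i++)                     /* write 256 MB */
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            return EXIT_FAILURE;
        }
    fsync(fd);

    printf("free before unlink: %llu MB\n", free_mb("/tmp"));
    unlink(path);                    /* "delete the file in the backend" */
    printf("free after unlink : %llu MB\n", free_mb("/tmp"));
    close(fd);                       /* like killing the server process */
    printf("free after close  : %llu MB\n", free_mb("/tmp"));
    return 0;
}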

Comment 2 Amar Tumballi 2011-04-25 09:33:02 UTC
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000).

Please resolve it with the proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.

Comment 3 Shehjar Tikoo 2011-05-25 05:49:34 UTC
This continues to be a problem, but nothing can be done until we introduce stateless writes in the gluster core to support writes without opening and closing fds.
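
As an illustration of the trade-off (a hypothetical sketch, not GlusterFS code; handle_to_backend_path() is an invented stand-in for file-handle resolution), a fully stateless WRITE path would perform the complete open-write-close round trip for every request, so no cached fd would pin a backend-deleted file, at the cost of an open/close per operation:

/* Hypothetical sketch: a "stateless" WRITE handler that opens, writes at
 * the requested offset, and closes again for every request. No fd is held
 * between requests, so deleting the file in the backend frees its space
 * immediately, but each WRITE pays the extra open/close latency. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical stand-in for file-handle resolution: a real NFS server
 * would map the opaque handle to a backend path; here the "handle" is
 * simply the path itself. */
static const char *handle_to_backend_path(const void *handle)
{
    return (const char *)handle;
}

static ssize_t stateless_write(const void *handle, const void *buf,
                               size_t count, off_t offset)
{
    int fd = open(handle_to_backend_path(handle), O_WRONLY | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    ssize_t ret = pwrite(fd, buf, count, offset);
    close(fd);   /* no cached fd survives the request */
    return ret;
}

int main(void)
{
    const char *msg = "payload\n";
    if (stateless_write("/tmp/stateless-demo.dat", msg, strlen(msg), 0) < 0) {
        perror("stateless_write");
        return 1;
    }
    return 0;
}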