Bug 762740 - (GLUSTER-1008) deleting file in backend
Product: GlusterFS
Classification: Community
Component: nfs
Hardware: All
OS: All
Priority: medium
Severity: low
Assigned To: Shehjar Tikoo
Depends On:
Reported: 2010-06-18 01:25 EDT by Lakshmipathi G
Modified: 2011-05-25 04:49 EDT
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: ---
Regression: RTP
Mount Type: nfs
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Lakshmipathi G 2010-06-18 01:25:10 EDT
Found with nfs-beta-rc7 in a simple posix + iothreads setup.
Steps to reproduce:
0. On the NFS client, create a large file (300GB) using the dd command.
1. On the NFS server, check disk usage using df.
2. Now delete this large file directly from the NFS server's backend.
3. Check disk usage with df again: the space occupied by the file is not freed, and is reclaimed only after killing the NFS server.
Comment 1 Shehjar Tikoo 2010-07-04 01:48:36 EDT
It occurs because nfs caches fd_t objects to avoid the network latency of a full open-read/write-close round trip. We need this cache in order to map a single NFS read/write op to one read/write fop, because NFSv3 is stateless and has no equivalents of open/close.
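The symptom follows from standard POSIX semantics: as long as the server process holds an open descriptor (the cached fd_t), unlinking the file from the backend removes its name but the blocks stay allocated until the last descriptor is closed. A minimal local sketch of that behavior, using a small stand-in file rather than a 300GB one:

```python
import os
import tempfile

# Create a stand-in for the large file written over NFS.
path = os.path.join(tempfile.mkdtemp(), "bigfile")
with open(path, "wb") as f:
    f.write(b"x" * (1 << 20))  # 1 MiB instead of 300GB

# Stand-in for the fd_t the NFS server keeps cached.
fd = os.open(path, os.O_RDONLY)

# "Delete the file from the backend" while the descriptor is open.
os.unlink(path)

st = os.fstat(fd)
print(st.st_nlink)  # 0: the name is gone ...
print(st.st_size)   # ... but the data blocks are still allocated

os.close(fd)        # only now is the space actually reclaimed
```

This is why killing the NFS server (closing its cached descriptors) is what finally frees the space.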
Comment 2 Amar Tumballi 2011-04-25 05:33:02 EDT
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000).

Please resolve it with the proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.
Comment 3 Shehjar Tikoo 2011-05-25 01:49:34 EDT
This continues to be a problem, but nothing can be done until we introduce stateless writes in the gluster core to support writes without opening and closing fds.
