Red Hat Bugzilla – Bug 762740
deleting file in backend
Last modified: 2011-05-25 04:49:34 EDT
Found with nfs-beta-rc7 in a simple posix + iothread setup
Steps to reproduce:
0. Create a large file (300GB) using the dd command on the nfs-client.
1. Check disk usage using df on the nfs-server.
2. Now delete this large file directly from the nfs-server backend.
3. Check disk usage with df again - the space occupied by the file is not freed, and it is reclaimed only after killing the nfs-server.
It occurs because nfs caches fd_t to avoid the network latency of a complete open-read/write-close cycle for every operation. Because NFSv3 is stateless, it has no equivalents of open/close, so the cached fd stays open on the server and keeps the deleted file's blocks pinned until the server process exits. We need to avoid this caching in order to map a single read/write NFS op to one read/write fop.
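The same effect can be reproduced outside Gluster with plain POSIX calls. The sketch below (a standalone demo with assumed paths and a smaller size, not GlusterFS code) shows that an unlinked file's blocks stay allocated until the last open descriptor on it is closed - the role the cached fd_t plays in the nfs server:

/*
 * Standalone sketch of the underlying POSIX behaviour: blocks of an
 * unlinked file are released only when the last open descriptor on it
 * is closed.  'fd' below plays the part of the cached fd_t.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/statvfs.h>
#include <unistd.h>

static unsigned long long free_bytes(const char *path)
{
        struct statvfs sv;
        if (statvfs(path, &sv) != 0) {
                perror("statvfs");
                exit(1);
        }
        return (unsigned long long)sv.f_bavail * sv.f_frsize;
}

int main(void)
{
        const char *path = "/tmp/bigfile";       /* stands in for the 300GB file */
        int fd = open(path, O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* allocate some space (256 MiB here instead of 300GB) */
        if (posix_fallocate(fd, 0, 256ULL << 20) != 0) {
                perror("posix_fallocate");
                return 1;
        }

        printf("free before unlink: %llu\n", free_bytes("/tmp"));
        unlink(path);                             /* "delete from the backend"  */
        printf("free after unlink : %llu\n", free_bytes("/tmp")); /* unchanged  */

        close(fd);                                /* analogous to killing nfs-server */
        printf("free after close  : %llu\n", free_bytes("/tmp")); /* reclaimed  */
        return 0;
}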
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000).
Please resolve it with a proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.
This continues to be a problem, but nothing can be done until we introduce stateless writes in gluster core to support writes without opening and closing fds.
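For illustration only, the kind of pattern "stateless writes" would map to at the POSIX level might look like the hypothetical helper below (stateless_write is an invented name, not a GlusterFS API): each write request opens the backend file, writes at an offset, and closes it again, so no fd lingers to pin a deleted file's blocks.

/*
 * Hypothetical sketch - not GlusterFS code.  One write request is
 * served by a self-contained open/pwrite/close, leaving no cached fd
 * behind.
 */
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

static ssize_t stateless_write(const char *path, const void *buf,
                               size_t count, off_t offset)
{
        int fd = open(path, O_WRONLY);
        if (fd < 0)
                return -1;

        ssize_t ret = pwrite(fd, buf, count, offset);

        close(fd);      /* nothing cached; an unlink can now free the blocks */
        return ret;
}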