Red Hat Bugzilla – Full Text Bug Listing
|Summary:||deleting file in backend|
|Product:||[Community] GlusterFS||Reporter:||Lakshmipathi G <lakshmipathi>|
|Component:||nfs||Assignee:||Shehjar Tikoo <shehjart>|
|Status:||CLOSED WONTFIX||QA Contact:|
|Fixed In Version:||Doc Type:||Bug Fix|
|Doc Text:||Story Points:||---|
|oVirt Team:||---||RHEL 7.3 requirements from Atomic Host:|
Description Lakshmipathi G 2010-06-18 01:25:10 EDT
Found with nfs-beta-rc7 in a simple posix + iothread setup.
Steps to reproduce:
0. Create a large (300 GB) file using dd on the nfs-client.
1. Check disk usage using df on the nfs-server.
2. From the nfs-server backend, delete this large file.
3. Check disk usage using df again.
The space occupied by the file is not freed; it is reclaimed only after killing the nfs-server.
Comment 1 Shehjar Tikoo 2010-07-04 01:48:36 EDT
It occurs because nfs caches fd_t structures to avoid the network latency of a complete open-read/write-close cycle. We need to avoid this caching in order to map a single NFS read/write op to one read/write fop. Because NFSv3 is stateless, it does not have equivalents of open/close.
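The POSIX semantics behind the symptom can be shown with a minimal local sketch (an illustration only, not gluster code; a 1 MiB file stands in for the 300 GB one): as long as a cached fd holds the inode open, unlinking the file from the backend removes the directory entry but not the data.

```python
import os
import tempfile

# Minimal local illustration of the POSIX behavior behind this bug
# (not gluster code; 1 MiB stands in for the 300 GB file).
path = os.path.join(tempfile.mkdtemp(), "bigfile")
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"x" * (1 << 20))

os.unlink(path)            # "delete the file from the backend"
st = os.fstat(fd)
nlink, size = st.st_nlink, st.st_size
# nlink == 0: no directory entry remains, yet the inode (and its 1 MiB of
# data) stays alive because the cached fd still references it.

os.close(fd)               # only now can the filesystem reclaim the space
```

Killing the nfs-server closes all of its cached fds at once, which is why df only shows the space as freed at that point.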
Comment 2 Amar Tumballi 2011-04-25 05:33:02 EDT
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000). Please resolve it with the proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.
Comment 3 Shehjar Tikoo 2011-05-25 01:49:34 EDT
This continues to be a problem, but nothing can be done until we introduce stateless writes in the gluster core to support writes without opening and closing fds.
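One way to read "stateless writes" is servicing each WRITE as a full open-pwrite-close, so no fd outlives the operation. A hypothetical sketch, assuming that interpretation (stateless_write is an invented helper, not a gluster or NFS API):

```python
import os
import tempfile

def stateless_write(path, data, offset):
    """Hypothetical helper (not a gluster API): service one write as a
    complete open-pwrite-close, so no fd is cached across operations and
    an unlinked file's space can be reclaimed immediately."""
    fd = os.open(path, os.O_WRONLY)
    try:
        return os.pwrite(fd, data, offset)
    finally:
        os.close(fd)  # nothing cached; trades extra syscall latency for statelessness

# Usage against a scratch file
path = os.path.join(tempfile.mkdtemp(), "scratch")
with open(path, "wb") as f:
    f.write(b"\0" * 8)
written = stateless_write(path, b"hi", 2)
```

The trade-off is exactly the latency the fd_t cache was introduced to avoid, which is why the fix needs support in the gluster core rather than in the nfs translator alone.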