Bug 799833

Summary: [7fec9b41d8e1befa8d58a76d98207debddd60b65]: inodes are not getting forgotten even after unlink
Product: [Community] GlusterFS Reporter: Raghavendra Bhat <rabhat>
Component: core Assignee: Raghavendra Bhat <rabhat>
Status: CLOSED CURRENTRELEASE QA Contact:
Severity: medium Docs Contact:
Priority: high    
Version: mainline CC: amarts, gluster-bugs
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.4.0 Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2013-07-24 17:36:04 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 817967    

Description Raghavendra Bhat 2012-03-05 07:59:13 UTC
Description of problem:
Created a 2-replica volume, mounted it via the fuse client, and ran the posix compliance test. After the tests (actually, the tests were killed in between), removed everything from the mount point, came out of the mount point, and did drop-caches.

The gluster volume status <volname> inode command shows that some inodes are still present in the server inode table.

ls
root@hyperspace:/mnt/client# rm -rfv *
root@hyperspace:/mnt/client# ls -a
.  ..
root@hyperspace:/mnt/client# echo 3 >/proc/sys/vm/drop_caches 
root@hyperspace:/mnt/client# cd
root@hyperspace:~# echo 3 >/proc/sys/vm/drop_caches 
root@hyperspace:~# gluster volume status mirror inode
Inode tables for volume mirror
----------------------------------------------
Brick : hyperspace:/mnt/sda7/export3
Active inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
00000000-0000-0000-0000-000000000001                  0            446         D
 
LRU inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
6eaa944e-c131-463a-94da-f2aee95b8484                  2              0         D
 
 
----------------------------------------------
Brick : hyperspace:/mnt/sda8/export3
Active inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
00000000-0000-0000-0000-000000000001                  0            451         D
 
LRU inodes:
GFID                                            Lookups            Ref   IA type
----                                            -------            ---   -------
7a81d998-1026-4444-88bf-d5e88bec494b                  2              0         D



Version-Release number of selected component (if applicable):


How reproducible:

Always
Steps to Reproduce:
1. Create a volume, start it and then mount it.
2. Run some tests (the posix compliance test is sufficient) and clean up the mount point
3. Then do gluster volume status <volname> inode
  
Actual results:

gluster volume status <volname> inode shows that some inodes are still present in the inode tables of the servers (even after the files are deleted and no fd is open on those inodes)

Expected results:

After a file is deleted, if no fd is still open on it, no inode other than the root inode should remain in the inode table.

Additional info:

Comment 1 Anand Avati 2012-03-18 08:00:07 UTC
CHANGE: http://review.gluster.com/2874 (protocol/server: send forget on the renamed inode) merged in master by Anand Avati (avati)

Comment 2 Raghavendra Bhat 2012-03-27 09:13:57 UTC
Checked with glusterfs-3.3.0qa30. There are now no inodes left in the inode table after all the contents are deleted, since the inodes are being forgotten properly.