Bug 1163561 - A restarted child cannot clean files/directories which were deleted while it was down
Summary: A restarted child cannot clean files/directories which were deleted while it was down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1165143
 
Reported: 2014-11-13 02:53 UTC by jiademing.dd
Modified: 2015-12-01 16:45 UTC
CC List: 5 users

Fixed In Version:
Clone Of:
Cloned to: 1165143
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description jiademing.dd 2014-11-13 02:53:56 UTC
Description of problem:
  I create a disperse volume (disperse 3, redundancy 1), copy some files and directories to the mountpoint, then kill one child glusterfsd (brick process). I delete all the files and directories from the mountpoint, then execute "gluster volume start test force" to restart the killed child. The restarted child does not clean up the files and directories that were deleted from the mountpoint while it was down.
  I can still create/read/write/delete files and directories as usual through the mountpoint, even using the same names as the leftover entries, but the stale data remains on the restarted brick.
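The leftover entries can be confirmed by comparing the (now empty) mount with the backend of the restarted brick; a minimal check, assuming a mount at /mnt/test and Brick1 at 10.10.21.50:/sda as in the steps below:

  ls /mnt/test                # empty: everything was removed through the mount
  ssh 10.10.21.50 ls /sda     # the deleted files/directories are still present on the brick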

Version-Release number of selected component (if applicable):
3.6.1

How reproducible:


Steps to Reproduce:
1. create a disperse volume
 Volume Name: test
 Type: Distributed-Disperse
 Volume ID: 1841beb3-001d-45b8-9d6c-6c34cfbfd6d0
 Status: Started
 Number of Bricks: 2 x (2 + 1) = 6
 Transport-type: tcp
 Bricks:
 Brick1: 10.10.21.50:/sda
 Brick2: 10.10.21.52:/sda
 Brick3: 10.10.21.50:/sdb
 Brick4: 10.10.21.52:/sdb
 Brick5: 10.10.21.50:/sdc
 Brick6: 10.10.21.52:/sdc
2. copy many files and directories to the mountpoint
3. kill the glusterfsd process for Brick1
4. execute "rm -rvf /mountpoint/*"
5. execute "gluster volume start test force" to restart Brick1
6. create/read/write/delete of files and directories still works as usual through the mountpoint, even with the same names as the leftover entries, but Brick1's stale data remains (see the consolidated script below)
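
Consolidated, the steps above come down to roughly the following; a minimal sketch, assuming the hosts and brick paths from the volume info above, a mount at /mnt/test, and that Brick1's pid is read from its pidfile under /var/lib/glusterd/vols/test/run/ (that directory appears in the transcript in comment 1; the exact pidfile name is an assumption):

  gluster volume create test disperse-data 2 redundancy 1 \
      10.10.21.50:/sda 10.10.21.52:/sda 10.10.21.50:/sdb \
      10.10.21.52:/sdb 10.10.21.50:/sdc 10.10.21.52:/sdc force
  gluster volume start test
  mount -t glusterfs 10.10.21.50:/test /mnt/test
  cp -r /some/data/* /mnt/test/                # populate the volume
  kill -9 $(cat /var/lib/glusterd/vols/test/run/10.10.21.50-sda.pid)  # kill Brick1
  rm -rvf /mnt/test/*                          # delete everything while Brick1 is down
  gluster volume start test force              # restart the killed brick
  ssh 10.10.21.50 ls /sda                      # bug: the deleted entries are still here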

Actual results:
The files and directories deleted while Brick1 was down remain on Brick1's backend after it is restarted.

Expected results:
After the restart, Brick1 should clean up (heal) the entries that were deleted from the volume while it was down.

Additional info:

Comment 1 Pranith Kumar K 2015-05-09 17:13:06 UTC
Fixing this bug required bringing in some version-incompatible changes and a directory self-heal implementation, so it can't be backported to 3.6.x. Please feel free to upgrade to 3.7.x, where this bug is fixed:

root@pranithk-laptop - ~ 
22:38:45 :( ⚡ glusterd && gluster volume create ec2 disperse-data 2 redundancy 1 `hostname`:/home/gfs/ec_{0..2} force && gluster volume start ec2 && mount -t glusterfs `hostname`:/ec2 /mnt/ec2
volume create: ec2: success: please start the volume to access data
volume start: ec2: success

root@pranithk-laptop - ~ 
22:41:01 :) ⚡ cd /mnt/ec2/

root@pranithk-laptop - /mnt/ec2 
22:41:23 :) ⚡ for i in {1..10}; do echo abc > a; done

root@pranithk-laptop - /mnt/ec2 
22:41:42 :) ⚡ ls -l /home/gfs/ec_?
/home/gfs/ec_0:
total 8
-rw-r--r--. 2 root root 512 May  9 22:41 a

/home/gfs/ec_1:
total 8
-rw-r--r--. 2 root root 512 May  9 22:41 a

/home/gfs/ec_2:
total 8
-rw-r--r--. 2 root root 512 May  9 22:41 a

root@pranithk-laptop - /mnt/ec2 
22:41:47 :) ⚡ for i in {1..10}; do echo abc > $i; done

root@pranithk-laptop - /mnt/ec2 
22:41:52 :) ⚡ ls -l /home/gfs/ec_?
/home/gfs/ec_0:
total 88
-rw-r--r--. 2 root root 512 May  9 22:41 1
-rw-r--r--. 2 root root 512 May  9 22:41 10
-rw-r--r--. 2 root root 512 May  9 22:41 2
-rw-r--r--. 2 root root 512 May  9 22:41 3
-rw-r--r--. 2 root root 512 May  9 22:41 4
-rw-r--r--. 2 root root 512 May  9 22:41 5
-rw-r--r--. 2 root root 512 May  9 22:41 6
-rw-r--r--. 2 root root 512 May  9 22:41 7
-rw-r--r--. 2 root root 512 May  9 22:41 8
-rw-r--r--. 2 root root 512 May  9 22:41 9
-rw-r--r--. 2 root root 512 May  9 22:41 a

/home/gfs/ec_1:
total 88
-rw-r--r--. 2 root root 512 May  9 22:41 1
-rw-r--r--. 2 root root 512 May  9 22:41 10
-rw-r--r--. 2 root root 512 May  9 22:41 2
-rw-r--r--. 2 root root 512 May  9 22:41 3
-rw-r--r--. 2 root root 512 May  9 22:41 4
-rw-r--r--. 2 root root 512 May  9 22:41 5
-rw-r--r--. 2 root root 512 May  9 22:41 6
-rw-r--r--. 2 root root 512 May  9 22:41 7
-rw-r--r--. 2 root root 512 May  9 22:41 8
-rw-r--r--. 2 root root 512 May  9 22:41 9
-rw-r--r--. 2 root root 512 May  9 22:41 a

/home/gfs/ec_2:
total 88
-rw-r--r--. 2 root root 512 May  9 22:41 1
-rw-r--r--. 2 root root 512 May  9 22:41 10
-rw-r--r--. 2 root root 512 May  9 22:41 2
-rw-r--r--. 2 root root 512 May  9 22:41 3
-rw-r--r--. 2 root root 512 May  9 22:41 4
-rw-r--r--. 2 root root 512 May  9 22:41 5
-rw-r--r--. 2 root root 512 May  9 22:41 6
-rw-r--r--. 2 root root 512 May  9 22:41 7
-rw-r--r--. 2 root root 512 May  9 22:41 8
-rw-r--r--. 2 root root 512 May  9 22:41 9
-rw-r--r--. 2 root root 512 May  9 22:41 a

root@pranithk-laptop - /mnt/ec2 
22:41:54 :) ⚡ /home/pk1/.scripts/gfs -c k -v ec2 -k 0
Dir: /var/lib/glusterd/vols/ec2/run/
kill -9 14851


root@pranithk-laptop - /mnt/ec2 
22:41:58 :) ⚡ rm -rf *

root@pranithk-laptop - /mnt/ec2 
22:42:01 :) ⚡ ls -l /home/gfs/ec_?
/home/gfs/ec_0:
total 88
-rw-r--r--. 2 root root 512 May  9 22:41 1
-rw-r--r--. 2 root root 512 May  9 22:41 10
-rw-r--r--. 2 root root 512 May  9 22:41 2
-rw-r--r--. 2 root root 512 May  9 22:41 3
-rw-r--r--. 2 root root 512 May  9 22:41 4
-rw-r--r--. 2 root root 512 May  9 22:41 5
-rw-r--r--. 2 root root 512 May  9 22:41 6
-rw-r--r--. 2 root root 512 May  9 22:41 7
-rw-r--r--. 2 root root 512 May  9 22:41 8
-rw-r--r--. 2 root root 512 May  9 22:41 9
-rw-r--r--. 2 root root 512 May  9 22:41 a

/home/gfs/ec_1:
total 0

/home/gfs/ec_2:
total 0

root@pranithk-laptop - /mnt/ec2 
22:42:05 :) ⚡ gluster v start ec2 force
volume start: ec2: success

root@pranithk-laptop - /mnt/ec2 
22:42:12 :) ⚡ gluster v heal ec2 info
Brick pranithk-laptop:/home/gfs/ec_0/
Number of entries: 0

Brick pranithk-laptop:/home/gfs/ec_1/
Number of entries: 0

Brick pranithk-laptop:/home/gfs/ec_2/
Number of entries: 0


root@pranithk-laptop - /mnt/ec2 
22:42:23 :) ⚡ ls -l /home/gfs/ec_?
/home/gfs/ec_0:
total 0

/home/gfs/ec_1:
total 0

/home/gfs/ec_2:
total 0

root@pranithk-laptop - /mnt/ec2 
22:42:29 :) ⚡
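
If stale entries ever linger on 3.7.x after a brick restart, the repair can also be kicked off and checked by hand; a minimal sketch, assuming the ec2 volume above:

  gluster volume heal ec2 full     # ask the self-heal daemon to crawl the volume and repair everything
  gluster volume heal ec2 info     # confirm that no entries remain pending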

Comment 2 Nagaprasad Sathyanarayana 2015-10-25 14:55:09 UTC
The fix for this bug has already been made in a GlusterFS release. The cloned BZ has the details of the fix and the release. Hence, closing this mainline BZ.

Comment 3 Nagaprasad Sathyanarayana 2015-10-25 15:01:24 UTC
The fix for this BZ is already present in a GlusterFS release; the clone of this BZ was fixed there and closed. Hence, closing this mainline BZ as well.

