Bug 862986 - [FEATURE] gluster volume heal "info" and "info healed" does not list the self-heal info of missing entries.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Pranith Kumar K
QA Contact: spandura
URL:
Whiteboard: FutureFeature
Depends On:
Blocks:
 
Reported: 2012-10-04 06:27 UTC by spandura
Modified: 2016-09-17 12:19 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-12 15:35:38 UTC
Embargoed:



Description spandura 2012-10-04 06:27:28 UTC
Description of problem:
------------------------
In a pure replicate volume (1x2), a few entries were deleted from the mount point while one of the bricks was offline.

1. "gluster volume heal <volume_name> info" does not list the deleted entries that are to be self-healed. It only reports the directory under which the entries were deleted and which is pending self-heal (for example, if the files were created under the root of the volume, the output shows only "/").

2. When the brick comes back online, "gluster volume heal <volume_name> info healed" also does not report the removal of the entries from the brick that was offline.

Listing the entries (the complete path of the file, including the file name) is necessary to check which files were deleted; listing only the directory in which a file was deleted is not enough, since users/customers are more concerned about the deleted items themselves.
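For illustration, the kind of output being requested would look roughly like the hypothetical mock-up below. This is not actual command output; the brick and file names are taken from the reproduction further down:

[root@hicks ~]# gluster volume heal rep info

Brick hicks.lab.eng.blr.redhat.com:/home/rep
Number of entries: 1
/testdir/new_file.1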

Version-Release number of selected component (if applicable):
------------------------------------------------------------
[root@darrel ~]# rpm -qa | grep gluster
glusterfs-devel-3.3.0.3rhs-31.el6rhs.x86_64
glusterfs-3.3.0.3rhs-31.el6rhs.x86_64
glusterfs-server-3.3.0.3rhs-31.el6rhs.x86_64

[root@darrel ~]# gluster --version
glusterfs 3.3.0.3rhs built on Sep 27 2012 07:13:27


How reproducible:
------------------
often

Steps to Reproduce:
-----------------
1. Create a pure replicate volume (1x2: 2 servers, 1 brick on each server). Start the volume.
2. Create a FUSE mount.
3. Create a file from the mount point.
4. Kill all gluster processes on server1 (killall -r -9 gluster).
5. From the mount point, delete the file that was created in step 3.

6. Execute "gluster volume heal <volume_name> info" on server2. Ideally it should list the entries that were deleted; currently it lists only the parent directory under which the entries were deleted.

7. Start glusterd on server1. This will start the brick process of the volume, the NFS server, and the self-heal daemon on server1.

8. Execute "gluster volume heal <volume_name> info healed" on server2 (see the consolidated sketch after this list).
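The steps above can be scripted roughly as follows. This is only a sketch: the volume name, brick paths and host names are taken from this report, and /mnt/rep is an assumed client mount point.

# On server1 (darrel): create and start the 1x2 replicate volume
gluster volume create rep replica 2 darrel.lab.eng.blr.redhat.com:/home/rep hicks.lab.eng.blr.redhat.com:/home/rep
gluster volume start rep

# On the client (flea): mount the volume and create a file
mkdir -p /mnt/rep
mount -t glusterfs darrel.lab.eng.blr.redhat.com:/rep /mnt/rep
mkdir /mnt/rep/testdir
dd if=/dev/urandom of=/mnt/rep/testdir/new_file.1 bs=512k count=1

# On server1: take the brick offline by killing all gluster processes
killall -r -9 gluster

# On the client: delete the file while server1's brick is down
rm -f /mnt/rep/testdir/new_file.1

# On server2 (hicks): check what is reported as pending self-heal
gluster volume heal rep info

# On server1: restart glusterd (brings the brick, nfs and self-heal daemon back up)
glusterd

# On server2 (hicks): check what is reported as healed
gluster volume heal rep info healed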
  
Actual results:-
--------------
[root@darrel ~]# gluster volume info rep
 
Volume Name: rep
Type: Replicate
Volume ID: 4d745ea9-8197-4349-90db-0ee4c73b782f
Status: Created
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: darrel.lab.eng.blr.redhat.com:/home/rep
Brick2: hicks.lab.eng.blr.redhat.com:/home/rep
Options Reconfigured:
global-option-version: 0


create the file from the mount point:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[10/04/12 - 01:35:14 root@flea rep]# mkdir testdir

[10/04/12 - 01:35:57 root@flea rep]# dd if=/dev/urandom of=testdir/new_file.1 bs=512k count=1
1+0 records in
1+0 records out
524288 bytes (524 kB) copied, 0.134031 s, 3.9 MB/s

[10/04/12 - 01:36:02 root@flea rep]# ls -lh testdir/
total 512K
-rw-r--r--. 1 root root 512K Oct  4 01:36 new_file.1


Kill all gluster process on Server1:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@darrel ~]#  killall -r -9 gluster

[root@darrel ~]# ps -ef | grep gluster
root       455 32761  0 01:39 pts/0    00:00:00 grep gluster


remove the file from the mount point:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[10/04/12 - 01:38:57 root@flea rep]# rm -f testdir/new_file.1

[10/04/12 - 01:39:46 root@flea rep]# ls -lh testdir/
total 0


Server2 command execution output:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@hicks ~]# ls -lh /home/rep/.glusterfs/indices/xattrop/
total 0
----------. 2 root root 0 Oct  4 01:36 18597591-f89f-4431-b0fe-fd13462b2b63
----------. 2 root root 0 Oct  4 01:36 xattrop-fe658b12-6c99-48a0-8743-c36b69283233

[root@hicks ~]# gluster volume heal rep info
Heal operation on volume rep has been successful

Brick darrel.lab.eng.blr.redhat.com:/home/rep
Number of entries: 0

Brick hicks.lab.eng.blr.redhat.com:/home/rep
Number of entries: 1
/testdir


start glusterd on server1:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@darrel ~]# glusterd

execute "volume heal" on server2:-
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[root@hicks ~]# gluster volume heal rep 
Heal operation on volume rep has been successful

[root@hicks ~]# gluster volume heal rep info
Heal operation on volume rep has been successful

Brick darrel.lab.eng.blr.redhat.com:/home/rep
Number of entries: 0

Brick hicks.lab.eng.blr.redhat.com:/home/rep
Number of entries: 0

[root@hicks ~]# ls -lh /home/rep/.glusterfs/indices/xattrop/
total 0
[root@hicks ~]# gluster volume heal rep info healed
Heal operation on volume rep has been successful

Brick darrel.lab.eng.blr.redhat.com:/home/rep
Number of entries: 0

Brick hicks.lab.eng.blr.redhat.com:/home/rep
Number of entries: 1
at                    path on brick
-----------------------------------
2012-10-04 01:40:38 /testdir


Expected results:
-------------------
The command should list the entries that were deleted. Currently it lists only the parent directory under which the entries were deleted.

Comment 3 Pranith Kumar K 2016-01-12 15:35:38 UTC
Whenever a file is deleted as part of a heal, this is logged by the process that performed the heal. Over the past years, people have been more interested in knowing how long a heal will take and how many more entries remain to be healed than in the exact list of files that need heal. Since the information about the deleted files/directories can be obtained from the logs, we are not going to fix this in the CLI.
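A hedged illustration of the log-based approach mentioned above: /var/log/glusterfs/glustershd.log is the default self-heal daemon log location, but the exact message wording differs between releases, so the search terms below are only a starting point.

# Run on the node whose self-heal daemon performed the heal; brick logs
# under /var/log/glusterfs/bricks/ may also record the deletions.
grep -iE "selfheal|unlink|rmdir" /var/log/glusterfs/glustershd.log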

