Bug 1261878 - AFR: gluster v restart force heals the files, when self-heal daemon is off
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.1
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Pranith Kumar K
QA Contact: storage-qa-internal@redhat.com
Keywords: ZStream
Depends On:
Blocks:
Reported: 2015-09-10 07:11 EDT by Anil Shah
Modified: 2016-09-17 08:14 EDT
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-09-11 02:13:52 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anil Shah 2015-09-10 07:11:36 EDT
Description of problem:

Restarting a volume with "gluster volume start <volname> force" heals the files that need healing, even when self-heal-daemon is off.

Version-Release number of selected component (if applicable):

[root@darkknight ~]# rpm -qa | grep glusterfs
glusterfs-libs-3.7.1-14.el7rhgs.x86_64
glusterfs-fuse-3.7.1-14.el7rhgs.x86_64
glusterfs-3.7.1-14.el7rhgs.x86_64
glusterfs-api-3.7.1-14.el7rhgs.x86_64
glusterfs-cli-3.7.1-14.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-14.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-14.el7rhgs.x86_64
glusterfs-server-3.7.1-14.el7rhgs.x86_64
[root@darkknight ~]# gstatus --version
gstatus 0.65


How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Mount the volume on clients via FUSE/NFS
3. Kill one brick of each replica pair
4. Set self-heal-daemon, data-self-heal, metadata-self-heal, and entry-self-heal to off
5. Create some files on the mount point
6. Check gluster v heal <volname> info
7. Restart the volume with the force option
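The steps above can be sketched with the gluster CLI. The volume name and brick paths are taken from the Additional info section below; the client mount host and mount point are illustrative assumptions:

```shell
# 1. Create a 2x2 distributed-replicate volume (bricks from "Additional info")
gluster volume create testvol replica 2 \
    10.70.36.70:/rhs/brick1/b001 10.70.36.71:/rhs/brick1/b002 \
    10.70.36.46:/rhs/brick1/b003 10.70.44.13:/rhs/brick1/b004
gluster volume start testvol

# 2. Mount on a client via FUSE (host/path are assumptions)
mount -t glusterfs 10.70.36.70:/testvol /mnt/testvol

# 3. Kill one brick of each replica pair on its host; find the brick
#    PIDs with 'gluster volume status testvol'
kill -9 <brick-pid>

# 4. Turn off the self-heal daemon and all client-side heal options
gluster volume set testvol cluster.self-heal-daemon off
gluster volume set testvol cluster.data-self-heal off
gluster volume set testvol cluster.metadata-self-heal off
gluster volume set testvol cluster.entry-self-heal off

# 5. Create some files on the mount point
touch /mnt/testvol/file{1..10}

# 6. Files written while bricks were down are listed as pending heals
gluster volume heal testvol info

# 7. Restart the volume with force; with all healing off, the pending
#    entries should remain unhealed
gluster volume start testvol force
gluster volume heal testvol info
```

This is a sketch of the reported procedure, not a verbatim transcript of the original test run; it requires a live four-node cluster to execute.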

Actual results:

Files are getting healed even though self-heal-daemon is off

Expected results:

Files should not get healed

Additional info:

[root@rhs-client46 ~]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: a999d386-cf82-44ae-b459-4cba10ba2519
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.36.70:/rhs/brick1/b001
Brick2: 10.70.36.71:/rhs/brick1/b002
Brick3: 10.70.36.46:/rhs/brick1/b003
Brick4: 10.70.44.13:/rhs/brick1/b004
Options Reconfigured:
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
cluster.self-heal-daemon: on
features.quota-deem-statfs: on
performance.readdir-ahead: on
features.quota: on
features.inode-quota: on
Comment 2 Pranith Kumar K 2015-09-11 02:13:52 EDT
(In reply to Anil Shah from comment #0)
> [...]
> Options Reconfigured:
> cluster.entry-self-heal: off
> cluster.metadata-self-heal: off
> cluster.data-self-heal: off
> cluster.self-heal-daemon: on
^^^ self-heal daemon is on and only client healing is turned off.
> features.quota-deem-statfs: on
> performance.readdir-ahead: on
> features.quota: on
> features.inode-quota: on
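Per the closing comment, the "Options Reconfigured" output shows the daemon was never actually disabled: the three client-side heal translators were off, but cluster.self-heal-daemon was still on, so restarting the volume restarted glustershd, which then healed the pending entries. A minimal sketch of the distinction, using the volume name from the report:

```shell
# Client-side (mount-process) healing knobs -- these were off in the test:
gluster volume set testvol cluster.data-self-heal off
gluster volume set testvol cluster.metadata-self-heal off
gluster volume set testvol cluster.entry-self-heal off

# The self-heal daemon (glustershd) is a separate option and was still on.
# 'gluster volume start testvol force' restarts glustershd, which crawls
# and heals pending entries. To prevent that, disable the daemon too:
gluster volume set testvol cluster.self-heal-daemon off

# Verify the effective setting:
gluster volume get testvol cluster.self-heal-daemon
```

With the daemon option off as well, a forced restart should leave the pending heal entries untouched, which is the behavior the reporter expected.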
