Bug 1290965

Summary: [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
Product: [Community] GlusterFS
Component: replicate
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Version: mainline
Hardware: Unspecified
OS: Unspecified
Reporter: Ravishankar N <ravishankar>
Assignee: Ravishankar N <ravishankar>
CC: asrivast, bugs, dlambrig, josferna, knarra, nbalacha, nchilaka, sankarshan
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Type: Bug
Clone Of: 1289228
Last Closed: 2016-06-16 13:50:01 UTC
Bug Depends On: 1289228    
Bug Blocks: 1293300    

Description Ravishankar N 2015-12-12 06:36:12 UTC
+++ This bug was initially created as a clone of Bug #1289228 +++

Description of problem:
When there are corrupted objects in the hot tier, running detach tier on the volume fails to migrate the files. Detach tier should display a message stating that there are corrupted files and that they should be recovered before the detach is performed.

When one subvolume of a replica pair in the hot tier holds a corrupted file and the other subvolume has a good copy, detach tier fails to migrate the good files to the cold tier. Detach tier should migrate the files, since a good copy of each file is available.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume with both hot and cold tier as distribute replicate.
2. Mount the volume using NFS and create some data.
3. Edit the files from backend and wait for scrubber to mark it as corrupted files.
4. Now run the command 'gluster volume detach tier start' to detach the hot tier.
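   The steps above as a rough shell sketch (host names are illustrative; volume, brick, and file names loosely follow the vol info and brick listings below; the CLI syntax is the 3.7-era tiering/bitrot syntax used elsewhere in this report):

   # 1. Create a 2x2 distribute-replicate volume and attach a 2x2 hot tier
   gluster volume create vol1 replica 2 \
       server1:/bricks/brick0/c1 server2:/bricks/brick0/c2 \
       server1:/bricks/brick1/c3 server2:/bricks/brick1/c4
   gluster volume start vol1
   gluster volume attach-tier vol1 replica 2 \
       server1:/bricks/brick2/h1 server2:/bricks/brick2/h2 \
       server1:/bricks/brick3/h3 server2:/bricks/brick3/h4

   # Enable bitrot detection with an hourly scrub so corruption gets flagged
   gluster volume bitrot vol1 enable
   gluster volume bitrot vol1 scrub-frequency hourly

   # 2. Mount the volume over NFS and create some data
   mount -t nfs -o vers=3 server1:/vol1 /mnt/vol1
   dd if=/dev/urandom of=/mnt/vol1/ff1 bs=1K count=1

   # 3. Corrupt one copy directly on a hot-tier brick and wait for the
   #    scrubber to set trusted.bit-rot.bad-file on it
   echo garbage >> /bricks/brick2/h1/ff1
   getfattr -d -m . -e hex /bricks/brick2/h1/ff1   # look for trusted.bit-rot.bad-file

   # 4. Detach the hot tier
   gluster volume detach-tier vol1 start
   gluster volume detach-tier vol1 status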

Actual results:
Detach tier neither demotes the files to the cold tier nor reports anything about the corrupted files. This will lead to data loss if the user removes the tier by committing the detach.

Expected results:
Detach tier should either report the corrupted files or migrate them, since a good copy is available on the other subvolume of the replica pair.

Additional info:

--- Additional comment from RamaKasturi on 2015-12-07 12:11:11 EST ---


gluster vol info output:
===========================
[root@rhs-client2 ~]# gluster vol info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 385fdb1e-1034-40ca-9a14-e892e68b500b
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client38:/bricks/brick3/h4
Brick2: rhs-client2:/bricks/brick3/h3
Brick3: rhs-client38:/bricks/brick2/h2
Brick4: rhs-client2:/bricks/brick2/h1
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client2:/bricks/brick0/c1
Brick6: rhs-client38:/bricks/brick0/c2
Brick7: rhs-client2:/bricks/brick1/c3
Brick8: rhs-client38:/bricks/brick1/c4
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
cluster.watermark-hi: 2
cluster.watermark-low: 1
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on



Files inside the bricks before detach tier:
=================================================
[root@rhs-client2 ~]# ls -l /bricks/brick*/h*
/bricks/brick2/h1:
total 4
-rw-r--r--. 2 root root 75 Dec  7 12:17 ff1

/bricks/brick3/h3:
total 1361028
-rw-r--r--. 2 root root         76 Dec  7 12:17 ff2
-rw-r--r--. 2 root root 1393688576 Dec  7 12:18 rhgsc-appliance005
[root@rhs-client2 ~]# getfattr -d -m . -e hex /bricks/brick2/h1/ff1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/h1/ff1
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.bad-file=0x3100
trusted.bit-rot.signature=0x0102000000000000006fde5302afacc9901c00de8ffc8c7aeaa4ea094d7edfe0e216b094d3877660da
trusted.bit-rot.version=0x02000000000000005665763d00014924
trusted.gfid=0xc00e4e43eb5849618f0a0f37501f7613

[root@rhs-client2 ~]# getfattr -d -m . -e hex /bricks/brick3/h3/ff2
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick3/h3/ff2
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.bad-file=0x3100
trusted.bit-rot.signature=0x0102000000000000000afb51faf10aa5d634c55290d8a3b579d1935c80c3c1a3f7f92fd812239c5ef8
trusted.bit-rot.version=0x02000000000000005665763d0001f921
trusted.gfid=0x810ff1ae3dd84d1a8422afa1283d2a78

[root@rhs-client2 ~]# ls -l /bricks/brick*/c*
/bricks/brick0/c1:
total 2722064
---------T. 2 root root          0 Dec  7 12:14 ff2
-rw-r--r--. 2 root root         21 Dec  4 12:21 file2_hot
-rw-r--r--. 2 root root          8 Dec  4 07:42 file3
-rw-r--r--. 2 root root          8 Dec  4 07:42 file4
-rw-r--r--. 2 root root         67 Dec  7 11:40 file_demote1
-rw-r--r--. 2 root root 1393688576 Dec  7 09:36 rhgsc-appliance004
---------T. 2 root root          0 Dec  7 12:14 rhgsc-appliance005
-rw-r--r--. 2 root root 1393688576 Dec  7 09:28 rhgsc-appliance-03

/bricks/brick1/c3:
total 4083100
---------T. 2 root root          0 Dec  7 12:14 ff1
-rw-r--r--. 2 root root        237 Dec  7 09:18 file1
-rw-r--r--. 2 root root        201 Dec  7 09:12 file2
-rw-r--r--. 2 root root         51 Dec  4 12:22 file2_hot1
-rw-r--r--. 2 root root          8 Dec  4 07:42 file5
-rw-r--r--. 2 root root          8 Dec  4 07:42 file6
-rw-r--r--. 2 root root         77 Dec  7 11:40 file_demote
-rw-r--r--. 2 root root         19 Dec  7 10:58 file_demote2
-rw-r--r--. 2 root root 1393688576 Dec  7 09:31 rhgsc-appliance00
-rw-r--r--. 2 root root 1393688576 Dec  7 09:26 rhgsc-appliance-02
-rw-r--r--. 2 root root 1393688576 Dec  4 12:06 rhgsc-appliance03


gluster volume detach-tier status after starting it:
==========================================================

[root@rhs-client2 ~]# gluster volume detach-tier vol1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes            19             0             0            completed               0.00
                             10.70.36.62                3         1.3GB             3             0             0            completed              42.00


Even after the status reports the migration as completed, the corrupted files have not been demoted: ff1 and ff2 still appear only as zero-byte link files on the cold-tier bricks, with their data left behind on the hot tier:
===============================================================================

[root@rhs-client2 ~]# ls -l /bricks/brick*/c*
/bricks/brick0/c1:
total 4083088
---------T. 2 root root          0 Dec  7 16:53 ff2
-rw-r--r--. 2 root root         21 Dec  4 12:21 file2_hot
-rw-r--r--. 2 root root          8 Dec  4 07:42 file3
-rw-r--r--. 2 root root          8 Dec  4 07:42 file4
-rw-r--r--. 2 root root         67 Dec  7 11:40 file_demote1
-rw-r--r--. 2 root root 1393688576 Dec  7 09:36 rhgsc-appliance004
-rw-r--r--. 2 root root 1393688576 Dec  7 12:17 rhgsc-appliance005
-rw-r--r--. 2 root root 1393688576 Dec  7 09:28 rhgsc-appliance-03

/bricks/brick1/c3:
total 4083100
---------T. 2 root root          0 Dec  7 16:53 ff1
-rw-r--r--. 2 root root        237 Dec  7 09:18 file1
-rw-r--r--. 2 root root        201 Dec  7 09:12 file2
-rw-r--r--. 2 root root         51 Dec  4 12:22 file2_hot1
-rw-r--r--. 2 root root          8 Dec  4 07:42 file5
-rw-r--r--. 2 root root          8 Dec  4 07:42 file6
-rw-r--r--. 2 root root         77 Dec  7 11:40 file_demote
-rw-r--r--. 2 root root         19 Dec  7 10:58 file_demote2
-rw-r--r--. 2 root root 1393688576 Dec  7 09:31 rhgsc-appliance00
-rw-r--r--. 2 root root 1393688576 Dec  7 09:26 rhgsc-appliance-02
-rw-r--r--. 2 root root 1393688576 Dec  4 12:06 rhgsc-appliance03

--- Additional comment from RamaKasturi on 2015-12-07 12:28:13 EST ---

Comment 1 Vijay Bellur 2015-12-12 06:37:57 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Vijay Bellur 2015-12-15 16:52:30 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Vijay Bellur 2015-12-15 16:59:36 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Vijay Bellur 2015-12-16 16:12:12 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 5 Vijay Bellur 2015-12-18 06:14:51 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#5) for review on master by Ravishankar N (ravishankar)

Comment 6 Vijay Bellur 2015-12-18 09:41:54 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#6) for review on master by Ravishankar N (ravishankar)

Comment 7 Vijay Bellur 2015-12-21 04:05:27 UTC
REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#7) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 8 Vijay Bellur 2015-12-21 05:29:02 UTC
COMMIT: http://review.gluster.org/12955 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 2b7226f9d3470d8fe4c98c1fddb06e7f641e364d
Author: Ravishankar N <ravishankar>
Date:   Sat Dec 12 11:49:20 2015 +0530

    afr: handle bad objects during lookup/inode_refresh
    
    If an object (file) is marked bad by bitrot, do not consider the brick
    on which the object is present  as a potential read subvolume for AFR
    irrespective of the pending xattr values.
    
    Also do not consider the brick containing the bad object while
    performing afr_accuse_smallfiles(). Otherwise if the bad object's size
    is bigger, we may end up considering that as the source.
    
    Change-Id: I4abc68e51e5c43c5adfa56e1c00b46db22c88cf7
    BUG: 1290965
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/12955
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
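
A rough way to exercise the behaviour this patch is meant to fix, reusing the volume and file names from the report above (paths and expected outcomes here are a sketch under those assumptions, not output from an actual run):

# One replica still carries the bad-object marker on its hot-tier brick...
getfattr -n trusted.bit-rot.bad-file -e hex /bricks/brick2/h1/ff1

# ...but reads through the mount should now be served from the good copy,
# since AFR no longer picks a brick holding a bad object as a read subvolume
md5sum /mnt/vol1/ff1

# Detaching the hot tier should then be able to demote ff1 and ff2
gluster volume detach-tier vol1 start
gluster volume detach-tier vol1 status

# After migration the cold-tier bricks should hold the real file contents,
# not the zero-byte ---------T link files seen earlier in this report
ls -l /bricks/brick0/c1 /bricks/brick1/c3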

Comment 9 Vijay Bellur 2015-12-21 13:10:27 UTC
REVIEW: http://review.gluster.org/13044 (tests: handle bad objects during lookup/inode_refresh) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 10 Vijay Bellur 2015-12-28 11:03:24 UTC
COMMIT: http://review.gluster.org/13044 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit a370013898585a87657ae41e4f266da5d98cc5d2
Author: Ravishankar N <ravishankar>
Date:   Mon Dec 21 12:07:51 2015 +0530

    tests: handle bad objects during lookup/inode_refresh
    
    Change-Id: I1848f0e9243c9376e0deba6738757350fe8b704a
    BUG: 1290965
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/13044
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Venky Shankar <vshankar>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 11 Niels de Vos 2016-06-16 13:50:01 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user