Bug 1293300 - [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
Summary: [Tiering] + [DHT] - Detach tier fails to migrate the files when there are corrupted objects in hot tier.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 1289228 1290965
Blocks: glusterfs-3.7.7
 
Reported: 2015-12-21 11:20 UTC by Ravishankar N
Modified: 2016-04-19 07:51 UTC
CC List: 1 user

Fixed In Version: glusterfs-3.7.7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1290965
Environment:
Last Closed: 2016-04-19 07:51:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Ravishankar N 2015-12-21 11:20:52 UTC
+++ This bug was initially created as a clone of Bug #1290965 +++

+++ This bug was initially created as a clone of Bug #1289228 +++

Description of problem:
When there are corrupted objects in the hot tier, running detach tier on the volume fails to migrate the files. Detach tier should display a message stating that there are corrupted files and that they should be recovered before performing detach tier.

When a file in the hot tier is corrupted on one subvolume of a replica pair and the other subvolume has a good copy, detach tier fails to migrate the good copy to the cold tier. Detach tier should migrate the file, since a good copy is available.

Version-Release number of selected component (if applicable):
glusterfs-3.7.5-9.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a tiered volume with both the hot and cold tiers as distributed-replicate.
2. Mount the volume using NFS and create some data.
3. Edit the files directly on the brick backend and wait for the scrubber to mark them as corrupted.
4. Run 'gluster volume detach-tier <VOLNAME> start' to detach the hot tier (a command-level sketch of these steps follows below).
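
(A command-level sketch of the steps above, assuming a 2x2 cold tier and a 2x2 hot tier; the host names, brick paths, mount point and file names below are placeholders rather than the exact ones used in this setup.)

# gluster volume create vol1 replica 2 server1:/bricks/c1 server2:/bricks/c2 server1:/bricks/c3 server2:/bricks/c4
# gluster volume start vol1
# gluster volume attach-tier vol1 replica 2 server1:/bricks/h1 server2:/bricks/h2 server1:/bricks/h3 server2:/bricks/h4
# gluster volume bitrot vol1 enable
# gluster volume bitrot vol1 scrub-frequency hourly
# mount -t nfs -o vers=3 server1:/vol1 /mnt/vol1
# dd if=/dev/urandom of=/mnt/vol1/ff1 bs=1M count=1        <-- create data from the NFS mount
# echo garbage >> /bricks/h1/ff1                           <-- corrupt one replica copy directly on whichever hot-tier brick holds it
  (wait for the signer/scrubber to set trusted.bit-rot.bad-file on that copy)
# gluster volume detach-tier vol1 start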

Actual results:
Detach tier neither demotes the files to the cold tier nor reports anything about the corrupted files. This will lead to data loss if the user removes the tier by committing the detach.

Expected results:
Detach tier should either report the corrupted files or migrate them, since a good copy is available on the other subvolume of the replica pair.

Additional info:

--- Additional comment from RamaKasturi on 2015-12-07 12:11:11 EST ---


gluster vol info output:
===========================
[root@rhs-client2 ~]# gluster vol info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 385fdb1e-1034-40ca-9a14-e892e68b500b
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client38:/bricks/brick3/h4
Brick2: rhs-client2:/bricks/brick3/h3
Brick3: rhs-client38:/bricks/brick2/h2
Brick4: rhs-client2:/bricks/brick2/h1
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client2:/bricks/brick0/c1
Brick6: rhs-client38:/bricks/brick0/c2
Brick7: rhs-client2:/bricks/brick1/c3
Brick8: rhs-client38:/bricks/brick1/c4
Options Reconfigured:
cluster.tier-mode: cache
features.ctr-enabled: on
cluster.watermark-hi: 2
cluster.watermark-low: 1
features.scrub-freq: hourly
features.scrub: Active
features.bitrot: on
performance.readdir-ahead: on



Files inside the bricks before detach tier:
=================================================
[root@rhs-client2 ~]# ls -l /bricks/brick*/h*
/bricks/brick2/h1:
total 4
-rw-r--r--. 2 root root 75 Dec  7 12:17 ff1

/bricks/brick3/h3:
total 1361028
-rw-r--r--. 2 root root         76 Dec  7 12:17 ff2
-rw-r--r--. 2 root root 1393688576 Dec  7 12:18 rhgsc-appliance005
[root@rhs-client2 ~]# getfattr -d -m . -e hex /bricks/brick2/h1/ff1
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick2/h1/ff1
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.bad-file=0x3100
trusted.bit-rot.signature=0x0102000000000000006fde5302afacc9901c00de8ffc8c7aeaa4ea094d7edfe0e216b094d3877660da
trusted.bit-rot.version=0x02000000000000005665763d00014924
trusted.gfid=0xc00e4e43eb5849618f0a0f37501f7613

[root@rhs-client2 ~]# getfattr -d -m . -e hex /bricks/brick3/h3/ff2
getfattr: Removing leading '/' from absolute path names
# file: bricks/brick3/h3/ff2
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.bad-file=0x3100
trusted.bit-rot.signature=0x0102000000000000000afb51faf10aa5d634c55290d8a3b579d1935c80c3c1a3f7f92fd812239c5ef8
trusted.bit-rot.version=0x02000000000000005665763d0001f921
trusted.gfid=0x810ff1ae3dd84d1a8422afa1283d2a78
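
(For reference, every copy that the scrubber has marked bad can be listed by querying the bad-file xattr directly on the bricks; the wildcard below assumes the hot-tier brick layout shown above.)

# getfattr -n trusted.bit-rot.bad-file -e hex /bricks/brick*/h*/* 2>/dev/null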

[root@rhs-client2 ~]# ls -l /bricks/brick*/c*
/bricks/brick0/c1:
total 2722064
---------T. 2 root root          0 Dec  7 12:14 ff2
-rw-r--r--. 2 root root         21 Dec  4 12:21 file2_hot
-rw-r--r--. 2 root root          8 Dec  4 07:42 file3
-rw-r--r--. 2 root root          8 Dec  4 07:42 file4
-rw-r--r--. 2 root root         67 Dec  7 11:40 file_demote1
-rw-r--r--. 2 root root 1393688576 Dec  7 09:36 rhgsc-appliance004
---------T. 2 root root          0 Dec  7 12:14 rhgsc-appliance005
-rw-r--r--. 2 root root 1393688576 Dec  7 09:28 rhgsc-appliance-03

/bricks/brick1/c3:
total 4083100
---------T. 2 root root          0 Dec  7 12:14 ff1
-rw-r--r--. 2 root root        237 Dec  7 09:18 file1
-rw-r--r--. 2 root root        201 Dec  7 09:12 file2
-rw-r--r--. 2 root root         51 Dec  4 12:22 file2_hot1
-rw-r--r--. 2 root root          8 Dec  4 07:42 file5
-rw-r--r--. 2 root root          8 Dec  4 07:42 file6
-rw-r--r--. 2 root root         77 Dec  7 11:40 file_demote
-rw-r--r--. 2 root root         19 Dec  7 10:58 file_demote2
-rw-r--r--. 2 root root 1393688576 Dec  7 09:31 rhgsc-appliance00
-rw-r--r--. 2 root root 1393688576 Dec  7 09:26 rhgsc-appliance-02
-rw-r--r--. 2 root root 1393688576 Dec  4 12:06 rhgsc-appliance03


gluster volume detach-tier status after starting it:
==========================================================

[root@rhs-client2 ~]# gluster volume detach-tier vol1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes            19             0             0            completed               0.00
                             10.70.36.62                3         1.3GB             3             0             0            completed              42.00


Even after the status reports the migration as completed, the corrupted files are still left behind on the hot tier (their entries on the cold-tier bricks remain 0-byte linkto files):
===============================================================================

[root@rhs-client2 ~]# ls -l /bricks/brick*/c*
/bricks/brick0/c1:
total 4083088
---------T. 2 root root          0 Dec  7 16:53 ff2
-rw-r--r--. 2 root root         21 Dec  4 12:21 file2_hot
-rw-r--r--. 2 root root          8 Dec  4 07:42 file3
-rw-r--r--. 2 root root          8 Dec  4 07:42 file4
-rw-r--r--. 2 root root         67 Dec  7 11:40 file_demote1
-rw-r--r--. 2 root root 1393688576 Dec  7 09:36 rhgsc-appliance004
-rw-r--r--. 2 root root 1393688576 Dec  7 12:17 rhgsc-appliance005
-rw-r--r--. 2 root root 1393688576 Dec  7 09:28 rhgsc-appliance-03

/bricks/brick1/c3:
total 4083100
---------T. 2 root root          0 Dec  7 16:53 ff1
-rw-r--r--. 2 root root        237 Dec  7 09:18 file1
-rw-r--r--. 2 root root        201 Dec  7 09:12 file2
-rw-r--r--. 2 root root         51 Dec  4 12:22 file2_hot1
-rw-r--r--. 2 root root          8 Dec  4 07:42 file5
-rw-r--r--. 2 root root          8 Dec  4 07:42 file6
-rw-r--r--. 2 root root         77 Dec  7 11:40 file_demote
-rw-r--r--. 2 root root         19 Dec  7 10:58 file_demote2
-rw-r--r--. 2 root root 1393688576 Dec  7 09:31 rhgsc-appliance00
-rw-r--r--. 2 root root 1393688576 Dec  7 09:26 rhgsc-appliance-02
-rw-r--r--. 2 root root 1393688576 Dec  4 12:06 rhgsc-appliance03
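
(The 0-byte mode 'T' entries for ff1 and ff2 above are linkto/stub files, i.e. the actual data of those files is still on the hot tier. Dumping the xattrs of the cold-tier copies, in the same way as was done for the hot-tier copies earlier, should show a linkto xattr pointing back at the hot-tier subvolume:)

# getfattr -d -m . -e hex /bricks/brick1/c3/ff1
# getfattr -d -m . -e hex /bricks/brick0/c1/ff2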

--- Additional comment from RamaKasturi on 2015-12-07 12:28:13 EST ---

--- Additional comment from Vijay Bellur on 2015-12-12 01:37:57 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-15 11:52:30 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-15 11:59:36 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#3) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-16 11:12:12 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#4) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-18 01:14:51 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#5) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-18 04:41:54 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#6) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2015-12-20 23:05:27 EST ---

REVIEW: http://review.gluster.org/12955 (afr: handle bad objects during lookup/inode_refresh) posted (#7) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Vijay Bellur on 2015-12-21 00:29:02 EST ---

COMMIT: http://review.gluster.org/12955 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 2b7226f9d3470d8fe4c98c1fddb06e7f641e364d
Author: Ravishankar N <ravishankar>
Date:   Sat Dec 12 11:49:20 2015 +0530

    afr: handle bad objects during lookup/inode_refresh
    
    If an object (file) is marked bad by bitrot, do not consider the brick
    on which the object is present  as a potential read subvolume for AFR
    irrespective of the pending xattr values.
    
    Also do not consider the brick containing the bad object while
    performing afr_accuse_smallfiles(). Otherwise if the bad object's size
    is bigger, we may end up considering that as the source.
    
    Change-Id: I4abc68e51e5c43c5adfa56e1c00b46db22c88cf7
    BUG: 1290965
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/12955
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
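
(With the above fix, a rough way to verify the expected behaviour on the setup described earlier would be to re-run the detach and confirm that the good replica copies of the previously stuck files are demoted; the paths below are taken from the earlier listings.)

# gluster volume detach-tier vol1 start
# gluster volume detach-tier vol1 status
# ls -l /bricks/brick0/c1/ff2 /bricks/brick1/c3/ff1        <-- should now be full files, not 0-byte linkto entries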

Comment 1 Vijay Bellur 2015-12-21 11:22:55 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 2 Vijay Bellur 2015-12-28 12:00:41 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#1) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 3 Vijay Bellur 2015-12-29 07:37:13 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#2) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 4 Vijay Bellur 2016-01-04 05:05:25 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#2) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 5 Vijay Bellur 2016-01-04 08:16:27 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#3) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 6 Vijay Bellur 2016-01-06 13:45:02 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#4) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 7 Vijay Bellur 2016-01-11 04:32:47 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#5) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 8 Vijay Bellur 2016-01-12 10:44:05 UTC
REVIEW: http://review.gluster.org/13041 (afr: handle bad objects during lookup/inode_refresh) posted (#6) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 9 Vijay Bellur 2016-01-22 05:28:38 UTC
COMMIT: http://review.gluster.org/13041 committed in release-3.7 by Pranith Kumar Karampuri (pkarampu) 
------
commit 317eea0e182f45c0ccd14b1e83c832ba435ee283
Author: Ravishankar N <ravishankar>
Date:   Sat Dec 12 11:49:20 2015 +0530

    afr: handle bad objects during lookup/inode_refresh
    
    Backport of  http://review.gluster.org/12955,
    http://review.gluster.org/#/c/13077/ and http://review.gluster.org/#/c/13185/
    
    If an object (file) is marked bad by bitrot, do not consider the brick
    on which the object is present  as a potential read subvolume for AFR
    irrespective of the pending xattr values.
    
    Also do not consider the brick containing the bad object while
    performing afr_accuse_smallfiles(). Otherwise if the bad object's size
    is bigger, we may end up considering that as the source.
    
    Change-Id: I4abc68e51e5c43c5adfa56e1c00b46db22c88cf7
    BUG: 1293300
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/13041
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>

Comment 10 Vijay Bellur 2016-01-25 11:39:42 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#3) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 11 Vijay Bellur 2016-01-26 02:43:16 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#4) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 12 Vijay Bellur 2016-01-27 05:43:04 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#5) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 13 Vijay Bellur 2016-01-27 05:49:37 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#6) for review on release-3.7 by Ravishankar N (ravishankar)

Comment 14 Vijay Bellur 2016-02-18 05:59:07 UTC
REVIEW: http://review.gluster.org/13099 (tests: handle bad objects during lookup/inode_refresh) posted (#7) for review on release-3.7 by Venky Shankar (vshankar)

Comment 15 Ravishankar N 2016-03-11 04:30:19 UTC
Moving to POST as the .t (regression test) is not yet merged.

Comment 16 Mike McCune 2016-03-28 23:24:30 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions

Comment 17 Kaushal 2016-04-19 07:51:26 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.7, please open a new bug report.

glusterfs-3.7.7 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-February/025292.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

