Bug 1272334 - Data Tiering: Promotions fail when bricks of the EC (disperse) cold tier are down
Summary: Data Tiering: Promotions fail when bricks of the EC (disperse) cold tier are down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: Dan Lambright
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1272341 1273215 glusterfs-3.7.6
 
Reported: 2015-10-16 07:01 UTC by Nag Pavan Chilakam
Modified: 2015-11-17 06:00 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1272341 1273215
Environment:
Last Closed: 2015-11-17 06:00:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nag Pavan Chilakam 2015-10-16 07:01:13 UTC
Description of problem:
======================
When a brick of an EC cold tier is down, promotions fail with the following errors:
[2015-10-16 06:43:59.589013] E [socket.c:2278:socket_connect_finish] 0-ecvol-client-5: connection to 10.70.34.43:49243 failed (Connection refused)
[2015-10-16 06:44:00.128952] E [MSGID: 109037] [tier.c:939:tier_process_brick] 0-tier: Failed gettingjournal_mode of sql db /rhs/brick3/ecvol/.glusterfs/ecvol.db
[2015-10-16 06:44:00.128989] E [MSGID: 109087] [tier.c:1033:tier_build_migration_qfile] 0-ecvol-tier-dht: Brick query failed



While the errors themselves make sense, this scenario must be handled: the query db can still be built and used from the remaining up-and-running bricks, as long as EC quorum is not breached.

I.e., in a 4+2 EC cold tier, even if 2 bricks are down, the data is still available because the quorum of 4 bricks is met; hence the db should be built and used from those 4 bricks.
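
To make the expected behaviour concrete, here is a minimal sketch of such a quorum check for a 4+2 disperse layout. This is illustrative C only, not GlusterFS code; the function name and counts are made up.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: in an N+R disperse (EC) subvolume, data remains
 * readable as long as at least N = (total - redundancy) bricks are up. */
static bool ec_read_quorum_met(int total_bricks, int redundancy, int bricks_up)
{
    return bricks_up >= (total_bricks - redundancy);
}

int main(void)
{
    /* 4+2 cold tier with 2 bricks down: 4 of 6 bricks are still up. */
    if (ec_read_quorum_met(6, 2, 4))
        printf("quorum met: build the query db from the 4 live bricks\n");
    else
        printf("quorum lost: skip promotion for this cycle\n");
    return 0;
}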

Version-Release number of selected component (if applicable):
============================================================
glusterfs-server-3.7.5-0.22.gitb8ba012.el7.centos.x86_64



Steps to Reproduce:
=================
1. Create an EC 4+2 volume and start it.
2. Create a file f1, then bring down 1 or 2 bricks.
3. Modify f1.
4. Attach a tier and modify f1 to heat it.
5. Create a new file h1 and wait for it to be demoted.
6. Touch/modify h1 after it is demoted.

Actual results:
===================
Both f1 and h1 fail to get promoted.

Comment 1 Nag Pavan Chilakam 2015-10-16 07:18:30 UTC
logs and sosreport available @ /home/repo/sosreports/nchilaka/bug.1272334
[nchilaka@rhsqe-repo bug.1272334]$ hostname
rhsqe-repo.lab.eng.blr.redhat.com

Comment 2 Vijay Bellur 2015-10-20 17:26:13 UTC
REVIEW: http://review.gluster.org/12405 (cluster/tier do not abort migration if a single brick is down) posted (#1) for review on release-3.7 by Dan Lambright (dlambrig)

Comment 3 Vijay Bellur 2015-10-21 13:11:36 UTC
COMMIT: http://review.gluster.org/12405 committed in release-3.7 by Dan Lambright (dlambrig) 
------
commit 6c6b4bb361fb6fa3adc69e43d185c755b2f4c771
Author: Dan Lambright <dlambrig>
Date:   Mon Oct 19 20:42:56 2015 -0400

    cluster/tier do not abort migration if a single brick is down
    
    backport fix 12397
    
    When a brick is down, promotion/demotion should still be possible.
    For example, if an EC brick is down, the other bricks are able to
    recover the data and migrate it.
    
    > Change-Id: I8e650c640bce22a3ad23d75c363fbb9fd027d705
    > BUG: 1273215
    > Signed-off-by: Dan Lambright <dlambrig>
    > Reviewed-on: http://review.gluster.org/12397
    > Tested-by: NetBSD Build System <jenkins.org>
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Joseph Fernandes
    Signed-off-by: Dan Lambright <dlambrig>
    
    Change-Id: I6688757eaf97426c8e1ea1038c598b34bf6b8ccc
    BUG: 1272334
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/12405
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
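
(For context only: a rough sketch of the behaviour the commit subject describes, i.e. tolerating per-brick query failures instead of aborting the whole migration run. The names below are hypothetical and this is not the actual patch.)

#include <stdio.h>

/* Hypothetical stand-in for querying one brick's sqlite db for
 * promotion/demotion candidates; returns 0 on success, -1 on failure. */
static int query_brick_db(int brick_id)
{
    return (brick_id == 5) ? -1 : 0;   /* pretend brick 5 is down */
}

int main(void)
{
    int nbricks = 6;
    int failed  = 0;

    for (int i = 0; i < nbricks; i++) {
        if (query_brick_db(i) != 0) {
            /* Previously a single failure aborted the query-file build;
             * with the fix, log it and continue with the other bricks. */
            fprintf(stderr, "brick %d query failed, continuing\n", i);
            failed++;
            continue;
        }
    }

    printf("migration query file built from %d of %d bricks\n",
           nbricks - failed, nbricks);
    return 0;
}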

Comment 4 Raghavendra Talur 2015-11-17 06:00:20 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

