Bug 1273215 - Data Tiering: Promotions fail when a brick of the EC (disperse) cold tier is down
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified OS: Unspecified
Priority: urgent Severity: high
Assigned To: Dan Lambright
bugs@gluster.org
Depends On: 1272334
Blocks: 1272341
Reported: 2015-10-19 20:32 EDT by Dan Lambright
Modified: 2016-06-16 09:40 EDT (History)
2 users

See Also:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1272334
Environment:
Last Closed: 2016-06-16 09:40:47 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Dan Lambright 2015-10-19 20:32:33 EDT
+++ This bug was initially created as a clone of Bug #1272334 +++

Description of problem:
======================
When a brick of an EC cold tier is down, promotions fail with the following errors:
[2015-10-16 06:43:59.589013] E [socket.c:2278:socket_connect_finish] 0-ecvol-client-5: connection to 10.70.34.43:49243 failed (Connection refused)
[2015-10-16 06:44:00.128952] E [MSGID: 109037] [tier.c:939:tier_process_brick] 0-tier: Failed getting journal_mode of sql db /rhs/brick3/ecvol/.glusterfs/ecvol.db
[2015-10-16 06:44:00.128989] E [MSGID: 109087] [tier.c:1033:tier_build_migration_qfile] 0-ecvol-tier-dht: Brick query failed



While the error itself makes sense, this scenario must be handled: the db can be built, or vetoed, using the remaining up-and-running bricks as long as EC quorum is not breached.

For example, in a 4+2 EC cold tier, even if 2 bricks are down the data is still available because the quorum of 4 bricks is met; hence the db should be built and used based on those 4 bricks.
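The requested behavior amounts to a quorum check over per-brick query results. A minimal sketch of that logic (hypothetical names, not GlusterFS source; assumes a 4+2 disperse layout, where any 4 of the 6 bricks satisfy quorum):

```python
# Hypothetical sketch: build the migration query file from whichever
# cold-tier bricks are reachable, and abort only when the number of
# reachable bricks falls below the EC quorum (the data-brick count).

def build_migration_qfile(bricks, query_brick, data_bricks=4):
    """bricks: list of brick ids; query_brick: callable returning True if
    the brick's sql db was queried successfully, False if unreachable."""
    succeeded = [b for b in bricks if query_brick(b)]
    quorum = data_bricks  # in 4+2 EC, any 4 of the 6 bricks suffice
    if len(succeeded) < quorum:
        raise RuntimeError(
            f"only {len(succeeded)}/{len(bricks)} bricks reachable; "
            f"EC quorum of {quorum} not met, aborting migration")
    return succeeded  # build/veto the db from these bricks only

# Example: bricks 5 and 6 down, quorum of 4 still met
up = {1, 2, 3, 4}
print(build_migration_qfile([1, 2, 3, 4, 5, 6], lambda b: b in up))
# → [1, 2, 3, 4]
```

With only three bricks up the same call raises instead of returning, which matches the requested abort-below-quorum behavior rather than the original abort-on-any-failure behavior.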

Version-Release number of selected component (if applicable):
============================================================
glusterfs-server-3.7.5-0.22.gitb8ba012.el7.centos.x86_64



Steps to Reproduce:
=================
1. Create an EC 4+2 volume and start it.
2. Create a file f1, then bring down 1 or 2 bricks.
3. Modify f1.
4. Attach a tier and modify f1 to heat it.
5. Create a new file h1 and wait for it to be demoted.
6. Touch/modify h1 after it is demoted.

Actual results:
===================
Both f1 and h1 fail to get promoted.

--- Additional comment from nchilaka on 2015-10-16 03:18:30 EDT ---

logs and sosreport available @ /home/repo/sosreports/nchilaka/bug.1272334
[nchilaka@rhsqe-repo bug.1272334]$ hostname
rhsqe-repo.lab.eng.blr.redhat.com
Comment 1 Vijay Bellur 2015-10-19 20:48:23 EDT
REVIEW: http://review.gluster.org/12397 (cluster/tier do not abort migration if a single brick is down) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)
Comment 2 Vijay Bellur 2015-10-19 21:55:12 EDT
REVIEW: http://review.gluster.org/12397 (cluster/tier do not abort migration if a single brick is down) posted (#2) for review on master by Dan Lambright (dlambrig@redhat.com)
Comment 3 Vijay Bellur 2015-10-20 13:16:47 EDT
COMMIT: http://review.gluster.org/12397 committed in master by Dan Lambright (dlambrig@redhat.com) 
------
commit a7b57f8a0d24d0ed1cd3a8700e52f70181000038
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Mon Oct 19 20:42:56 2015 -0400

    cluster/tier do not abort migration if a single brick is down
    
    When bricks are down, promotion/demotion should still be possible.
    For example, if an EC brick is down, the other bricks are able to
    recover the data and migrate it.
    
    Change-Id: I8e650c640bce22a3ad23d75c363fbb9fd027d705
    BUG: 1273215
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12397
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Joseph Fernandes
Comment 4 Niels de Vos 2016-06-16 09:40:47 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
