Bug 1272334 - Data Tiering:Promotions fail when brick of EC (disperse) cold layer are down
Product: GlusterFS
Classification: Community
Component: tiering
Platform: Unspecified OS: Unspecified
Priority: urgent Severity: high
Assigned To: Dan Lambright
Depends On:
Blocks: 1272341 1273215 glusterfs-3.7.6
Reported: 2015-10-16 03:01 EDT by nchilaka
Modified: 2015-11-17 01:00 EST (History)
CC: 4 users

See Also:
Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1272341 1273215
Last Closed: 2015-11-17 01:00:20 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None

Description nchilaka 2015-10-16 03:01:13 EDT
Description of problem:
When a brick of an EC (disperse) cold tier is down, promotions fail with the following errors:
[2015-10-16 06:43:59.589013] E [socket.c:2278:socket_connect_finish] 0-ecvol-client-5: connection to failed (Connection refused)
[2015-10-16 06:44:00.128952] E [MSGID: 109037] [tier.c:939:tier_process_brick] 0-tier: Failed gettingjournal_mode of sql db /rhs/brick3/ecvol/.glusterfs/ecvol.db
[2015-10-16 06:44:00.128989] E [MSGID: 109087] [tier.c:1033:tier_build_migration_qfile] 0-ecvol-tier-dht: Brick query failed

While the error itself is expected, this scenario must be handled: the query DB can be built, or the migration vetoed, using the remaining up-and-running bricks, as long as EC quorum is not breached.

For example, in a 4+2 EC cold tier, even with 2 bricks down the data is still fully available because the quorum of 4 bricks is met; hence the DB should be built from, and queried against, those 4 bricks.
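The quorum rule above can be sketched as a small shell snippet. This is purely illustrative: the per-brick query outcomes are simulated values, not taken from the Gluster sources.

```shell
# Illustrative sketch: in a 4+2 disperse cold tier, up to REDUNDANCY=2
# brick query failures can be tolerated before the DB build must be vetoed.
REDUNDANCY=2
# Simulated per-brick query outcomes: 1 = query succeeded, 0 = brick down.
set -- 1 1 0 1 0 1
down=0
for ok in "$@"; do
    [ "$ok" -eq 0 ] && down=$((down + 1))
done
if [ "$down" -le "$REDUNDANCY" ]; then
    echo "quorum held ($down down): build query file from reachable bricks"
else
    echo "quorum lost ($down down): abort this migration cycle"
fi
```

With two of six bricks down the first branch is taken, matching the expectation that promotion should still proceed.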

Version-Release number of selected component (if applicable):

Steps to Reproduce:
1. Create an EC 4+2 volume and start it.
2. Create a file f1, then bring down 1 or 2 bricks.
3. Modify f1.
4. Attach a tier and modify f1 to heat it.
5. Create a new file h1 and wait for it to be demoted.
6. Touch/modify h1 after it is demoted.
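A possible command transcript for the steps above follows. The volume name, host, brick paths, mount point, and the way the brick process is killed are all assumptions for illustration, not taken from the report.

```shell
# Hypothetical reproduction transcript (names and paths are assumptions).
HOST=server1
gluster volume create ecvol disperse 6 redundancy 2 \
    $HOST:/rhs/brick{1..6}/ecvol force
gluster volume start ecvol
mount -t glusterfs $HOST:/ecvol /mnt/ecvol

echo data > /mnt/ecvol/f1                 # step 2: create f1
kill "$(pgrep -f /rhs/brick1/ecvol)"      # one way to bring down a cold brick
echo more >> /mnt/ecvol/f1                # step 3: modify f1

gluster volume attach-tier ecvol replica 2 \
    $HOST:/rhs/hot1/ecvol $HOST:/rhs/hot2/ecvol   # step 4: attach hot tier
echo hot >> /mnt/ecvol/f1                 # heat f1 so it should promote

echo new > /mnt/ecvol/h1                  # step 5: create h1, wait for demotion
touch /mnt/ecvol/h1                       # step 6: touch h1 after demotion
```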

Actual results:
Both f1 and h1 fail to get promoted.
Comment 1 nchilaka 2015-10-16 03:18:30 EDT
Logs and sosreport are available at /home/repo/sosreports/nchilaka/bug.1272334
[nchilaka@rhsqe-repo bug.1272334]$ hostname
Comment 2 Vijay Bellur 2015-10-20 13:26:13 EDT
REVIEW: http://review.gluster.org/12405 (cluster/tier do not abort migration if a single brick is down) posted (#1) for review on release-3.7 by Dan Lambright (dlambrig@redhat.com)
Comment 3 Vijay Bellur 2015-10-21 09:11:36 EDT
COMMIT: http://review.gluster.org/12405 committed in release-3.7 by Dan Lambright (dlambrig@redhat.com) 
commit 6c6b4bb361fb6fa3adc69e43d185c755b2f4c771
Author: Dan Lambright <dlambrig@redhat.com>
Date:   Mon Oct 19 20:42:56 2015 -0400

    cluster/tier do not abort migration if a single brick is down
    backport fix 12397
    When bricks are down, promotion/demotion should still be possible.
    For example, if an EC brick is down, the other bricks are able to
    recover the data and migrate it.
    > Change-Id: I8e650c640bce22a3ad23d75c363fbb9fd027d705
    > BUG: 1273215
    > Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    > Reviewed-on: http://review.gluster.org/12397
    > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    > Tested-by: Gluster Build System <jenkins@build.gluster.com>
    > Reviewed-by: Joseph Fernandes
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Change-Id: I6688757eaf97426c8e1ea1038c598b34bf6b8ccc
    BUG: 1272334
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/12405
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
Comment 4 Raghavendra Talur 2015-11-17 01:00:20 EST
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
