Bug 1118629 - Erasure coding translator
Summary: Erasure coding translator
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
Duplicates: 871986
Depends On:
Reported: 2014-07-11 08:02 UTC by Xavi Hernandez
Modified: 2014-11-11 08:36 UTC
CC: 3 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-11-11 08:36:59 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Description Xavi Hernandez 2014-07-11 08:02:13 UTC
Description of problem:

Bug to track changes related to the erasure coding translator.


Comment 1 Anand Avati 2014-07-11 09:48:32 UTC
REVIEW: http://review.gluster.org/7749 (cluster/ec: Added erasure code translator) posted (#23) for review on master by Xavier Hernandez (xhernandez@datalab.es)

Comment 2 Anand Avati 2014-07-11 09:49:31 UTC
REVIEW: http://review.gluster.org/7782 (cli/glusterd: Added support for dispersed volumes) posted (#18) for review on master by Xavier Hernandez (xhernandez@datalab.es)

Comment 3 Anand Avati 2014-07-11 17:33:47 UTC
COMMIT: http://review.gluster.org/7749 committed in master by Vijay Bellur (vbellur@redhat.com) 
commit ad112305a1c7452b13c92238b40ded80361838f3
Author: Xavier Hernandez <xhernandez@datalab.es>
Date:   Mon May 5 12:57:34 2014 +0200

    cluster/ec: Added erasure code translator
    Change-Id: I293917501d5c2ca4cdc6303df30cf0b568cea361
    BUG: 1118629
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/7749
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>

Comment 4 Anand Avati 2014-07-11 17:34:31 UTC
COMMIT: http://review.gluster.org/7782 committed in master by Vijay Bellur (vbellur@redhat.com) 
commit 1392da3e237d8ea080573909015916e3544a6d2c
Author: Xavier Hernandez <xhernandez@datalab.es>
Date:   Thu May 15 10:35:14 2014 +0200

    cli/glusterd: Added support for dispersed volumes
    Two new options have been added to the 'create' command of the cli:
        disperse [<count>] redundancy <count>
    Both are optional. A dispersed volume is created by specifying at
    least one of them. If 'disperse' is missing, or it is present but
    '<count>' is not, the number of bricks listed on the command line
    is taken as the disperse count.
    If 'redundancy' is missing, the lowest optimal value is assumed. A
    configuration is considered optimal (for most workloads) when the
    disperse count minus the redundancy count is a power of 2. If the
    resulting redundancy is 1, the volume is created normally, but if
    it is greater than 1, a warning is shown and the user must confirm
    (yes/no) to continue volume creation. If there is no optimal value
    for the given number of bricks, a warning is also shown and, if the
    user accepts, a redundancy of 1 is used.
    If 'redundancy' is specified and the resulting volume is not
    optimal, another warning is shown to the user.
    A distributed-disperse volume can be created by using a number of
    bricks that is a multiple of the disperse count.
    Change-Id: Iab93efbe78e905cdb91f54f3741599f7ea6645e4
    BUG: 1118629
    Signed-off-by: Xavier Hernandez <xhernandez@datalab.es>
    Reviewed-on: http://review.gluster.org/7782
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
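
The default-redundancy rule described in the commit message above can be sketched as follows. This is an illustrative model only, not GlusterFS source code (the function names are hypothetical), and it assumes the CLI's constraint that the brick count must exceed twice the redundancy:

```python
def is_power_of_two(n):
    """True if n is a positive power of 2 (1, 2, 4, 8, ...)."""
    return n > 0 and n & (n - 1) == 0

def default_redundancy(disperse_count):
    """Lowest redundancy r such that disperse_count - r is a power of 2.

    Only values with disperse_count > 2 * r are considered. If no such
    optimal value exists, fall back to 1 (the CLI warns in that case).
    """
    for r in range(1, (disperse_count + 1) // 2):  # keeps 2 * r < disperse_count
        if is_power_of_two(disperse_count - r):
            return r
    return 1  # no optimal value for this brick count

# e.g. 6 bricks -> redundancy 2 (6 - 2 = 4 data bricks),
#      12 bricks -> redundancy 4 (12 - 4 = 8 data bricks)
```

With 4 bricks no redundancy makes the data-brick count a power of 2, so the sketch falls back to 1, matching the warning path described above.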

Comment 5 Niels de Vos 2014-09-22 12:44:46 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves the bug for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/

Comment 6 Niels de Vos 2014-09-27 13:21:07 UTC
*** Bug 871986 has been marked as a duplicate of this bug. ***

Comment 7 Niels de Vos 2014-11-11 08:36:59 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users
