Bug 1065551 - Unable to add bricks to replicated volume
Summary: Unable to add bricks to replicated volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: SATHEESARAN
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-14 22:49 UTC by nik
Modified: 2014-11-11 08:27 UTC
CC List: 4 users

Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-11-11 08:27:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description nik 2014-02-14 22:49:35 UTC
Description of problem:

Unable to add new bricks to an existing replicated volume.

--

Version-Release number of selected component (if applicable):

Running Red Hat Enterprise Linux Everything release 7.0 Beta (Maipo).  The following packages are installed:

glusterfs.x86_64                  3.4.2-1.el7                    @glusterfs-epel
glusterfs-cli.x86_64              3.4.2-1.el7                    @glusterfs-epel
glusterfs-fuse.x86_64             3.4.2-1.el7                    @glusterfs-epel
glusterfs-libs.x86_64             3.4.2-1.el7                    @glusterfs-epel
glusterfs-server.x86_64           3.4.2-1.el7                    @glusterfs-epel

--

How reproducible:

I can reproduce this when starting with a two-brick volume (two servers) and adding two more, or when starting with a four-brick volume, removing two bricks, and then adding two new bricks (a sketch of this second path follows the steps below).

--

Steps to Reproduce:

gluster> volume create vol1 replica 2 rhel1:/gluster/vol1 rhel2:/gluster/vol1 force
volume create: vol1: success: please start the volume to access data

gluster> volume start vol1
volume start: vol1: success

gluster> volume info vol1
Volume Name: vol1
Type: Replicate
Volume ID: c8f6217d-7c3b-45f0-b555-25c1f32512fc
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhel1:/gluster/vol1
Brick2: rhel2:/gluster/vol1

gluster> volume add-brick vol1 replica 2 rhel3:/gluster/vol1 rhel4:/gluster/vol1
volume add-brick: failed: 
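
For the second reproduction path described above, a minimal sketch (the same four hosts are assumed; the */vol1-new brick paths are hypothetical, since re-using the removed brick paths would be rejected because of leftover extended attributes):

gluster> volume create vol1 replica 2 rhel1:/gluster/vol1 rhel2:/gluster/vol1 rhel3:/gluster/vol1 rhel4:/gluster/vol1 force
gluster> volume start vol1
gluster> volume remove-brick vol1 replica 2 rhel3:/gluster/vol1 rhel4:/gluster/vol1 force
gluster> volume add-brick vol1 replica 2 rhel3:/gluster/vol1-new rhel4:/gluster/vol1-new
volume add-brick: failed: 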

--

Actual results:

The add-brick operation failed, and the CLI printed an empty error message.

--

Expected results:

The two additional bricks (on two new servers) are added to the volume.

--

Additional info:

The following are logs covering the above steps from start to finish; glusterd was started on each host via `glusterd --debug`.

host 1 (rhel1): http://ur1.ca/gmn8r
host 2 (rhel2): http://ur1.ca/gmn91
host 3 (rhel3): http://ur1.ca/gmn99
host 4 (rhel4): http://ur1.ca/gmn9h

Comment 1 nik 2014-02-14 22:59:14 UTC
Note: /gluster is a directory on /; it is not its own volume.

--

[root@rhel1 ~]# df /gluster/vol1
Filesystem     1K-blocks   Used Available Use% Mounted on
/dev/sda3       19325952 902340  18423612   5% /

[root@rhel2 ~]# df /gluster/vol1
Filesystem     1K-blocks   Used Available Use% Mounted on
/dev/sda3       19325952 903156  18422796   5% /

[root@rhel3 ~]# df /gluster/vol1
df: '/gluster/vol1': No such file or directory

[root@rhel4 ~]# df /gluster/vol1
df: '/gluster/vol1': No such file or directory
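
As a quick check, comparing device numbers shows when a brick path lives on the root filesystem; this is only a sketch of the idea (glusterd's own root-partition check is implemented differently, and the numbers shown are illustrative):

[root@rhel1 ~]# stat -c '%n %d' / /gluster/vol1
/ 2051
/gluster/vol1 2051

Both paths report the same device number, so /gluster/vol1 is on the root partition.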

Comment 2 nik 2014-02-18 21:37:38 UTC
I tried this on a CentOS 6.5 host to make sure it's not isolated to RHEL 7 Beta, and got the same behavior.

I discovered how to fix the actual brick-add problem: I created a new physical disk and mounted it on /gluster. At that point, adding the new bricks to my volume worked fine.
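
A minimal sketch of that workaround, assuming the new disk appears as /dev/sdb (the device name and the choice of xfs are hypothetical):

[root@rhel3 ~]# mkfs.xfs /dev/sdb
[root@rhel3 ~]# mkdir -p /gluster
[root@rhel3 ~]# mount /dev/sdb /gluster
[root@rhel3 ~]# echo '/dev/sdb /gluster xfs defaults 0 0' >> /etc/fstab
[root@rhel3 ~]# mkdir -p /gluster/vol1

After repeating this on rhel4, the add-brick from the original report worked without needing 'force'.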

--

There is still one problem: the actual reason for the failure was not shown on the CLI. This should definitely be fixed.

contents of rhel1:/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-02-18 16:33:55.591046] I [glusterd-brick-ops.c:370:__glusterd_handle_add_brick] 0-management: Received add brick req
[2014-02-18 16:33:55.591185] I [glusterd-brick-ops.c:417:__glusterd_handle_add_brick] 0-management: replica-count is 2

contents of rhel3:/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2014-02-18 16:33:54.406180] E [glusterd-op-sm.c:3719:glusterd_op_ac_stage_op] 0-management: Stage failed on operation 'Volume Add brick', Status : -1
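
The real error only surfaces on the peer where staging failed, so it has to be dug out of that peer's log by hand; for example (a sketch, assuming the default log location):

[root@rhel3 ~]# grep 'Stage failed' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log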

--

Another issue: there is a typo in this message; "is is" should be "is":

volume create: vol1: failed: The brick rhel1:/gluster/vol1 is is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.

Comment 3 Anand Avati 2014-03-06 11:10:36 UTC
REVIEW: http://review.gluster.org/7198 (glusterd: Fixed typo in console message during volume create) posted (#1) for review on master by Satheesaran Sundaramoorthi (satheesaran)

Comment 4 SATHEESARAN 2014-03-06 11:50:55 UTC
I have tried to reproduce the first issue with CentOS 6.5, and I get the appropriate error messages:
[root@my-centos gluster-rpms]# gluster volume create repvol replica 2 192.168.122.31:/exp/1 192.168.122.31:/exp/2 force
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
volume create: repvol: success: please start the volume to access data

[root@my-centos gluster-rpms]# gluster volume start repvol
volume start: repvol: success
[root@my-centos gluster-rpms]# gluster volume add-brick repvol replica 2 192.168.122.31:/exp/3 192.168.122.31:/exp/4
volume add-brick: failed: The brick 192.168.122.31:/exp/3 is is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.

[root@my-centos gluster-rpms]# rpm -qa | grep gluster
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64

For the typo in the error message, I have sent a patch here:
http://review.gluster.org/#/c/7198/

I couldn't reproduce the first issue.

Comment 5 Anand Avati 2014-03-09 07:22:56 UTC
COMMIT: http://review.gluster.org/7198 committed in master by Vijay Bellur (vbellur) 
------
commit f1c4c9e6d47b637939b62b473178e1c3095651fc
Author: Satheesaran <satheesaran>
Date:   Thu Mar 6 15:40:31 2014 +0530

    glusterd: Fixed typo in console message during volume create
    
    While creating a volume, if the brick is created on the root
    partition, an error message is shown.
    
    This error message contained "is" twice in a row; one of the
    two has been removed.
    
    Change-Id: I0d83f0feccda34989f7e2b97041d1f15ec9e2f00
    BUG: 1065551
    Signed-off-by: Satheesaran <satheesaran>
    Reviewed-on: http://review.gluster.org/7198
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 6 Niels de Vos 2014-09-22 12:35:53 UTC
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release solves this bug report for you. If the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
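
A minimal sketch of picking up the beta and re-testing, assuming a yum-based host with the same glusterfs-epel repository used in the original report (the repository name and package availability are assumptions):

[root@rhel1 ~]# yum --enablerepo=glusterfs-epel update 'glusterfs*'
[root@rhel1 ~]# gluster --version

after which the add-brick steps from the original description can be repeated to confirm that a meaningful error message is printed.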

Comment 7 Niels de Vos 2014-11-11 08:27:48 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

