Bug 814273 - Previously used brick cannot be used in a new volume, documentation needs to be updated.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: Documentation
Version: 2.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Divya
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-04-19 14:02 UTC by Vimal Kumar
Modified: 2015-04-10 07:17 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-10 07:17:09 UTC



Description Vimal Kumar 2012-04-19 14:02:07 UTC
1) Description of problem:

When creating a new volume with bricks that were already used for another volume (which has since been deleted), gluster reports that the brick has already been part of a previous volume and the volume creation fails.

This is caused by the volume-id extended attribute still present on the bricks, which AFAIK is a feature introduced in gluster 3.3. In order to add such a brick to a new volume, we have to delete the volume-id using the following command on each brick which is intended to be reused.

# setfattr -x trusted.glusterfs.volume-id <partition>

This should be included in the documentation of Red Hat Storage.
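
For reference, a minimal sketch of the clean-up that allows such a brick to be reused; the brick path /share1 is only an illustration taken from the transcript below, and the trusted.gfid removal plus the deletion of the .glusterfs directory are assumptions based on the commonly used brick-reuse procedure, not steps confirmed in this report:

<snip>
# Remove the old volume id and gfid attributes, then drop the .glusterfs metadata directory:
# setfattr -x trusted.glusterfs.volume-id /share1
# setfattr -x trusted.gfid /share1
# rm -rf /share1/.glusterfs
</snip>

With those attributes and the .glusterfs directory removed on every brick, the subsequent volume create should no longer complain about the old volume id.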

2) Version-Release number of selected component (if applicable):

RHS2.0 beta 1

3) How reproducible:

Always

4) Steps to Reproduce:

a) Remove the bricks from an existing volume.
b) Delete the volume itself.
c) Try creating a new volume using the same bricks which were in the previous volume.

<snip>
# gluster volume  remove-brick volume1 node1:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful
[root@node1-rhs2 ~]# gluster volume  remove-brick volume1 node2:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful

[root@node1-rhs2 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: fbc4d43d-f0d8-43a0-9a40-d4db3f762859
Status: Stopped
Number of Bricks: 0 x 3 = 1
Transport-type: tcp
Bricks:
Brick1: node3:/share1

[root@node1-rhs2 ~]# gluster volume delete volume1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume volume1 has been successful

[root@node1-rhs2 ~]# gluster volume create volume1 node1:/share1 node2:/share2 node3:/share3
'node1:/share1' has been part of a deleted volume with id fbc4d43d-f0d8-43a0-9a40-d4db3f762859. Please re-create the brick directory.
</snip>
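
As a side note, whether the attribute is still set can be checked directly on the brick; this is just a sketch using the node1:/share1 path from the transcript above, not output captured during this reproduction:

<snip>
# getfattr -n trusted.glusterfs.volume-id -e hex /share1
</snip>

If the attribute is present, its hex value corresponds to the volume id quoted in the error message above, which appears to be what volume create checks before rejecting the brick.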

Comment 3 Ujjwala 2012-05-31 08:10:53 UTC
Verified on the link provided above; the changes have been made in the documentation.

