Bug 814273

Summary: Previously used brick cannot be used in a new volume; documentation needs to be updated.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Vimal Kumar <vikumar>
Component: Documentation
Assignee: Divya <divya>
Status: CLOSED CURRENTRELEASE
QA Contact: Sudhir D <sdharane>
Severity: medium
Docs Contact:
Priority: medium
Version: 2.0
CC: asriram, gluster-bugs, rwheeler, sdharane, storage-doc, vikumar
Target Milestone: ---   
Target Release: ---   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-04-10 07:17:09 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 817967    

Description Vimal Kumar 2012-04-19 14:02:07 UTC
1) Description of problem:

When creating a volume with bricks that were previously part of another volume (which has since been deleted), gluster reports that the brick has already been part of a previous volume, and the volume creation fails.
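For illustration (not part of the original report), the stale identifier can be inspected on an affected brick with getfattr; this assumes the brick path /share1 used in the reproduction below:

# getfattr -n trusted.glusterfs.volume-id -e hex /share1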

This is caused by the volume-id extended attribute present on the bricks, which, as far as I know, is a feature introduced in gluster 3.3. To add such a brick to a new volume, the volume-id has to be removed by running the following command on each brick that is intended to be reused.

# setfattr -x trusted.glusterfs.volume-id <partition>

This should be included in the documentation of Red Hat Storage.

2) Version-Release number of selected component (if applicable):

RHS 2.0 beta 1

3) How reproducible:

Always

4) Steps to Reproduce:

a) Delete all the bricks from an existing volume.
b) Delete the volume itself.
c) Try creating a new volume using the same bricks that were part of the previous volume.

<snip>
# gluster volume  remove-brick volume1 node1:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful
[root@node1-rhs2 ~]# gluster volume  remove-brick volume1 node2:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful

[root@node1-rhs2 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: fbc4d43d-f0d8-43a0-9a40-d4db3f762859
Status: Stopped
Number of Bricks: 0 x 3 = 1
Transport-type: tcp
Bricks:
Brick1: node3:/share1

[root@node1-rhs2 ~]# gluster volume delete volume1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume volume1 has been successful

[root@node1-rhs2 ~]# gluster volume create volume1 node1:/share1 node2:/share2 node3:/share3
'node1:/share1' has been part of a deleted volume with id fbc4d43d-f0d8-43a0-9a40-d4db3f762859. Please re-create the brick directory.
</snip>
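For reference, a minimal sketch of the workaround described above, using the brick paths from the reproduction; how it would be applied is an assumption, not part of the original transcript:

<snip>
(on each server, for every brick that reports the error)
# setfattr -x trusted.glusterfs.volume-id /share1

(then retry the volume creation from any node)
# gluster volume create volume1 node1:/share1 node2:/share2 node3:/share3
</snip>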

Comment 3 Ujjwala 2012-05-31 08:10:53 UTC
Verified it on the link provided above and the changes have been made in the documentation.