Bug 814273 - Previously used brick cannot be used in a new volume, documentation needs to be updated.
Status: CLOSED CURRENTRELEASE
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: Documentation
Version: 2.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Divya
QA Contact: Sudhir D
Blocks: 817967
Reported: 2012-04-19 10:02 EDT by Vimal Kumar
Modified: 2015-04-10 03:17 EDT
CC List: 6 users

Doc Type: Bug Fix
Last Closed: 2015-04-10 03:17:09 EDT
Type: Bug


Attachments: None
Description Vimal Kumar 2012-04-19 10:02:07 EDT
1) Description of problem:

When creating a volume with bricks that were previously used by another volume (since deleted), gluster reports that the brick has already been part of a previous volume, and the volume creation fails.

This is because of the volume-id extended attribute present on the bricks, which AFAIK is a feature in gluster 3.3. In order to add such a brick to a new volume, we have to delete the volume-id using the following command on each brick which is intended to be reused.

# setfattr -x trusted.glusterfs.volume-id <partition>

This should be included in the documentation of Red Hat Storage.
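For illustration, the check-and-clear sequence might look like the following on one brick (the /share1 path is only an example; the commands must be run as root on each brick's backing directory before re-creating the volume):

<snip>
# getfattr -n trusted.glusterfs.volume-id -e hex /share1
# setfattr -x trusted.glusterfs.volume-id /share1
</snip>

The getfattr step simply confirms the stale attribute is present; only the setfattr -x step is required for the workaround.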

2) Version-Release number of selected component (if applicable):

RHS2.0 beta 1

3) How reproducible:

Always

4) Steps to Reproduce:

a) Delete all the bricks from an existing volume.
b) Delete the volume itself.
c) Try creating a new volume using the same bricks which were in the previous volume.

<snip>
# gluster volume  remove-brick volume1 node1:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful
[root@node1-rhs2 ~]# gluster volume  remove-brick volume1 node2:/share1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful

[root@node1-rhs2 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: fbc4d43d-f0d8-43a0-9a40-d4db3f762859
Status: Stopped
Number of Bricks: 0 x 3 = 1
Transport-type: tcp
Bricks:
Brick1: node3:/share1

[root@node1-rhs2 ~]# gluster volume delete volume1
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume volume1 has been successful

[root@node1-rhs2 ~]# gluster volume create volume1 node1:/share1 node2:/share2 node3:/share3
'node1:/share1' has been part of a deleted volume with id fbc4d43d-f0d8-43a0-9a40-d4db3f762859. Please re-create the brick directory.
</snip>
Comment 3 Ujjwala 2012-05-31 04:10:53 EDT
Verified it on the link provided above and the changes have been made in the documentation.
