Bug 887311 - gluster volume create command creates brick directory in / of storage node if the specified directory does not exist
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Krutika Dhananjay
QA Contact: shylesh
URL:
Whiteboard:
Depends On:
Blocks: 948729
 
Reported: 2012-12-14 16:25 UTC by Patric Uebele
Modified: 2013-09-23 22:34 UTC (History)
10 users

Fixed In Version: glusterfs-3.4.0.4rhs-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 948729 (view as bug list)
Environment:
Last Closed: 2013-09-23 22:34:43 UTC
Embargoed:



Description Patric Uebele 2012-12-14 16:25:05 UTC
Description of problem:
The gluster volume create command creates the brick directory in / of the storage node if the specified directory does not exist, e.g. due to a typo. This results in an unsupported configuration (no XFS/LVM) and, worse, may fill up / of the storage node if it goes unnoticed.
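One way an administrator can catch such a typo before creating the volume is to resolve where the intended brick path would actually land. A minimal sketch of that check (the path /foo is taken from this report; the walk-up loop is an illustration, not a gluster feature):

```shell
#!/bin/sh
# Walk up from the intended brick path to its nearest existing ancestor,
# then ask df which filesystem backs it. If the answer is the root device,
# a mistyped path would silently land in /.
path=/foo                         # intended brick path from this report
while [ ! -d "$path" ]; do
    path=$(dirname "$path")       # climb until we reach a directory that exists
done
df -P "$path" | awk 'NR==2 {print $1, $6}'
```

On the affected nodes above, this prints the root LV and "/", confirming the brick would be created in the root partition.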

Version-Release number of selected component (if applicable): 2.0 Bugfix3


How reproducible:
Consistently

Steps to Reproduce:
1. gluster volume create wrongvol rhs5.eth0:/foo rhs8.eth0:/foo
Creation of volume wrongvol has been successful. Please start the volume to access data.

/foo did not exist before on either node.

2. [root@rhs13 ~]# gluster volume start wrongvol
Starting volume wrongvol has been successful
Actual results:

[root@rhs13 ~]# gluster volume info wrongvol
 
Volume Name: wrongvol
Type: Distribute
Volume ID: 9589cddc-9456-4c8a-93d2-02629a448e0f
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs5.eth0:/foo
Brick2: rhs8.eth0:/foo


[root@rhs5 ~]# df -h /foo
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G   18G   29G  39% /
[root@rhs8 ~]# df -h /foo
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
                       50G   17G   31G  36% /


Expected results:
Reject the command or at least issue a warning. Ideally, check whether the specified bricks are separate mount points backed by XFS on LVM volumes.
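The mount-point check suggested here can be sketched by comparing POSIX device IDs: a brick directory that is its own mount point reports a different st_dev than /. This is only an illustration of the idea, not glusterd's actual implementation:

```shell
#!/bin/sh
# A brick on a separate filesystem has a different device ID than /.
brick_on_root() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d /)" ]
}

# "/" itself is trivially on the root filesystem, so this branch fires:
if brick_on_root /; then
    echo "reject brick or require 'force': path is on the root partition"
fi
```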

Additional info:

Comment 2 Patric Uebele 2013-01-29 10:19:53 UTC
See also https://bugzilla.redhat.com/show_bug.cgi?id=901561

Comment 3 senaik 2013-06-28 06:33:34 UTC
Verified in Version : 3.4.0.12rhs-1.el6rhs.x86_64
=================== 

Steps : 
===== 
- Volume creation now fails with the following warning message when the specified brick directory does not exist

[ /Volume did not exist before on the nodes ]

gluster v create Volume 10.70.34.105:/Volume 10.70.34.86:/Volume 10.70.34.85:/Volume

volume create: Volume: failed: The brick 10.70.34.86:/Volume is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.

- On using 'force' as suggested in the warning, volume creation succeeds

gluster v create Volume 10.70.34.105:/Volume 10.70.34.86:/Volume 10.70.34.85:/Volume force
volume create: Volume: success: please start the volume to access data
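Beyond the root-partition warning, the ideal behavior requested in this report (bricks as dedicated XFS mount points) can be checked by hand. A sketch, assuming util-linux's findmnt is available; the path argument is whatever brick directory you intend to use:

```shell
#!/bin/sh
# Report whether a directory is itself a mount point carrying XFS
# (the recommended brick layout), rather than a plain directory
# riding on the root LV.
check_brick() {
    fstype=$(findmnt -n -o FSTYPE --target "$1" 2>/dev/null)
    mntpt=$(findmnt -n -o TARGET --target "$1" 2>/dev/null)
    if [ "$mntpt" = "$1" ] && [ "$fstype" = "xfs" ]; then
        echo "ok: $1 is a dedicated xfs mount"
    else
        echo "warning: $1 is not a dedicated xfs mount point"
        return 1
    fi
}
```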


[root@jay ~]# gluster v i Volume
 
Volume Name: Volume
Type: Distribute
Volume ID: c2e6b608-13e1-4c59-98de-752daeb4ea0f
Status: Created
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.34.105:/Volume
Brick2: 10.70.34.86:/Volume
Brick3: 10.70.34.85:/Volume


Marking it as 'Verified'.

Comment 5 Scott Haines 2013-09-23 22:34:43 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html

