Bug 864638

Summary: create volume property to allow/disallow starting a volume with bricks on the root filesystem
Product: [Community] GlusterFS
Component: glusterd
Version: 3.4.6
Hardware: Unspecified
OS: Unspecified
Status: CLOSED EOL
Severity: unspecified
Priority: medium
Reporter: Shawn Heisey <redhat>
Assignee: bugs <bugs>
CC: bugs, gluster-bugs, kparthas, redhat, rwheeler
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-10-07 14:01:36 UTC

Description Shawn Heisey 2012-10-09 20:19:44 UTC
This came up in IRC discussion prior to filing bug 864611.

In normal production situations with large volumes, especially if they are distributed, gluster bricks will live on filesystems that are separate from the root filesystem.

Allowing gluster to operate on the root filesystem in a situation like the above is a major problem, because typically the root filesystem will be very small (gigabytes) while the bricks will be much larger (terabytes).  Any "didn't mean to type that directory" or "disk didn't get mounted" mistake that points a brick at a location on the root filesystem will quickly fill that filesystem and potentially cause fatal problems.

In testbeds or production situations that don't require a lot of complexity, you might actually want your brick data to live on your root filesystem.

I would propose two things.  First, when creating a volume with brick paths that are on the root filesystem, warn the user to that effect, optionally displaying an "are you sure?" prompt unless script mode is enabled.  Second, when the user (or glusterd itself on reboot) then attempts to *start* the volume, that operation should fail unless a volume property is set to allow it.  My current suggestion for the property name is allow.rootfs.  The failure message should probably mention the property name.
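
For illustration only, here is a minimal standalone sketch (not GlusterFS code) of one way the check could work: compare the device ID that stat() reports for the brick path against the device ID of "/".  The program and function names are mine, and allow.rootfs is only the property name suggested above.

    /* Hypothetical standalone sketch: report whether a path sits on the
     * same filesystem as "/".  Not GlusterFS code. */
    #include <stdio.h>
    #include <sys/stat.h>

    /* Returns 1 if path shares a device with "/", 0 if not, -1 on error. */
    static int path_on_rootfs(const char *path)
    {
        struct stat root_st, path_st;

        if (stat("/", &root_st) != 0 || stat(path, &path_st) != 0)
            return -1;

        return root_st.st_dev == path_st.st_dev;
    }

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <brick-path>\n", argv[0]);
            return 2;
        }

        int rc = path_on_rootfs(argv[1]);
        if (rc < 0) {
            perror("stat");
            return 2;
        }

        printf("%s is %son the root filesystem\n", argv[1], rc ? "" : "not ");
        return 0;
    }

A device-ID comparison like this would also tend to catch the "disk didn't get mounted" case, because an unmounted brick directory reports the device of whatever parent filesystem it actually sits on, which is often the root filesystem.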

This new feature would have to be able to seamlessly handle upgrades from previous versions where bricks do exist on the root filesystem.

If (and only if) you can reliably detect 1) that you're doing an upgrade (the volume was last handled by a previous version) AND 2) whether all of the bricks are on their own filesystems, you can automatically set the new parameter to false or true according to what you detect.  If those detections are not possible, you would have to always assume you're in an upgrade situation: if the property is missing and the volume was previously started, set it to true.
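
A rough sketch of that upgrade-time defaulting, again purely illustrative (the struct, field names, and helper are my own, not GlusterFS internals):

    /* Hypothetical sketch of the upgrade-time default described above.
     * All names here are illustrative, not GlusterFS internals. */
    #include <stdbool.h>
    #include <stdio.h>

    struct volume_info {
        bool has_allow_rootfs;    /* is the proposed property already set? */
        bool allow_rootfs;        /* its value, once set */
        bool was_started;         /* volume was started before the upgrade */
        bool any_brick_on_rootfs; /* result of checking every brick path */
        bool checks_reliable;     /* could every brick actually be checked? */
    };

    static void default_allow_rootfs(struct volume_info *vol)
    {
        if (vol->has_allow_rootfs)
            return;                       /* explicitly configured already */

        if (vol->checks_reliable)
            /* Upgrade detected and all bricks inspected: permit root-fs
             * bricks only if some brick really does live there. */
            vol->allow_rootfs = vol->any_brick_on_rootfs;
        else
            /* Cannot tell: assume an upgrade and keep a previously
             * started volume startable. */
            vol->allow_rootfs = vol->was_started;

        vol->has_allow_rootfs = true;
    }

    int main(void)
    {
        struct volume_info vol = { false, false, true, false, false };
        default_allow_rootfs(&vol);
        printf("allow.rootfs defaults to %s\n",
               vol.allow_rootfs ? "true" : "false");
        return 0;
    }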

The documentation would need to be updated in several locations.

Comment 1 Shawn Heisey 2012-10-09 20:29:52 UTC
Detecting that the volume was last handled by a previous version might be as simple as noting that the new property is not present.

Comment 3 Niels de Vos 2014-11-27 14:54:00 UTC
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will get automatically closed.

Comment 4 Shawn Heisey 2014-11-27 16:37:43 UTC
It's been a while since I did testing, but I believe that 3.4.2 (and probably newer versions as well) will happily create and start a volume even if the bricks live on the root filesystem.

For a testbed, such a setup is perfectly acceptable, so what I propose here would prevent *starting* a volume with bricks on the root filesystem, unless a volume property is set that says it's OK.  The "getting started" instructions could include a step to set that property, with an explanation of what it does and why you might not want to use it on a production volume.

Comment 5 Niels de Vos 2015-05-17 22:00:02 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 6 Kaleb KEITHLEY 2015-10-07 14:01:36 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.