Bug 1035587

Summary: "volume start force" will assign "trusted.glusterfs.volume-id" to the brick if volume-id is absent even though the brick path is not a mount point with xfs
Product: Red Hat Gluster Storage
Reporter: spandura
Component: glusterd
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED EOL
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.1
CC: vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: glusterd
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1284385 (view as bug list)
Environment:
Last Closed: 2015-12-03 17:20:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1284385

Description spandura 2013-11-28 07:10:55 UTC
Description of problem:
=======================
In a 1 x 2 replicate volume, a server node crashed and then came back online. If "gluster volume start <volume_name> force" is executed before the bricks are remounted on a valid device containing xfs, glusterd assigns the "trusted.glusterfs.volume-id" xattr to the brick path without checking whether that path is actually an xfs mount point.

This results in an unsupported configuration (no xfs/LVM) and, worse, may fill up the "/" filesystem of the storage node if it goes unnoticed.

Refer to https://bugzilla.redhat.com/show_bug.cgi?id=860999 for a case in which this filled up the "/" file system.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.43.1u2rhs built on Nov 12 2013 07:38:20

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a 1 x 2 replicate volume with the bricks mounted on xfs mount points. Do not add automount entries for these bricks to "/etc/fstab".

2. Restart node2.

3. When node2 comes online execute: "gluster v start <volume_name> force"
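The steps above correspond to roughly the following commands. Hostnames, the volume name, and the brick paths are illustrative, not taken from the report; this transcript assumes a running gluster cluster and is not meant to be run verbatim.

```
# On each node: brick filesystem mounted by hand, no /etc/fstab entry
mount -t xfs /dev/vg_bricks/lv_brick1 /rhs/brick1

# Create and start the 1 x 2 replicate volume
gluster volume create vol0 replica 2 node1:/rhs/brick1 node2:/rhs/brick1
gluster volume start vol0

# Reboot node2; its brick mount is gone because it is not in /etc/fstab
reboot

# After node2 is back, force-start the volume before remounting the brick
gluster volume start vol0 force

# The volume-id xattr is now assigned to the bare directory on the
# root filesystem of node2
getfattr -n trusted.glusterfs.volume-id -e hex /rhs/brick1
```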

Actual results:
================
glusterd assigns the "trusted.glusterfs.volume-id" xattr to the unmounted brick path and starts the brick process.

Expected results:
================
"gluster v start <volume_name> force" should check whether the specified bricks are separate mount points with xfs on LVM volumes, and reject the command or at least issue a warning.
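The check asked for above can be sketched in shell. This is a minimal illustration of the idea, not glusterd's actual code (the real fix would belong in glusterd's C brick-validation path): it verifies that a brick path is a separate mount point and sits on xfs. The `is_separate_mount`, `fs_type`, and `check_brick` names are invented for this sketch, and the LVM part of the requested check is omitted.

```shell
#!/bin/sh
# Sketch of a pre-flight brick check: a directory is a separate mount
# point when its device ID differs from its parent directory's.
is_separate_mount() {
    [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

# Filesystem type backing a path, e.g. "xfs", "tmpfs", "proc".
fs_type() {
    stat -f -c %T "$1"
}

# Warn and fail instead of silently assigning the volume-id xattr.
check_brick() {
    if ! is_separate_mount "$1"; then
        echo "brick $1 is not a separate mount point" >&2
        return 1
    fi
    if [ "$(fs_type "$1")" != "xfs" ]; then
        echo "brick $1 is not on xfs (found $(fs_type "$1"))" >&2
        return 1
    fi
    echo "brick $1 looks usable"
}

# Example: /proc is a mount point but not xfs, so this warns.
check_brick /proc || true
```

A plain directory on the root filesystem (the situation this bug describes) fails the first test, because its device ID matches its parent's.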

Comment 2 Vivek Agarwal 2015-12-03 17:20:20 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.