Description of problem:
=======================
In a 1 x 2 replicate volume, a server node crashed and then came back online. Even before the bricks are mounted on a valid device containing xfs, running "gluster volume start <volume_name> force" assigns "trusted.glusterfs.volume-id" to the brick path without checking whether that path is actually an xfs mount point. This results in an unsupported configuration (no xfs/LVM) and, worse, may fill up "/" of the storage node if it goes undiscovered. Refer to https://bugzilla.redhat.com/show_bug.cgi?id=860999 for a case in which the "/" file system was filled.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs 3.4.0.43.1u2rhs built on Nov 12 2013 07:38:20

How reproducible:
=================
Often

Steps to Reproduce:
===================
1. Create a 1 x 2 replicate volume with bricks mounted on xfs mount points. Do not add automount entries for these bricks in "/etc/fstab".
2. Restart node2.
3. When node2 comes back online, execute: "gluster v start <volume_name> force"

Actual results:
===============
The "trusted.glusterfs.volume-id" xattr is assigned to the brick path and the brick process is started.

Expected results:
=================
"gluster v start <volume_name> force" should check whether the specified bricks are separate mount points with xfs on LVM volumes and reject the command, or at least issue a warning.
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.