Bug 1375017

Summary: RHV Self Hosted fails to deploy with gluster
Product: Red Hat Quickstart Cloud Installer
Component: Installation - RHEV
Version: 1.0
Status: CLOSED NOTABUG
Severity: high
Priority: unspecified
Reporter: James Olin Oden <joden>
Assignee: John Matthews <jmatthew>
QA Contact: Sudhir Mallamprabhakara <smallamp>
Docs Contact: Dan Macpherson <dmacpher>
CC: bmorriso, bthurber, tsanders
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2016-09-12 20:48:17 UTC

Description James Olin Oden 2016-09-11 19:30:58 UTC
Description of problem:
The gluster server was set up with three gluster volumes of 100 GB apiece.
The volumes were set to be used for the three mount points (vms, exports and self_hosted). At some point the deployment died saying there wasn't enough space.   The error message on the host was:

Storage domain for self hosted engine is too small: you should have at least 20.00 GB free

The problem was that the engine had already been deployed and had only consumed about 50 GB of the self_hosted volume, leaving 50 GB free.
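
A quick way to cross-check what the deployment actually sees is to compare the free space on the mounted gluster volume against its brick's backing filesystem; the paths below are hypothetical examples, not taken from this deployment:

    # Hypothetical paths; substitute the real brick and volume mount points.
    df -h /bricks/self_hosted                  # backing filesystem for the brick
    df -h /mnt/glusterfs/self_hosted           # the mounted gluster volume the engine uses
    gluster volume status self_hosted detail   # reports brick sizes and free space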

Version-Release number of selected component (if applicable):
QCI-1.0-RHEL-7-20160902.5

How reproducible:
Every time

Steps to Reproduce:
1.  Create a gluster server and serve out three gluster volumes of 100 GB
    apiece (a rough sketch follows these steps).
2.  Do self hosted RHV deployment with just one host.
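
A rough sketch of step 1, assuming XFS-formatted bricks under /bricks on a single gluster server named gluster1 (device, host, and path names are hypothetical):

    # On the gluster server (hypothetical host gluster1); one 100 GB disk per volume.
    mkfs.xfs /dev/vdb
    mkdir -p /bricks/vms && mount /dev/vdb /bricks/vms
    mkdir -p /bricks/vms/brick
    gluster volume create vms gluster1:/bricks/vms/brick
    gluster volume start vms
    # Repeat for the exports and self_hosted volumes on their own 100 GB disks.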

Actual results:
It dies saying a remote command returned 1 instead of zero.   When you look
through the logs, you will find the message:

Storage domain for self hosted engine is too small: you should have at least 20.00 GB free

Expected results:
The deployment to succeed.

Comment 6 James Olin Oden 2016-09-12 20:48:17 UTC
I figured out that the virtual disks were not being mounted, so the gluster volumes' backing store was actually the root filesystem.   Hence, any time I changed the size of the virtual disks it had no effect on how much space the gluster volumes had.   When I changed my provisioning script to do a "mount -a" after the fstab had been updated and before the gluster volumes were created, the deployment succeeded.
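
In other words, the brick filesystems have to be mounted before the bricks are created. Roughly, the provisioning order became something like the following (device names and paths are hypothetical):

    # Append the brick mounts to fstab, then mount them before creating any volumes.
    echo '/dev/vdb /bricks/self_hosted xfs defaults 0 0' >> /etc/fstab
    mount -a                     # without this, the brick lands on the root filesystem
    df -h /bricks/self_hosted    # confirm the 100 GB disk is mounted, not /
    gluster volume create self_hosted gluster1:/bricks/self_hosted/brick
    gluster volume start self_hosted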

It did hit a "puppet sync" issue, but after hitting resume a few times, the deployment carried on and succeeded.