Bug 1375017 - RHV Self Hosted fails to deploy with gluster
Summary: RHV Self Hosted fails to deploy with gluster
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Quickstart Cloud Installer
Classification: Red Hat
Component: Installation - RHEV
Version: 1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: John Matthews
QA Contact: Sudhir Mallamprabhakara
Docs Contact: Dan Macpherson
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-11 19:30 UTC by James Olin Oden
Modified: 2016-09-12 20:49 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-09-12 20:48:17 UTC
Target Upstream Version:



Description James Olin Oden 2016-09-11 19:30:58 UTC
Description of problem:
The gluster server was set up with three gluster volumes of 100 GB apiece.
The volumes were to be used for the three mount points (vms, exports, and self_hosted). At some point the deployment died saying there wasn't enough space. The error message on the host was:

Storage domain for self hosted engine is too small: you should have at least 20.00 GB free

The problem was that the engine had already been deployed and had only consumed about 50 GB of the self_hosted volume, leaving 50 GB free.

Version-Release number of selected component (if applicable):
QCI-1.0-RHEL-7-20160902.5

How reproducible:
Every time

Steps to Reproduce:
1.  Create a gluster server and serve out three gluster volumes of 100 GB
    apiece.
2.  Do a self-hosted RHV deployment with just one host.
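The gluster side of step 1 might look like the following sketch. The hostname and brick paths are assumptions for illustration; they are not taken from the report.

```shell
# Sketch only: serve out three gluster volumes from a single server.
# Assumes the bricks are already formatted and mounted under /bricks/<vol>.
for vol in vms exports self_hosted; do
    gluster volume create "$vol" gluster1.example.com:/bricks/"$vol"/brick
    gluster volume start "$vol"
done
```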

Actual results:
It dies saying a remote command returned 1 instead of 0. When you look
through the logs you will find the message:

Storage domain for self hosted engine is too small: you should have at least 20.00 GB free

Expected results:
The deployment to succeed.

Comment 6 James Olin Oden 2016-09-12 20:48:17 UTC
I figured out that the virtual disks were not being mounted, so the backing store for the gluster volumes was actually the root filesystem. Hence, any time I changed the size of the virtual disks it had no effect on how much space the gluster volumes had. When I changed my provisioning script to do a "mount -a" after the fstab had been updated and before the gluster volumes were created, the deployment succeeded.
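The corrected provisioning order described above can be sketched roughly as follows. The device name and brick paths are assumptions, not details from the report:

```shell
# 1. Add the brick filesystems to /etc/fstab (device/path are hypothetical):
echo '/dev/vdb1  /bricks/self_hosted  xfs  defaults  0 0' >> /etc/fstab

# 2. Mount everything in fstab BEFORE creating the gluster volumes,
#    so the bricks live on the virtual disks, not the root filesystem:
mount -a

# 3. Only then create the gluster volume on the mounted brick:
gluster volume create self_hosted gluster1.example.com:/bricks/self_hosted/brick
gluster volume start self_hosted
```

Without the `mount -a`, the brick directories sit on the root filesystem, so resizing the virtual disks never changes the space the gluster volumes report.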

It did hit a "puppet sync" issue, but after hitting resume a few times, the deployment carried on and succeeded.

