Bug 1464789 - Enable gluster based hyperconverged installation with only 1 replica for testing / small setups
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: cockpit-ovirt
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Ryan Barry
Virtualization Bugs
Depends On:
Reported: 2017-06-25 15:40 EDT by Joachim Schröder
Modified: 2017-06-26 05:21 EDT
2 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2017-06-26 05:21:11 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Joachim Schröder 2017-06-25 15:40:50 EDT
Description of problem:
If one wants to create a gluster-based hyperconverged RHV setup but is limited to a single host, the installation fails because there is currently a hard-coded requirement of 3 replicas.
Three replicas should remain the default, but the setting should be changeable by a user who explicitly wants to start with a single-hypervisor setup and perhaps extend it to a 2- or 3-node setup later.

Version-Release number of selected component (if applicable):

How reproducible:
Install RHV-H with underlying hyperconverged glusterfs.

Actual results:
Installation will not continue without specifying at least 3 hosts for gluster.

Expected results:
A warning such as "this is not recommended and not supported" if only one host is specified for glusterfs, after which the installation still continues with setting up glusterfs.
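Illustrative only: a minimal sketch of the requested warn-instead-of-fail validation. The function name, message text, and threshold handling are hypothetical and not taken from the actual cockpit-ovirt code.

```python
def validate_host_count(host_count, supported_minimum=3):
    """Validate the gluster host count for a hyperconverged setup.

    Hypothetical helper (not cockpit-ovirt's real API): instead of
    hard-failing below the supported minimum, return a warning and
    let the installation continue.

    Returns a tuple (ok, warning): ok is False only for impossible
    configurations; warning is a message string or None.
    """
    if host_count < 1:
        # Nothing can be installed without at least one host.
        return False, "At least one host is required."
    if host_count < supported_minimum:
        # Below the supported replica count: warn, but do not block.
        return True, (
            "Warning: a %d-host gluster setup is not recommended "
            "and not supported; continuing anyway." % host_count
        )
    # Supported configuration, no warning needed.
    return True, None
```

With this shape, the single-host case from the bug description produces a warning but does not abort, while the 3-host default passes silently.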

Additional info:
Comment 1 Yaniv Kaul 2017-06-26 05:21:11 EDT
If you want to test, use ovirt-system-tests (http://ovirt-system-tests.readthedocs.io/en/latest/index.html). It provides a great all-in-one setup while using replica 3 as needed. I can run it on a 12 GB RAM host.
