Bug 1217448 - [RFE][HC] - Add support to Hosted Engine for provisioning gluster replica 3 storage given 3 clean hosts.
Summary: [RFE][HC] - Add support to Hosted Engine for provisioning gluster replica 3 s...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: RFEs
Version: 2.0.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.0.0-alpha
Target Release: ---
Assignee: bugs@ovirt.org
QA Contact: Aharon Canan
URL:
Whiteboard:
Depends On: 1225462
Blocks: Hosted_Engine_HC
 
Reported: 2015-04-30 12:04 UTC by Yaniv Lavi
Modified: 2016-03-30 12:16 UTC (History)
CC List: 15 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2016-03-30 12:16:44 UTC
oVirt Team: Gluster
Embargoed:
sabose: ovirt-4.1?
ylavi: planning_ack?
dfediuck: devel_ack+
ylavi: testing_ack?


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1238093 0 medium CLOSED vdsm doesn't support workflow with replica 1 as gluster storage domain 2021-02-22 00:41:40 UTC
oVirt gerrit 41332 0 master ABANDONED HC: provisioning replica 3 volume Never

Internal Links: 1238093

Description Yaniv Lavi 2015-04-30 12:04:33 UTC
Description of problem:
Hosted engine setup should add a verb for HC (hyper-converged) deployment.
Should be able to: 
- Choose a nic for gluster and another (or the same) for virt.
- Two additional host names for replica 3 bricks.
- DC and cluster name.
- Gluster path to use for each host.
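
For context, a rough sketch of the manual gluster steps such a verb would automate, assuming three clean hosts; host1/host2/host3 and the brick path are placeholder names:

 # From host1, peer the two additional hosts (glusterd must be running on all three):
 gluster peer probe host2
 gluster peer probe host3

 # Create a replica 3 volume with one brick per host, then start it:
 gluster volume create engine replica 3 \
     host1:/gluster/engine/brick host2:/gluster/engine/brick host3:/gluster/engine/brick
 gluster volume start engine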

Comment 1 Sandro Bonazzola 2015-05-21 10:17:47 UTC
(In reply to Yaniv Dary from comment #0)
> Description of problem:
> Hosted engine setup should add a verb for HC (hyper-converged) deployment.
> Should be able to: 
> - Choose a nic for gluster and another (or the same) for virt.

Supposing to change the hosted-engine --deploy UX as follows:

--== NETWORK CONFIGURATION ==--
         
Please indicate a nic to set ovirtmgmt bridge on: (p4p1, em1) [em1]:
Please indicate a nic to be used for storage purposes: (p4p1) [p4p1]:

1) The question about storage must be shown only if more than 1 nic is detected
2) The question about storage must not allow choosing the same nic used for ovirtmgmt

Then how can I tell engine / vdsm / gluster to use the specified nic?


> - Two additional host names for replica 3 bricks.

Supposing to ask for 3 host names:
3) who's going to peer the 3 hosts? The user? The script via SSH connection?
4) who's going to create the bricks on the 2 remote hosts? The user? The script via SSH?


> - DC and cluster name.
> - Gluster path to use for each host.

Looking at this last line, it looks like the script needs to do both the peering and the brick provisioning on the remote hosts.
In this case, it also has to ask for the root password for the 2 remote hosts.

Comment 2 Sahina Bose 2015-05-21 11:05:43 UTC
(In reply to Sandro Bonazzola from comment #1)
> (In reply to Yaniv Dary from comment #0)
> > Description of problem:
> > Hosted engine setup should add a verb for HC (hyper-converged) deployment.
> > Should be able to: 
> > - Choose a nic for gluster and another (or the same) for virt.
> 
> Supposing to change the hosted-engine --deploy UX as follows:
> 
> --== NETWORK CONFIGURATION ==--
>          
> Please indicate a nic to set ovirtmgmt bridge on: (p4p1, em1) [em1]:
> Please indicate a nic to be used for storage purposes: (p4p1) [p4p1]:
> 
> 1) The question about storage must be shown only if more than 1 nic is
> detected
> 2) The question about storage must not allow choosing the same nic used for
> ovirtmgmt
> 
> Then how can I tell engine / vdsm / gluster to use the specified nic?

To use the additional nic, you will need to
1. gluster peer probe <storage-nic> -- if more than 1 host
2. use the <storage-nic> in brick path for gluster volume create.
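
A minimal sketch of the above, assuming each host has a storage-side name (host1-storage, host2-storage, host3-storage) that resolves to the IP on the dedicated storage nic; the names and brick path are placeholders:

 # Peer the other hosts over the storage network:
 gluster peer probe host2-storage
 gluster peer probe host3-storage

 # Use the storage-side names in the brick paths:
 gluster volume create engine replica 3 \
     host1-storage:/gluster/engine/brick \
     host2-storage:/gluster/engine/brick \
     host3-storage:/gluster/engine/brick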

> 
> 
> > - Two additional host names for replica 3 bricks.
> 
> Supposing to ask for 3 host names:
> 3) who's going to peer the 3 hosts? The user? The script via SSH connection?

The setup script should peer probe the additional hosts, using the vdsm verb glusterHostAdd.

> 4) who's going to create the bricks on the 2 remote hosts? The user? The
> script via SSH?

The provisioning of the bricks, i.e. creating LVs and mounting them, would be done by the user. The user would then provide the mount paths of the bricks on the remote hosts as input to the script, which can be used in the vdsm verb glusterVolumeBrickAdd.
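
For reference, a minimal sketch of that per-host brick preparation; /dev/sdb and the paths are placeholder values:

 # Run on each host before invoking the setup script:
 pvcreate /dev/sdb
 vgcreate gluster_vg /dev/sdb
 lvcreate -n engine_lv -L 100G gluster_vg
 mkfs.xfs -i size=512 /dev/gluster_vg/engine_lv   # 512-byte inodes are commonly recommended for gluster bricks
 mkdir -p /gluster/engine/brick
 mount /dev/gluster_vg/engine_lv /gluster/engine/brick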

> 
> 
> > - DC and cluster name.
> > - Gluster path to use for each host.
> 
> Looking at this last line, it looks like the script needs to do both the
> peering and the brick provisioning on the remote hosts.
> In this case, it also has to ask for the root password for the 2 remote hosts.

Just fyi...root password is not required for gluster peer probe

Comment 3 Sandro Bonazzola 2015-05-21 11:18:29 UTC
(In reply to Sahina Bose from comment #2)

> > Then how can I tell engine / vdsm / gluster to use the specified nic?
> 
> To use the additional nic, you will need to
> 1. gluster peer probe <storage-nic> -- if more than 1 host
> 2. use the <storage-nic> in brick path for gluster volume create.
> 
> > 
> > 
> > > - Two additional host names for replica 3 bricks.
> > 
> > Supposing to ask for 3 host names:
> > 3) who's going to peer the 3 hosts? The user? The script via SSH connection?
> 
> The setup script should peer probe the additional hosts, using the vdsm verb
> glusterHostAdd.

I can see glusterHostAdd in the vdsClient code, but I can't find the vdscli binding or the JSON Schema entry related to that verb. Can you provide documentation for it?

Is glusterHostAdd to be called on all of the 3 hosts? Or will it be enough to call it on the first host passing the 2 additional hosts?

> 
> > 4) who's going to create the bricks on the 2 remote hosts? The user? The
> > script via SSH?
> 
> The provisioning of the bricks, i.e. creating LVs and mounting them, would be
> done by the user. The user would then provide the mount paths of the bricks on
> the remote hosts as input to the script, which can be used in the vdsm verb
> glusterVolumeBrickAdd.

Same here: I can see it in vdsClient but I can't see it in vdscli or in the JSON Schema API.

> 
> > 
> > 
> > > - DC and cluster name.
> > > - Gluster path to use for each host.
> > 
> > Looking at this last line, it looks like the script needs to do both the
> > peering and the brick provisioning on the remote hosts.
> > In this case, it also has to ask for the root password for the 2 remote hosts.
> 
> Just fyi...root password is not required for gluster peer probe

So everything will be done from the first host without requiring ssh access to the other 2?

Comment 4 Sandro Bonazzola 2015-05-21 12:15:50 UTC
Done some testing:
 vdsClient -s 0 glusterHostAdd hostName=<hostname>
does the peering for the local and remote host. It requires glusterd to be up and running on both hosts.

 glusterVolumeCreate takes care of creating a volume and all the bricks specified in the brick list on the local and remote hosts.
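
Putting the two together, the flow from the first host might look like the following sketch; the host names are placeholders and the glusterVolumeCreate parameter names are my assumption from the vdsClient usage output, so they should be verified:

 # Peer the two remote hosts (glusterd must already be running on them):
 vdsClient -s 0 glusterHostAdd hostName=host2
 vdsClient -s 0 glusterHostAdd hostName=host3

 # Assumed parameter names; verify against vdsClient help:
 vdsClient -s 0 glusterVolumeCreate volumeName=engine \
     bricks=host1:/gluster/engine/brick,host2:/gluster/engine/brick,host3:/gluster/engine/brick \
     replica=3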

Comment 5 Sandro Bonazzola 2015-05-21 12:26:08 UTC
(In reply to Sahina Bose from comment #2)

> To use the additional nic, you will need to
> 1. gluster peer probe <storage-nic> -- if more than 1 host
> 2. use the <storage-nic> in brick path for gluster volume create.

Looking at the gluster command help and the glusterHostAdd help, it looks like I can only specify a host name or an IP address.
So must I use the IP address of the <storage-nic> (since the FQDN will be the same on the 2 nics)?

Comment 6 Red Hat Bugzilla Rules Engine 2015-10-19 10:50:09 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 7 Sahina Bose 2016-03-08 09:20:08 UTC
Re-targeting to 4.1.0, as we are working on an ansible-based gdeploy flow to prepare the 3 nodes and create a gluster volume.

Comment 8 Mike McCune 2016-03-28 22:23:05 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

Comment 9 Yaniv Kaul 2016-03-30 12:16:44 UTC
CLOSE-NEXTRELEASE - gdeploy will do it.

