Description of problem:
Hosted engine setup should add verb for HC installation for deployment.
Should be able to:
- Choose nic for gluster and other\same for virt.
- Two additional host names for replica 3 bricks.
- DC and cluster name.
- Gluster path to use for each host.
(In reply to Yaniv Dary from comment #0)
> Description of problem:
> Hosted engine setup should add verb for HC installation for deployment.
> Should be able to:
> - Choose nic for gluster and other\same for virt.

Proposing to change the hosted-engine --deploy UX as follows:

  --== NETWORK CONFIGURATION ==--

  Please indicate a nic to set ovirtmgmt bridge on: (p4p1, em1) [em1]:
  Please indicate a nic to be used for storage purposes: (p4p1) [p4p1]:

1) The question about storage must be shown only if more than 1 nic is detected
2) The question about storage must not allow choosing the same nic used for ovirtmgmt

Then how can I tell engine / vdsm / gluster to use the specified nic?

> - Two additional host names for replica 3 bricks.

Supposing we ask for 3 host names:
3) Who's going to peer the 3 hosts? The user? The script via SSH connection?
4) Who's going to create the bricks on the 2 remote hosts? The user? The script via SSH?

> - DC and cluster name.
> - Gluster path to use for each host.

Looking at this last line, it seems it's the script that needs to do both the peering and the brick provisioning on the remote hosts. In that case, it also has to ask for the root password of the 2 remote hosts.
(In reply to Sandro Bonazzola from comment #1)
> Then how can I tell engine / vdsm / gluster to use the specified nic?

To use the additional nic, you will need to:
1. gluster peer probe <storage-nic> -- if more than 1 host
2. use the <storage-nic> in the brick path for gluster volume create.

> Supposing we ask for 3 host names:
> 3) Who's going to peer the 3 hosts? The user? The script via SSH connection?

The setup script should peer probe the additional hosts, using the vdsm verb glusterHostAdd.

> 4) Who's going to create the bricks on the 2 remote hosts? The user? The script via SSH?

The provisioning of the bricks, i.e. creating LVs and mounting, would be done by the user. The user would then provide the mount paths of the bricks on the remote hosts as an input to the script, which can be used in the vdsm verb glusterVolumeBrickAdd.

> Looking at this last line, it seems it's the script that needs to do both
> the peering and the brick provisioning on the remote hosts.
> In that case, it also has to ask for the root password of the 2 remote hosts.

Just FYI... the root password is not required for gluster peer probe.
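The two steps above can be sketched with plain gluster CLI commands. All hostnames and brick paths below are illustrative assumptions (names that resolve on the dedicated storage nic of each host, and example mount points), not values taken from this bug:

```shell
# Sketch only: assumes glusterd is running on all three hosts and that
# hostN-gluster.example.com resolves to each host's storage-nic address.

# 1. Peer the two additional hosts over the storage network (run on host 1)
gluster peer probe host2-gluster.example.com
gluster peer probe host3-gluster.example.com

# 2. Use the storage-nic names in the brick paths when creating the volume,
#    so that gluster traffic goes over the dedicated nic
gluster volume create engine replica 3 \
    host1-gluster.example.com:/gluster/engine/brick \
    host2-gluster.example.com:/gluster/engine/brick \
    host3-gluster.example.com:/gluster/engine/brick
gluster volume start engine
```

The peer probe and brick-path names are what bind the volume to the storage nic: gluster contacts peers and serves bricks via whatever addresses those names resolve to.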
(In reply to Sahina Bose from comment #2)
> The setup script should peer probe the additional hosts, using the vdsm
> verb glusterHostAdd.

I can see glusterHostAdd in the vdsClient code, but I can't find the vdscli or the JSON Schema entry for that verb. Can you provide documentation for it?
Is glusterHostAdd to be called on all of the 3 hosts, or will it be enough to call it on the first host, passing the 2 additional hosts?

> The provisioning of the bricks, i.e. creating LVs and mounting, would be done
> by the user. The user would then provide the mount paths of the bricks on the
> remote hosts as an input to the script, which can be used in the vdsm verb
> glusterVolumeBrickAdd.

Same here: I can see it in vdsClient, but I can't see it in vdscli or in the JSON Schema API.

> Just FYI... the root password is not required for gluster peer probe

So everything will be done from the first host, without requiring SSH access to the other 2?
Done some testing:

  vdsClient -s 0 glusterHostAdd hostName=<hostname>

does the peering for the local and remote host. It requires glusterd to be up and running on both hosts.

glusterVolumeCreate takes care of creating a volume and all the bricks specified in the brick list, on the local and remote hosts.
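As a minimal sketch, the two verbs tested above could be driven from the first host like this. The glusterHostAdd invocation matches the form tested above; the glusterVolumeCreate parameter names and all hostnames, volume name, and brick paths are assumptions for illustration (check `vdsClient -s 0 --help` for the exact syntax on your build):

```shell
# Sketch only: assumes vdsmd and glusterd are running on all hosts,
# and that the hostnames resolve to the storage-nic addresses.

# Peer the two remote hosts from the first one
vdsClient -s 0 glusterHostAdd hostName=host2-gluster.example.com
vdsClient -s 0 glusterHostAdd hostName=host3-gluster.example.com

# Create the replica 3 volume; bricks on the remote hosts are created too.
# Parameter spelling (volumeName/replica/bricks) is assumed, not verified.
vdsClient -s 0 glusterVolumeCreate volumeName=engine replica=3 \
    bricks=host1-gluster.example.com:/gluster/engine/brick,host2-gluster.example.com:/gluster/engine/brick,host3-gluster.example.com:/gluster/engine/brick
```

This matches the observation above: no SSH or root password is needed for the remote hosts, since glusterd on the first host propagates the peering and brick creation.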
(In reply to Sahina Bose from comment #2)
> To use the additional nic, you will need to:
> 1. gluster peer probe <storage-nic> -- if more than 1 host
> 2. use the <storage-nic> in the brick path for gluster volume create.

Looking at the gluster command help and the glusterHostAdd help, it looks like I can only specify a host name or an IP address. So must I use the IP address of the <storage-nic> (since the FQDN will be the same on the 2 nics)?
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Re-targeting to 4.1.0, as we are working on an ansible-based gdeploy to prepare the 3 nodes and create a gluster volume.
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions
CLOSE-NEXTRELEASE - gdeploy will do it.