Bug 1455169

Summary: [RFE] Hosted-Engine Node zero based deployment
Product: [oVirt] ovirt-engine Reporter: Yaniv Kaul <ykaul>
Component: Setup.Core    Assignee: Simone Tiraboschi <stirabos>
Status: CLOSED CURRENTRELEASE QA Contact: Nikolai Sednev <nsednev>
Severity: high Docs Contact:
Priority: high    
Version: future    CC: amureini, bugs, cshao, dfediuck, mavital, msivak, qiyuan, rhv-bugzilla-bot, stirabos, trichard, ycui, ylavi, yzhao
Target Milestone: ovirt-4.2.0    Keywords: FutureFeature, Improvement, Tracking, Triaged
Target Release: 4.2.0    Flags: rule-engine: ovirt-4.2+
nsednev: testing_plan_complete-
ylavi: planning_ack+
dfediuck: devel_ack+
mavital: testing_ack+
Hardware: Unspecified   
OS: Unspecified   
URL: https://gerrit.ovirt.org/#/c/81712/
Whiteboard:
Fixed In Version:    Doc Type: Enhancement
Doc Text:
This release introduces a new alternative deployment flow for self-hosted engines. The new flow uses the current engine code to create all the entities needed for a successful self-hosted engine deployment. A local bootstrap VM with the RHV-M Appliance is created on the host, and hosted-engine-setup uses it (via Ansible) to add a host (the same one it is running on), a storage domain, storage disks, and finally a VM that will later become the Manager VM in the engine (thus eliminating the need for importing it). Once all the entities are created, hosted-engine-setup can shut down the bootstrap VM, copy its disk to the disk it created using the engine, create the self-hosted engine configuration files, and start the agent and the broker; the Manager VM will then start. A sketch of the entity creation is shown below, after this header block.
Story Points: ---
Clone Of:    Environment:
Last Closed: 2018-05-04 10:44:23 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Integration    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 1512472, 1512534, 1514068, 1517810, 1524119, 1527135, 1527876, 1528253, 1530968, 1537153, 1539391, 1540107, 1541233, 1541759, 1547595, 1549642, 1551289, 1560655, 1564873, 1571467    
Bug Blocks: 1193961, 1315074, 1320126, 1353713, 1359265, 1387085, 1393902, 1404606, 1406169, 1420115, 1422535, 1426517, 1434423, 1438412, 1451653, 1553523    
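
For illustration, the entity creation described in the Doc Text above can be sketched with the Python SDK (ovirt-engine-sdk4). This is a minimal sketch, not the shipped code: the actual implementation performs these steps through Ansible playbooks run against the bootstrap engine, and every name, address, and credential below is hypothetical.

# Minimal sketch, not the shipped implementation: the real flow drives
# the same REST API through Ansible playbooks against the bootstrap engine.
# All names, addresses, and credentials here are hypothetical.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://bootstrap-engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # the bootstrap VM runs with a temporary certificate
)
system = connection.system_service()

# 1. Add the host that hosted-engine-setup is running on.
system.hosts_service().add(
    types.Host(
        name='hosted_engine_1',
        address='host.example.com',
        root_password='secret',
        cluster=types.Cluster(name='Default'),
    )
)

# 2. Create the hosted-engine storage domain (NFS shown; iSCSI, FC, and
#    Gluster take different HostStorage parameters).
system.storage_domains_service().add(
    types.StorageDomain(
        name='hosted_storage',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='hosted_engine_1'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='nfs.example.com',
            path='/exports/hosted_storage',
        ),
    )
)

# 3. Create the VM that will later become the Manager VM, so that no
#    import step is needed afterwards.
system.vms_service().add(
    types.Vm(
        name='HostedEngine',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='Blank'),
        memory=16 * 2**30,  # bytes
    )
)

connection.close()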

Description Yaniv Kaul 2017-05-24 12:12:32 UTC
Storage connection details, the storage domain, and the DC should be injected into the engine as part of engine-setup, so that when the engine starts it already has the data about the storage parts of the host (including the connections, the domain, the DC, and the fact that the host is an SPM).

TBD: inject host data as well (to offload some of the work from host-deploy, or simply because it's needed for the above!)
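
To make the idea concrete, here is a hedged sketch of the data-center side of that injection: creating the DC and attaching the storage domain, after which the engine elects the only available host as SPM. Names, credentials, and the UUID placeholder are hypothetical, and the eventual implementation drives the same REST API via Ansible rather than engine-setup.

# Illustration only; names, credentials, and the UUID are hypothetical.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
dcs_service = connection.system_service().data_centers_service()

# Create the data center the deployment host will join.
dc = dcs_service.add(types.DataCenter(name='Default', local=False))

# Attach an existing storage domain; activating the first data storage
# domain triggers SPM election, and with a single host in the DC that
# host necessarily becomes the SPM.
dcs_service.data_center_service(dc.id).storage_domains_service().add(
    types.StorageDomain(id='<storage-domain-uuid>')
)

connection.close()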

Comment 4 Martin Sivák 2017-10-16 10:25:47 UTC
The change description is here: https://github.com/oVirt/ovirt-site/pull/1273

Comment 5 RHV bug bot 2017-12-06 16:01:15 UTC
Adding 'tracking' since this bug doesn't include patches matching the criteria to move to ON_QA, and it is tracking all the bugs that depend on it.

Comment 6 RHV bug bot 2017-12-06 16:14:41 UTC
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[No relevant external trackers attached]

For more info please contact: infra

Comment 7 Nikolai Sednev 2017-12-11 17:05:22 UTC
Using ovirt-hosted-engine-setup-2.2.1-0.0.master.20171206172737.gitd3001c8.el7.centos.noarch and ovirt-engine-appliance-4.2-20171210.1.el7.centos.noarch:
1.Deployed over NFS storage - success.
2.Deployed over iSCSI storage - success.
3.Deployed over Gluster storage - success.
4.Deployed over FC storage - TBD.
The initial SHE VM that was created during deployment and then powered off still appears, and is shown as "external-HostedEngineLocal".

Comment 8 Nikolai Sednev 2017-12-18 13:42:20 UTC
SHE regular deployment over FC has failed. More details are available here: https://bugzilla.redhat.com/show_bug.cgi?id=1525907#c14. I think that unless the regular deployment is fixed, node zero will also fail in the same way.

Comment 9 Nikolai Sednev 2017-12-18 15:32:46 UTC
I'm getting the following errors during FC deployment with node zero:
[ INFO  ] TASK [Remove host-deploy configuration file]
[ INFO  ] changed: [localhost]
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: fc
[ INFO  ] Getting Fibre Channel LUNs list
[ ERROR ] ERROR! the playbook: /usr/share/ovirt-hosted-engine-setup/ansible/fc_getdevices.yml could not be found
         
[ ERROR ] Unable to get target list
          Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:

Comment 10 Nikolai Sednev 2017-12-18 15:34:42 UTC
[root@purple-vds2 ~]# ll -lsh /usr/share/ovirt-hosted-engine-setup/ansible/
total 44K
 12K -rw-r--r--. 1 root root 8.3K Dec 14 17:47 bootstrap_local_vm.yml
   0 drwxr-xr-x. 2 root root   77 Dec 18 17:00 callback_plugins
4.0K -rw-r--r--. 1 root root 1.1K Dec 14 17:47 clean_environment.yml
8.0K -rw-r--r--. 1 root root 5.2K Dec 14 17:47 create_storage_domain.yml
 12K -rw-r--r--. 1 root root  12K Dec 14 17:47 create_target_vm.yml
4.0K -rw-r--r--. 1 root root  588 Dec 14 17:47 iscsi_discover.yml
4.0K -rw-r--r--. 1 root root 1.2K Dec 14 17:47 iscsi_getdevices.yml
   0 drwxr-xr-x. 2 root root  104 Dec 18 17:00 library
   0 drwxr-xr-x. 2 root root  249 Dec 18 17:00 templates
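
For reference, the discovery step that the missing playbook was supposed to perform can be sketched against the engine API. This is an assumption-laden illustration, not the actual content of fc_getdevices.yml, and the host name is hypothetical.

# Sketch: enumerate the FC LUNs a host can see through the engine API.
# Not the contents of fc_getdevices.yml; the host name is hypothetical.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=hosted_engine_1')[0]

# HostStorageService lists the storage devices the host can currently see.
for storage in hosts_service.host_service(host.id).storage_service().list():
    if storage.type == types.StorageType.FCP:
        for lun in storage.logical_units:
            print(lun.id, lun.size, lun.product_id)

connection.close()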

Comment 13 Nikolai Sednev 2018-05-03 16:51:04 UTC
Works for me on these components:
ovirt-hosted-engine-setup-2.2.20-1.el7ev.noarch
ovirt-hosted-engine-ha-2.2.11-1.el7ev.noarch
rhvm-appliance-4.2-20180427.0.el7.noarch
Linux 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Tested using the CLI on all storage types (iSCSI, NFS, Gluster, FC) for both vintage and Node 0 deployments.


Moving to verified.
Feel free to reopen if you see any new issues.

Comment 14 Sandro Bonazzola 2018-05-04 10:44:23 UTC
This bug is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.