Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1893387

Summary: On RHV-4.3 EUS deployments rhvm-appliance-4.4 requires a RHEL 8 host
Product: Red Hat Enterprise Virtualization Manager
Component: Documentation
Version: 4.4.2
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: urgent
Priority: urgent
Target Milestone: ---
Target Release: ---
Reporter: Steffen Froemer <sfroemer>
Assignee: rhev-docs <rhev-docs>
QA Contact: rhev-docs <rhev-docs>
CC: aoconnor, dfediuck, dholler, lsurette, lsvaty, mhicks, mkalinin, sbonazzo, sgoodman, srevivo
Keywords: Documentation, NoDocsQEReview
Flags: aoconnor: needinfo-
oVirt Team: Integration
Type: Bug
Last Closed: 2020-12-01 18:57:13 UTC

Description Steffen Froemer 2020-10-30 23:51:48 UTC
Description of problem:
To comply with RHV 4.3 EUS, the Manager must be upgraded to RHVM 4.4.
For Hosted Engine (HE) deployments this is only possible on RHEL 8 hosts, which contradicts the idea of RHV 4.3 EUS.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Install RHV-H 4.3
2. Try to install RHV-4.4 HostedEngine
3.

Actual results:
The deployment fails because rhvm-appliance-4.4 is not available for el7.

Expected results:
rhvm-appliance-4.4 should be available in the RHV 4.3 channels.

Additional info:

Comment 1 Sandro Bonazzola 2020-11-02 13:26:31 UTC
We need to document that in order to have a 4.4 hosted engine managing 4.3 eus, at least one 4.4 host is needed.

Comment 2 Marina Kalinin 2020-11-03 02:06:48 UTC
(In reply to Sandro Bonazzola from comment #1)
> We need to document that in order to have a 4.4 hosted engine managing 4.3
> eus, at least one 4.4 host is needed.

I believe this is not the best approach for our customers.
If a customer decides to stay on EUS, it means they want to stay on RHEL 7 hosts and do not plan to deploy RHEL 8 hosts.

We should provide the same solution we had for the RHEL 6 to RHEL 7 migration and build a 4.4 RHEL 8 based appliance that is available via the RHEL 7 channels for EUS deployments.

Comment 4 Steve Goodman 2020-11-05 16:02:43 UTC
Additional background info from https://access.redhat.com/solutions/5469041: 

What is covered by RHV 4.3 EUS:

    RHV 4.3 EUS is available for x86 systems only. EUS for POWER systems is not available at this time.
    RHV 4.3 EUS is available for x86 hypervisors only, both RHEL based and RHVH.
    *There is no EUS for the engine, instead the 4.4 engine should be used, with cluster Compatibility Level(CL) 4.3.*

Comment 5 Marina Kalinin 2020-11-05 19:34:05 UTC
I added a note about this to the KCS until we agree on the final solution.

Comment 7 Marina Kalinin 2020-11-10 18:01:57 UTC
(In reply to Marina Kalinin from comment #2)
> (In reply to Sandro Bonazzola from comment #1)
> > We need to document that in order to have a 4.4 hosted engine managing 4.3
> > eus, at least one 4.4 host is needed.
> 
> I believe, this is not the best approach for our customers.
> If the customer decides they are willing to stay on EUS, this means they
> want to stay on RHEL7 host and do not plan deploying RHEL8 hosts.
> 
> We should provide same solution we had for RHEL6 to RHEL7 migration and
> build 4.4 RHEL8 based appliance to be available via RHEL7 channels for EUS
> deployments.

Also, if we do not give customers a convenient way to run a supported configuration, there is a risk that they will keep running Manager 4.3, which is no longer part of QE testing. That exposes customers to new bugs as we keep updating the hosts.

I cannot give an ack for this bug as a documentation bug.

Comment 8 Steffen Froemer 2020-11-11 10:10:20 UTC
Hi all,

I just tested whether it is possible to deploy the 4.4 engine on an RHV 4.3 host.
This scenario is only required if the customer runs an RHV 4.3 environment with Hosted Engine, needs to stay on 4.3 EUS, and is unable or unwilling to move the HE into a separate cluster.

Please find my current results below. I only tested a new deployment; a restore from a previous RHV 4.3 environment is still outstanding.

## used setup
ovirt-ansible-hosted-engine-setup-1.0.38-1.el7ev.noarch (included in RHVH-4.3-20200922.1-RHVH-x86_64-dvd1.iso)


## first, copy the engine-4.4.ova

  # scp /usr/share/ovirt-engine-appliance/rhvm-appliance-4.4-20200915.0.el8ev.ova root.example.com:/usr/share/ovirt-engine-appliance/rhvm-appliance-4.4-20200915.0.el8ev.ova


## a few changes are required to be able to run the `hosted-engine --deploy` command

# cat << EOF > ovirt-hosted-engine-deploy.patch
--- /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/boot_disk.py.orig   2020-11-05 21:53:08.850070941 +0000
+++ /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/boot_disk.py        2020-11-05 14:05:32.237841851 +0000
@@ -134,9 +134,9 @@
             disk = tree.find('Section/Disk')
             self.environment[
                 ohostedcons.StorageEnv.OVF_SIZE_GB
-            ] = int(
+            ] = int(float(
                 disk.attrib['{http://schemas.dmtf.org/ovf/envelope/1/}size']
-            )
+            ))+1
             try:
                 self.environment[
                     ohostedcons.StorageEnv.IMAGE_DESC
EOF

# patch /usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-common/vm/boot_disk.py < ovirt-hosted-engine-deploy.patch

## the above patch is based on the following upstream commit:
  https://github.com/oVirt/ovirt-hosted-engine-setup/commit/8994e143ffb831a2811164f47a9f4cba8b2ac6d4
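To illustrate what the patched lines change, here is a minimal standalone Python sketch (the size string "50.5" is a hypothetical example, not taken from the actual OVA): a plain `int()` cannot parse a fractional OVF size attribute, while the patched `int(float(...)) + 1` parses it and adds 1 GB of headroom:

```python
# Hypothetical OVF disk size attribute; the RHEL 8 appliance OVF can carry
# a fractional size in GB, which int() cannot parse directly.
size_attr = "50.5"

# Unpatched behavior: int("50.5") raises ValueError.
try:
    int(size_attr)
    unpatched_ok = True
except ValueError:
    unpatched_ok = False

# Patched behavior: parse via float, truncate, and add 1 GB of headroom.
ovf_size_gb = int(float(size_attr)) + 1

print(unpatched_ok, ovf_size_gb)  # False 51
```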


## patch to allow changing the default Cluster Compatibility Version from 4.4 to 4.3 (required because we have a 4.3 host, per EUS)
# cat << EOF > ovirt-hosted-engine-deploy.05_add_host.patch
--- /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml.orig    2020-11-06 16:31:10.937205232 +0000
+++ /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml 2020-11-06 16:10:43.672486545 +0000
@@ -58,7 +58,9 @@
     ovirt_cluster:
       state: present
       name: "{{ he_cluster }}"
+      compatibility_version: "{{ he_cluster_comp_version | default(omit) }}"
       data_center: "{{ he_data_center }}"
+      cpu_type: "{{ he_cluster_cpu_type | default(omit) }}"
       wait: true
       auth: "{{ ovirt_auth }}"
     register: cluster_result_presence
@@ -74,6 +76,7 @@
     ovirt_cluster:
       data_center: "{{ he_data_center }}"
       name: "{{ he_cluster }}"
+      compatibility_version: "{{ he_cluster_comp_version | default(omit) }}"
       auth: "{{ ovirt_auth }}"
       virt: true
       gluster: true
EOF


# patch /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/05_add_host.yml < ovirt-hosted-engine-deploy.05_add_host.patch
# echo 'he_cluster_comp_version: "4.3"' >> /usr/share/ansible/roles/ovirt.hosted_engine_setup/defaults/main.yml
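A side note on shell quoting here (a sketch, not from the BZ): if the echo argument is not single-quoted as a whole, the shell strips the inner double quotes before `echo` runs, and YAML then reads the appended value as the float 4.3 instead of the string "4.3". Python's `shlex` reproduces the shell's quote removal:

```python
import shlex

# What the shell actually passes to echo when the argument is unquoted:
args = shlex.split('echo he_cluster_comp_version: "4.3"')
appended_line = " ".join(args[1:])  # inner double quotes are gone

# Single-quoting the whole argument preserves the inner double quotes:
args_quoted = shlex.split("echo 'he_cluster_comp_version: \"4.3\"'")
appended_line_quoted = " ".join(args_quoted[1:])

print(appended_line)
print(appended_line_quoted)
```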


## remove the pg_scl dependency for RHEL-7 Hosted-Engine systems
# cat << EOF > 02_engine_vm_configuration.patch
--- /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml.orig       2020-11-09 13:28:41.359544641 +0000
+++ /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml    2020-11-09 14:29:47.472189110 +0000
@@ -1,8 +1,6 @@
 ---
 - name: Engine VM configuration tasks
   block:
-  - include_tasks: pg_scl.yml
-  - debug: var=scl_conf
   - name: Create a temporary directory for ansible as postgres user
     file:
       path: /var/lib/pgsql/.ansible/tmp
@@ -12,7 +10,7 @@
       mode: 0700
   - name: Update target VM details at DB level
     command: >-
-      {{ scl_pg_prefix }} psql -d engine -c
+      psql -d engine -c
       "UPDATE vm_static SET {{ item.field }}={{ item.value }} WHERE
       vm_guid='{{ hostvars[he_ansible_host_name]['he_vm_details']['vm']['id'] }}'"
     environment: "{{ he_cmd_lang }}"
@@ -26,7 +24,7 @@
   - debug: var=db_vm_update
   - name: Insert Hosted Engine configuration disk uuid into Engine database
     command: >-
-      {{ scl_pg_prefix }} psql -d engine -c
+      psql -d engine -c
       "UPDATE vdc_options SET option_value=
       '{{ hostvars[he_ansible_host_name]['he_conf_disk_details']['disk']['id'] }}'
       WHERE option_name='HostedEngineConfigurationImageGuid' AND version='general'"
@@ -39,7 +37,7 @@
   - debug: var=db_conf_update
   - name: Fetch host SPM_ID
     command: >-
-      {{ scl_pg_prefix }} psql -t -d engine -c
+      psql -t -d engine -c
       "SELECT vds_spm_id FROM vds WHERE vds_name='{{ hostvars[he_ansible_host_name]['he_host_name'] }}'"
     environment: "{{ he_cmd_lang }}"
     become: true
EOF

# patch /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/02_engine_vm_configuration.yml < 02_engine_vm_configuration.patch
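For context on what this patch removes, here is a sketch under assumptions not stated in the BZ (the SCL collection name `rh-postgresql10` is illustrative only): on a RHEL 7 engine VM the role prefixes `psql` with a Software Collections wrapper, while on the RHEL 8 appliance `psql` is on the PATH directly, so the prefix must be dropped:

```python
# Sketch of the command the role renders with and without the SCL wrapper.
# "scl enable rh-postgresql10 --" is an assumed RHEL 7 prefix, for
# illustration only; on the RHEL 8 appliance the prefix is empty.
def render_psql_cmd(scl_pg_prefix: str, query: str) -> str:
    prefix = f"{scl_pg_prefix} " if scl_pg_prefix else ""
    return f'{prefix}psql -d engine -c "{query}"'

rhel7_cmd = render_psql_cmd("scl enable rh-postgresql10 --", "SELECT 1")
rhel8_cmd = render_psql_cmd("", "SELECT 1")
print(rhel7_cmd)
print(rhel8_cmd)
```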



## because of BZ#1739583 (https://bugzilla.redhat.com/show_bug.cgi?id=1739583) it is required to use the latest libguestfs appliance to be able to modify a RHEL 8 filesystem from a RHEL 7 host


# [[ -f "/root/appliance-1.40.1.tar.xz" ]] || curl -L -O http://download.libguestfs.org/binaries/appliance/appliance-1.40.1.tar.xz 
# tar xJf appliance-1.40.1.tar.xz -C /root
# export LIBGUESTFS_PATH=/root/appliance

# hosted-engine --deploy

Comment 11 Marina Kalinin 2020-12-01 18:57:13 UTC
We decided to close this bug as WONTFIX, with the resolution documented in this KCS: https://access.redhat.com/solutions/5469041.