Bug 1575904 - [downstream clone - 4.0] [RFE] - Support direct to version upgrade of the manager
Summary: [downstream clone - 4.0] [RFE] - Support direct to version upgrade of the man...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-fast-forward-upgrade
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ovirt-4.0.3
Assignee: Douglas Schilling Landgraf
QA Contact: Jiri Belka
URL: https://github.com/oVirt/ovirt-engine...
Whiteboard:
Depends On: 1574590
Blocks:
 
Reported: 2018-05-08 08:58 UTC by RHV bug bot
Modified: 2019-05-16 13:02 UTC
CC: 14 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
This release includes a new utility called ovirt-engine-hyper-upgrade, which helps users upgrade their systems from version 4.0 or later to 4.2.
Clone Of: 1574590
Environment:
Last Closed: 2018-06-19 15:42:56 UTC
oVirt Team: Integration
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1945 0 None None None 2018-06-19 15:43:02 UTC
oVirt gerrit 63647 0 master MERGED packaging: spec: Allow upgrade directly from 3.6 2018-05-08 09:03:35 UTC
oVirt gerrit 63718 0 master MERGED run engine upgrade from 3.6 to master 2018-05-08 09:03:35 UTC
oVirt gerrit 81858 0 master ABANDONED packaging: spec: Prevent upgrade from oVirt <= 4.0 2018-05-08 09:03:35 UTC

Description RHV bug bot 2018-05-08 08:58:42 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1574590 +++
======================================================================

+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1366900 +++
======================================================================

Description of problem:
Provide tool that will allow a single flow to get to latest version of the manager.

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 1 RHV bug bot 2018-05-08 08:58:50 UTC
Is there a problem with deciding that the tool will be 'engine-setup'?

We always change the Require/Conflict lines at the point during development where they break. We can simply keep current, add jenkins jobs to upgrade 3.6 to 4.1, and fix things when they break. AFAIU. Unless the intention is different - keep current behavior and provide a different tool that will do a list of upgrades.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 4 RHV bug bot 2018-05-08 08:58:59 UTC
(In reply to Yedidyah Bar David from comment #1)
> Is there a problem with deciding that the tool will be 'engine-setup'?

Not really.

> 
> We always change the Require/Conflict lines at the point during development
> where they break. We can simply keep current, add jenkins jobs to upgrade
> 3.6 to 4.1, and fix things when they break. AFAIU. Unless the intention is
> different - keep current behavior and provide a different tool that will do
> a list of upgrades.

We need to ensure that all the SQL scripts needed to perform the upgrade 3.6 -> 4.0 -> 4.1 are there and that nothing is dropped in the meantime.

We also need to ensure that the upgrade works with a platform release coming in between. An example is 4.0 + 7.2 -> 4.2 + 7.3.

We also need to decide how to support Fedora here. 3.6 was on FC22, 4.0 is on FC23, and 4.1 will be on FC24, so this feature doesn't seem to apply to Fedora.

Also, Yaniv, how long does backward compatibility have to be maintained? I understood we wanted to drop 3.6 cluster support in 4.1, which collides with this feature.
Is this feature meant to support 4.0 -> 4.2, skipping 4.1?

(Originally by Sandro Bonazzola)

(Originally by rhv-bugzilla-bot)

Comment 5 RHV bug bot 2018-05-08 08:59:04 UTC
The validations are introduced in every minor. If we orchestrate the stepped setup, we can just fail if one of the conditions is not met.
So we will try: 4.0 -> 4.1 -> 4.2 -> 4.3, but we can fail at 4.1 if the setup fails for any reason.

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 6 RHV bug bot 2018-05-08 08:59:08 UTC
(In reply to Sandro Bonazzola from comment #2)
> (In reply to Yedidyah Bar David from comment #1)
> > Is there a problem with deciding that the tool will be 'engine-setup'?
> 
> Not really.
> 
> > 
> > We always change the Require/Conflict lines at the point during development
> > where they break. We can simply keep current, add jenkins jobs to upgrade
> > 3.6 to 4.1, and fix things when they break. AFAIU. Unless the intention is
> > different - keep current behavior and provide a different tool that will do
> > a list of upgrades.
> 
> We need to ensure that all the sql scripts needed to perform the upgrade 3.6
> -> 4.0 -> 4.1 are there and nothing is dropped in the meanwhile.

Indeed, that's the main issue. IIRC this already works as expected, need to verify.

> 
> We need also to ensure that the upgrade works with a platform release coming
> in between. Example is 4.0 + 7.2 -> 4.2 + 7.3

Not sure that's relevant to the current bug. Platform upgrades need to be handled anyway, and IIRC we always require running the latest platform update prior to upgrading the engine (and probably the hosts, which IIUC are irrelevant to this bug as well).

> 
> We need to decide also how to support Fedora on this. 3.6 was on FC22, 4.0
> is on FC23 and 4.1 will be on FC24; so this feature doesn't seem to apply to
> Fedora.

Depends on scope :-) If it turns out that the main issue is dbscripts, irrelevant. We'll mainly need to document the process, not add more code. I think.

> 
> Also Yaniv, how long has to be maintained backward compatibility? I
> understood we wanted to drop 3.6 cluster support in 4.1 and this collides
> with this feature.
> Is this feature meant to support 4.0 -> 4.2 skipping 4.1?

?

(In reply to Yaniv Dary from comment #3)
> The validations are introduced in every minor.

Which validations?

IIRC engine-setup does not "validate" the previous version. That's enforced simply using rpm Requires:/Conflicts: lines, which can easily be set to whatever we want.
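For illustration, such enforcement typically amounts to a line or two in the spec file. The fragment below is hypothetical (the version bound and package name are illustrative, not the actual spec contents):

```
# Hypothetical spec fragment: block installing this setup package while an
# engine older than 3.6.0 is present, so upgrades can only start at 3.6+.
Conflicts: ovirt-engine < 3.6.0
```

Loosening or tightening the allowed upgrade range is then just a matter of editing that version bound, which is what the linked gerrit patches do.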

> If we orchestrate the stepped
> setup, we can just fail if one of the conditions are not met.

Not sure I follow.

Do we need to "orchestrate"? Shouldn't a single run of the existing engine-setup be enough (with minor changes as needed)?

> So we will try : 4.0 -> 4.1 -> 4.2 -> 4.3, but we can fail in 4.1 if the
> setup fails for any reason.

Why not upgrade directly from 4.0 to 4.3 (say)? Just run all the dbscripts added since 4.0?

As I currently see this, most if not all of the current bug is about the future, not the past or present. Until now, if we decided that something had to happen at a certain upgrade point, we later removed the code doing it on the assumption that it had already happened; from now on we simply have to keep all the existing and new code doing such things. In practice this means more testing - CI/QE - not more code: have additional CI jobs that regularly run upgrades from 3.6 to 4.1, then from 3.6 to 4.2, and also from 4.0 to 4.2, etc.

BTW, in practice, we already suffer from not fully complying with our own policy - see e.g. bug 1315744. So ideally we should probably run upgrade jobs also from all/some minor versions - e.g. 3.6.0 -> 4.0, 3.6.1 -> 4.0, ..., 3.6.0 -> 4.1.0, ...

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 7 RHV bug bot 2018-05-08 08:59:13 UTC
Didi, can you please verify 3.6 -> master upgrade works?
We'll probably need to create a jenkins job for that as well.

(Originally by Sandro Bonazzola)

(Originally by rhv-bugzilla-bot)

Comment 8 RHV bug bot 2018-05-08 08:59:18 UTC
(In reply to Yedidyah Bar David from comment #4)
> Why not upgrade directly from 4.0 to 4.3 (say)? Just run all the dbscripts
> added since 4.0?

Because it is going to be practically impossible for QE to test every possible flow all the time. Stepped is a much saner path: since we test upgrades between minors, if we orchestrate this we should not require additional testing, and if a bug is found the system is left at an intermediate step.

> 
> As I currently see this, most if not all of current bug is about the future,
> not the past or present. This means that if until now we decided that
> something has to happen at a certain upgrade point and later removed the
> code doing this with the assumption this already happened, from now on we
> simply have to keep all the existing and new code doing such things. In
> practice it means more testing - CI/QE - not more code. Have additional CI
> jobs that regularly run upgrades from 3.6 to 4.1, then from 3.6 to 4.2 and
> also from 4.0 to 4.2, etc.
> 
> BTW, in practice, we already suffer from not fully complying with our own
> policy - see e.g. bug 1315744. So ideally we should probably run upgrade
> jobs also from all/some minor versions - e.g. 3.6.0 -> 4.0, 3.6.1 -> 4.0,
> ..., 3.6.0 -> 4.1.0, ...

Thinking we will be able to maintain and test all this is optimistic; stepped is much safer.

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 9 RHV bug bot 2018-05-08 08:59:22 UTC
Actually the exact opposite is true.

We currently already "support" all of these:

3.6.0 -> 4.0.0
3.6.1 -> 4.0.0
3.6.2 -> 4.0.0
...
3.6.8 -> 4.0.0
3.6.0 -> 4.0.1
3.6.1 -> 4.0.1
3.6.2 -> 4.0.2
...
3.6.8 -> 4.0.2
3.6.0 -> 4.0.3
...
3.6.8 -> 4.0.3

But do we "maintain" them? AFAIK, only when someone randomly finds a bug.

Do we test them? AFAIK no.

How do we maintain them? By forbidding the bad ones. E.g. it was found in bug 1315744 that 3.5.1 -> 3.6.5 fails, so we bumped the minimum source version to 3.5.6. Is that a good fix? I don't think so. Our docs do not mention that at all. We simply assume that most people won't notice, because they'll already be on a rather recent z-stream before trying an upgrade. Which might be true. But if CI/QE had failed right when the offending patch was merged, or a day later, we'd immediately notice and fix it - make the patch work from all versions. That way admins won't need to start thinking about what to do, downgrade setup packages in order to first upgrade to a later z-stream of the previous version, etc.

So we really must start testing them. You might claim we can skip all the non-latest-per-z-stream *targets*, but not *sources*. So at minimum we must start testing:

3.6.0 -> 4.0.3
3.6.1 -> 4.0.3
...
3.6.8 -> 4.0.3
4.0.0 -> 4.0.3
4.0.1 -> 4.0.3
4.0.2 -> 4.0.3

And when 4.0.4 is out, replace that with:

3.6.0 -> 4.0.4
...
3.6.8 -> 4.0.4
4.0.0 -> 4.0.4
...
4.0.3 -> 4.0.4

Once we do that, which we really should, adding also:

3.6.0 -> 4.1.master
...
4.0.3 -> 4.1.master

is not that much more load on the CI/QE machines and almost zero more human work.
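The testing matrix described above can be generated mechanically. A sketch in Python (the version lists are illustrative, matching the enumeration in this comment, not an actual CI configuration):

```python
def upgrade_test_matrix(sources, latest_targets):
    """All (source, target) upgrade paths to exercise in CI: every
    released source z-stream paired with the latest z-stream of each
    newer target stream."""
    return [(src, tgt) for tgt in latest_targets for src in sources if src < tgt]

# Illustrative version lists: every 3.6.z and 4.0.z source, and the
# latest z-stream per target stream (4.0.3) plus the 4.1 development target.
sources = [(3, 6, z) for z in range(9)] + [(4, 0, z) for z in range(3)]
targets = [(4, 0, 3), (4, 1, 0)]
matrix = upgrade_test_matrix(sources, targets)
```

For the lists above this yields 12 paths per target (3.6.0..3.6.8 plus 4.0.0..4.0.2), so when a new z-stream is released only the target list changes.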

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 10 RHV bug bot 2018-05-08 08:59:27 UTC
And, BTW, 3.6 -> 4.0 is a migration, not an upgrade. So more delicate and time-consuming. So it's even more important that we verify all 3.6.z -> 4.0.z cases regularly.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 11 RHV bug bot 2018-05-08 08:59:32 UTC
(In reply to Sandro Bonazzola from comment #5)
> Didi, can you please verify 3.6 -> master upgrade works?

Seems to work, see below.

> We'll probably need to create a jenkins job for that as well.

Indeed. And define clearly what we want to verify... Specifically, I had a machine with engine+dwh 3.6.3.4, and upgraded it twice to 4.1 - once directly (required the small patch linked to this bug to loosen Requires:) and once through 4.0 (on a snapshot taken when it was 3.6). In each, ran this:

psql -c 'select version,script,checksum,state,current,comment from schema_version order by script' engine

Following is diff of the outputs:

# diff -ub ~postgres/3.6-4.0-4.1-schema_version 3.6-4.1-schema_version 
--- /var/lib/pgsql/3.6-4.0-4.1-schema_version   2016-09-12 17:26:56.451041671 +0300
+++ 3.6-4.1-schema_version      2016-09-12 17:27:12.000000000 +0300
@@ -360,7 +360,7 @@
  04000120 | upgrade/04_00_0120_fix_invalid_macs.sql                                                       | 24012afa69881c84fea4428b004d94ad | SKIPPED   | f       | Installed already by 03061980
  04000130 | upgrade/04_00_0130_add_host_device_column_to_vm_device.sql                                    | d140bfabaf0a8c73294ae6d254fd3449 | INSTALLED | f       | 
  04000140 | upgrade/04_00_0140_convert_memory_snapshots_to_disks.sql                                      | 7932c1a865ea21cd5ca594c271c3dee7 | INSTALLED | f       | 
- 04000150 | upgrade/04_00_0150_remove_internal_policy_units.sql                                           | 7494981c761b6757c39e60e8bcc63fbe | INSTALLED | f       | 
+ 04000150 | upgrade/04_00_0150_remove_internal_policy_units.sql                                           | 133ac06c8dbe241891824fa50acbb07a | INSTALLED | f       | 
  04000160 | upgrade/04_00_0160_fix_migratedowntime_option.sql                                             | 8a3c3f427e78a16c092d3599a1d4341d | SKIPPED   | f       | Installed already by 03062000
  04000170 | upgrade/04_00_0170_chance_min_allocated_mem_of_blank.sql                                      | c58813bfb78a355002c2dd0689788fa9 | SKIPPED   | f       | Installed already by 03061990
  04000180 | upgrade/04_00_0180_attach_cpu_profile_permissions.sql                                         | e392640d8bf2ce4908ba8155bbee1c97 | SKIPPED   | f       | Installed already by 03062040
@@ -421,33 +421,22 @@
  04000730 | upgrade/04_00_0730_remove_filter_from_passthrough_vnics.sql                                   | 95f214a9af709a1fd6be8ceee1b9325b | INSTALLED | f       | 
  04000740 | upgrade/04_00_0740_remove_allow_dhcp_server_filter.sql                                        | 76c9de1ed8e789d403fb721fad98c0b4 | INSTALLED | f       | 
  04000750 | upgrade/04_00_0750_change_cluster_default_policy.sql                                          | 75c7c90ef69c9e5d7564312cbd7f3f25 | INSTALLED | f       | 
- 04000760 | upgrade/04_00_0760_drop_unique_constraint.sql                                                 | ee3245bdf8250ffce39f823c77cd0e41 | INSTALLED | f       | 
- 04000770 | upgrade/04_00_0770_set_pool_vms_stateless.sql                                                 | 0cac647f8ae83dbed1fd59c6305b36c6 | INSTALLED | f       | 
- 04000780 | upgrade/04_00_0780_add_gluster_server_peer_status.sql                                         | a96fa84129e4f7209756c0160371ba49 | INSTALLED | f       | 
- 04000790 | upgrade/04_00_0790_change_disk_status_to_ok.sql                                               | 0c2aa967928cff47130182ef91d26318 | INSTALLED | f       | 
- 04000800 | upgrade/04_00_0800_change_vm_device_null_plugged_values_to_false.sql                          | 49ad263432af39ce11f240f2b83077c1 | INSTALLED | f       | 
- 04000810 | upgrade/04_00_0810_remove_hosted_engine_storage_domain_name_config_value.sql                  | ce43dd761b3e388666cc4976f65fa0d3 | INSTALLED | f       | 
- 04000820 | upgrade/04_00_0820_add_vds_dynamic_pretty_name.sql                                            | 7404b0a567eba4aff9c4725617a682b6 | INSTALLED | f       | 
- 04000830 | upgrade/04_00_0830_external_mac_events.sql                                                    | 43cda642d53a78fcc7a1d907411a2da9 | INSTALLED | f       | 
- 04000840 | upgrade/04_00_0840_update_macs_to_lower_case.sql                                              | 6c47daaa6de1e96654faffcf1326c01d | INSTALLED | f       | 
- 04000850 | upgrade/04_00_0850_add_switch_type_to_vds_interface_and_cluster.sql                           | 1d55e7f7852e0e8c26a548b553bbca62 | INSTALLED | f       | 
- 04000860 | upgrade/04_00_0860_fix_null_cpu_profile_id.sql                                                | 359560e7fef298e09f5ffe67bb8203bb | INSTALLED | f       | 
  04010010 | upgrade/04_01_0010_add_mac_pool_id_to_vds_group.sql                                           | c49970abed1840e96ec1f28224bdb511 | INSTALLED | f       | 
  04010020 | upgrade/04_01_0020_empty_current_cd_to_null.sql                                               | 601feba54b6f998e724bbddbfe3f4db5 | INSTALLED | f       | 
  04010030 | upgrade/04_01_0030_remove_mac_pool_id_from_storage_pool.sql                                   | db614994bd8e46d7114d2f47666f7509 | INSTALLED | f       | 
  04010040 | upgrade/04_01_0040_move_guest_mem_fields_to_statistics.sql                                    | 656a9f1722a8215477c76b09876d1b0d | INSTALLED | f       | 
  04010050 | upgrade/04_01_0050_remove_el7_upgrade_policy_units.sql                                        | 961069f2a02e9ea75c1c55cec6314fea | INSTALLED | f       | 
- 04010060 | upgrade/04_01_0060_add_switch_type_to_vds_interface_and_cluster.sql                           | 1d55e7f7852e0e8c26a548b553bbca62 | SKIPPED   | f       | Installed already by 04000850
- 04010070 | upgrade/04_01_0070_set_pool_vms_stateless.sql                                                 | 0cac647f8ae83dbed1fd59c6305b36c6 | SKIPPED   | f       | Installed already by 04000770
- 04010080 | upgrade/04_01_0080_add_gluster_server_peer_status.sql                                         | a96fa84129e4f7209756c0160371ba49 | SKIPPED   | f       | Installed already by 04000780
- 04010090 | upgrade/04_01_0090_drop_unique_constraint.sql                                                 | ee3245bdf8250ffce39f823c77cd0e41 | SKIPPED   | f       | Installed already by 04000760
- 04010100 | upgrade/04_01_0100_remove_hosted_engine_storage_domain_name_config_value.sql                  | ce43dd761b3e388666cc4976f65fa0d3 | SKIPPED   | f       | Installed already by 04000810
- 04010110 | upgrade/04_01_0110_change_disk_status_to_ok.sql                                               | 0c2aa967928cff47130182ef91d26318 | SKIPPED   | f       | Installed already by 04000790
- 04010120 | upgrade/04_01_0120_add_vds_dynamic_pretty_name.sql                                            | 7404b0a567eba4aff9c4725617a682b6 | SKIPPED   | f       | Installed already by 04000820
- 04010130 | upgrade/04_01_0130_change_vm_device_null_plugged_values_to_false.sql                          | 49ad263432af39ce11f240f2b83077c1 | SKIPPED   | f       | Installed already by 04000800
- 04010140 | upgrade/04_01_0140_external_mac_events.sql                                                    | 43cda642d53a78fcc7a1d907411a2da9 | SKIPPED   | f       | Installed already by 04000830
+ 04010060 | upgrade/04_01_0060_add_switch_type_to_vds_interface_and_cluster.sql                           | 1d55e7f7852e0e8c26a548b553bbca62 | INSTALLED | f       | 
+ 04010070 | upgrade/04_01_0070_set_pool_vms_stateless.sql                                                 | 0cac647f8ae83dbed1fd59c6305b36c6 | INSTALLED | f       | 
+ 04010080 | upgrade/04_01_0080_add_gluster_server_peer_status.sql                                         | a96fa84129e4f7209756c0160371ba49 | INSTALLED | f       | 
+ 04010090 | upgrade/04_01_0090_drop_unique_constraint.sql                                                 | ee3245bdf8250ffce39f823c77cd0e41 | INSTALLED | f       | 
+ 04010100 | upgrade/04_01_0100_remove_hosted_engine_storage_domain_name_config_value.sql                  | ce43dd761b3e388666cc4976f65fa0d3 | INSTALLED | f       | 
+ 04010110 | upgrade/04_01_0110_change_disk_status_to_ok.sql                                               | 0c2aa967928cff47130182ef91d26318 | INSTALLED | f       | 
+ 04010120 | upgrade/04_01_0120_add_vds_dynamic_pretty_name.sql                                            | 7404b0a567eba4aff9c4725617a682b6 | INSTALLED | f       | 
+ 04010130 | upgrade/04_01_0130_change_vm_device_null_plugged_values_to_false.sql                          | 49ad263432af39ce11f240f2b83077c1 | INSTALLED | f       | 
+ 04010140 | upgrade/04_01_0140_external_mac_events.sql                                                    | 43cda642d53a78fcc7a1d907411a2da9 | INSTALLED | f       | 
  04010150 | upgrade/04_01_0150_delete_datacenterwithoutspm_conf_value.sql                                 | 556bc3ace270c6d7502dbdeebd0c49a4 | INSTALLED | f       | 
- 04010160 | upgrade/04_01_0160_update_macs_to_lower_case.sql                                              | 6c47daaa6de1e96654faffcf1326c01d | SKIPPED   | f       | Installed already by 04000840
+ 04010160 | upgrade/04_01_0160_update_macs_to_lower_case.sql                                              | 6c47daaa6de1e96654faffcf1326c01d | INSTALLED | f       | 
  04010170 | upgrade/04_01_0170_remove_deprecated_config_EnableDeprecatedClientModeSpicePlugin.sql         | d235d270c4485230e82301ae29d5d2d3 | INSTALLED | f       | 
  04010180 | upgrade/04_01_0180_add_fencing_policy_for_gluster_bricks.sql                                  | c29121948679a2b0b2cb10e2a2092410 | INSTALLED | f       | 
  04010190 | upgrade/04_01_0190_add_default_quota.sql                                                      | 95ccac5bdde6abbee893a72516c15c3a | INSTALLED | f       | 
@@ -457,9 +446,9 @@
  04010230 | upgrade/04_01_0230_automatic_storage_select.sql                                               | 65c4e30fc28a4298b8298b22d9e33f63 | INSTALLED | f       | 
  04010240 | upgrade/04_01_0240_add_step_subject_entites.sql                                               | 46a643b1677fe036fe0da01e6aadb784 | INSTALLED | f       | 
  04010250 | upgrade/04_01_0250_add_progress_column_to_step_table.sql                                      | 06aa24f598790778d738e858ccdc646a | INSTALLED | f       | 
- 04010260 | upgrade/04_01_0260_fix_null_cpu_profile_id.sql                                                | 359560e7fef298e09f5ffe67bb8203bb | SKIPPED   | f       | Installed already by 04000860
+ 04010260 | upgrade/04_01_0260_fix_null_cpu_profile_id.sql                                                | 359560e7fef298e09f5ffe67bb8203bb | INSTALLED | f       | 
  04010270 | upgrade/04_01_0270_add_vm_auto_login_column_to_user_profiles_table.sql                        | 4d8ebecdc1d5b7415fdcabd36d068ed6 | INSTALLED | f       | 
  04010280 | upgrade/04_01_0280_display_migration_cluster_network.sql                                      | 6d2852bcf6d2c17944c67e4ce08cb61f | INSTALLED | f       | 
  04010290 | upgrade/04_01_0290_delete_audit_log_records_of_reverted_audit_log_type.sql                    | fe7dfd64a2b7ec18c57bfaa256c177d5 | INSTALLED | t       | 
-(461 rows)
+(450 rows)

As you can see, there is some diff. Ideally I guess there shouldn't be; not sure that's possible. AFAIU (didn't try) a similar diff can be seen depending on the source version you start from - e.g. if some file was added after 3.6.0 was released, first to 4.0/4.1 and then backported to 3.6.z, then if you upgrade from 3.6.z-1 you'll get this file on the upgrade to 4.0, but from 3.6.z+ you'll already have it.

Also I didn't test other things in the system that can possibly be different.

Anything else?

Adding also Eli for review.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 12 RHV bug bot 2018-05-08 08:59:38 UTC
> As you can see, there is some diff. Ideally I guess there shouldn't be. Not sure that's possible. AFAIU (didn't try) a similar diff can be seen depending on the source version you start from - e.g. if some file was added after 3.6.0 we released, first to 4.0/4.1 then backported to 3.6.z, if you upgrade from 3.6.z-1 you'll get this file on the upgrade to 4.0, but from 3.6.z+ you'll already have it.

That's right. When a script is run, its checksum is recorded in the schema_version table, and if a later script has the same checksum, it is skipped.
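That skip logic can be sketched minimally in Python (names are illustrative; the real logic lives in the engine's dbscripts upgrade runner, and this is only a model of the behavior described above):

```python
import hashlib

def plan_upgrade_scripts(scripts, already_applied_checksums):
    """Decide, per script, INSTALLED vs SKIPPED: a script whose MD5
    checksum was already recorded in schema_version is skipped.

    scripts: iterable of (script_name, script_body) pairs, in order.
    already_applied_checksums: checksums recorded by previous upgrades.
    """
    seen = set(already_applied_checksums)
    plan = []
    for name, body in scripts:
        digest = hashlib.md5(body.encode()).hexdigest()
        if digest in seen:
            plan.append((name, "SKIPPED"))
        else:
            seen.add(digest)
            plan.append((name, "INSTALLED"))
    return plan
```

This models why, in the diff in comment 11, the 4.1 copies of scripts like update_macs_to_lower_case.sql show as SKIPPED on the stepped path: the identical 4.0 script had already recorded the same checksum.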

(Originally by Eli Mesika)

(Originally by rhv-bugzilla-bot)

Comment 13 RHV bug bot 2018-05-08 08:59:42 UTC
First jenkins build for the upgrade job [1] finished successfully.

Moving to MODIFIED. Please move back if anything else is needed.

[1] http://jenkins.ovirt.org/job/ovirt-engine_master_upgrade-from-3.6_el7_merged/

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 14 RHV bug bot 2018-05-08 08:59:47 UTC
Sandro - do we want also restore from earlier versions?

Current engine-backup can restore 3.6 on 4.0. Other combinations with different source and target are not allowed. It's a one-line fix to add a combination (if it works), but we do not have a jenkins job that tests that, and we probably need one to allow that. We probably need one even if we don't :-)

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 15 RHV bug bot 2018-05-08 08:59:52 UTC
(In reply to Yedidyah Bar David from comment #12)
> Sandro - do we want also restore from earlier versions?
> 
> Current engine-backup can restore 3.6 on 4.0. Other combinations with
> different source and target are not allowed. It's a one-line fix to add a
> combination (if it works), but we do not have a jenkins job that tests that,
> and we probably need one to allow that. We probably need one even if we
> don't :-)

Please discuss with Sandro, this is not the design we will be taking. Make sure the patches are reverted where needed.

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 16 RHV bug bot 2018-05-08 08:59:57 UTC
(In reply to Yedidyah Bar David from comment #12)
> Sandro - do we want also restore from earlier versions?

No we don't.

(Originally by Sandro Bonazzola)

(Originally by rhv-bugzilla-bot)

Comment 17 RHV bug bot 2018-05-08 09:00:02 UTC
Just FYI:

Today we always require latest minor version of current release before starting the upgrade.
For example, look into the very first step of the upgrade helper:
https://access.redhat.com/labs/rhevupgradehelper/

"Always update to the latest minor version of your current Red Hat Enterprise Virtualization Manager version before you upgrade to the next major version."


However, I think if we want to implement this bug fully, we should indeed allow upgrade from any version.

(Originally by Marina Kalinin)

(Originally by rhv-bugzilla-bot)

Comment 22 RHV bug bot 2018-05-08 09:00:24 UTC
ok

ovirt-engine-4.0.6.3-1.el7.centos.noarch -> ovirt-engine-4.2.0-0.0.master.20170913112412.git2eb3c0a.el7.centos.noarch

postgresql-9.2.21-1.el7.x86_64 -> rh-postgresql95-postgresql-9.5.7-2.el7.x86_64

# systemctl list-units | grep postgres
  rh-postgresql95-postgresql.service                                                          loaded active running   PostgreSQL database server
# systemctl list-unit-files | grep postgres
postgresql.service                            disabled
rh-postgresql95-postgresql.service            enabled 
rh-postgresql95-postgresql@.service           disabled

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 23 RHV bug bot 2018-05-08 09:00:28 UTC
(In reply to Jiri Belka from comment #20)
> ok
> 

Thanks for the report, but we decided to not do this, in the long discussion concluding in comment 13.

> ovirt-engine-4.0.6.3-1.el7.centos.noarch ->
> ovirt-engine-4.2.0-0.0.master.20170913112412.git2eb3c0a.el7.centos.noarch

I guess we should prevent that, unless QE (and CEE) want to support this officially. Pushed this patch to prevent it:

https://gerrit.ovirt.org/81858

Moving to NEW anyway: even if we do not merge the above patch, we decided on a different solution to the current bug, and we do not have it yet.

> 
> postgresql-9.2.21-1.el7.x86_64 ->
> rh-postgresql95-postgresql-9.5.7-2.el7.x86_64
> 
> # systemctl list-units | grep postgres
>   rh-postgresql95-postgresql.service                                        
> loaded active running   PostgreSQL database server
> # systemctl list-unit-files | grep postgres
> postgresql.service                            disabled
> rh-postgresql95-postgresql.service            enabled 
> rh-postgresql95-postgresql@.service           disabled

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 24 RHV bug bot 2018-05-08 09:00:33 UTC
Please be specific about the functionality of this RFE. The DocText states 4.0 -> 4.2 is supported (if ovirt\*setup\* is the 4.2 version), so I'm confused about what we should expect from this BZ.

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 25 RHV bug bot 2018-05-08 09:00:38 UTC
(In reply to Jiri Belka from comment #22)
> Please be specific about functionality of this RFE. DocText states 4.0 ->
> 4.2 is supported (if ovirt\*setup\* would be 4.2 version),

Doc text was written before comment 13. There, PM decided to not support this.

> thus I'm confused
> what we should expect from this BZ.

A simple new tool that will guide the user through the same process as a manual upgrade - add a repo, run engine-setup - in a loop.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 26 RHV bug bot 2018-05-08 09:00:43 UTC
Based on c#23
- Do we have a new RFE for that tool mentioned? BZ# ?
Yanivs - are you ok with closing this RFE as WONTFIX?

(Originally by Pavel Stehlik)

(Originally by rhv-bugzilla-bot)

Comment 27 RHV bug bot 2018-05-08 09:00:48 UTC
(In reply to Pavel Stehlik from comment #24)
> Based on c#23
> - Do we have a new RFE for that tool mentioned? BZ# ?

No.

> Yanivs - are you ok with closing this RFE as WONTFIX?

No, I believe this should be a tracker BZ.

(Originally by Yaniv Kaul)

(Originally by rhv-bugzilla-bot)

Comment 28 RHV bug bot 2018-05-08 09:00:52 UTC
Testing has indicated this request is declined. You may appeal this decision by reopening this request.

(Originally by rule-engine)

(Originally by rhv-bugzilla-bot)

Comment 29 RHV bug bot 2018-05-08 09:00:56 UTC
initial brain dump: https://github.com/oVirt/ovirt-engine-hyper-upgrade

(Originally by Sandro Bonazzola)

(Originally by rhv-bugzilla-bot)

Comment 30 RHV bug bot 2018-05-08 09:01:02 UTC
I have prepared a pull request with improvements based on the initial draft: https://github.com/oVirt/ovirt-engine-hyper-upgrade/pull/1

The tool is called ovirt-engine-hyper-upgrade, but a symlink named engine-hyper-upgrade is created when it is installed by RPM. There is also a man page.

# engine-hyper-upgrade --help
usage: engine-hyper-upgrade [-h] [--check-upgrade-rhv-4-0]
                                 [--check-upgrade-rhv-4-1]

Tool to upgrade RHV environments

optional arguments:
  -h, --help            show this help message and exit
  --check-upgrade-rhv-4-0
                        Check if RHV 4.0 channels have upgrade available. Also enable 4.1 channels
  --check-upgrade-rhv-4-1
                        Check if RHV 4.1 channels have zstream upgrade available. Also enable 4.2 channels

Example of use:
ovirt-engine-hyper-upgrade --check-upgrade-rhv-4-0

The log is available at /var/log/engine-hyper-upgrade.log


Step by step, what the tool will execute (example using the --check-upgrade-rhv-4-0 flag):

1) Check if the official repositories for RHV 4 are available; if not, warn users
   (do not exit or add them, because they might use internal repositories with different names)
2) Look for upgrades via engine-upgrade-check (zstream)
3) Update engine-setup* and ovirt-engine-dwh-setup* packages
4) Run engine-setup
5) Update the system via yum update
6) Enable 4.1 repository
7) Update engine-setup* and ovirt-engine-dwh-setup* packages
8) Run engine-setup
9) Update the system via yum update
10) Disable 4.0 repository

If users pass --check-upgrade-rhv-4-1, the flow is the same as described above, but the 4.2 repositories are enabled at the end.
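The steps above can be sketched as a dry-run plan. This is a model only: the repo-handling helper names (check-rhv-repos-available, enable-repos, disable-repos) are hypothetical placeholders, not the tool's actual commands.

```python
def hyper_upgrade_plan(start="4.0", end="4.2"):
    """Ordered command sequence mirroring the steps above: per version
    hop, refresh the setup packages, run engine-setup, update the
    system, then enable the next version's repositories; finally
    disable the starting version's repositories."""
    versions = ["4.0", "4.1", "4.2"]
    plan = ["check-rhv-repos-available",  # hypothetical step-1 helper
            "engine-upgrade-check"]
    cur = start
    while cur != end:
        nxt = versions[versions.index(cur) + 1]
        plan += [
            "yum update ovirt\\*setup\\* ovirt-engine-dwh-setup\\*",
            "engine-setup",
            "yum -y update",
            f"enable-repos {nxt}",   # hypothetical repo helper
        ]
        cur = nxt
    plan.append(f"disable-repos {start}")  # hypothetical repo helper
    return plan
```

For a 4.0 -> 4.2 run this produces two engine-setup invocations, one per hop, which is the stepped behavior PM asked for in the earlier discussion.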

(Originally by dougsland)

(Originally by rhv-bugzilla-bot)

Comment 31 RHV bug bot 2018-05-08 09:01:07 UTC
doc-team: Previous doc text was written before that idea was rejected. Now it's irrelevant. Something like this should be used instead, but please wait until the bug is on MODIFIED, because we didn't fully make up our minds yet. Copying it also to doc-text field, but discussion should probably happen in comments, which are easier to read:

A new utility is now supplied, called (TBD, suggestions are more than welcome, also for the package name to include it) engine-hyper-upgrade, that can be used to guide the user through upgrading 4.0 or later systems.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 32 RHV bug bot 2018-05-08 09:01:12 UTC
Will we only support this for downstream?

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 33 RHV bug bot 2018-05-08 09:01:17 UTC
(In reply to Yaniv Lavi from comment #30)
> Will we only support this in for downstream?

You are asking me? I am asking you!

I thought we'll support both.

Current (pending review) code is downstream-only.

Current code assumes subscription-manager. Not sure if we must support also Satellite, or other means, for current bug - if not, please open additional ones.

It should not be too hard to adapt it to upstream, or to custom repos/commands/etc. - can be done later on.

And probably create a new bugzilla product/component and move there, as you decided you do not want it as part of engine-setup.

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 34 RHV bug bot 2018-05-08 09:01:22 UTC
(In reply to Yedidyah Bar David from comment #31)
> (In reply to Yaniv Lavi from comment #30)
> > Will we only support this in for downstream?
> 
> You are asking me? I am asking you!
> 
> I thought we'll support both.

That is what was discussed.

> 
> Current (pending review) code is downstream-only.
> 
> Current code assumes subscription-manager. Not sure if we must support also
> Satellite, or other means, for current bug - if not, please open additional
> ones.
> 
> It should not be too hard to adapt it to upstream, or to custom
> repos/commands/etc. - can be done later on.
> 
> And probably create a new bugzilla product/component and move there, as you
> decided you do not want it as part of engine-setup.

For downstream, we only need subscription-manager (which supports Satellite); the list to iterate on should be provided with defaults, but be changeable by the user to custom values.
For upstream, you should be able to set the repos list.
I'm not sure why there are flags for specific releases in the draft.
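One possible shape for user-changeable defaults, as a sketch (the variable names are hypothetical, not from the tool): ship a default channel list and let users on Satellite or custom repositories override it via the environment.

```shell
# Hypothetical: default channel lists, overridable by exporting the
# variable before running the tool. Repo IDs here are the RHSM defaults.
RHV40_REPOS="${RHV40_REPOS:-rhel-7-server-rhv-4.0-rpms}"
RHV41_REPOS="${RHV41_REPOS:-rhel-7-server-rhv-4.1-rpms rhel-7-server-rhv-4-tools-rpms}"
echo "4.1 channels: $RHV41_REPOS"
```

A user with custom repos would run `RHV41_REPOS="my-custom-rhv-41" ovirt-engine-hyper-upgrade ...` (again, hypothetical; the shipped tool may choose a different mechanism).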

(Originally by ylavi)

(Originally by rhv-bugzilla-bot)

Comment 35 RHV bug bot 2018-05-08 09:01:28 UTC
(In reply to Yedidyah Bar David from comment #29)
> doc-team: Previous doc text was written before that idea was rejected. Now
> it's irrelevant. Something like this should be used instead, but please wait
> until the bug is on MODIFIED, because we didn't fully make up our minds yet.
> Copying it also to doc-text field, but discussion should probably happen in
> comments, which are easier to read:
> 
> A new utility is now supplied, called (TBD, suggestions are more than
> welcome, also for the package name to include it) engine-hyper-upgrade, that
> can be used to guide the user through upgrading 4.0 or later systems.

Thanks for letting us know. Please update the doc text again when the details are decided.

(Originally by Tahlia Richardson)

(Originally by rhv-bugzilla-bot)

Comment 39 RHV bug bot 2018-05-08 09:01:48 UTC
not executed, just read briefly...

please stick to usual conventions - ovirt\*setup\*

https://github.com/oVirt/ovirt-engine-hyper-upgrade/blob/master/src/ovirt-engine-hyper-upgrade#L383

i suppose it won't upgrade for example ovirt-imageio-proxy-setup for us.

from 4.1.8:

# rpm -qa ovirt\*setup\*
ovirt-engine-setup-4.1.8.1-0.1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.1.8.1-0.1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.8.1-0.1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.1.8.1-0.1.el7.noarch
ovirt-imageio-proxy-setup-1.0.0-0.el7ev.noarch
ovirt-setup-lib-1.1.4-1.el7ev.noarch
ovirt-engine-dwh-setup-4.1.8-1.el7ev.noarch
ovirt-engine-setup-base-4.1.8.1-0.1.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.1.8.1-0.1.el7.noarch

<nitpicking>
ovirt tools usually use python yum module, not calling yum binary directly
</nitpicking>

please describe what it really does; i'm not sure why each person should have to read the code to get an idea of what it does - "manager upgrade/help to upgrade" is not really enough in a manpage. it messes with the whole OS (https://github.com/oVirt/ovirt-engine-hyper-upgrade/blob/master/src/ovirt-engine-hyper-upgrade#L406) so the user should be aware of it.

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 40 RHV bug bot 2018-05-08 09:01:53 UTC
(In reply to Jiri Belka from comment #37)
> not executed, just read briefly...

I'd encourage you to execute the tool, especially to give us feedback.
The earlier we get feedback, the faster we can improve the tool.


> 
> please stick to usual conventions - ovirt\*setup\*
> 

> https://github.com/oVirt/ovirt-engine-hyper-upgrade/blob/master/src/ovirt-
> engine-hyper-upgrade#L383
> 
> i suppose it won't upgrade for example ovirt-imageio-proxy-setup for us.
> 
> from 4.1.8:
> 
> # rpm -qa ovirt\*setup\*
> ovirt-engine-setup-4.1.8.1-0.1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-common-4.1.8.1-0.1.el7.noarch
> ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.8.1-0.1.el7.noarch
> ovirt-engine-setup-plugin-ovirt-engine-4.1.8.1-0.1.el7.noarch
> ovirt-imageio-proxy-setup-1.0.0-0.el7ev.noarch
> ovirt-setup-lib-1.1.4-1.el7ev.noarch
> ovirt-engine-dwh-setup-4.1.8-1.el7ev.noarch
> ovirt-engine-setup-base-4.1.8.1-0.1.el7.noarch
> ovirt-engine-setup-plugin-websocket-proxy-4.1.8.1-0.1.el7.noarch
> 

Agreed, let's use the conventional pattern.


> <nitpicking>
> ovirt tools usually use python yum module, not calling yum binary directly
> </nitpicking>

Not really required for such a tool, but if you think it's needed please file an RFE.

> 
> please describe what it really does, i'm not sure why each person should
> read code to get idea what it does - "manager upgrade/help to upgrade" is
> not really enough in manpage. it messes with whole OS
> (https://github.com/oVirt/ovirt-engine-hyper-upgrade/blob/master/src/ovirt-
> engine-hyper-upgrade#L406) so the user should be aware of it.

Sure, let's improve the man page. Please let me know if you have a suggestion or even a patch. :)

(Originally by dougsland)

(Originally by rhv-bugzilla-bot)

Comment 41 RHV bug bot 2018-05-08 09:01:58 UTC
package built and will be included in 4.2 beta.
Please open bugs against it as separate BZs.

(Originally by Sandro Bonazzola)

(Originally by rhv-bugzilla-bot)

Comment 43 RHV bug bot 2018-05-08 09:02:07 UTC
subscription-manager is a very slow tool, and this wrapper adds extra delay by executing an individual subscription-manager command for each repository it enables:

...
[ INFO  ] Executing: env LC_ALL=C subscription-manager repos --enable rhel-7-server-rhv-4.1-rpms
Repository 'rhel-7-server-rhv-4.1-rpms' is enabled for this system.
[ INFO  ] Enabling repository: rhel-7-server-rhv-4-tools-rpms
[ INFO  ] Executing: env LC_ALL=C subscription-manager repos --enable rhel-7-server-rhv-4-tools-rpms
...
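One way to reduce that overhead (a sketch only; whether the tool adopts this is undecided) is to batch all repository IDs into a single subscription-manager invocation, since `repos` accepts multiple `--enable` flags:

```shell
# Sketch: build one subscription-manager command enabling all repos at
# once, instead of spawning one slow process per repository.
# Repo IDs are taken from the log excerpt above.
repos="rhel-7-server-rhv-4.1-rpms rhel-7-server-rhv-4-tools-rpms"
args=""
for r in $repos; do
    args="$args --enable $r"
done
cmd="subscription-manager repos$args"
echo "$cmd"
```

The resulting single command pays subscription-manager's startup cost once rather than once per repository.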

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 44 RHV bug bot 2018-05-08 09:02:11 UTC
IMO if I choose to do a backup in the wrapper, engine-setup should not ask me again to do a backup :)

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 45 RHV bug bot 2018-05-08 09:02:16 UTC
(In reply to Jiri Belka from comment #42)
> IMO if I choose to do backup in the wrapper, engine-setup should not ask me
> again for doing backup :)

These are different kinds of backup. If you ask engine-setup to backup and it fails, it will (should, at least) automatically rollback, and that's the only planned/designed use for this backup.

The wrapper's backup is done using engine-backup, and:

1. There is no automatic rollback.

2. You can restore on a different machine, just as with any manual engine-backup backup. Or reinstall the current machine and restore onto it.

You might claim that it's a waste to run both, especially with a possibly large dwh db, and I agree. Not sure what's the best way to solve this, though. Open an RFE if you want...
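The two separate backups discussed above can be sketched as follows. This is a dry-run illustration of engine-backup's documented --mode/--file/--log options; the file name and directory layout are assumptions, not the wrapper's actual behavior:

```shell
# Dry-run sketch: wrapper-style backup before upgrade, and the separate
# manual restore path. "run" echoes commands instead of executing them.
CMDS=""
run() { CMDS="$CMDS | $*"; echo "+ $*"; }

BACKUP_DIR="${BACKUP_DIR:-/tmp}"          # wrapper default per the man page
f="$BACKUP_DIR/engine-backup.tar.bz2"     # hypothetical file name

# Backup taken by the wrapper before upgrading:
run engine-backup --mode=backup --file="$f" --log="$BACKUP_DIR/engine-backup.log"
# Restore is a separate manual step, on this or a different machine:
run engine-backup --mode=restore --file="$f" --log="$BACKUP_DIR/engine-restore.log"
```

Unlike engine-setup's internal backup, nothing here rolls back automatically; the restore must be invoked by the admin.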

(Originally by didi)

(Originally by rhv-bugzilla-bot)

Comment 46 RHV bug bot 2018-05-08 09:02:21 UTC
ok, ovirt-engine-hyper-upgrade-1.0.0-3.el7ev.noarch

it works fine, all other improvements can be solved in other BZs.

(Originally by Jiri Belka)

(Originally by rhv-bugzilla-bot)

Comment 47 RHV bug bot 2018-05-08 09:02:26 UTC
Hey Didi
I'm documenting this feature and I have a few questions.

1. Do you have a document on which I can use as a basis for my documentation?

2. How final is the process. For example, I can see that Jiri had a few suggestions for improving the process. Have RFEs been opened, and if yes, will the be done by GA?

I want to make the documentation process as efficient as possible, therefore it's crucial that I understand whether this is completely ready for me to take on.
Thanks!

(Originally by Emma Heftman)

(Originally by rhv-bugzilla-bot)

Comment 48 RHV bug bot 2018-05-08 09:02:31 UTC
(In reply to Emma Heftman from comment #45)
> Hey Didi
> I'm documenting this feature and I have a few questions.
> 
> 1. Do you have a document on which I can use as a basis for my documentation?
> 
> 2. How final is the process. For example, I can see that Jiri had a few
> suggestions for improving the process. Have RFEs been opened, and if yes,
> will the be done by GA?
> 
> I want to make the documentation process as efficient as possible, therefore
> it's crucial that I understand whether this is completely ready for me to
> take on.
> Thanks!

And one more question. Can you confirm that this tool should only be used for 4.0-4.2.

(Originally by Emma Heftman)

(Originally by rhv-bugzilla-bot)

Comment 49 RHV bug bot 2018-05-08 09:02:37 UTC
Hey Douglas. If you could answer my questions instead of or in addition to Didi that would be great.
Thanks!

(Originally by Emma Heftman)

(Originally by rhv-bugzilla-bot)

Comment 50 RHV bug bot 2018-05-08 09:02:41 UTC
(In reply to Emma Heftman from comment #46)
> (In reply to Emma Heftman from comment #45)
> > Hey Didi
> > I'm documenting this feature and I have a few questions.
> > 
> > 1. Do you have a document on which I can use as a basis for my documentation?
> > 
> > 2. How final is the process. For example, I can see that Jiri had a few
> > suggestions for improving the process. Have RFEs been opened, and if yes,
> > will the be done by GA?
> > 
> > I want to make the documentation process as efficient as possible, therefore
> > it's crucial that I understand whether this is completely ready for me to
> > take on.
> > Thanks!
> 
> And one more question. Can you confirm that this tool should only be used
> for 4.0-4.2.

It can be used to upgrade from 4.0 to 4.2 or from 4.1 to 4.2.

(Originally by dougsland)

(Originally by rhv-bugzilla-bot)

Comment 51 RHV bug bot 2018-05-08 09:02:46 UTC
(In reply to Emma Heftman from comment #45)
> Hey Didi
> I'm documenting this feature and I have a few questions.
> 
> 1. Do you have a document on which I can use as a basis for my documentation?

We have the man page available:

DESCRIPTION
       The ovirt-engine-hyper-upgrade tool is a wrapper that automates RHV Manager upgrades.

       First, the tool detects the current version of RHVM running on the system and checks whether updates
       are available for the detected minor version. If updates are available, it updates all ovirt-engine-*setup
       packages via yum and executes engine-setup to complete the update. After the update completes, it updates
       the entire system via yum update.

       In the second stage, with the whole system up to date, it enables the channels required for the next
       major release via subscription-manager and updates all ovirt-engine-*setup packages. As soon as the
       packages are updated, it executes engine-setup to complete the major upgrade.

       In the final stage, since new channels were added to the system, it runs yum update to bring the system
       up to date, and disables the previous major version's channels, which are no longer required.

       --backup
              Execute engine-backup before the upgrade

       --backup-dir
              Directory where the engine-backup will save the backup file.  If not provided, it will use /tmp

LOGGING
       See /var/log/ovirt-engine/engine-hyper-upgrade.log

EXAMPLES
       Upgrade to the latest version; no backup is created
       # ovirt-engine-hyper-upgrade

       Upgrade to version 4.1, creating a backup of the engine; the backup is saved in /backup
       # ovirt-engine-hyper-upgrade --backup --backup-dir=/backup

       Upgrade only; no backup is created
       # ovirt-engine-hyper-upgrade



> 
> 2. How final is the process. For example, I can see that Jiri had a few
> suggestions for improving the process. Have RFEs been opened, and if yes,
> will the be done by GA?
> 

Yes, all fixed and targeted to GA.



> I want to make the documentation process as efficient as possible, therefore
> it's crucial that I understand whether this is completely ready for me to
> take on.

Sure thing, let me know if additional help is required.

(Originally by dougsland)

(Originally by rhv-bugzilla-bot)

Comment 56 RHV bug bot 2018-05-08 09:03:06 UTC
(In reply to Emma Heftman from comment #46)
> (In reply to Emma Heftman from comment #45)
> > Hey Didi
> > I'm documenting this feature and I have a few questions.
> > 
> > 1. Do you have a document on which I can use as a basis for my documentation?
> > 
> > 2. How final is the process. For example, I can see that Jiri had a few
> > suggestions for improving the process. Have RFEs been opened, and if yes,
> > will the be done by GA?
> > 
> > I want to make the documentation process as efficient as possible, therefore
> > it's crucial that I understand whether this is completely ready for me to
> > take on.
> > Thanks!
> 
> And one more question. Can you confirm that this tool should only be used
> for 4.0-4.2.

As mentioned, it is an incremental upgrade: 4.0 -> 4.1 -> 4.2. RHV does not support a direct upgrade from 4.0 to 4.2, even without this tool.
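That incremental rule can be sketched as a small helper (illustrative only, not the tool's code): each supported starting version steps through every intermediate major release.

```shell
# Sketch: map a starting RHV version to the incremental path the tool
# follows; a direct 4.0 -> 4.2 jump is deliberately not offered.
upgrade_path() {
    case "$1" in
        4.0) echo "4.0 -> 4.1 -> 4.2" ;;
        4.1) echo "4.1 -> 4.2" ;;
        4.2) echo "already at 4.2" ;;
        *)   echo "unsupported starting version: $1" ;;
    esac
}
upgrade_path 4.0
```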

(Originally by dougsland)

(Originally by rhv-bugzilla-bot)

Comment 63 Jiri Belka 2018-05-22 14:42:27 UTC
ok, 4.0 -> 4.2

ovirt-engine-hyper-upgrade-1.0.0-7.el7ev.noarch

Comment 68 errata-xmlrpc 2018-06-19 15:42:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1945

Comment 69 Franta Kust 2019-05-16 13:02:59 UTC
BZ<2>Jira Resync

