Bug 1156009 - Upgrade procedure of DWH from local setup to remote
Summary: Upgrade procedure of DWH from local setup to remote
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: Documentation
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Importance: high unspecified
Target Milestone: ---
Target Release: 3.5.0
Assignee: Lucy Bopf
QA Contact: Tahlia Richardson
URL:
Whiteboard: integration
Duplicates: 1159612 (view as bug list)
Depends On:
Blocks: rhev35betablocker 1156015 rhev35rcblocker rhev35gablocker 1193259
 
Reported: 2014-10-23 11:51 UTC by Shirly Radco
Modified: 2015-09-22 13:10 UTC (History)
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1156015 (view as bug list)
Environment:
Last Closed: 2015-03-02 03:37:13 UTC
oVirt Team: ---
Target Upstream Version:
Embargoed:



Description Shirly Radco 2014-10-23 11:51:06 UTC
Description of problem:
There should be a wiki page for the procedure to upgrade DWH from a local setup to a remote one.

Version-Release number of selected component (if applicable):
3.5

Comment 1 Yedidyah Bar David 2014-10-23 12:37:16 UTC
Overview of the process:

Start with a 3.5 engine/dwh/reports setup on a single host (from clean or upgraded)

* service ovirt-engine-dwhd stop
* Keep somewhere the db credentials (can be found at /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf)
* yum remove ovirt-engine-dwh (or rhevm-dwh)

On the new dwh machine:
* yum install ovirt-engine-dwh
* engine-setup - supply existing credentials

I didn't try that myself.

Comment 2 Shirly Radco 2014-10-26 12:55:43 UTC
This process has not been fully tested and should be verified.

Comment 3 Shirly Radco 2014-11-11 13:03:30 UTC
I tested this scenario and it worked.

Start with a 3.5 engine/dwh/reports setup on a single host (from clean or upgraded)

On the new dwh machine:
* yum install ovirt-engine-dwh

On the engine machine:
* service ovirt-engine-dwhd stop

On the new dwh machine:
* engine-setup - supply existing credentials
  * The user should choose to use a remote DWH database.
  * Get the DWH and engine database credentials from the engine machine at:
    /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
  * When prompted, allow the setup to change the DWH host to the new one.

On the engine machine:
* yum remove ovirt-engine-dwh (or rhevm-dwh). This step is mandatory; otherwise, the service will attempt to restart after an hour.

This is the scenario for migrating the ETL process to a separate host.
The ovirt_engine_history database remains on the same host as the engine.

If the user also wants to migrate the ovirt_engine_history database, they should create a database backup using pg_dump, create a new database in the new location, and restore it from the backup file.
Then provide the correct credentials for it during engine-setup.

Comment 4 Shirly Radco 2014-12-09 10:02:23 UTC
Updated:

Start with a 3.5 engine/dwh/reports setup on a single host (from clean or upgraded)

On the new dwh machine:
* yum install ovirt-engine-dwh

On the engine machine:
* service ovirt-engine-dwhd stop

* If the ovirt_engine_history database remains on the same host as the engine,
  then before running engine-setup:

  On the engine machine, edit the file /var/lib/pgsql/data/postgresql.conf

  Find the line containing 'listen_addresses' and change it to:

  listen_addresses = '*'

  If there is no such line, or it is commented out, add it.

  Restart postgresql with:

  service postgresql restart
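The listen_addresses edit above can be sketched as a non-interactive script. This is only an illustration: it operates on a temporary copy rather than the real /var/lib/pgsql/data/postgresql.conf, and the sample file contents are assumptions.

```shell
# Sketch: enable listening on all interfaces in postgresql.conf.
# PGCONF would normally be /var/lib/pgsql/data/postgresql.conf;
# a temporary copy with sample contents is used here for illustration.
PGCONF=$(mktemp)
printf "#listen_addresses = 'localhost'\nport = 5432\n" > "$PGCONF"

# Uncomment/replace any existing listen_addresses line...
sed -i "s/^#\?listen_addresses.*/listen_addresses = '*'/" "$PGCONF"

# ...and append one if the file had none at all.
grep -q "^listen_addresses" "$PGCONF" || echo "listen_addresses = '*'" >> "$PGCONF"

grep "^listen_addresses" "$PGCONF"
# On the real system, follow this with: service postgresql restart
```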


On the new dwh machine:
* engine-setup - supply existing credentials
  * The user should choose to use a remote DWH database.
  * Get the DWH and engine database credentials from the engine machine at:
    /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
  * When prompted, allow the setup to change the DWH host to the new one.
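As a sketch of reading those credentials, the snippet below greps a sample file. The key names and values shown are assumptions about what 10-setup-database.conf typically contains; check the actual file on your engine machine.

```shell
# Illustrative only: a sample credentials file stands in for
# /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf.
# Key names and values are assumed, not taken from a real installation.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
DWH_DB_HOST="localhost"
DWH_DB_PORT="5432"
DWH_DB_USER="ovirt_engine_history"
DWH_DB_PASSWORD="secret"
DWH_DB_DATABASE="ovirt_engine_history"
EOF

# Pull out the values needed when answering the engine-setup prompts.
grep '^DWH_DB_' "$CONF"
```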

On the engine machine:
* yum remove ovirt-engine-dwh (or rhevm-dwh). This step is mandatory; otherwise, the service will attempt to restart after an hour.

This is the scenario for migrating the ETL process to a separate host.
The ovirt_engine_history database remains on the same host as the engine.

If the user also wants to migrate the ovirt_engine_history database, they should create a database backup using pg_dump, create a new database in the new location, and restore it from the backup file.
Then provide the correct credentials for it during engine-setup.
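The pg_dump migration described above might look roughly like the following. This is a sketch only: it assumes the default ovirt_engine_history database and role names and a postgres superuser, and the password and authentication details must be adjusted to the actual environment.

```shell
# Sketch of migrating the ovirt_engine_history database itself.
# Database/role names are the defaults; the password is a placeholder.

# On the engine machine: dump the history database (plain-text format).
pg_dump -U postgres ovirt_engine_history > ovirt_engine_history.sql

# On the new machine: create the role and an empty database, then restore.
psql -U postgres -c "CREATE ROLE ovirt_engine_history LOGIN PASSWORD 'secret';"
createdb -U postgres -O ovirt_engine_history ovirt_engine_history
psql -U postgres ovirt_engine_history < ovirt_engine_history.sql
```

Supply the new host's credentials when engine-setup prompts for the remote DWH database.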

Comment 12 Lucy Bopf 2015-02-08 23:14:33 UTC
Moving this bug back to assigned while I work on this content.

Comment 16 Yedidyah Bar David 2015-02-22 10:56:24 UTC
(In reply to Lucinda Bopf from comment #14)
> I tested this procedure today, and have updated the migration content with
> additional steps. I have one more question to add to the list:
> 
> 4. During engine-setup, the following option is given:
> "Setup can backup the existing database. The time and space required for the
> database backup depend on its size. This process takes time, and in some
> cases (for instance, when the size is few GBs) may take several hours to
> complete.
>           If you choose to not back up the database, and Setup later fails
> for some reason, it will not be able to restore the database and all DWH
> data will be lost.
>           Would you like to backup the existing database before upgrading
> it? (Yes, No) [Yes]: "
> Later in the setup script, the location of the file is given:
> "[ INFO  ] Backing up database
> lbopf-rhevm35.usersys.redhat.com:ovirt_engine_history to
> '/var/lib/ovirt-engine-dwh/backups/dwh-20150217042352.IPHlBP.dump'."
> In that case, do we need a separate pg_dump procedure, or can users
> migrate/restore their DWH database using this .dump file?

In principle it can be used, but it will require changing the instructions a bit, as it uses a different pg_dump format ("custom" vs. "plain", which is the default).

In practice this is not generally useful because it's up-to-date only for the moment of running engine-setup (usually for an upgrade). So if you want to migrate a month later and use this dump, you lose a month's worth of history.

Comment 18 Lucy Bopf 2015-02-23 01:55:15 UTC
(In reply to Yedidyah Bar David from comment #16)
> (In reply to Lucinda Bopf from comment #14)
> > I tested this procedure today, and have updated the migration content with
> > additional steps. I have one more question to add to the list:
> > 
> > 4. During engine-setup, the following option is given:
> > "Setup can backup the existing database. The time and space required for the
> > database backup depend on its size. This process takes time, and in some
> > cases (for instance, when the size is few GBs) may take several hours to
> > complete.
> >           If you choose to not back up the database, and Setup later fails
> > for some reason, it will not be able to restore the database and all DWH
> > data will be lost.
> >           Would you like to backup the existing database before upgrading
> > it? (Yes, No) [Yes]: "
> > Later in the setup script, the location of the file is given:
> > "[ INFO  ] Backing up database
> > lbopf-rhevm35.usersys.redhat.com:ovirt_engine_history to
> > '/var/lib/ovirt-engine-dwh/backups/dwh-20150217042352.IPHlBP.dump'."
> > In that case, do we need a separate pg_dump procedure, or can users
> > migrate/restore their DWH database using this .dump file?
> 
> In principle it can be used, but it will require changing the instructions a
> bit, as it uses a different pg_dump format ("custom" vs. "plain", which is
> the default).
> 
> In practice this is not generally useful because it's up-to-date only for
> the moment of running engine-setup (usually for an upgrade). So if you want
> to migrate a month later and use this dump, you lose a month's worth of
> history.

Right. I understand. So it's still a good idea to keep the separate pg_dump procedure (which actually appears in a more logical place, BEFORE running engine-setup). Looking again, I think it's clear in the topic that the backup is just a fail-safe in case engine-setup fails.
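The format difference discussed above matters for restoring: a plain-format dump is replayed through psql, while pg_dump's custom format (which engine-setup's backup uses, per comment 16) is restored with pg_restore. A sketch, with illustrative file names:

```shell
# A plain-format dump (pg_dump's default output) restores with psql:
psql -U postgres ovirt_engine_history < history_plain.sql

# A custom-format dump, such as the one engine-setup writes under
# /var/lib/ovirt-engine-dwh/backups/, restores with pg_restore instead
# (file name below is illustrative):
pg_restore -U postgres -d ovirt_engine_history history_custom.dump
```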

Comment 23 Lucy Bopf 2015-03-02 03:37:13 UTC
Included in the latest async release.

Comment 24 Andrew Dahms 2015-04-10 05:19:36 UTC
*** Bug 1159612 has been marked as a duplicate of this bug. ***

