Bug 1122021
Summary: | there must be at most one instance of dwh per engine | |
---|---|---|---
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Yedidyah Bar David <didi>
Component: | ovirt-engine-dwh | Assignee: | Yedidyah Bar David <didi>
Status: | CLOSED ERRATA | QA Contact: | Petr Matyáš <pmatyas>
Severity: | urgent | Docs Contact: |
Priority: | high | |
Version: | 3.5.0 | CC: | didi, ecohen, gklein, iheim, rbalakri, Rhev-m-bugs, sbonazzo, sradco, yeylon, ylavi
Target Milestone: | --- | |
Target Release: | 3.5.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | integration | |
Fixed In Version: | vt3 - rhevm-dwh-3.5.0-3.el6ev | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | 1118350 | Environment: |
Last Closed: | 2015-02-11 18:16:05 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1118350 | |
Bug Blocks: | 1142923, 1156165 | |
Description
Yedidyah Bar David
2014-07-22 11:25:22 UTC
We need a fix for this for both setup and service start. Please consider options. Maybe register the service using a unique hash, and only allow reinstall from any other machine once that hash is cleared from dwh_history_timekeeping. Setup will put this in the DWH context and will match it at startup.

Yaniv

Proposed solution:

1. We already have the value of "DwhCurrentlyRunning" in the "dwh_history_timekeeping" table in the engine database. This is updated each time the service starts.
2. We want to address the issue of running more than one instance of DWH on separate hosts.

During setup:

3. Create a key generated from the host name and a random number.
4. The key will be stored on the engine side and on the host side, where the DWH process runs.
5. When the service starts, it will check that both sides are identical.
6. If a user tries to install another instance of DWH, this will fail during setup when they try to connect to the engine.
6.1. If the value of "DwhCurrentlyRunning" is true, the user will get an error message saying that a DWH instance is already running on <host name>, and that they should stop the process on the other host if they wish to replace it. If the process is not actually running, the user will have to manually update "DwhCurrentlyRunning" to false.
6.2. If the value of "DwhCurrentlyRunning" is false, the user will get a warning saying that another DWH installation exists on <host name>, and asking whether they wish to replace it permanently and lose all data from the previous installation. If the user chooses to replace the installation, the key in the engine will be updated according to the new host.

On cleanup, the data regarding the keys will be removed as well.

If the user chooses to replace the installation, we also need to add a check to the ETL process on startup: verify that it may collect data from the engine by comparing the key on the engine side with the key on the host it is currently on, so that a stale ETL instance fails. (Sketches of both checks appear at the end of this report.)

Sounds ok to me. Please move forward with this.

Yaniv

Moving to POST - for changes see upstream bug #1118350.

Does not require doc text; this bug was needed only because we now allow separate hosts (see bug 1100200).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2015-0177.html
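The setup-side flow in steps 3-6.2 can be illustrated with a short sketch. This is a minimal illustration and not the actual ovirt-engine-dwh setup code: the dwh_history_timekeeping table and the DwhCurrentlyRunning variable come from this report, while the var_name/var_value column names, the '1' string value, and the dwhHostname variable are assumptions made for the example.

```python
# Minimal sketch of the setup-time check (steps 3-6.2). `conn` is any
# DB-API connection to the engine database (e.g. psycopg2). The column
# names (var_name, var_value) and the dwhHostname variable are assumptions.
import socket
import uuid


def generate_dwh_key():
    # Step 3: key built from the host name plus a random component.
    return '%s-%s' % (socket.gethostname(), uuid.uuid4())


def read_timekeeping(conn):
    cur = conn.cursor()
    cur.execute(
        "SELECT var_name, var_value FROM dwh_history_timekeeping "
        "WHERE var_name IN ('DwhCurrentlyRunning', 'dwhHostname')"
    )
    return dict(cur.fetchall())


def setup_check(conn):
    vars_ = read_timekeeping(conn)
    other_host = vars_.get('dwhHostname')
    if other_host and other_host != socket.gethostname():
        if vars_.get('DwhCurrentlyRunning') == '1':
            # Step 6.1: refuse while another instance is marked running.
            raise RuntimeError(
                'A DWH instance is already running on %s; stop it first, '
                'or set DwhCurrentlyRunning to false if it is not actually '
                'running.' % other_host
            )
        # Step 6.2: warn that replacing loses the old installation's data.
        print(
            'WARNING: existing DWH installation found on %s; replacing it '
            'permanently loses its collected data.' % other_host
        )
    # Steps 3-4: register this host's key on the engine side (and store it
    # locally; persistence is not shown here).
    return generate_dwh_key()
```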
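The startup comparison in step 5 (and the ETL check-up for replaced installations) amounts to reading both copies of the key and refusing to start on a mismatch. Again a hedged sketch under the same assumptions; the local key file path and the dwhUuid variable name are invented for illustration.

```python
# Minimal sketch of the startup-time key comparison (step 5). The file
# path and the dwhUuid variable name are hypothetical.


def read_local_key(path='/etc/ovirt-engine-dwh/dwh.key'):  # hypothetical path
    with open(path) as f:
        return f.read().strip()


def read_engine_key(conn):
    cur = conn.cursor()
    cur.execute(
        "SELECT var_value FROM dwh_history_timekeeping "
        "WHERE var_name = 'dwhUuid'"  # assumed variable name
    )
    row = cur.fetchone()
    return row[0] if row else None


def verify_on_startup(conn):
    # A stale ETL on a replaced host sees a different engine-side key and
    # fails here instead of collecting data (step 5 / ETL check-up).
    if read_local_key() != read_engine_key(conn):
        raise SystemExit(
            'DWH key mismatch: this host is no longer the registered DWH '
            'instance for this engine; refusing to start.'
        )
```

Failing closed on a key mismatch is what makes "at most one instance per engine" hold even when an old host comes back up after being replaced.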