+++ This bug was initially created as a clone of Bug #1118350 +++

Description of problem:
Since we now allow running dwh and the engine on separate hosts, it is possible to set up two (or more) dwh instances against a single engine. This will seem to work well - no conflicts/failures/etc. are expected - but in practice only one of the dwh servers will get each update from the engine, so the history will be scattered across them and no one will have a single correct view of it. For now, we should prevent that. During setup we should add a row somewhere in the engine db (Yaniv told me in which table but I don't remember currently) if it does not exist already, and do something if it does (abort, alert the user and ask for confirmation, etc.). In the future we might decide that there is use for more than one dwh and add support for that.
We need a fix for this in both setup and service start. Please consider options. Maybe register the service using a unique hash and only allow reinstall from another machine when that hash is cleared from dwh_history_timekeeping. Setup will put this in the DWH context and match it at startup. Yaniv
Proposed solution:
1. We already have the value of "DwhCurrentlyRunning" in the "dwh_history_timekeeping" table, in the engine database. It is updated each time the service starts.
2. We want to address the issue of running more than one instance of DWH on separate hosts.

During setup:
3. Create a key generated from the host name and a random number.
4. The key will be stored on the engine side and on the host where the dwh process is running.
5. When the service starts, it will check that both sides are identical.
6. If a user tries to install another instance of dwh, this will fail during setup when it tries to connect to the engine.
6.1. If the value of "DwhCurrentlyRunning" is true, the user will get an error message saying that a dwh is already running on <host name>, and asking them to stop the processes on the other host if they wish to replace it. If the process is not actually running, the user will have to manually update "DwhCurrentlyRunning" to false.
6.2. If the value of "DwhCurrentlyRunning" is false, the user will get a warning saying that there is another dwh installation on <host name>, and asking whether they wish to replace it permanently and lose all data from the previous installation. If the user chooses to replace the installation, the key in the engine will be updated for the new host.

On cleanup, the data regarding the keys will be removed as well.
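The setup-time flow above could be sketched roughly as follows. This is a minimal illustration only: the dictionary stands in for the engine-side dwh_history_timekeeping table (which the real setup code would query over a database connection), and the key names ("DwhCurrentlyRunning", "dwhHostname", "dwhUuid") plus all function names are assumptions for the sketch, not the actual implementation.

```python
import secrets


def generate_dwh_key(hostname):
    # Key derived from the host name plus a random component, per step 3.
    return "%s:%s" % (hostname, secrets.token_hex(8))


def check_existing_dwh(timekeeping, force_replace=False):
    """Decide what setup should do, given the engine-side table state.

    'timekeeping' is a dict standing in for dwh_history_timekeeping.
    """
    existing_host = timekeeping.get("dwhHostname")
    if existing_host is None:
        return "proceed"  # no previous DWH registered (fresh install)
    if timekeeping.get("DwhCurrentlyRunning") == "1":
        # Step 6.1: another DWH claims to be running - abort and tell the
        # user to stop it (or manually reset the flag if it is stale).
        return "abort: DWH already running on %s" % existing_host
    # Step 6.2: registered but not running - replace only after confirmation.
    if force_replace:
        return "replace"
    return "confirm: existing DWH installation on %s" % existing_host


def register_dwh(timekeeping, hostname):
    # Store the engine-side copy of the key; the returned value is the
    # host-side copy that setup would keep in the DWH context.
    key = generate_dwh_key(hostname)
    timekeeping["dwhHostname"] = hostname
    timekeeping["dwhUuid"] = key
    return key
```

For example, a fresh table yields "proceed", a table with DwhCurrentlyRunning set to "1" yields the abort message, and a registered-but-stopped installation asks for confirmation before being replaced.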
If the user chooses to replace the installation, we also need to add a check to the etl process at startup: it may collect data from the engine only if the key on the engine side matches the key on the host it is currently running on, so the old etl will fail.
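The ETL-side guard described above amounts to a simple comparison at startup, sketched here under the same assumptions as before (the function name and the idea of passing both keys in directly are illustrative, not the real code):

```python
def etl_may_start(engine_key, local_key):
    # Compare the key stored engine-side with the one recorded locally at
    # setup time. A replaced (stale) installation has a local key that no
    # longer matches the engine's, so it refuses to start.
    return engine_key is not None and engine_key == local_key
```

A superseded ETL instance would see a mismatch and fail here instead of silently stealing part of the history stream.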
Sounds OK to me. Please move forward with this. Yaniv
Moving to POST - for changes see upstream bug #1118350
Does not require doc text; this bug was needed only because we now allow separate hosts, see bug 1100200.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHEA-2015-0177.html