Description of problem: Since we now allow running dwh and engine on separate hosts, it is possible to set up two (or more) dwh instances against a single engine. This will seem to work fine - no conflicts or failures are expected - but in practice each update on the engine will reach only one of the dwh servers, so the history will be scattered among them and none of them will hold a single correct view of it. For now, we should prevent that. During setup we should add a row somewhere in the engine db (Yaniv told me in which table but I don't remember currently) if it does not exist already, and do something if it does (abort, alert the user and ask for confirmation, etc.). In the future we might decide that there is a use case for more than one dwh and add support for that.
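For illustration, a minimal sketch of what such a setup-time check could look like, assuming a var_name/var_value style table in the engine db; the table and column names (dwh_registration, var_name, var_value) and the connection details below are placeholders, not the actual schema:

#!/usr/bin/env python
# Hypothetical sketch only: check the engine db for an existing dwh
# registration row during setup and abort if one is already there.
# Table/column names are assumptions for illustration.
import sys
import psycopg2

def dwh_already_registered(conn):
    """Return the registered dwh hostname, or None if no row exists."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT var_value FROM dwh_registration WHERE var_name = %s",
            ('dwhHostname',),
        )
        row = cur.fetchone()
        return row[0] if row else None

def register_dwh(conn, hostname):
    """Insert the row marking this host as the active dwh."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO dwh_registration (var_name, var_value) "
            "VALUES (%s, %s)",
            ('dwhHostname', hostname),
        )
    conn.commit()

if __name__ == '__main__':
    conn = psycopg2.connect(
        host='engine-db.example.com',
        dbname='engine',
        user='engine',
        password='secret',
    )
    existing = dwh_already_registered(conn)
    if existing:
        sys.stderr.write(
            'A dwh instance is already registered on %s; aborting setup.\n'
            % existing
        )
        sys.exit(1)
    register_dwh(conn, 'this-dwh-host.example.com')

In a real setup flow the abort could instead become an interactive prompt, as suggested above.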
I think we should have a field in the engine that holds a key generated by the dwh, which can verify this is the correct dwh process. Didi suggested checking on the engine side whether there is a process running. I suggest a key, so that if the process is killed with kill -9 and the engine db is not updated to mark the dwh as stopped, we will still be able to run it again. Can we generate such a key and save it for the next run if the process stops?
(In reply to Shirly Radco from comment #1)
> I think we should have a field in the engine that holds a key generated by
> the dwh, which can verify this is the correct dwh process.

Sounds reasonable. Where do you want to save it? Probably the simplest is to add a new file /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-dwhd.conf with a single key, DWH_ID or something like that.

> Didi suggested checking on the engine side whether there is a process
> running.

Don't remember that. Did I say who will check and how?

> I suggest a key, so that if the process is killed with kill -9 and the
> engine db is not updated to mark the dwh as stopped, we will still be able
> to run it again. Can we generate such a key and save it for the next run if
> the process stops?
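A rough sketch of how the proposed DWH_ID handshake could look, assuming the drop-in path above; the engine-side value is passed in from elsewhere and the may_start helper is a placeholder, not an existing API:

# Sketch only: persist a DWH_ID in the dwhd config drop-in so it survives
# restarts (including kill -9), and compare it against whatever key the
# engine db currently has recorded for the running dwh.
import os
import uuid

CONF = '/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-dwhd.conf'

def local_dwh_id():
    """Read DWH_ID from the drop-in, generating and saving one on first run."""
    if os.path.exists(CONF):
        with open(CONF) as f:
            for line in f:
                if line.startswith('DWH_ID='):
                    return line.split('=', 1)[1].strip()
    new_id = str(uuid.uuid4())
    with open(CONF, 'w') as f:
        f.write('DWH_ID=%s\n' % new_id)
    return new_id

def may_start(engine_dwh_id, my_id):
    """Allow startup if no dwh is registered on the engine, or if the
    registered key matches ours (e.g. after an unclean stop)."""
    return engine_dwh_id is None or engine_dwh_id == my_id

Because the key is read back from the file on every start, an unclean stop that leaves a stale "running" marker in the engine db would not block the same dwh from starting again.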
Fails on a clean setup on the same host as the engine.
*** Bug 1140986 has been marked as a duplicate of this bug. ***
*** Bug 1141205 has been marked as a duplicate of this bug. ***
oVirt 3.5 has been released and should include the fix for this issue.