Description of problem:
A VM that went down is handled twice by the hosts/VMs monitoring:
1. as a VM that switched to DOWN (because it is reported as DOWN by VDSM)
2. as a VM that wasn't returned by VDSM (but is still running according to the DB)

Obviously, #2 shouldn't happen. It is a regression caused by http://gerrit.ovirt.org/#/c/25547: in VdsUpdateRunTimeInfo#removeVmsFromCache we skip a VM if its status hasn't changed, instead of skipping it based on whether it was reported as running. As a result, VmPoolHandler#processVmPoolOnStopVm is called twice, which is wrong.

Version-Release number of selected component (if applicable):

How reproducible:
100%

Steps to Reproduce:
1. Kill the qemu process of a running VM.

Actual results:
VmPoolHandler#processVmPoolOnStopVm is called twice.

Expected results:
VmPoolHandler#processVmPoolOnStopVm should be called once.

Additional info:
Changing _vmsMovedToDown to a Set, or otherwise ensuring we don't put the same VM into it more than once, is not the right solution; we should fix the logic (and the documentation) properly.
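To illustrate the double handling, here is a minimal, hypothetical sketch (the class, method, and variable names below are illustrative only, not the real engine code): the first pass collects VMs that VDSM reports as DOWN, and the buggy second pass skips a VM only when its status is unchanged, so a VM reported DOWN also looks like a "VM not returned by VDSM" and is collected again.

```java
import java.util.*;

// Hypothetical, simplified model of the two monitoring passes described above.
public class DoubleHandlingDemo {
    enum Status { UP, DOWN }

    // Buggy variant: pass 2 skips a VM only when its status did not change,
    // so a VM reported as DOWN is also treated as "not returned by VDSM".
    static List<String> buggyPass(Map<String, Status> dbVms, Map<String, Status> vdsmReport) {
        List<String> vmsMovedToDown = new ArrayList<>();
        // Pass 1: VMs that VDSM reports as DOWN while the DB says UP.
        for (Map.Entry<String, Status> e : vdsmReport.entrySet()) {
            if (e.getValue() == Status.DOWN && dbVms.get(e.getKey()) == Status.UP) {
                vmsMovedToDown.add(e.getKey());
            }
        }
        // Pass 2 (removeVmsFromCache analogue): wrong skip condition.
        for (Map.Entry<String, Status> e : dbVms.entrySet()) {
            boolean statusUnchanged = vdsmReport.get(e.getKey()) == e.getValue();
            if (e.getValue() == Status.UP && !statusUnchanged) {
                vmsMovedToDown.add(e.getKey()); // duplicate entry for a reported-DOWN VM
            }
        }
        return vmsMovedToDown;
    }

    // Fixed variant: pass 2 skips every VM that appeared in the VDSM report.
    static List<String> fixedPass(Map<String, Status> dbVms, Map<String, Status> vdsmReport) {
        List<String> vmsMovedToDown = new ArrayList<>();
        for (Map.Entry<String, Status> e : vdsmReport.entrySet()) {
            if (e.getValue() == Status.DOWN && dbVms.get(e.getKey()) == Status.UP) {
                vmsMovedToDown.add(e.getKey());
            }
        }
        for (Map.Entry<String, Status> e : dbVms.entrySet()) {
            if (e.getValue() == Status.UP && !vdsmReport.containsKey(e.getKey())) {
                vmsMovedToDown.add(e.getKey()); // only VMs truly missing from the report
            }
        }
        return vmsMovedToDown;
    }

    public static void main(String[] args) {
        Map<String, Status> db = Map.of("vm1", Status.UP);
        Map<String, Status> report = Map.of("vm1", Status.DOWN);
        System.out.println(buggyPass(db, report)); // [vm1, vm1] -> handled twice
        System.out.println(fixedPass(db, report)); // [vm1]      -> handled once
    }
}
```

With the buggy condition the same VM ends up in the collection twice, which is why processVmPoolOnStopVm fires twice downstream.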
Eventually, as part of bz 1098791, I changed _vmsMovedToDown to be a Set. Working on a better solution isn't worth the time, as work on the refactored monitoring is already in progress.

*** This bug has been marked as a duplicate of bug 1098791 ***