Description of problem:
DC, SD, and SPM status stay active/up even after blocking the connection from the SPM to the SD (there is only one host).

Version-Release number of selected component (if applicable):
vdsm-4.13.2-0.11.el6ev.x86_64
rhevm-3.3.1-0.48.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a DC with a cluster, and add one host and one NFS SD
2. Wait for the DC to be up
3. Block the connection from the SPM to the NFS SD (see the sketch below)
4. Go to the Data Centers tab -> the DC *isn't* up
5. Go to the Storage Domains tab, click on the SD, go to the Storage Domains tab again -> the DC *is* up
6. Go to the Hosts tab -> the SPM *is* up

Actual results:
DC, SD, and SPM are up.

Expected results:
The DC and SD will be down, and the SPM will be in a non-responsive state.
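For step 3, a minimal sketch of how an automation test might block the storage connection, assuming key-based root SSH access to the SPM host; the host address and NFS server IP are placeholders, and the iptables rule simply drops all outgoing traffic to the storage server:

    import subprocess

    SPM_HOST = "root@spm-host.example.com"  # placeholder SPM host address
    NFS_SERVER = "10.0.0.5"                 # placeholder NFS server IP

    def run_on_spm(cmd):
        # Run a command on the SPM host over SSH (assumes key-based root access)
        subprocess.check_call(["ssh", SPM_HOST] + cmd)

    def block_storage():
        # Drop all outgoing traffic from the SPM host to the NFS server
        run_on_spm(["iptables", "-A", "OUTPUT", "-d", NFS_SERVER, "-j", "DROP"])

    def unblock_storage():
        # Remove the rule added by block_storage()
        run_on_spm(["iptables", "-D", "OUTPUT", "-d", NFS_SERVER, "-j", "DROP"])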
Created attachment 884917 [details] logs
The bug will affect our automation testing, please help fix it asap, thanks.
(In reply to Meital Bourvine from comment #0)
> 5. Go to Storage Domains tab, click on the SD, go to Storage Domains tab ->
> the DC *is* up

This is the SD's SHARED STATUS, not the DC's status.

(In reply to Alex Jia from comment #2)
> The bug will affect our automation testing, please help fix it asap, thanks.

This is purely a UI issue. How can it affect automation tests?
(In reply to Allon Mureinik from comment #3)
> (In reply to Alex Jia from comment #2)
> > The bug will affect our automation testing, please help fix it asap, thanks.
> This is purely a UI issue. How can it affect automation tests?

Besides the UI issue, the host state is not what we expect: the host always stays in an Up or Connecting status after we block the connection from the host to the storage domain, whereas it used to move to a Non Responsive (or Non Operational) state before. Could this be a regression? Meital, please help confirm this, thanks.
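For reference, this is roughly how an automation test would observe the host state through the RHEV-M 3.3 REST API; the engine URL, credentials, and host name below are placeholders, and the sketch assumes the v3 API's XML layout where a host's state sits under its status element:

    import requests
    import xml.etree.ElementTree as ET

    API = "https://rhevm.example.com/api"  # placeholder engine URL
    AUTH = ("admin@internal", "password")  # placeholder credentials

    def host_state(host_name):
        # Look the host up by name and read its status/state element
        resp = requests.get(API + "/hosts", params={"search": host_name},
                            auth=AUTH, verify=False)
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        return root.find("./host/status/state").text

    # A test asserting the host becomes non-responsive would fail here if the
    # engine keeps reporting "up" after the storage connection is blocked:
    print(host_state("spm-host"))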
Meital, Alex, the engine.log provided is 0 bytes :) Please attach the case log.

About the raised issues:

* "Go to Data Centers tab -> the DC *isn't* up" - it shouldn't be, as we don't have connectivity to the domain.

* "Go to Hosts tab -> The spm *is* up" - that host should remain UP; why wouldn't it be? If it weren't, we'd have a bug here: we have no problem communicating with it, and the problem is with the storage. The question is whether it retains the SPM mark in the engine, which it shouldn't (the sketch below shows one way to check the SPM role directly on the host).

* "Go to Storage Domains tab, click on the SD, go to Storage Domains tab -> the DC *is* up" - possibly a UI issue, as it seems. Pending the needinfo? response before proceeding with it.
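On the SPM-mark question, one way to check the role directly on the host (independent of what the engine shows) is vdsClient's getSpmStatus verb, available in this vdsm version; the pool UUID below is a placeholder:

    import subprocess

    SP_UUID = "00000000-0000-0000-0000-000000000000"  # placeholder pool UUID

    def spm_status():
        # Ask vdsm on the local host whether it still holds the SPM role;
        # the output includes a spmStatus field (SPM/Contend/Free)
        out = subprocess.check_output(["vdsClient", "-s", "0",
                                       "getSpmStatus", SP_UUID])
        return out.decode()

    print(spm_status())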
Couldn't reproduce it exactly (the host is up and still marked as SPM; the DC and SD are down). Adding logs.
Created attachment 894760 [details] new logs
Meital, the host's "SPM" role mark in the db should be released, as the host isn't the SPM anymore (obviously). Perhaps things could be improved in the engine to update the db earlier, but that's a non-critical issue that I assume we won't fix.

2014-05-12 17:47:59,466 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-4-thread-42) [7e42688d] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM: ()]]

Elad, if you can, please confirm that on your env too.

Allon, if there's no other issue, I assume this one could be postponed with low severity or closed as WONTFIX.
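To confirm whether the engine db still holds the stale SPM mark, something along these lines could be run against the engine database; the connection details are placeholders, and it assumes the storage_pool table's spm_vds_id column is where the engine records the current SPM in this version:

    import psycopg2

    # Placeholder connection details for the engine's PostgreSQL database
    conn = psycopg2.connect(dbname="engine", user="engine",
                            password="password", host="localhost")
    cur = conn.cursor()

    # spm_vds_id should be cleared once the SPM role is released; a non-NULL
    # value long after the storage is blocked suggests a stale mark
    cur.execute("SELECT name, spm_vds_id FROM storage_pool")
    for name, spm_vds_id in cur.fetchall():
        print(name, spm_vds_id)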