Description of problem:
After some manipulation with the hosts, one of them went to the "Unassigned" state for a long time (more than 20 hrs). Statuses of the VMs on all _other_ hosts are not being updated (a VM can be launched without errors from the host/engine, but its status stays 9, "Waiting for launch"). VMs can be powered off and launched again (status changes from 0 to 9 and vice versa, and run_on_vds is updated as well). Free memory of the host is not being updated.

Version-Release number of selected component (if applicable):
rhevm-3.2.1-0.39.el6ev.noarch

How reproducible:
Unknown. 2 systems are affected.

Actual results:
One host is in Unassigned mode. Newly started VMs are in "Waiting for launch" status but are actually up and running.
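For reference, a minimal sketch of how the reported symptom (status stuck at 9 = WaitForLaunch, 0 = Down) can be checked directly against the engine database. This assumes the standard "engine" PostgreSQL database and its vm_static/vm_dynamic tables; the connection parameters are placeholders, not values from this bug.

```python
# Hypothetical check of VM status / run_on_vds in the engine DB.
# Assumes the standard RHEV-M/oVirt "engine" PostgreSQL database;
# host, user and password below are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="engine",
                        user="engine", password="<engine-db-password>")
cur = conn.cursor()

# In the engine's VM status enum, 0 = Down and 9 = WaitForLaunch.
cur.execute("""
    SELECT s.vm_name, d.status, d.run_on_vds
    FROM vm_dynamic d
    JOIN vm_static s ON s.vm_guid = d.vm_guid
    ORDER BY s.vm_name
""")
for vm_name, status, run_on_vds in cur.fetchall():
    print(vm_name, status, run_on_vds)

cur.close()
conn.close()
```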
Created attachment 779325 [details] eventq.btm
This still needs to be investigated; postponing to 3.2.4.
This bug is about patch
The patch was accepted upstream a long time ago and is already in 3.3. I would like to test this scenario as part of the scale testing for 3.3, hence moving to ON_QA.
This bug is currently attached to errata RHEA-2013:15231. If this change is not to be documented in the text for this errata, please either remove it from the errata, set the requires_doc_text flag to minus (-), or leave a "Doc Text" value of "--no tech note required" if you do not have permission to alter the flag.

Otherwise, to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:
https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes

Thanks in advance.
QE is unable to verify this scale bug for 3.3; it will be verified in 3.4.
Added a 3.3.z flag to test it for the 3.3.z stream.
How to reproduce the bug?
Tested on 3.4 (latest), 3.4.0-0.21.el6ev:
- Created 37 hosts.
- Ran deactivate and activate at high frequency (roughly as in the sketch below).
- Hosts were unassigned for 2-3 min and then the status returned to OK.
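For completeness, a rough sketch of the kind of activate/deactivate cycling described above, driven through the REST API. The engine URL, credentials, host UUIDs, cycle count, and sleep interval are placeholders/assumptions, not the exact values used in this test; error handling and certificate verification are omitted for brevity.

```python
# Rough sketch: cycle hosts through deactivate/activate via the
# RHEV-M/oVirt REST API. Engine URL, credentials and host IDs are
# placeholders; TLS verification is disabled only to keep it short.
import time
import requests

ENGINE = "https://engine.example.com/api"
AUTH = ("admin@internal", "<password>")
HEADERS = {"Content-Type": "application/xml"}
HOST_IDS = ["<host-uuid-1>", "<host-uuid-2>"]  # extend to all hosts under test

def host_action(host_id, action):
    # POST /api/hosts/{id}/{action} with an empty <action/> body
    url = "{0}/hosts/{1}/{2}".format(ENGINE, host_id, action)
    return requests.post(url, data="<action/>", auth=AUTH,
                         headers=HEADERS, verify=False)

for _ in range(100):              # number of cycles is arbitrary
    for host_id in HOST_IDS:
        host_action(host_id, "deactivate")
    time.sleep(10)                # short pause keeps the frequency high
    for host_id in HOST_IDS:
        host_action(host_id, "activate")
    time.sleep(10)
```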
Closing as part of 3.4.0