Bug 1109544
| Summary: | Host fails to become "UP" when activated from maintenance, following VM migrations. | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Ilanit Stein <istein> |
| Component: | ovirt-engine | Assignee: | Nobody <nobody> |
| Status: | CLOSED NOTABUG | QA Contact: | Ilanit Stein <istein> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.4.0 | CC: | acathrow, gklein, iheim, lpeer, oourfali, Rhev-m-bugs, yeylon |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | virt | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-06-17 10:43:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | (listed in the description below) | | |
Description
Ilanit Stein
2014-06-15 07:02:16 UTC
Created attachment 908886 [details]: host1 event log
Created attachment 908887 [details]: engine log
Created attachment 908889 [details]: host2 vdsm log
Created attachment 908890 [details]: host2 libvirt log
Created attachment 908891 [details]: host1 vdsm log
Created attachment 908892 [details]: host1 libvirt log
The problem did not reproduce: I ran 5 VMs on host1 and moved host1 to maintenance. An error event was logged: "Failed to switch to maintenance." Right after this error, however, migration-completed events were reported for all the VMs (migrated to host2), and host1 entered maintenance. Activating host1 afterwards worked fine.

The QE storage team's investigation showed that the original failure to activate the host occurred because the hosts held many stale storage connections, which made connecting to the storage domain take a very long time, more than 3 minutes, which is the default timeout configured in /etc/multipath.conf. The host's connection to storage eventually succeeded, but since the timeout had already expired, the engine treated the activation as a failure.

As this is not a bug, but a matter of configuration and a "slow" host, I am closing the bug.
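For context, here is a minimal sketch of the /etc/multipath.conf stanza that typically governs this kind of wait. The option names (`polling_interval`, `no_path_retry`) are real multipath settings, but the values, and the assumption that this pairing is what produced the roughly 3-minute window described above, are illustrative rather than taken from the affected hosts:

```
# /etc/multipath.conf -- illustrative sketch, not the reporter's actual file
defaults {
    # How often (in seconds) the path checkers probe each path.
    polling_interval 5

    # How many checker intervals to keep retrying before giving up
    # on queued I/O: 36 retries x 5 s = 180 s, i.e. roughly the
    # 3-minute window mentioned in the description.
    no_path_retry 36
}
```

Under that assumption, the configuration-side fix implied by the closing comment is to remove the stale storage connections so the storage domain connection completes well inside that window, rather than to lengthen the timeout itself.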