Bug 1625720
| Summary: | VM not starting with: Multiple 'scsi' controllers with index '0'. | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Andreas Elvers <andreas.elvers+redhat.bugzilla> |
| Component: | General | Assignee: | bugs <bugs> |
| Status: | CLOSED WORKSFORME | QA Contact: | meital avital <mavital> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.2.6 | CC: | andreas.elvers+redhat.bugzilla, bugs, michal.skrivanek, rbarry, tnisan |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-22 08:15:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Storage | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description (Andreas Elvers, 2018-09-05 16:13:57 UTC)
Please attach ovirt-engine logs covering the time of VM creation, the successful starts, and the failed one.

Thanks Ryan, this is not necessarily storage related; it might be domain-XML related, in which case it would be Virt.

Well, this sounds storage related. Correct XML for certain storage is still storage.

Created attachment 1481507 [details]
Log with multiple scsi controller with id 0 error
This log shows a start of a VM that failed, but oVirt eventually brought it up on subsequent tries without user intervention. I have to correct myself on the node versions, though. The participating nodes are named node01, node02, and node03. node01 is currently on 4.2.6; node02 and node03 are still on 4.2.5.1.
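For context on the error in the bug summary: libvirt rejects a domain definition that declares more than one controller with the same type and index, and that validation is what produces the "Multiple 'scsi' controllers with index '0'" message. A minimal sketch of the kind of domain-XML fragment that would be rejected (the model attribute and surrounding structure are illustrative, not taken from the reporter's actual domain XML):

```xml
<!-- Rejected by libvirt: two <controller> elements sharing
     type='scsi' and index='0'. The (type, index) pair must be
     unique within a domain definition. -->
<devices>
  <controller type='scsi' index='0' model='virtio-scsi'/>
  <controller type='scsi' index='0' model='virtio-scsi'/>
</devices>
```

In this bug, the duplicate controller appeared in the domain XML the engine generated after hot-adding a disk, which is why the VM start failed at libvirt level rather than in the engine UI.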
> Log with multiple scsi controller with id 0 error
I filtered out the Gluster messages for clarity, since Gluster status messages are logged every few seconds.
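Filtering like this can be done with a case-insensitive inverted grep. A hypothetical sketch, assuming a plain-text engine log; the file names and sample log lines below are illustrative, not the reporter's actual command or log content:

```shell
# Create a tiny sample log standing in for ovirt-engine's engine.log.
printf '%s\n' \
  'INFO  Running command: RunVmCommand' \
  'INFO  GLUSTER_VOLUME_STATUS periodic check' \
  "ERROR Multiple 'scsi' controllers with index '0'" > engine-sample.log

# Drop every line mentioning Gluster (case-insensitive), keep the rest.
grep -vi 'gluster' engine-sample.log > engine-filtered.log
cat engine-filtered.log
```

On a real deployment the input would be something like `/var/log/ovirt-engine/engine.log` instead of the generated sample.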
When was this VM originally created? And when was it last run successfully before the failure?

The VM was created about one or two weeks ago. I can always run the VM eventually; the problems arise when I add a new disk. I use this particular VM to move the contents of NFS-backed VMs to our Ceph-backed cluster: I add a new disk and rsync from the NFS side to the Ceph side. After that, the disk is attached to the replacement VM on our Ceph cluster. Adding a new disk will usually trigger this error: I click Run, it tries node01 and errors, tries node02 and errors, then tries node03 and errors out completely. After a few clicks on Run, the VM will eventually start. Today I moved two VMs to our Ceph-backed oVirt cluster, and every time I removed the finished disk and added a new one, the run problem was there. But I can always start the VM successfully after a few minutes of trying, and once it is started everything is fine.

The log doesn't contain the first start of the VM. Can you please reproduce and attach a log covering the VM creation, the initial start that fails, and then the start that succeeds? Also, please make sure you have a 4.2.6 engine (hosts do not matter) and check whether you have by any chance enabled iothreads.

I upgraded to the 4.2.6 engine and can no longer reproduce the error.

Good, then we can close this.