Bug 1281848 - [ppc64le] VM fails to start with a spapr-vscsi interface disk as boot device with message (XML error: target 'sda' duplicated for disk sources)
Status: CLOSED DUPLICATE of bug 1274677
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 3.6.0
Hardware: ppc64le Unspecified
Priority: unspecified  Severity: high
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assigned To: Amit Aviram
QA Contact: Aharon Canan
Whiteboard: storage
Depends On:
Blocks: RHEV3.6PPC
Reported: 2015-11-13 10:20 EST by Carlos Mestre González
Modified: 2016-03-10 02:02 EST (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-16 09:51:45 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
derez: needinfo-
amureini: ovirt‑3.6.z?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
engine.log (56.53 KB, text/plain)
2015-11-13 10:22 EST, Carlos Mestre González
all logs in /var/log/vdsm/ (148.49 KB, application/x-gzip)
2015-11-16 08:30 EST, Carlos Mestre González

Description Carlos Mestre González 2015-11-13 10:20:04 EST
Description of problem:
Starting a VM fails when a spapr-vscsi disk is the boot device. Having other (non-boot) disks attached with that interface seems to work.

Version-Release number of selected component (if applicable):
vdsm-4.17.10.1-0.el7ev.noarch
vdsm-jsonrpc-4.17.10.1-0.el7ev.noarch
libvirt-client-1.2.17-13.el7.ppc64le
qemu-img-rhev-2.3.0-31.el7_2.1.ppc64le
qemu-kvm-rhev-2.3.0-31.el7_2.1.ppc64le
rhevm-3.6.0.3-0.1.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a vm (with default values, Desktop, Custom OS, no nic)
2. Add a disk with the spapr-vscsi interface (my tests were 10 GB, with either iSCSI or NFS, sparse or not) - the disk is created fine.
3. Start the vm

Actual results:
or] (ForkJoinPool-1-worker-1) [72c52cf1] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM test_vm_4 is down with error. Exit message: XML error: target 'sda' duplicated for disk sources '/rhev/data-center/a723a57a-e118-466e-9b8b-63cf66814f17/1bab0f28-536e-446b-80d3-4c4c0e07789e/images/2f06078a-7d90-4414-a2c5-db20395fe194/4450f84d-49e6-4d09-8f0b-06af212295cb' and '<null>'.
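For context, libvirt raises this error when two disk definitions in the generated domain XML resolve to the same target device name. A hypothetical fragment of the kind libvirt rejects (device names and paths are illustrative, not taken from this VM's actual XML; the source '<null>' in the message above is consistent with an empty CD-ROM device colliding with the boot disk on the same target):

```xml
<!-- Illustrative only: two devices mapped to the same target dev='sda'.
     libvirt refuses to start a domain whose disk targets collide like this. -->
<devices>
  <disk type='file' device='disk'>
    <source file='/rhev/data-center/.../images/.../volume'/>
    <target dev='sda' bus='scsi'/>  <!-- spapr-vscsi boot disk -->
  </disk>
  <disk type='file' device='cdrom'>
    <!-- no <source>: an empty CD-ROM, matching the '<null>' in the error -->
    <target dev='sda' bus='scsi'/>  <!-- same target: triggers the error -->
  </disk>
</devices>
```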


Expected results:
vm starts properly

Additional info:
This also happens if you create a boot device with a different interface and then edit the interface to spapr-vscsi.
There doesn't seem to be an issue when the spapr-vscsi disk is not the boot device.
Comment 1 Carlos Mestre González 2015-11-13 10:21:40 EST
Adding to storage since I think the XML issue falls into that component. Severity high.
Comment 2 Carlos Mestre González 2015-11-13 10:22 EST
Created attachment 1093707 [details]
engine.log

Pretty straightforward: creation of test_vm_4, which fails after start (search for 'duplicated').
Comment 3 Allon Mureinik 2015-11-15 03:56:14 EST
Tal, this is high priority due to the PPC focus.
Let's get someone to take a look at this please?
Comment 4 Amit Aviram 2015-11-15 04:54:09 EST
Carlos, can you please also attach the VDSM logs? (all files in "/var/log/vdsm/" on the host that tried to start the VM)
Comment 5 Carlos Mestre González 2015-11-16 08:30 EST
Created attachment 1094874 [details]
all logs in /var/log/vdsm/

A new run of the same scenario.

It starts at 08:20 (create the VM, add the disk, start the VM) and fails at 08:21:56.

vm: 4dba4d02-90a2-49ec-81ce-aba0f898a051
disk: 3b2d7501-25b9-47df-a361-5dfccdf40b5f
image id: a58830f9-564b-4bfa-98a7-f9d71f4db99a

The qemu log only shows: 2015-11-16 13:21:50.360+0000: shutting down

I also included libvirtd.log, which has a time shift; it starts at 13:20.
Comment 6 Allon Mureinik 2015-11-16 08:54:48 EST
Daniel/Amit - is this a duplicate of bug 1274677?
Comment 7 Amit Aviram 2015-11-16 09:51:45 EST
Actually, it is. Closing.

*** This bug has been marked as a duplicate of bug 1274677 ***
