Bug 1535907

Summary: 3rd virtio-scsi will fail a VM to run - vm device configuration problem
Product: [oVirt] ovirt-engine
Component: Backend.Core
Status: CLOSED DUPLICATE
Severity: high
Version: 4.2.0
Reporter: Roy Golan <rgolan>
Assignee: Nobody <nobody>
QA Contact: meital avital <mavital>
CC: bugs, michal.skrivanek, tnisan
oVirt Team: Storage
Type: Bug
Last Closed: 2018-01-19 13:17:04 UTC

Description Roy Golan 2018-01-18 09:05:01 UTC
Description of problem:
Running a vm after adding a 3rd virtio-scsi disk fails.

Version-Release number of selected component (if applicable):
current master

How reproducible:
100%

Steps to Reproduce:
1. create a vm with 2 virtio-scsi disks
2. run the vm and wait till it populates the vm devices (you should see, under the vm devices tab, 2 disks with 2 different addresses:
  {type=drive, bus=0, controller=0, target=0, unit=0} 
AND
  {type=drive, bus=0, controller=0, target=0, unit=2})

3. shutdown the vm
4. add the 3rd virtio-scsi disk
5. run vm

Actual results:
fails with:
Exit message: unsupported configuration: Found duplicate drive address for disk with target name 'sdb' controller='0' bus='0' target='0' unit='2'.

Expected results:
the 3rd disk gets the next free unit (3), and so on.

Additional info:
Possibly a virt bug rather than storage, since it involves the creation of the vm device structure.
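
The failure mode can be sketched in a few lines of Python (an illustration of the suspected ordering bug, not the actual ovirt-engine code): if the engine sorts disks alphabetically and gives a new, address-less disk the unit matching its position in the sorted list, that unit can collide with another disk's stored address.

```python
def buggy_assign_units(stored, all_disks):
    """Hypothetical sketch of the suspected bug: a disk without a stored
    address gets a unit equal to its position in the alphabetically
    sorted disk list, ignoring units that are already taken."""
    units = {}
    for position, name in enumerate(sorted(all_disks)):
        # existing disks keep their stored address; new disks get their
        # positional index, which may already be in use
        units[name] = stored.get(name, position)
    return units

# sda and sdb were saved with units 0 and 2 (as seen in the vm devices
# tab); the new 3rd disk sits at position 2 of the sorted list and
# collides with sdb's stored unit=2 -- the duplicate address libvirt
# rejects at startup.
print(buggy_assign_units({"sda": 0, "sdb": 2}, ["sda", "sdb", "sdc"]))
# → {'sda': 0, 'sdb': 2, 'sdc': 2}
```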

Comment 1 Roy Golan 2018-01-18 09:10:28 UTC
To work around it, remove the 2nd disk (the one with the stable unit=2 address), then re-add both disks while the vm is down; starting it up will then work fine.

I guess (and hope) that detaching the disk and then re-attaching it will work the same way, so users won't lose their data.

Comment 2 Tal Nisan 2018-01-18 09:12:03 UTC
Michal, this bug resembles bug 1535907, is it a dup? Who's taking it, Virt or Storage?

Comment 3 Michal Skrivanek 2018-01-19 13:10:27 UTC
this bug resembles itself very well indeed, but I guess you meant bug 1529460
:)

the agreement was to not sort alphabetically but rather to keep the existing devices and always plug "at the end"
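
The agreed strategy can be sketched like this (a minimal Python illustration of the approach, not the engine code): existing devices keep their stored addresses untouched, and each newly plugged disk gets the unit after the highest one already in use.

```python
def assign_units(stored, new_disks):
    """Keep the stored unit of every existing disk and plug new disks
    "at the end", i.e. after the highest unit already in use."""
    units = dict(stored)  # existing addresses are never reshuffled
    next_unit = max(units.values(), default=-1) + 1
    for name in new_disks:
        units[name] = next_unit  # always append, never re-sort
        next_unit += 1
    return units

# Existing disks keep units 0 and 2; the 3rd disk lands on the free
# unit 3 instead of colliding with a stored address.
print(assign_units({"sda": 0, "sdb": 2}, ["sdc"]))
# → {'sda': 0, 'sdb': 2, 'sdc': 3}
```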

Comment 4 Michal Skrivanek 2018-01-19 13:17:04 UTC
the bug is missing logs, but indeed it's quite likely the same ordering issue

*** This bug has been marked as a duplicate of bug 1529460 ***