Bug 1278083 - Ironic introspection of qemu vm - no disk found
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ironic-python-agent
Version: 7.0 (Kilo)
Hardware: Unspecified  OS: Linux
Priority: medium  Severity: high
Target Milestone: ga
Target Release: 8.0 (Liberty)
Assigned To: Dmitry Tantsur
QA Contact: Raviv Bar-Tal
Keywords: TestOnly, Triaged
Depends On:
Reported: 2015-11-04 11:39 EST by hrosnet
Modified: 2016-07-25 10:26 EDT
13 users

See Also:
Fixed In Version: openstack-ironic-python-agent-1.1.0-8.el7ost
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-07-25 10:26:42 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description hrosnet 2015-11-04 11:39:03 EST
Description of problem:
Every element on the server is detected except the disk.

Version-Release number of selected component (if applicable):

How reproducible:
Always, as far as we can tell.

Steps to Reproduce:
1. Install undercloud in 7.1
2. Create an instance in qemu with virtio to introspect
3. Introspect the server

Actual results:
Introspection returns a local_gb of 1, while it should be larger.

Expected results:
local_gb reports the actual size of the disk.

Additional info:
In the server log we observed: "value for local_gb is missing or malformed"

Apparently the command (lsblk -bSo) is run here:
/usr/lib/python2.7/site-packages/ironic_discoverd_ramdisk/discovery.py +141

The following command returned nothing (the disk is virtio, not SCSI):
 lsblk -bSo NAME,TYPE,SIZE

According to man lsblk, -S restricts output to SCSI devices:
 -S, --scsi    output info about SCSI devices only

So we tried this one, which worked:
 lsblk -bo  NAME,SIZE,TYPE
 vda 4444444 disk
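To illustrate the diagnosis above: the following is a minimal sketch (not the actual ironic_discoverd_ramdisk code; the helper names parse_lsblk and local_gb are made up for this example) of how output from the working `lsblk -bo NAME,SIZE,TYPE` variant could be parsed to compute local_gb, which also picks up virtio disks such as vda:

```python
# Illustrative sketch only -- not the actual discovery.py implementation.
# Parses `lsblk -bo NAME,SIZE,TYPE` output (no -S, so virtio disks appear)
# and derives local_gb from the largest device of TYPE 'disk'.

def parse_lsblk(output):
    """Return {name: size_in_bytes} for lines whose TYPE column is 'disk'."""
    disks = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2] == 'disk':
            name, size, _ = parts
            disks[name] = int(size)
    return disks

def local_gb(disks):
    """Size of the largest disk in whole GiB; 0 if no disk was found."""
    if not disks:
        return 0
    return max(disks.values()) // (1024 ** 3)

# Sample output in the shape shown above (the size is made up):
sample = "vda 53687091200 disk\nvda1 524288000 part"
print(parse_lsblk(sample))             # {'vda': 53687091200}
print(local_gb(parse_lsblk(sample)))   # 50
```

With `lsblk -bSo`, the same parser would receive empty output for a virtio-only VM and report 0, matching the "value for local_gb is missing or malformed" symptom.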

A bug that appeared similar in terms of error message:


Applying virsh edit to switch to SATA instead of virtio results in the following differences:
> diff -u vers*
--- vers1	2015-11-04 17:21:11.401642846 +0100
+++ vers2	2015-11-04 17:21:22.651994390 +0100
@@ -1,6 +1,6 @@
 <disk type='block' device='disk'>
-      <driver name='qemu' type='raw' cache='none' io='native'/>
+      <driver name='qemu' type='raw'/>
       <source dev='/dev/rhel/az2ctrl0002'/>
-      <target dev='vda' bus='virtio'/>
-      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
+      <target dev='hda' bus='sata'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Comment 3 Dmitry Tantsur 2015-11-26 06:51:00 EST
Chris, this bug is not present in rhos-director-8.0, so targeting it there is pointless. If you want it fixed in 7.2, please mark it as such.

And yes, the easiest workaround is to use the SATA disk type for VMs. You can switch it back to VirtIO after introspection, if you like.
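A sketch of that workaround, assuming a hypothetical libvirt domain named "node0" (in practice you would `virsh dumpxml node0 > node0.xml`, edit, then `virsh define node0.xml`); here the edit is demonstrated with sed on a small file containing the same target element as the diff above:

```shell
# Save the fragment we want to edit (mirrors the diff earlier in this report):
cat > disk.xml <<'EOF'
<target dev='vda' bus='virtio'/>
EOF

# Switch the disk from virtio to SATA for the duration of introspection;
# reverse the substitution afterwards to get VirtIO performance back.
sed -i "s/dev='vda' bus='virtio'/dev='hda' bus='sata'/" disk.xml
cat disk.xml    # <target dev='hda' bus='sata'/>
```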
Comment 4 Alexandre Maumené 2015-12-15 10:54:43 EST
Hi Dmitry,

I'd like to have this bug fix in 7.2

Could you explain to me how to "mark it as such"?

Thanks in advance.

Comment 5 Dmitry Tantsur 2015-12-15 10:56:19 EST
Convince the PMs to treat it as a last-minute blocker for 7.2, which is, to be honest, extremely unlikely, given that it has an easy workaround.
Comment 6 Alexandre Maumené 2015-12-15 11:00:09 EST
Maybe easy to apply, but not really easy to explain to customers when you are on-site and they expect VirtIO disk performance. I understand your point, though.
Comment 8 Mike Burns 2016-04-07 16:57:01 EDT
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.
Comment 9 Dmitry Tantsur 2016-04-08 04:09:19 EDT
Only OSPd7 was affected by this bug, as I mentioned above.
