Bug 1278083

Summary: Ironic introspection of qemu vm - no disk found
Product: Red Hat OpenStack
Reporter: hrosnet
Component: openstack-ironic-python-agent
Assignee: Dmitry Tantsur <dtantsur>
Status: CLOSED CURRENTRELEASE
QA Contact: Raviv Bar-Tal <rbartal>
Severity: high
Priority: medium
Version: 7.0 (Kilo)
CC: amaumene, dtantsur, gchenuet, glambert, hbrock, imouzann, kpichard, mburns, ochalups, rhel-osp-director-maint, sclewis, slinaber, tpapaioa
Target Milestone: ga
Keywords: TestOnly, Triaged
Target Release: 8.0 (Liberty)
Hardware: Unspecified
OS: Linux
Fixed In Version: openstack-ironic-python-agent-1.1.0-8.el7ost
Doc Type: Bug Fix
Last Closed: 2016-07-25 14:26:42 UTC
Type: Bug

Description hrosnet 2015-11-04 16:39:03 UTC
Description of problem:
Every element on the server is detected except the disk.


Version-Release number of selected component (if applicable):
7.1

How reproducible:
Always, as far as we can tell

Steps to Reproduce:
1. Install undercloud in 7.1
2. Create an instance in qemu with virtio to introspect
3. Introspect the server

Actual results:
Introspection returns a local_gb of 1, while it should be more.

Expected results:
The actual size of the disk should be reported.


Additional info:
On the server we could observe for the log: "value for local_gb is missing or malformed"

Apparently the command (lsblk -bSo) is run here:
/usr/lib/python2.7/site-packages/ironic_discoverd_ramdisk/discovery.py +141

The following command did not return anything (the virtio disk is not a SCSI device):
 lsblk -bSo NAME,TYPE,SIZE

According to man lsblk:
 -S, --scsi    Output info about SCSI devices only.

So we tried this one, which worked:
 lsblk -bo  NAME,SIZE,TYPE
 NAME SIZE TYPE
 vda 4444444 disk
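The fix would be for the ramdisk to parse the unfiltered `lsblk -bo NAME,SIZE,TYPE` output instead of the SCSI-only `-bSo` form. A minimal sketch of that parsing is below; the function names and the "largest disk in whole GiB" rule are illustrative, not the actual ironic-python-agent code:

```python
# Sketch: parse `lsblk -bo NAME,SIZE,TYPE` output and report a local_gb
# value. Helper names are hypothetical; the sample output mimics the
# virtio VM from this report (a ~40 GiB /dev/vda).

def parse_lsblk(output):
    """Return {name: size_in_bytes} for lines whose TYPE is 'disk'."""
    disks = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        if len(parts) == 3 and parts[2] == 'disk':
            disks[parts[0]] = int(parts[1])
    return disks

def local_gb(disks):
    """Largest disk size in whole GiB (0 if no disks were found)."""
    if not disks:
        return 0
    return max(disks.values()) // (1024 ** 3)

sample = """NAME SIZE TYPE
vda 42949672960 disk
vda1 524288000 part
"""

disks = parse_lsblk(sample)
print(disks)            # {'vda': 42949672960}
print(local_gb(disks))  # 40
```

With the `-S` flag, the `vda` line above would simply be absent, which matches the empty output the reporter saw.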

A bug that appeared to be similar in terms of error message:
https://bugzilla.redhat.com/show_bug.cgi?id=1222124

Fix:

Applying virsh edit to switch to SATA instead of virtio, which results in the following differences:
> diff -u vers*
--- vers1	2015-11-04 17:21:11.401642846 +0100
+++ vers2	2015-11-04 17:21:22.651994390 +0100
@@ -1,6 +1,6 @@
 <disk type='block' device='disk'>
-      <driver name='qemu' type='raw' cache='none' io='native'/>
+      <driver name='qemu' type='raw'/>
       <source dev='/dev/rhel/az2ctrl0002'/>
-      <target dev='vda' bus='virtio'/>
-      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
+      <target dev='hda' bus='sata'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
 </disk>
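Reassembling that diff, the workaround's resulting `<disk>` element would look roughly like this (the source device path is taken from the diff above; exact attributes may vary with your libvirt version):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/rhel/az2ctrl0002'/>
  <target dev='hda' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
```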

Comment 3 Dmitry Tantsur 2015-11-26 11:51:00 UTC
Chris, this bug is not present in rhos-director-8.0, so it's pointless to target it there. If you want it fixed in 7.2, please mark it as such.

And yes, the easiest workaround is to use the SATA disk type for VMs. You can switch back to VirtIO after introspection, if you like.

Comment 4 Alexandre Maumené 2015-12-15 15:54:43 UTC
Hi Dmitry,

I'd like to have this bug fix in 7.2

Could you explain to me how to "mark it as such"?

Thanks in advance.

Regards,

Comment 5 Dmitry Tantsur 2015-12-15 15:56:19 UTC
Convince the PMs to treat it as a last-minute blocker for 7.2, which is, to be honest, extremely unlikely given that it has an easy workaround.

Comment 6 Alexandre Maumené 2015-12-15 16:00:09 UTC
It may be easy to apply, but it is not easy to explain to customers when you are on-site and they expect VirtIO disk performance. I understand your point, though.

Comment 8 Mike Burns 2016-04-07 20:57:01 UTC
This bug did not make the OSP 8.0 release.  It is being deferred to OSP 10.

Comment 9 Dmitry Tantsur 2016-04-08 08:09:19 UTC
Only OSPd7 was affected by this bug, as I mentioned above.