Bug 1716358 - RHOS 15's default machine type, "q35", doesn't support IDE buses, but config drives are attached to IDE
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 15.0 (Stein)
Assignee: Lee Yarwood
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Duplicates: 1716221 1719938 1719939
Depends On:
Blocks: 1761862 1761863 1782659
 
Reported: 2019-06-03 10:20 UTC by Michele Baldessari
Modified: 2023-03-21 19:20 UTC (History)
CC List: 14 users

Fixed In Version: openstack-nova-19.0.2-0.20190616040418.acd2daa.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1761862 1761863
Environment:
Last Closed: 2019-09-21 11:22:59 UTC
Target Upstream Version:
Embargoed:


Links
- Launchpad bug 1831538 (last updated 2019-06-04)
- OpenStack gerrit 662887, ABANDONED: DNM: Run tempest-full-py3 with q35 machine type (last updated 2021-02-15)
- OpenStack gerrit 663011, MERGED: libvirt: Use SATA bus for cdrom devices when using Q35 machine type (last updated 2021-02-15)
- OpenStack gerrit 663677, MERGED: libvirt: Use SATA bus for cdrom devices when using Q35 machine type (last updated 2021-02-15)
- Red Hat Issue Tracker OSP-23465 (last updated 2023-03-21)
- Red Hat Product Errata RHEA-2019:2811 (last updated 2019-09-21)

Description Michele Baldessari 2019-06-03 10:20:24 UTC
Description of problem:

https://rhos-qe-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/ReleaseDelivery/view/OSP15/job/phase2-15_director-rhel-8.0-virthost-3cont_2comp-ipv4-geneve-ceph-external/lastCompletedBuild/testReport/tempest.scenario.test_server_basic_ops/TestServerBasicOps/test_server_basic_ops_compute_id_7fff3fb3_91d8_4fd0_bd7d_0204f1f180ba_network_smoke_/


    Response - Headers: {'date': 'Fri, 31 May 2019 22:31:11 GMT', 'server': 'Apache', 'content-length': '0', 'openstack-api-version': 'compute 2.1', 'x-openstack-nova-api-version': '2.1', 'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 'req-5951786e-0c61-4d70-b285-18420674caa2', 'x-compute-request-id': 'req-5951786e-0c61-4d70-b285-18420674caa2', 'connection': 'close', 'content-type': 'application/json', 'status': '202', 'content-location': 'http://10.0.0.110:8774/v2.1/os-keypairs/tempest-TestServerBasicOps-625484720'}
        Body: b''

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tempest/common/utils/__init__.py", line 89, in wrapper
    return f(*func_args, **func_kwargs)
  File "/usr/lib/python3.6/site-packages/tempest/scenario/test_server_basic_ops.py", line 134, in test_server_basic_ops
    metadata=self.md)
  File "/usr/lib/python3.6/site-packages/tempest/scenario/manager.py", line 235, in create_server
    image_id=image_id, **kwargs)
  File "/usr/lib/python3.6/site-packages/tempest/common/compute.py", line 265, in create_test_server
    server['id'])
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python3.6/site-packages/six.py", line 675, in reraise
    raise value
  File "/usr/lib/python3.6/site-packages/tempest/common/compute.py", line 236, in create_test_server
    clients.servers_client, server['id'], wait_until)
  File "/usr/lib/python3.6/site-packages/tempest/common/waiters.py", line 76, in wait_for_server_status
    server_id=server_id)
tempest.exceptions.BuildErrorException: Server f654584f-ea70-40af-a618-b9938bf7ebe1 failed to build and is in ERROR status
Details: {'code': 500, 'created': '2019-05-31T22:31:08Z', 'message': 'Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance f654584f-ea70-40af-a618-b9938bf7ebe1.'}

The issue seems to me to be more of a tempest/libvirt/compute/env configuration problem; the compute logs show the following failure for the above instance f654584f-ea70-40af-a618-b9938bf7ebe1:
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:04.559 7 ERROR nova.virt.libvirt.driver [req-87008086-db59-4a05-9815-68cec5c7dff2 7f7175e2908e4e6d9eff7d169e5f8e30 b9caba079a294ca38775425d0ef8e331 - default default] [instance: f654584f-ea70-40af-a618-b9938bf7ebe1] Failed to start libvirt guest: libvirt.libvirtError: unsupported configuration: IDE controllers are unsupported for this QEMU binary or machine type
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [req-87008086-db59-4a05-9815-68cec5c7dff2 7f7175e2908e4e6d9eff7d169e5f8e30 b9caba079a294ca38775425d0ef8e331 - default default] [instance: f654584f-ea70-40af-a618-b9938bf7ebe1] Instance failed to spawn: libvirt.libvirtError: unsupported configuration: IDE controllers are unsupported for this QEMU binary or machine type
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1] Traceback (most recent call last):                                                            
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 2474, in _build_resources
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     yield resources                                                                           
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 2235, in _build_and_run_instance
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     block_device_info=block_device_info)                                                      
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 3172, in spawn    
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     destroy_disks_on_failure=True)                                                            
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5729, in _create_domain_and_network
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     destroy_disks_on_failure)                                                                 
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__       
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     self.force_reraise()                                                                      
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise  
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     six.reraise(self.type_, self.value, self.tb)                                              
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise                        
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     raise value                                                                               
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5698, in _create_domain_and_network
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     post_xml_callback=post_xml_callback)                                                      
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 5626, in _create_domain
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     guest = libvirt_guest.Guest.create(xml, self._host)                                       
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 129, in create     
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     encodeutils.safe_decode(xml))                                                             
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__       
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     self.force_reraise()                                                                      
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in force_reraise  
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     six.reraise(self.type_, self.value, self.tb)                                              
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise                        
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     raise value                                                                               
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 125, in create     
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     guest = host.write_instance_config(xml)                                                   
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/host.py", line 869, in write_instance_config
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     domain = self.get_connection().defineXML(xml)                                             
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit                
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     result = proxy_call(self._autowrap, f, *args, **kwargs)                                   
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call          
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     rv = execute(f, *args, **kwargs)                                                          
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute             
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     six.reraise(c, e, tb)                                                                     
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/six.py", line 693, in reraise                        
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     raise value                                                                               
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker              
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     rv = meth(*args, **kwargs)                                                                
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 3752, in defineXML               
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]     if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)               
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1] libvirt.libvirtError: unsupported configuration: IDE controllers are unsupported for this QEMU binary or machine type
compute-1/var/log/containers/nova/nova-compute.log:2019-05-31 22:31:05.088 7 ERROR nova.compute.manager [instance: f654584f-ea70-40af-a618-b9938bf7ebe1]

Comment 1 Artom Lifshitz 2019-06-03 18:44:56 UTC
First some notes for myself (using upstream code for easier linking):

When we start _get_guest_xml in the libvirt driver, the following disk_info is passed to us:

disk_info={
  'disk_bus': 'virtio',
  'cdrom_bus': 'ide',
  'mapping': {
    'root': {
      'bus': 'virtio',
      'type': 'disk', 
      'dev': 'vda',
      'boot_index': '1'}, 
    'disk': {
      'bus': 'virtio',
      'type': 'disk',
      'dev': 'vda',
      'boot_index': '1'},
    'disk.config': {
      'bus': 'ide',
      'dev': 'hda',
      'type': 'cdrom'
    }
  }
}

The 'ide' bits are the culprit here. The default cdrom_bus is 'ide', so the config drive's bus ends up being 'ide' as well.

disk_info comes to us [1] by calling blockinfo.get_disk_info in spawn() [2]. That in turn calls get_disk_bus_for_device_type() for 'disk' and 'cdrom', with the default for 'cdrom' being 'ide' [3] (assuming the 'kvm' hypervisor, which is obviously what we are using). Note that this can be overridden by the hw_cdrom_bus image property [4].

The 'mapping' element in disk_info is obtained by calling get_disk_mapping() in blockinfo, which just calls get_disk_bus_for_device_type() again [5].

So that's the problem. We have a default machine type downstream that doesn't accept IDE buses, but the default bus for config drive CDROMs is IDE, and currently the only way to change that is with an image property.

As a quick aside, upstream uses just '<type>hvm</type>' as a machine type, so while they still have IDE CDROMs, they've never hit this problem.

I would expect all CI tests involving config drives to fail. If that's not the case and we have a passing config drive test somewhere, I'd like to see it to understand what I've missed. Otherwise, I'm afraid I don't have a solution for now. We can't reasonably expect all our clients to add the hw_cdrom_bus property to all their images as part of the OSP15 upgrade. I'll have a think, talk to the rest of the compute DFG, and we'll try to come up with something.

[1] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/driver.py#L3165
[2] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/driver.py#L3144
[3] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/blockinfo.py#L272
[4] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/blockinfo.py#L239
[5] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/blockinfo.py#L530
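
A minimal, self-contained sketch of the bus-selection logic described above (paraphrased from the blockinfo code linked in [3]-[5]; the flattened signature is illustrative, not the verbatim upstream one):

    # Paraphrased sketch of get_disk_bus_for_device_type(); NOT the exact
    # upstream code, but the same decision order.
    def get_disk_bus_for_device_type(virt_type, image_properties,
                                     device_type="disk"):
        # 1. An explicit image property (hw_disk_bus / hw_cdrom_bus) wins.
        bus = image_properties.get("hw_%s_bus" % device_type)
        if bus:
            return bus
        # 2. Otherwise fall back to per-hypervisor defaults.
        if virt_type in ("qemu", "kvm"):
            if device_type == "cdrom":
                return "ide"  # the problematic default: q35 has no IDE controller
            return "virtio"
        return None

    # With no image property set, a KVM config drive CD-ROM lands on IDE:
    assert get_disk_bus_for_device_type("kvm", {}, "cdrom") == "ide"
    # The only current escape hatch is the image property:
    assert get_disk_bus_for_device_type(
        "kvm", {"hw_cdrom_bus": "sata"}, "cdrom") == "sata"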

Comment 2 Lee Yarwood 2019-06-03 19:54:18 UTC
(In reply to Artom Lifshitz from comment #1)
> So that's the problem. We have a default machine type downstream that
> doesn't accept IDE buses, but the default bus for config drive CDROMs is
> IDE, and currently the only way to change that is with an image property.

Nice work Artom! I wonder if we could extract _get_machine_type [6] somewhere and call that from within get_disk_bus_for_device_type [5] so we can decide whether the bus should be ide or scsi?

[6] https://github.com/openstack/nova/blob/7feadd492f19a37b015d4ce62893cf27a0716033/nova/virt/libvirt/driver.py#L4252
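
A rough sketch of that direction (hypothetical helper names and defaults; this is an assumption about the shape of a fix, not the eventual patch):

    # Hypothetical: make the default cdrom bus depend on the machine type.
    def _get_machine_type(image_properties, default_machine_type="pc"):
        # Mirrors the hw_machine_type image property, falling back to the
        # configured default (q35 on RHOS 15).
        return image_properties.get("hw_machine_type", default_machine_type)

    def default_cdrom_bus(image_properties, default_machine_type="pc"):
        machine_type = _get_machine_type(image_properties,
                                         default_machine_type)
        # q35 ships an on-board SATA (AHCI) controller but no IDE one.
        if machine_type and "q35" in machine_type:
            return "sata"
        return "ide"

    assert default_cdrom_bus({}, default_machine_type="q35") == "sata"
    assert default_cdrom_bus({}, default_machine_type="pc") == "ide"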

Comment 3 Artom Lifshitz 2019-06-04 00:01:26 UTC
I've proposed a DNM patch to reproduce this upstream. If that confirms my theory, we can think about how to fix this.

Comment 5 Lee Yarwood 2019-06-04 07:37:37 UTC
*** Bug 1716221 has been marked as a duplicate of this bug. ***

Comment 6 Kashyap Chamarthy 2019-06-04 09:13:01 UTC

Comment 7 Kashyap Chamarthy 2019-06-04 10:32:54 UTC
[Sorry for the previous empty comment; accidentally hit send too soon.]

A few notes based on earlier comments from Artom and Lee.

tl;dr: To address this, we need to make Nova use the "sata" disk bus for
       CD-ROMs when using the Q35 machine type.

Long
----

(*) The libvirtError here is expected, because the Q35 machine type does
    not support IDE, only SATA or SCSI.  (QEMU's emulated SCSI is not
    recommended; 'virtio-scsi' is the most trustworthy option, but it
    needs guest drivers.)

(*) Given the above, for Nova, when using Q35, we should change the
    default bus for CD-ROM to unconditionally use "sata".  (I've
    double-checked with the QEMU folks, too; they recommend the same,
    since Q35 has an on-board SATA controller.)

(*) I think you saw this in the upstream Nova guest XML:

        ...
        <os>
          <type>hvm</type>
          ...
        </os>
        ...

    And said "upstream uses just '<type>hvm</type>' as a machine type".  
    Here, "hvm" [Hardware Virtual Machine] is not a machine type; it
    means "the Operating System is designed to run on bare metal, so 
    requires full virtualization [using CPU hardware extensions]"

(*) When you don't specify a machine type (as in the case of upstream
    CI), the default is whatever the QEMU binary on your system reports
    as such: `qemu-system-x86_64 -machine help | grep default`.

    So upstream Nova CI doesn't hit this problem because it uses QEMU's
    default machine type, which is "pc" (and "pc" has an on-board IDE
    controller).
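
To check locally which machine types the installed QEMU/libvirt combination actually offers, a small diagnostic along these lines works (assumes libvirt-python and a local qemu:///system daemon; purely illustrative, not part of any fix):

    # List the x86_64 machine types advertised in libvirt's capabilities XML.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open("qemu:///system")
    caps = ElementTree.fromstring(conn.getCapabilities())
    for guest in caps.findall("guest"):
        arch = guest.find("arch")
        if arch is not None and arch.get("name") == "x86_64":
            # Each <machine> element is a supported type, e.g. pc-i440fx-*,
            # pc-q35-*; aliases like "pc" and "q35" also appear here.
            print(sorted({m.text for m in arch.findall("machine")}))
    conn.close()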

Comment 13 smooney 2019-06-13 17:36:50 UTC
*** Bug 1719938 has been marked as a duplicate of this bug. ***

Comment 14 Sylvain Bauza 2019-06-14 09:55:50 UTC
*** Bug 1719939 has been marked as a duplicate of this bug. ***

Comment 17 Joe H. Rahme 2019-07-24 09:06:40 UTC
[stack@undercloud-0 tempest]$ tempest run --regex test_server_basic_ops
{0} tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops [59.730387s] ... ok

======
Totals
======
Ran: 1 tests in 59.7304 sec.
 - Passed: 1
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 59.7304 sec.

Comment 21 errata-xmlrpc 2019-09-21 11:22:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

