Description of problem: By default the overcloud deployment uses qemu instead of kvm, and it's not obvious from the documentation that this needs to be changed.
Version-Release number of selected component (if applicable):
How reproducible: Always.
Steps to Reproduce:
1. Deploy to baremetal without specifying --libvirt-type kvm
2. Boot an instance. It will use unaccelerated qemu.
Actual results: Yugo performance from VM.
Expected results: Ferrari performance from VM.
Additional info: For RHOS 7 GA, this is likely a doc fix. We should discuss whether to change the default in future releases, however.
Note that this can be fixed after deployment by re-running the same deployment command with "--libvirt-type kvm" added.
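As a sketch of that workaround, the re-run would look something like the following. The exact command is whatever was used for the original deployment; the only required change is adding the `--libvirt-type kvm` flag (any environment files or extra flags from the first run are assumptions here and must be repeated as well).

```shell
# Hypothetical re-run of the original deployment command, with the
# libvirt type set explicitly. Repeat any flags and environment files
# that were passed the first time.
openstack overcloud deploy --templates \
  --libvirt-type kvm
```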
This is fairly critical and should be fixed as soon as practical. In addition to affecting performance, it prevents the use of features such as SR-IOV PCI passthrough.

Added a few doc improvements.
I also added a patch to change the default to KVM instead of QEMU, if it is decided that this is a bug that should be fixed.
This bug needs documentation changes in any case. We could also change the default from qemu to kvm, but I have received conflicting opinions about whether we should do that.
(In reply to Lennart Regebro from comment #10)
> This bug needs documentation changes in any case. We could also change the
> default from qemu to kvm, but I have received conflicting opinions about
> whether we should do that.
WHY? We don't recommend the use of qemu emulation for real environments, period, from a Nova perspective.
There should be no confusion any longer here. The default should be kvm. Thanks.
[stack@instack ~]$ rpm -qa | grep python-rdomanager-oscplugin
openstack overcloud deploy --templates
[heat-admin@overcloud-compute-0 ~]$ sudo grep virt_type /etc/nova/nova.conf | grep -v ^#
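The check above can be sketched as follows. On a real compute node you would grep /etc/nova/nova.conf directly, as shown above; the sample file written here is an assumption standing in for that config so the check can be demonstrated end to end.

```shell
# Sketch: verify that nova is configured for KVM rather than plain qemu.
# The sample fragment below stands in for /etc/nova/nova.conf on a
# compute node; on real hardware, grep the actual file instead.
cat > /tmp/nova.conf.sample <<'EOF'
[libvirt]
virt_type=kvm
EOF

# Expect "virt_type=kvm"; "virt_type=qemu" means unaccelerated emulation.
grep -v '^#' /tmp/nova.conf.sample | grep virt_type
```

If the result still shows qemu after redeploying, it can also be worth confirming that the kvm kernel modules are loaded on the host (`lsmod | grep kvm`).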
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
*** Bug 1270397 has been marked as a duplicate of this bug. ***
*** Bug 1270396 has been marked as a duplicate of this bug. ***