Red Hat Bugzilla – Bug 894813
Fix the default for libvirt_cpu_model so it works well when compute node is running in a VM
Last modified: 2015-06-04 17:50:25 EDT
This could be done by detecting whether the compute node is running in a VM and automatically changing the CPU model, rather than requiring a config file change.
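For illustration, the kind of introspection being suggested might look like the following; a minimal sketch in Python, assuming the Linux convention that the CPUID "hypervisor" bit shows up among the CPU flags in /proc/cpuinfo when running under a hypervisor (the helper names and the fallback choice are hypothetical, not actual Nova code):

# Hypothetical sketch of the suggested detection; not actual Nova code.
def running_in_vm():
    # On Linux, the CPUID "hypervisor" bit is listed among the CPU
    # flags in /proc/cpuinfo when the kernel runs under a hypervisor.
    try:
        with open('/proc/cpuinfo') as f:
            return any('hypervisor' in line
                       for line in f if line.startswith('flags'))
    except IOError:
        return False

def default_cpu_mode():
    # Inside a VM the host CPU model may not map onto anything libvirt
    # knows (the x86Decode error reported below), so fall back to a
    # mode that does not require one.
    return 'none' if running_in_vm() else 'host-model'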
Perry, is the current state that this doesn't work at all? Packstack allowed me to select the 'qemu' hypervisor (as opposed to kvm), but when I attempt to run instances I see this in libvirtd.log:
2013-01-15 03:15:19.927+0000: 1817: error : x86Decode:1399 : internal error Cannot find suitable CPU model for given data
2013-01-15 03:15:19.927+0000: 1817: warning : qemuCapsInit:848 : Failed to get host CPU
Is this the expected behaviour currently or a new/different bug?
If you select qemu in PackStack it should configure your compute nodes to use full emulation. So what you encountered might be a different bug.
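For reference, that fully emulated setup amounts to something like the following in nova.conf (an illustrative sketch using the flat [DEFAULT] options of that era, not the verbatim output PackStack writes):

# /etc/nova/nova.conf -- illustrative values only
libvirt_type=qemu      # plain qemu (TCG full emulation) instead of kvm
libvirt_cpu_mode=none  # do not ask libvirt to match a host CPU model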
This bug is about removing the need to manually select qemu vs. kvm, by having Nova use some introspection to determine which to use automagically.
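Something along these lines, say; a rough Python sketch of the introspection (the function name is made up, and the real check would live in Nova's libvirt driver):

import os

# Hypothetical sketch; not the actual proposed patch.
def pick_libvirt_type():
    # KVM acceleration is only usable when the kernel exposes the
    # /dev/kvm device node; otherwise fall back to plain qemu (TCG),
    # which works anywhere, including inside another VM.
    return 'kvm' if os.access('/dev/kvm', os.R_OK | os.W_OK) else 'qemu'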
(In reply to comment #2)
> If you select qemu in PackStack it should configure your compute nodes to
> use full emulation. So what you encountered might be a different bug.
Digging further, it looks like I've hit bug 895003. Never mind me then ;).
Tested on Fedora-19 -- https://bugs.launchpad.net/nova/+bug/1100366/comments/4
Proposed patch upstream: https://review.openstack.org/31133
The consensus upstream is that this (detection of the CPU environment) is really something that should happen at install time, in Packstack, Foreman, etc.
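At install time, that probe could be as simple as checking the node once and writing the answer into nova.conf; a rough sketch of the idea (hypothetical helper, not actual Packstack or Foreman code):

import os

def installer_virt_defaults():
    # Probe once at install time instead of at every nova-compute start.
    have_kvm = os.path.exists('/dev/kvm')
    return {
        'libvirt_type': 'kvm' if have_kvm else 'qemu',
        # When falling back to qemu (typically inside a VM), avoid
        # asking libvirt to model the host CPU.
        'libvirt_cpu_mode': 'host-model' if have_kvm else 'none',
    }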
(In reply to Solly Ross from comment #7)
> The consensus upstream is that this (detection of the CPU environment) is
> really something that should happen at install time, in Packstack, Foreman, etc.
ACK, I'm OK with letting this slide. I don't think running nova-compute within a KVM guest is a use case we are actively concentrating on.