Description of problem:
If the management console configures the "emulatorpin" libvirt option to the same isolated physical CPUs on which the vCPUs are already pinned, the guest won't boot. Libvirt could validate this setting and, if required, configure the emulator threads to run on different physical CPUs. This behaviour shows up in realtime KVM, where vCPUs with realtime priority run on dedicated isolated physical CPUs: other, non-realtime threads never get a chance to run there, so the emulator threads never execute and the guest does not boot.

Version-Release number of selected component (if applicable):
libvirt-1.2.17-13.el7_2.4.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.7.x86_64
kernel-rt-kvm-3.10.0-306.0.1.rt56.179.el7.x86_64

How reproducible:

Steps to Reproduce:
1. Isolate physical cores to run the vCPU threads.
2. Pin the vCPU threads to these isolated cores with RT priority.
3. Configure the emulator threads to run on the same cores.

Actual results:
Guest does not boot.

Expected results:
1] Throw an error in the logs for configuration validation.
2] Configure the guest emulator thread to run on any non-isolated core and throw a warning, so that at least the guest boots.

Additional info:
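For reference, the configuration described in the steps above can be sketched as a libvirt domain XML `<cputune>` fragment. The CPU numbers are illustrative assumptions: CPUs 2-3 are taken to be the isolated cores and CPUs 0-1 the non-isolated housekeeping cores.

```xml
<cputune>
  <!-- vCPU threads pinned to the isolated cores -->
  <vcpupin vcpu="0" cpuset="2"/>
  <vcpupin vcpu="1" cpuset="3"/>
  <!-- realtime (SCHED_FIFO) priority for the vCPU threads -->
  <vcpusched vcpus="0-1" scheduler="fifo" priority="1"/>
  <!-- problematic setting: emulator threads share the isolated cores,
       so the FIFO vCPU threads starve them and the guest never boots -->
  <emulatorpin cpuset="2-3"/>
  <!-- working alternative: pin emulator threads to housekeeping cores
       instead, e.g. <emulatorpin cpuset="0-1"/> -->
</cputune>
```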
Well, even though it does not boot, the configuration is valid from libvirt's point of view. This is a problem in the management layers above libvirt, specifically in whatever is setting up the isolated cores.
Martin, I already opened a BZ with Nova for this: 1298079. I wanted a suggestion on whether validation on the libvirt side could help, irrespective of the management layer. Thanks, Pankaj
(In reply to pagupta from comment #3) I don't see how else we could "help" management layers to see this error, especially since it originates from them. Since this configuration might be indistinguishable from a valid one from libvirt's POV, I'm closing it as NOTABUG. Feel free to add more information in case I missed something.