Red Hat Bugzilla – Bug 996174
xen domU kernel does not activate all vcpus
Last modified: 2015-02-17 11:43:51 EST
Description of problem:
I have a Xen domU instance running with two configured vcpus, but only one was activated at boot. The VM was recently upgraded to F19; the F18 version of the VM *did* activate both cpus.
Version-Release number of selected component (if applicable):
Hypervisor is running F18.
domU is running F19.
Steps to Reproduce:
1. Configure VM with 2 cpus
2. boot VM
The hypervisor shows 2 vcpus for the VM, but only one is active; the VM detects 2 cpus but brings only one of them online.
Here is a snippet of the libvirt XML for the domain:
<domain type='xen' id='7'>
<type arch='x86_64' machine='xenpv'>linux</type>
<clock offset='utc' adjustment='reset'/>
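For reference, libvirt's domain XML expresses vcpu counts with the <vcpu> element; the snippet above does not show it, so here is an illustrative sketch (counts are hypothetical, not taken from this domain's config). The element content is the maximum vcpu count, and the optional 'current' attribute caps how many are enabled at boot:

```xml
<domain type='xen'>
  <!-- maximum of 2 vcpus; 'current' (optional) limits how many are
       enabled at boot. With current='1', the second vcpu exists but
       starts offline; omitting 'current' should enable all vcpus. -->
  <vcpu current='2'>2</vcpu>
</domain>
```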
# xl vcpu-list | grep " 7 "
xen-d975xbx-anon 7 0 3 -b- 15063.3 any cpu
xen-d975xbx-anon 7 1 - --p 0.0 any cpu
Then, on the VM:
[root@xen-d975xbx-anon ~]# dmesg | grep cpu
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
[ 0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:2 nr_node_ids:1
[ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007fa00000 s85568 r8192 d20928 u1048576
[ 0.000000] pcpu-alloc: s85568 r8192 d20928 u1048576 alloc=1*2097152
[ 0.000000] pcpu-alloc:  0 1
[ 0.000000] RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=2.
[ 0.000000] Xen: using vcpuop timer interface
[ 0.028029] NMI watchdog: disabled (cpu0): hardware events not enabled
[root@xen-d975xbx-anon ~]# grep -c processor /proc/cpuinfo
I browsed over to
and did some more experimentation; I find:
1. libvirt domain configuration specifies 2 cpus for the guest
2. 'xl list' shows 1 cpu after the guest is started
3. /proc/cpuinfo on the guest shows 1 cpu
4. boot messages on the guest suggest 2 cpus, but only one is brought on-line
5. /sys/devices/system/cpu on the guest only shows 1 cpu
6. (as above) 'xl vcpu-list' shows two cpus, but one is paused/offline
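The guest-side checks above can be scripted; a minimal sketch using only standard Linux interfaces (no Xen-specific tooling assumed):

```shell
#!/bin/sh
# Compare the number of processors the guest kernel has online
# against the number it was configured with.
online=$(grep -c '^processor' /proc/cpuinfo)
configured=$(getconf _NPROCESSORS_CONF)
echo "online=$online configured=$configured"
```

On an affected guest this prints online=1 configured=2; the two numbers should match once the problem is fixed.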
Then, I tried
# xl vcpu-set xen-d975xbx-anon 2
At that point, the 2nd cpu entry shows up in /sys/devices/system/cpu.
Then, on the guest:
# echo "1" > /sys/devices/system/cpu/cpu1/online
and the cpu comes on-line. /proc/cpuinfo is then correct, as are the 'xl list' and 'xl vcpu-list' outputs on the host.
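The manual workaround can be folded into a short guest-side script; a sketch, with the sysfs root parameterized so it can be exercised against a fake tree (on a real guest you would run it as root against the default /sys path, after 'xl vcpu-set' on the host):

```shell
#!/bin/sh
# Bring online every CPU that sysfs reports as present but offline.
# SYSFS_CPU defaults to the real sysfs path; override it for testing.
SYSFS_CPU=${SYSFS_CPU:-/sys/devices/system/cpu}

for f in "$SYSFS_CPU"/cpu[0-9]*/online; do
    [ -e "$f" ] || continue            # cpu0 usually has no 'online' file
    if [ "$(cat "$f")" = "0" ]; then
        echo 1 > "$f" 2>/dev/null || : # same as the manual 'echo 1 > ...online'
    fi
done
```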
Perhaps something is not getting configured correctly in libxl based on the libvirt configuration. Please advise. What's the (intended) relationship between the 'vcpu' tag in the libvirt xml and the boot-time vcpu settings for a guest? This used to "just work" on F18.
The secondary issue (why a newly-available vcpu is not automatically brought on-line on the guest) appears to be a "known issue" with some debate surrounding it, so I won't pursue it here.
(Though the text there is a bit confusing.) Should the 'current' attribute be specified to enable the vcpus at startup?
This message is a notice that Fedora 19 is now at end of life. Fedora
has stopped maintaining and issuing updates for Fedora 19. It is
Fedora's policy to close all bug reports from releases that are no
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.
Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version'
to a later Fedora version.
Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora 19 went end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora, you are encouraged to change the 'version' to a later Fedora
version before this bug is closed as described in the policy above.
Although we aim to fix as many bugs as possible during every release's
lifetime, sometimes those efforts are overtaken by events. Often a
more recent Fedora release includes newer upstream software that fixes
bugs or makes them obsolete.
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.
If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.
Thank you for reporting this bug and we are sorry it could not be fixed.