Bug 996174 - xen domU kernel does not activate all vcpus
Status: CLOSED EOL
Product: Fedora
Classification: Fedora
Component: xen
Version: 19
Hardware: x86_64 Linux
Priority: unspecified
Severity: high
Assigned To: Michael Young
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-08-12 10:59 EDT by Carl Roth
Modified: 2015-02-17 11:43 EST (History)
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2015-02-17 11:43:51 EST
Type: Bug
Attachments: None
Description Carl Roth 2013-08-12 10:59:53 EDT
Description of problem:

I have a Xen domU instance configured with two vcpus, but only one is activated at boot.  The VM was recently upgraded to F19; the F18 version of the VM *did* activate both cpus.

Version-Release number of selected component (if applicable):

Hypervisor is running F18:
xen-4.2.2-10.fc18.x86_64
libvirt-1.1.0-1.fc18.x86_64

domU is running F19:
kernel-3.10.4-300.fc19.x86_64

How reproducible:

Always

Steps to Reproduce:
1. Configure VM with 2 cpus
2. boot VM
3.

Actual results:

The hypervisor shows 2 vcpus for the VM, but only one is active.  The VM detects 2 cpus but brings only one of them online.

Expected results:

Both configured vcpus are brought online in the guest at boot, as they were under F18.
Additional info:

Here is a snippet of the libvirt XML for the domain:

<domain type='xen' id='7'>
  <name>xen-d975xbx-anon</name>
  <uuid>a98fbdd9-f7f7-4921-988f-14cd866d2f28</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <bootloader>pygrub</bootloader>
  <os>
    <type arch='x86_64' machine='xenpv'>linux</type>
  </os>
  <clock offset='utc' adjustment='reset'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  ...
</domain>

# xl vcpu-list | grep " 7 "
xen-d975xbx-anon                     7     0    3   -b-   15063.3  any cpu
xen-d975xbx-anon                     7     1    -   --p       0.0  any cpu

Then, on the VM:
[root@xen-d975xbx-anon ~]# dmesg | grep cpu
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] setup_percpu: NR_CPUS:128 nr_cpumask_bits:128 nr_cpu_ids:2 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88007fa00000 s85568 r8192 d20928 u1048576
[    0.000000] pcpu-alloc: s85568 r8192 d20928 u1048576 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1
[    0.000000]  RCU restricting CPUs from NR_CPUS=128 to nr_cpu_ids=2.
[    0.000000] Xen: using vcpuop timer interface
[    0.028029] NMI watchdog: disabled (cpu0): hardware events not enabled

[root@xen-d975xbx-anon ~]# grep -c processor /proc/cpuinfo
1
Comment 1 Carl Roth 2013-09-26 13:59:35 EDT
I browsed over to

http://wiki.xen.org/wiki/Paravirt_Linux_CPU_Hotplug
http://lists.xen.org/archives/html/xen-devel/2010-05/msg00516.html

and after some more experimentation I found the following:

1. libvirt domain configuration specifies 2 cpus for the guest
2. 'xl list' shows 1 cpu after the guest is started
3. /proc/cpuinfo on the guest shows 1 cpu
4. boot messages on the guest suggest 2 cpus, but only one is brought on-line
5. /sys/devices/system/cpu on the guest only shows 1 cpu
6. (as above) 'xl vcpu-list' shows two cpus, but one is paused/offline
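The guest-side checks in points 3–5 can be gathered in one place. This is just a sketch using the standard procfs/sysfs paths (no Xen-specific tooling), so it should run on any Linux guest:

```shell
# Consolidate the guest-side cpu checks: what the kernel detected,
# which cpus sysfs considers present, and which are actually online.
detected=$(grep -c '^processor' /proc/cpuinfo)
present=$(cat /sys/devices/system/cpu/present)
online=$(cat /sys/devices/system/cpu/online)
echo "procfs processors: $detected"
echo "sysfs present:     $present"   # e.g. "0-1" when 2 vcpus are visible
echo "sysfs online:      $online"    # e.g. "0" when only cpu0 is up
```

On the affected guest, "present" would show both vcpus while "online" shows only cpu0, matching the `xl vcpu-list` output above.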

Then, I tried

# xl vcpu-set xen-d975xbx-anon 2

At that point, the 2nd cpu entry shows up in /sys/devices/system/cpu.

Then, on the guest:

# echo "1" > /sys/devices/system/cpu/cpu1/online

and the cpu comes on-line.  The /proc/cpuinfo is correct, as is the 'xl list' and 'xl vcpu-list' output on the host.
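The two-step workaround above can be sketched as a small helper. The host half needs `xl`, so it is shown only as a comment; the guest half touches nothing but sysfs, so the cpu directory is passed in as a parameter (normally /sys/devices/system/cpu) purely so it can be exercised against a scratch directory:

```shell
# Sketch of the manual workaround described above, in two halves.
bring_vcpu_online() {
    cpudir=$1    # sysfs cpu directory, e.g. /sys/devices/system/cpu
    cpu=$2       # vcpu index to bring online, e.g. 1
    echo 1 > "$cpudir/cpu$cpu/online"
    cat "$cpudir/cpu$cpu/online"    # echoes back the new state
}

# Host side (domain name taken from this report):
#   xl vcpu-set xen-d975xbx-anon 2
# Guest side:
#   bring_vcpu_online /sys/devices/system/cpu 1
```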

Perhaps something is not getting configured correctly in libxl based on the libvirt configuration.  Please advise: what is the intended relationship between the 'vcpu' tag in the libvirt XML and the boot-time vcpu settings for a guest?  This used to "just work" on F18.

The secondary issue (why a newly available vcpu is not automatically brought online in the guest) appears to be a known issue with some debate surrounding it, so I won't pursue it here.
Comment 2 Carl Roth 2013-09-26 14:03:30 EDT
As per

http://libvirt.org/formatdomain.html#elementsCPUAllocation

(though the text there is a bit confusing), should the 'current' attribute be specified to enable all of the vcpus at startup?
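For what it's worth, the libvirt schema at that URL does document an optional 'current' attribute on the vcpu element: the element body gives the maximum vcpu count, and 'current' gives how many are enabled at boot. A sketch of what this domain's element would look like if only one vcpu were meant to start enabled (the rest of the XML assumed unchanged):

```xml
<!-- 2 vcpus configured at most, 1 enabled at boot; the second vcpu
     can then be enabled later, e.g. with virsh setvcpus. -->
<vcpu placement='static' current='1'>2</vcpu>
```

When 'current' is omitted, as in the original XML above, all configured vcpus are supposed to be enabled at startup, which is why the behaviour reported here looks like a regression rather than a configuration issue.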
Comment 3 Fedora End Of Life 2015-01-09 14:25:13 EST
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 19 reached end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.
Comment 4 Fedora End Of Life 2015-02-17 11:43:51 EST
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
