Description of problem:
When I try to define the NUMA configuration of a guest on a system running
RHEL 7.0, the changes don't seem to be reflected in the guest.
This is a regression from RHEL 6.5.
For example, if I edit the guest's XML using virsh edit to include the <numa>
directives to define the guest as 4 NUMA nodes, then power off the guest and
restart it, a numactl -H command executed on the guest still shows the
guest as 1 NUMA node.
My OS is RHEL7.0: 3.10.0-60.el7.x86_64
libvirt is 1.1.1-18.el7
qemu-kvm is 1.5.3-34.el7
virt-manager 0.10.0-9.el7
[root@harp33-sys ~]# virsh nodeinfo
CPU model: x86_64
CPU(s): 64
CPU frequency: 2499 MHz
CPU socket(s): 1
Core(s) per socket: 8
Thread(s) per core: 1
NUMA cell(s): 8
Memory size: 189209916 KiB
[root@harp33-sys ~]# virsh dominfo vhost1
Id: 2
Name: vhost1
UUID: 503397c0-58cd-462a-b2a2-52bb7b8225ba
OS Type: hvm
State: running
CPU(s): 32
CPU time: 117.0s
Max memory: 8290304 KiB
Used memory: 8290304 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
I updated the guest's XML file to include the following:
<domain type='kvm' id='2'>
<name>vhost1</name>
<uuid>503397c0-58cd-462a-b2a2-52bb7b8225ba</uuid>
<memory unit='KiB'>8290304</memory>
<currentMemory unit='KiB'>8290304</currentMemory>
<vcpu placement='static'>32</vcpu>
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<vcpupin vcpu='2' cpuset='2'/>
<vcpupin vcpu='3' cpuset='3'/>
<vcpupin vcpu='4' cpuset='4'/>
<vcpupin vcpu='5' cpuset='5'/>
<vcpupin vcpu='6' cpuset='6'/>
<vcpupin vcpu='7' cpuset='7'/>
<vcpupin vcpu='8' cpuset='8'/>
<vcpupin vcpu='9' cpuset='9'/>
<vcpupin vcpu='10' cpuset='10'/>
<vcpupin vcpu='11' cpuset='11'/>
<vcpupin vcpu='12' cpuset='12'/>
<vcpupin vcpu='13' cpuset='13'/>
<vcpupin vcpu='14' cpuset='14'/>
<vcpupin vcpu='15' cpuset='15'/>
<vcpupin vcpu='16' cpuset='16'/>
<vcpupin vcpu='17' cpuset='17'/>
<vcpupin vcpu='18' cpuset='18'/>
<vcpupin vcpu='19' cpuset='19'/>
<vcpupin vcpu='20' cpuset='20'/>
<vcpupin vcpu='21' cpuset='21'/>
<vcpupin vcpu='22' cpuset='22'/>
<vcpupin vcpu='23' cpuset='23'/>
<vcpupin vcpu='24' cpuset='24'/>
<vcpupin vcpu='25' cpuset='25'/>
<vcpupin vcpu='26' cpuset='26'/>
<vcpupin vcpu='27' cpuset='27'/>
<vcpupin vcpu='28' cpuset='28'/>
<vcpupin vcpu='29' cpuset='29'/>
<vcpupin vcpu='30' cpuset='30'/>
<vcpupin vcpu='31' cpuset='31'/>
</cputune>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu>
<topology sockets='4' cores='8' threads='1'/>
<numa>
<cell cpus='0-7' memory='2072576'/>
<cell cpus='8-15' memory='2072576'/>
<cell cpus='16-23' memory='2072576'/>
<cell cpus='24-31' memory='2072576'/>
</numa>
</cpu>
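As a quick sanity check on the XML above (my own addition, not part of the original report), the four NUMA cell sizes should sum exactly to the domain's total memory; a mismatch is one reason a NUMA layout can be ignored or clamped:

```python
# Values copied from the domain XML above.
cells_kib = [2072576] * 4   # memory= attribute of each <numa><cell>
total_kib = 8290304         # <memory unit='KiB'> of the domain

# The cells fully cover the guest memory, so the XML is internally consistent.
assert sum(cells_kib) == total_kib
print(sum(cells_kib))  # 8290304
```

So the configuration itself adds up; the problem is not a size mismatch in the XML.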
Then I shut down (forced off) the guest and restarted it. I then ran numactl -H
on the guest:
[root@vhost1 ~]# numactl -H
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 0 size: 8095 MB
node 0 free: 7504 MB
node distances:
node 0
0: 10
I would have included the /var/log/libvirt/libvirtd.log file, but I couldn't
find it on the host. Has the libvirtd.log file been moved in rhel7.0?
(In reply to George Beshers from comment #0)
> I would have included the /var/log/libvirt/libvirtd.log file, but I couldn't
> find it on the host. Has the libvirtd.log file been moved in rhel7.0?
libvirt logs through journald by default on RHEL-7. You can follow http://wiki.libvirt.org/page/DebugLogs to get the file back and even enable useful debug output.
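For reference, a minimal configuration sketch based on the DebugLogs wiki page linked above (the exact filter list is an assumption; adjust per the wiki): journald output can be viewed with journalctl -u libvirtd, and a file log with debug detail can be restored by adding the following to /etc/libvirt/libvirtd.conf and restarting libvirtd:

```ini
# /etc/libvirt/libvirtd.conf -- sketch per the libvirt DebugLogs wiki page
# Debug-level messages for the qemu driver and the library core:
log_filters="1:qemu 1:libvirt"
# Write them to the traditional log file location:
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```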
Comment 5, Martin Kletzander, 2014-01-17 07:28:39 UTC
Could you please check the command line of qemu that libvirt is running for this guest (either with `ps -ef | grep vhost1`, or it should also be in /var/lib/libvirt/qemu/vhost1.log) and, if possible, attach the daemon logs too (as described in comment #2)? Thanks.
Created attachment 851680 [details]
libvirtd log with debug turned on
The libvirtd.log file with debug turned on. This includes the output from starting the guest.
(In reply to Sherry Crandall from comment #6)
> [root@harp33-sys libvirt]# ps -elf | grep vhost1
> 6 S qemu 5427 1 6 80 0 - 9364404 poll_s 09:34 ? 00:01:09
> /usr/libexec/qemu-kvm -name vhost1 -S -machine
> pc-i440fx-rhel7.0.0,accel=kvm,usb=off -m 30720 -realtime mlock=off -smp
> 16,sockets=2,cores=8,threads=1 -numa node,nodeid=0,cpus=0-7,mem=15360 -numa
> node,nodeid=1,cpus=8-15,mem=15360
Command line looks correct. This is very likely bug 1048080. Can you please check the guest dmesg and see if it has a message similar to:
SRAT: PXMs only cover 3583MB of your 4095MB e820 RAM. Not used.
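To spell out why the command line in comment #6 looks correct (my own arithmetic, not part of the original comment): the -numa mem= values should sum to the -m size, and here they do, so the symptom is not a size mismatch in how libvirt translated the XML but rather SRAT coverage of the e820 map, as the warning above suggests:

```python
# Values copied from the qemu-kvm command line quoted in comment #6:
#   -m 30720 ... -numa node,nodeid=0,...,mem=15360 -numa node,nodeid=1,...,mem=15360
m_mib = 30720
node_mib = [15360, 15360]

# The per-node memory fully accounts for the -m total.
assert sum(node_mib) == m_mib
print(sum(node_mib))  # 30720
```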