Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and given "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.

Bug 1054416

Summary: Guest numa xml definition doesn't work
Product: Red Hat Enterprise Linux 7
Reporter: George Beshers <gbeshers>
Component: libvirt
Assignee: Martin Kletzander <mkletzan>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Docs Contact:
Priority: high
Version: 7.0
CC: acathrow, ctatman, dyuan, ehabkost, gbeshers, gsun, hhuang, honzhang, jdenemar, jdonohue, mzhan, nzimmer, rja, scrandall, tee, virt-maint
Target Milestone: rc
Target Release: 7.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-17 19:48:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 722241    
Attachments:
  sosreport (Flags: none)
  libvirtd log with debug turned on (Flags: none)

Description George Beshers 2014-01-16 19:01:35 UTC
Description of problem:

  When I try to define the numa configuration of a guest on a system running
  rhel7.0, the changes don't seem to be reflected in the guest.  

  This is a regression from rhel6.5.

  For example, if I edit the guest's XML using virsh edit to include the <numa>
  directives defining the guest as 4 NUMA nodes, then power off the guest and
  restart it, a numactl -H command executed on the guest still shows the
  guest as 1 NUMA node.

  My OS is RHEL7.0:  3.10.0-60.el7.x86_64
  libvirt is 1.1.1-18.el7
  qemu-kvm is 1.5.3-34.el7
  virt-manager 0.10.0-9.el7

  [root@harp33-sys ~]# virsh nodeinfo
  CPU model:           x86_64
  CPU(s):              64
  CPU frequency:       2499 MHz
  CPU socket(s):       1
  Core(s) per socket:  8
  Thread(s) per core:  1
  NUMA cell(s):        8
  Memory size:         189209916 KiB
  [root@harp33-sys ~]# virsh dominfo vhost1
  Id:             2
  Name:           vhost1
  UUID:           503397c0-58cd-462a-b2a2-52bb7b8225ba
  OS Type:        hvm
  State:          running
  CPU(s):         32
  CPU time:       117.0s
  Max memory:     8290304 KiB
  Used memory:    8290304 KiB
  Persistent:     yes
  Autostart:      disable
  Managed save:   no
  Security model: none
  Security DOI:   0

  I updated the guest's XML file to include the following:

  <domain type='kvm' id='2'>
    <name>vhost1</name>
    <uuid>503397c0-58cd-462a-b2a2-52bb7b8225ba</uuid>
    <memory unit='KiB'>8290304</memory>
    <currentMemory unit='KiB'>8290304</currentMemory>
    <vcpu placement='static'>32</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='3'/>
      <vcpupin vcpu='4' cpuset='4'/>
      <vcpupin vcpu='5' cpuset='5'/>
      <vcpupin vcpu='6' cpuset='6'/>
      <vcpupin vcpu='7' cpuset='7'/>
      <vcpupin vcpu='8' cpuset='8'/>
      <vcpupin vcpu='9' cpuset='9'/>
      <vcpupin vcpu='10' cpuset='10'/>
      <vcpupin vcpu='11' cpuset='11'/>
      <vcpupin vcpu='12' cpuset='12'/>
      <vcpupin vcpu='13' cpuset='13'/>
      <vcpupin vcpu='14' cpuset='14'/>
      <vcpupin vcpu='15' cpuset='15'/>
      <vcpupin vcpu='16' cpuset='16'/>
      <vcpupin vcpu='17' cpuset='17'/>
      <vcpupin vcpu='18' cpuset='18'/>
      <vcpupin vcpu='19' cpuset='19'/>
      <vcpupin vcpu='20' cpuset='20'/>
      <vcpupin vcpu='21' cpuset='21'/>
      <vcpupin vcpu='22' cpuset='22'/>
      <vcpupin vcpu='23' cpuset='23'/>
      <vcpupin vcpu='24' cpuset='24'/>
      <vcpupin vcpu='25' cpuset='25'/>
      <vcpupin vcpu='26' cpuset='26'/>
      <vcpupin vcpu='27' cpuset='27'/>
      <vcpupin vcpu='28' cpuset='28'/>
      <vcpupin vcpu='29' cpuset='29'/>
      <vcpupin vcpu='30' cpuset='30'/>
      <vcpupin vcpu='31' cpuset='31'/>
    </cputune>
    <resource>
      <partition>/machine</partition>
    </resource>
    <os>
      <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>
      <boot dev='hd'/>
    </os>
    <features>
      <acpi/>
      <apic/>
      <pae/>
    </features>
    <cpu>
      <topology sockets='4' cores='8' threads='1'/>
      <numa>
	<cell cpus='0-7' memory='2072576'/>
	<cell cpus='8-15' memory='2072576'/>
	<cell cpus='16-23' memory='2072576'/>
	<cell cpus='24-31' memory='2072576'/>
      </numa>
    </cpu>
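
  The cell sizes in the <numa> block follow from splitting the guest's total
  memory evenly across the four cells; as a quick sanity check on the
  arithmetic (numbers taken from the XML above):

```shell
# Total guest memory (KiB) from <memory>, split across 4 NUMA cells.
total_kib=8290304
cells=4
echo $((total_kib / cells))   # 2072576 KiB, matching each <cell memory='2072576'/>
echo $((total_kib % cells))   # 0, so the split is exact with no remainder
```

  So the XML itself is internally consistent: the four cells account for all of
  the guest's memory.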

  Then I shut down (forced off) the guest and restarted it, and ran a
  numactl -H command on the guest:

  [root@vhost1 ~]# numactl -H
  available: 1 nodes (0)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
  node 0 size: 8095 MB
  node 0 free: 7504 MB
  node distances:
  node   0 
    0:  10 

  I would have included the /var/log/libvirt/libvirtd.log file, but I couldn't
  find it on the host.  Has the libvirtd.log file been moved in rhel7.0?



Comment 2 Jiri Denemark 2014-01-16 21:09:59 UTC
(In reply to George Beshers from comment #0)
> I would have included the /var/log/libvirt/libvirtd.log file, but I couldn't
> find it on the host.  Has the libvirtd.log file been moved in rhel7.0?

libvirt logs through journald by default on RHEL-7. You can follow http://wiki.libvirt.org/page/DebugLogs to re-enable file logging and make it contain useful debug output.
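
For reference, re-enabling file logging amounts to a couple of settings in /etc/libvirt/libvirtd.conf (a sketch based on the wiki page above; the exact filter levels are an assumption, adjust to taste, and restart libvirtd afterwards):

```
# /etc/libvirt/libvirtd.conf
log_filters="1:libvirt 1:qemu 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```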

Comment 3 George Beshers 2014-01-16 23:50:31 UTC
Created attachment 851346 [details]
sosreport

Comment 4 Wayne Sun 2014-01-17 05:36:11 UTC
bug 974374 is tracking for this and moved to 7.1
https://bugzilla.redhat.com/show_bug.cgi?id=974374

Comment 5 Martin Kletzander 2014-01-17 07:28:39 UTC
Could you please check the command line of the qemu process that libvirt is running for this guest (either via `ps -ef | grep vhost1`; it should also appear in /var/lib/libvirt/qemu/vhost1.log) and, if possible, attach the daemon logs too (as described in comment #2)? Thanks

Comment 6 Sherry Crandall 2014-01-17 15:54:18 UTC
Here is the qemu command line.  I will also attach the libvirtd log file with debug turned on.

[root@harp33-sys libvirt]# ps -elf | grep vhost1
6 S qemu      5427     1  6  80   0 - 9364404 poll_s 09:34 ?      00:01:09 /usr/libexec/qemu-kvm -name vhost1 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -m 30720 -realtime mlock=off -smp 16,sockets=2,cores=8,threads=1 -numa node,nodeid=0,cpus=0-7,mem=15360 -numa node,nodeid=1,cpus=8-15,mem=15360 -uuid e69c02b4-5493-427b-afff-ecf778f867b6 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vhost1.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/var/lib/libvirt/images/vhost1.img,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:02:60:71,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev spicevmc,id=charchannel0,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0 -device usb-tablet,id=input0 -spice port=5900,addr=127.0.0.1,disable-ticketing,seamless-migration=on -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
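
Note that this command line reflects a different guest configuration (2 NUMA nodes, 16 vCPUs, -m 30720) than the 4-node XML quoted in the description; either way, the -numa mem= values should sum to the -m total, which they do here:

```shell
# Two -numa nodes at mem=15360 (MiB) each should cover the -m 30720 total.
node_mem=15360
nodes=2
echo $((node_mem * nodes))   # 30720, matching -m 30720
```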

Comment 7 Sherry Crandall 2014-01-17 15:57:11 UTC
Created attachment 851680 [details]
libvirtd log with debug turned on

The libvirtd.log file with debug turned on.  This includes the output from starting the guest.

Comment 8 Eduardo Habkost 2014-01-17 19:27:11 UTC
(In reply to Sherry Crandall from comment #6)
> [root@harp33-sys libvirt]# ps -elf | grep vhost1
> 6 S qemu      5427     1  6  80   0 - 9364404 poll_s 09:34 ?      00:01:09
> /usr/libexec/qemu-kvm -name vhost1 -S -machine
> pc-i440fx-rhel7.0.0,accel=kvm,usb=off -m 30720 -realtime mlock=off -smp
> 16,sockets=2,cores=8,threads=1 -numa node,nodeid=0,cpus=0-7,mem=15360 -numa
> node,nodeid=1,cpus=8-15,mem=15360 


Command-line looks correct. This is very likely to be bug 1048080. Can you please check the guest dmesg and see if it has a message similar to:

    SRAT: PXMs only cover 3583MB of your 4095MB e820 RAM. Not used.

Comment 9 Sherry Crandall 2014-01-17 19:41:20 UTC
Yes, I copied this from the guest's dmesg output:

SRAT: PXMs only cover 30207MB of your 30719MB e820 RAM. Not used.
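
That message means the SRAT (NUMA) tables describe less memory than the e820 map reports, so the guest kernel discards the NUMA topology entirely, which is the behavior tracked as bug 1048080 and explains why numactl -H shows a single node. The gap reported here is:

```shell
# Gap between e820 RAM and the memory covered by SRAT PXMs (MB),
# taken from the dmesg line above.
e820_mb=30719
srat_mb=30207
echo $((e820_mb - srat_mb))   # 512 MB not covered, so the kernel drops SRAT
```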

Comment 10 Eduardo Habkost 2014-01-17 19:48:05 UTC

*** This bug has been marked as a duplicate of bug 1048080 ***