Bug 1007698 - The cpu_shares value of domain xml should be consistent with return value of schedinfo.
Summary: The cpu_shares value of domain xml should be consistent with return value of schedinfo.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 998431
Blocks:
 
Reported: 2013-09-13 07:18 UTC by CongDong
Modified: 2016-04-26 16:22 UTC
CC List: 9 users

Fixed In Version: libvirt-1.2.7-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 998431
Environment:
Last Closed: 2015-03-05 07:24:46 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2015:0323
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Low: libvirt security, bug fix, and enhancement update
Last Updated: 2015-03-05 12:10:54 UTC

Comment 2 zhengqin 2013-11-19 09:29:14 UTC
Can reproduce on RHEL7:
# rpm -qa libvirt qemu-kvm
libvirt-1.1.1-4.el7.x86_64
qemu-kvm-1.5.3-3.el7.x86_64
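
On the affected build the problem can be reproduced by setting an out-of-range value and comparing the two views (a minimal reproduction sketch, assuming a running domain named r7 as in the later verification comments; exact pre-fix values are not shown here):

# virsh schedinfo r7 --set cpu_shares=-1 --live | grep cpu_shares
# virsh dumpxml r7 | grep shares     <=== before the fix this value did not have to match the schedinfo output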

Comment 3 Martin Kletzander 2014-04-01 09:36:37 UTC
Fixed upstream with v1.2.0-76-g231656b -- v1.2.0-79-gea130e3:

commit ea130e3bf666397a05a674ffcf15b9ab170b2255
Author: Martin Kletzander <mkletzan>
Date:   Mon Dec 9 11:32:48 2013 +0100

    conf: don't format memtune with unlimited values
    
commit 8d7c668e64b5bcd2d08aa5057c9aff43d1f73dfd
Author: Martin Kletzander <mkletzan>
Date:   Wed Dec 4 18:59:52 2013 +0100

    qemu: Fix minor inconsistency in error message

commit 0c2fdd7b14cbfc6ced77ed2a24f01f07a8a2f657
Author: Martin Kletzander <mkletzan>
Date:   Wed Dec 4 18:56:02 2013 +0100

    qemu: Report VIR_DOMAIN_MEMORY_PARAM_UNLIMITED properly
    
commit 231656bbeb9e4d3bedc44362784c35eee21cf0f4
Author: Martin Kletzander <mkletzan>
Date:   Wed Dec 4 16:54:29 2013 +0100

    cgroups: Redefine what "unlimited" means wrt memory limits

Comment 7 Pei Zhang 2014-11-19 10:21:10 UTC
1> The patches in comment 3 seem to be unrelated to this bug.
I found the following patch, but I am not sure whether it is the correct one:
https://www.redhat.com/archives/libvir-list/2014-March/msg00921.html

2> For this bug, verified it via the following steps according to the description:

version:

kernel-3.10.0-203.el7.x86_64
qemu-kvm-rhev-2.1.2-8.el7.x86_64
libvirt-1.2.8-6.el7.x86_64

Steps to verify:

1. negative value

# virsh schedinfo r7
Scheduler      : posix
cpu_shares     : 1024
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

1.1 
set value as -1 :
# virsh schedinfo r7 --set cpu_shares=-1 --live
Scheduler      : posix
cpu_shares     : 262144
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 5
  <cputune>
    <shares>262144</shares>    <=== the value is 262144, same as the return of schedinfo
  </cputune>

1.2 
set value as -100 :
# virsh schedinfo r7 --set cpu_shares=-100 --live
Scheduler      : posix
cpu_shares     : 262144
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 5
  <cputune>
    <shares>262144</shares>   <=== the value is 262144, same as the return of schedinfo
  </cputune>
  <resource>

1.3 
Set a large value; larger values are capped at the maximum:

# virsh schedinfo r7 --set cpu_shares=18446744073709551615 --live
Scheduler      : posix
cpu_shares     : 262144
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>262144</shares>
  </cputune>


2.
For set values 0 and 1, the minimal value of 2 will be added to the domain xml.

# virsh schedinfo r7 --set cpu_shares=0 --live
Scheduler      : posix
cpu_shares     : 2
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>2</shares>    <===== the minimal value of 2 was added, same as the return of schedinfo
  </cputune>
  
# virsh schedinfo r7 --set cpu_shares=1 --live
Scheduler      : posix
cpu_shares     : 2
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>2</shares>    <===== the minimal value of 2 was added, same as the return of schedinfo
  </cputune>

3.
Boundary value; larger values are capped at the maximum:

# virsh schedinfo r7 --set cpu_shares=262145 --live
Scheduler      : posix
cpu_shares     : 262144
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>262144</shares>  <=== the value is 262144, same as the return of schedinfo
  </cputune>


4.
set value as -1 with --config
# virsh schedinfo r7 --set cpu_shares=-1 --config
Scheduler      : posix
cpu_shares     : 18446744073709551615
vcpu_period    : 0
vcpu_quota     : 0
emulator_period: 0
emulator_quota : 0

Check in the domain xml; the value will not be changed:
# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>2</shares>
  </cputune>

Restart the domain and check the value, which will be capped at the maximum:
# virsh destroy r7 ; virsh start r7
Domain r7 destroyed

Domain r7 started

# virsh schedinfo r7
Scheduler      : posix
cpu_shares     : 262144   
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 5
  <cputune>
    <shares>262144</shares>    <===== the value was capped at the maximum
  </cputune>


5. For a shut-off domain:

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     r7                             shut off

5.1
set value -1 with --config 
# virsh schedinfo r7 --set cpu_shares=-1 --config
Scheduler      : posix
cpu_shares     : 18446744073709551615
vcpu_period    : 0
vcpu_quota     : 0
emulator_period: 0
emulator_quota : 0
# virsh dumpxml r7 | grep cputune -A 5
  <cputune>
    <shares>18446744073709551615</shares>
  </cputune>

Start the domain; the value is capped at the maximum:

# virsh start r7
Domain r7 started

# virsh dumpxml r7 | grep cputune -A 5
  <cputune>
    <shares>262144</shares>
  </cputune>

# virsh schedinfo r7
Scheduler      : posix
cpu_shares     : 262144
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1


5.2 
For a shut-off domain, set value 0:
# virsh schedinfo r7 --set cpu_shares=0 --config
Scheduler      : posix
cpu_shares     : 0
vcpu_period    : 0
vcpu_quota     : 0
emulator_period: 0
emulator_quota : 0

Start the domain to check; the minimal value of 2 will be added to the domain:
# virsh start r7
Domain r7 started

# virsh schedinfo r7
Scheduler      : posix
cpu_shares     : 2    
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : -1

# virsh dumpxml r7 | grep cputune -A 3
  <cputune>
    <shares>2</shares>
  </cputune>

According to the bug description, the cpu_shares value in the domain xml is now consistent with the return value of schedinfo.

3> Note: for the --config option, a small question; I am not sure whether this is the expected result.

# virsh schedinfo r7 --set cpu_shares=-1 --config
Scheduler      : posix
cpu_shares     : 18446744073709551615   <==== returns a large value
vcpu_period    : 0
vcpu_quota     : 0
emulator_period: 0
emulator_quota : 0

But it has no effect on the domain; the value is capped at the maximum (262144) after restarting the domain. Details are in steps 4 and 5 above.
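
As a cross-check of the --config behaviour (a minimal sketch; the domain name r7 and the values follow the steps above), the persistent and live views can be compared directly:

# virsh schedinfo r7 --config | grep cpu_shares     <=== value as stored in the persistent XML
# virsh dumpxml r7 --inactive | grep shares         <=== same source, the inactive XML
# virsh schedinfo r7 --live | grep cpu_shares       <=== value as clamped by the kernel for the running domain
# virsh dumpxml r7 | grep shares                    <=== live XML, consistent with schedinfo after the fix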

Comment 8 Martin Kletzander 2014-11-20 10:33:33 UTC
That is actually the desired output.  For values 0 and 1 the output is 2 because that's what you'll get from cgroups when you set 0 or 1 there.  This was intentionally done so that the output of virsh matches the data the kernel operates with.  Different kernels might have different boundaries, so we cannot state or enforce any real limits.

However, lower or higher values are kept in the XML as specified.  So if you get the info from a running domain (dumpxml without '--inactive', or schedinfo while running) you'll get the value capped according to the kernel.  If, however, you get the info when the domain is not running (or use '--inactive' for dumpxml or '--config' for schedinfo), the value is taken right from the XML and shows the exact data you specified there.

And -1 wraps to the maximum we can use.  All this is in the manual already and makes sense to me.  If there is a bit you want adjusted, I can have a look at it, but from my point of view it is behaving as intended.
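
For reference, the clamping described above comes from the cgroup cpu controller itself, so the value libvirt reports can be cross-checked on the host (a minimal sketch assuming cgroup v1 as used on RHEL 7; the machine.slice path is only a guess for a domain named r7, locate the real one with systemd-cgls):

# virsh schedinfo r7 --set cpu_shares=0 --live | grep cpu_shares
cpu_shares     : 2
# cat /sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-qemu*r7*/cpu.shares
2     <=== the kernel stores 2; schedinfo and the live XML report the same value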

Comment 9 Pei Zhang 2014-11-21 03:43:56 UTC
Thanks for the info.
According to comment 7 and comment 8, moving the bug to VERIFIED.

Comment 11 errata-xmlrpc 2015-03-05 07:24:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

