Bug 919372
| Field | Value |
|---|---|
| Summary | [RFE] virsh schedinfo should support multiple --set parameters |
| Product | Red Hat Enterprise Linux 6 |
| Reporter | Wayne Sun <gsun> |
| Component | libvirt |
| Assignee | Martin Kletzander <mkletzan> |
| Status | CLOSED WONTFIX |
| QA Contact | Virtualization Bugs <virt-bugs> |
| Severity | medium |
| Priority | high |
| Version | 6.4 |
| CC | acathrow, cwei, dallan, dyuan, jmiao, mzhan |
| Target Milestone | rc |
| Keywords | FutureFeature, Upstream |
| Hardware | x86_64 |
| OS | Linux |
| Doc Type | Enhancement |
| | 919375 (view as bug list) |
| Last Closed | 2014-04-04 20:58:32 UTC |
| Type | Bug |
| Bug Blocks | 919375 |
Description (Wayne Sun, 2013-03-08 09:38:19 UTC):
Martin Kletzander (comment #1):

Could you try this with older versions to see where it started to happen?

Wayne Sun:

(In reply to comment #1)
> Could you try this with older versions to see where it started to happen?

Hi Martin,

As far as I tested, virDomainPinEmulator and virDomainGetEmulatorPinInfo were added in libvirt-0.10.0-0rc1.el6.x86_64.rpm.

I tried libvirt-0.10.0-0rc0.el6.x86_64.rpm:

# virsh schedinfo libvirt_test_api vcpu_quota=-1 vcpu_period=100000 emulator_period=1000000 --config
Scheduler      : posix
cpu_shares     : 0
vcpu_period    : 100000
vcpu_quota     : -1

Notice that 'emulator_period' is not supported yet and is simply ignored.

# virsh schedinfo libvirt_test_api vcpu_quota=-1 vcpu_period=100000 emulator_period=1000000 emulator_quota=-1 --config
error: unexpected data 'emulator_quota=-1'

Passing more than three parameters reports an error here, so the limit was already there before the emulatorpin functions were added.

I also tried libvirt-0.9.10-21.el6_3.8.x86_64:

# virsh schedinfo libvirt_test_api vcpu_quota=-1 vcpu_period=100000 emulator_period=1000000 --config
Scheduler      : posix
cpu_shares     : 0
vcpu_period    : 100000
vcpu_quota     : -1

[root@hp-dl585g7-01 ~]# virsh schedinfo libvirt_test_api vcpu_quota=-1 vcpu_period=100000 emulator_period=1000000 cpu_shares=0 --config
error: unexpected data 'cpu_shares=0'

It behaves the same, so the limitation also exists on RHEL 6.3; but since emulatorpin had not been added back then, restricting the set parameters to at most three was acceptable. This is as far as I can test; I hope it helps you track down the source. I'll try a RHEL 6.2 build later if I can get the rpms.

Martin Kletzander:

No need to test on 6.2. Thanks a lot for the report on other versions. I've found out the problem is deeper than that, for example:

# virsh schedinfo libvirt_test_api
Scheduler      : posix
cpu_shares     : 1023
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 0
emulator_quota : 0

# virsh schedinfo libvirt_test_api emulator_period=100000 cpu_shares=0 vcpu_period=120000
Scheduler      : posix
cpu_shares     : 1023
vcpu_period    : 100000
vcpu_quota     : -1
emulator_period: 100000
emulator_quota : 0

As you can see, cpu_shares and vcpu_period were not updated. A fix is on its way :)

Martin Kletzander:

More importantly, what I forgot to mention is that this should never have worked. According to 'virsh help schedinfo':

schedinfo <domain> [--set <string>] [--weight <number>] [--cap <number>] [--current] [--config] [--live]

and 'man virsh':

schedinfo [--set parameter=value] domain [[--config] [--live] | [--current]]
schedinfo [--weight number] [--cap number] domain

only one --set parameter can be given at a time. I'm therefore changing this to an RFE; feel free to discuss in case you disagree.

Martin Kletzander:

Patch proposed upstream:
https://www.redhat.com/archives/libvir-list/2013-March/msg00739.html

Martin Kletzander:

Moving to POST:

commit e7cd2844ca2b0d716a520667eff286713963e2ec
Author: Martin Kletzander <mkletzan>
Date:   Fri Mar 15 14:42:42 2013 +0100

    Allow multiple parameters for schedinfo

Closing note:

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
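For reference, a short usage sketch of what the upstream change enables: with the commit above (e7cd2844) applied, virsh schedinfo accepts several parameter=value pairs in one invocation instead of a single --set argument. The domain name and values below are illustrative only, and which parameters a given hypervisor accepts still depends on the driver and libvirt version:

# virsh schedinfo libvirt_test_api cpu_shares=2048 vcpu_period=100000 vcpu_quota=-1 --live
# virsh schedinfo libvirt_test_api emulator_period=1000000 emulator_quota=-1 --config

The intent is that all pairs given on one command line are applied together in a single call, which is the behavior this RFE requests.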