Bug 584684
Summary:          how to "unpin" a vcpu using the xm command is undocumented
Product:          Red Hat Enterprise Linux 5
Component:        xen
Version:          5.4
Status:           CLOSED NOTABUG
Severity:         medium
Priority:         low
Reporter:         Paolo Bonzini <pbonzini>
Assignee:         Michal Novotny <minovotn>
QA Contact:       Virtualization Bugs <virt-bugs>
CC:               areis, drjones, xen-maint
Target Milestone: rc
Hardware:         All
OS:               Linux
Doc Type:         Bug Fix
Last Closed:      2010-04-22 14:38:53 UTC
Bug Blocks:       514500
Description (Paolo Bonzini, 2010-04-22 07:44:02 UTC)
I've changed the summary of this bug to show that this is an issue with the xm command. The libvirt-supported way to "unpin" is to re-pin a vcpu to all cpus with a command like this:

    virsh vcpupin <dom> 0,1,2,3,...

On a side note, an 'all' keyword would be nice in libvirt as well...

---

Created attachment 408273 [details]
Support of 'all' keyword to unpin a vcpu using the xm command
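The re-pin-to-all-cpus workaround Paolo describes can be sketched as follows. The domain name, vcpu number, and cpu count here are illustrative assumptions, not values from this bug:

```shell
# Build an explicit "all cpus" list for virsh vcpupin, since libvirt
# at this point had no 'all' keyword. A 4-pcpu host is assumed.
ncpus=4
cpulist=$(seq -s, 0 $((ncpus - 1)))
echo "$cpulist"   # prints 0,1,2,3
# virsh vcpupin mydomain 0 "$cpulist"   # requires a running libvirt host
```

On a real host, `ncpus` would come from something like `getconf _NPROCESSORS_ONLN` rather than being hard-coded.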
I did testing with a RHEL-5 x86_64 PV guest with one VCPU, with the following results:
1. xm create rhel5-64pv
2. xm vcpu-list rhel5-64pv
- domain was showing affinity 'any cpu'
3. xm vcpu-pin rhel5-64pv 0 0
4. xm vcpu-list rhel5-64pv
- domain was showing affinity '0'
5. xm vcpu-pin rhel5-64pv all all
6. xm vcpu-list rhel5-64pv
- domain was showing affinity 'any cpu'
So, since the affinity output was the same in step 2 (before pinning) and step 6 (after unpinning), as tested on my x86_64 dom0, it's working fine.
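The before/after comparison in steps 2 and 6 can be scripted by extracting the "CPU Affinity" column from `xm vcpu-list` output. The sample row below is an assumption modeled on typical xm output, not captured from the tester's machine:

```shell
# Extract the affinity column (fields 7 onward, since "any cpu" is two
# words) from a sample `xm vcpu-list` row:
# Name          ID  VCPU  CPU  State  Time(s)  CPU Affinity
line="rhel5-64pv  1     0    3   -b-     12.3   any cpu"
affinity=$(echo "$line" | awk '{out=$7; for (i=8; i<=NF; i++) out=out" "$i; print out}')
echo "$affinity"   # prints: any cpu
# Real usage would pipe `xm vcpu-list rhel5-64pv` through the same awk.
```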
As correctly stated, it's basically a backport of upstream c/s 14364 and 15264, but most of the code from 15264 was already in our codebase, which is why it was working fine with the libvirt daemon.
Michal
---

It's actually not impossible with the xm command, it's just not documented. To unpin with xm you can do "xm vcpu-pin <dom> all 0-<(nr_cpus-1)>", or even just use some large number if you don't want to check how many cpus you have, e.g. "xm vcpu-pin <dom> all 0-255". I recommend we close this as NOTABUG, and then we can consider improving the interface and documentation for libvirt's virsh under a different bug.

---

I changed the subject, but I still think it's worth fixing this. We can also clone it for libvirt.

---

(In reply to comment #4)
> I changed the subject, but I still think it's worth fixing this. We can also
> clone it for libvirt.

Actually, since I did the backport for this one already, I think it's good to have. The patch is done and has already been sent to the list, so I agree with Paolo: it's no big undertaking to apply it, since it's already done and tested. With this patch applied, "xm vcpu-pin <dom> all all" basically does "xm vcpu-pin <dom> all 0-63", which is no problem here since, as far as I know, Xen HV supports a maximum of 64 VCPUs.

Michal

---

This patch addresses pcpus, not vcpus. So having 'all' hard-coded to 64 is incorrect, since we support up to 256 pcpus on x86_64. To illustrate, if you did

    xm vcpu-pin <dom> all all

then with this patch that translates to

    xm vcpu-pin <dom> all 0-63

If your system had, for example, 128 pcpus and you ran this "all all" command on your VMs under the assumption that afterwards no vcpu would be affiliated to any particular cpu, then your assumption would be wrong. You would in fact be using only the first 64 pcpus of your system for those VMs. Therefore the patch is wrong as it stands, and I've already NACKed it as such on the list.

---

(In reply to comment #6)
> This patch addresses pcpus, not vcpus. So having 'all' hard coded to 64 is
> incorrect since we support up to 256 pcpus on x86_64.
> [...]

Right, ok. It was in fact a feature request, but we support up to 256 PCPUs on x86_64 (I was thinking we supported just 64). Closing as NOTABUG.

Michal
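The arithmetic behind the NACK above can be sketched directly. The cpu counts are illustrative assumptions (64 is the hard-coded value in the rejected patch; 128 stands in for a larger x86_64 box):

```shell
# With 'all' hard-coded to 64 pcpus, the patched xm would pin to 0-63
# even on a machine with more pcpus, silently idling the rest.
hardcoded=64
actual=128
echo "patched 'all' pins to: 0-$((hardcoded - 1))"   # 0-63
echo "range actually needed: 0-$((actual - 1))"      # 0-127
```

The safe, undocumented workaround is to compute the range from the real pcpu count, e.g. `xm vcpu-pin <dom> all "0-$(($(getconf _NPROCESSORS_ONLN) - 1))"`, rather than relying on a baked-in maximum.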