Bug 584684

Summary: how to "unpin" a vcpu using the xm command is undocumented
Product: Red Hat Enterprise Linux 5
Component: xen
Version: 5.4
Hardware: All
OS: Linux
Status: CLOSED NOTABUG
Severity: medium
Priority: low
Target Milestone: rc
Reporter: Paolo Bonzini <pbonzini>
Assignee: Michal Novotny <minovotn>
QA Contact: Virtualization Bugs <virt-bugs>
CC: areis, drjones, xen-maint
Doc Type: Bug Fix
Last Closed: 2010-04-22 14:38:53 UTC
Bug Blocks: 514500
Attachments:
  Support of 'all' keyword to unpin a vcpu using the xm command (flags: none)

Description Paolo Bonzini 2010-04-22 07:44:02 UTC
Description of problem:
Upstream xend supports unpinning a vcpu with something that would be "xm vcpu-pin DOM VCPU all".

It also supports passing "all" instead of the vcpu number.  Our xm supports that, but it's undocumented.

Xend already has everything that's needed, so this is entirely located in the xm command line.

Version-Release number of selected component (if applicable):
3.0.3-105.el5

Steps to Reproduce:
1. xm vcpu-pin DOMAIN 0 0
2. xm vcpu-list
3. xm vcpu-pin DOMAIN all all
4. xm vcpu-list
  
Actual results:
Step 3 fails, step 4 shows no change.

Expected results:
Step 3 should pass, and the second vcpu-list should show that the effect of step 1 has been undone.
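
For illustration (hypothetical guest name and values), the vcpu-list output after step 4 would be expected to show the affinity back at "any cpu", roughly:

  Name        ID  VCPU   CPU State   Time(s) CPU Affinity
  DOMAIN       1     0     0   -b-       1.2 any cpu

whereas after step 1 the CPU Affinity column would read "0".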

Additional info:
Upstream c/s 14364 and 15264.

Comment 1 Andrew Jones 2010-04-22 08:02:09 UTC
I've changed the summary of this bug to show that this is an issue with the xm command. The libvirt-supported way to "unpin" is to re-pin a vcpu to all cpus with a command like this:

virsh vcpupin <dom> <vcpu> 0,1,2,3,...
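
For example, on a hypothetical host with 4 pcpus, re-pinning vcpu 0 to every pcpu would look roughly like:

virsh vcpupin <dom> 0 0,1,2,3

i.e. the cpulist simply enumerates every physical cpu on the host.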

On a side note, an 'all' keyword would be nice in libvirt as well...

Comment 2 Michal Novotny 2010-04-22 09:15:29 UTC
Created attachment 408273 [details]
Support of 'all' keyword to unpin a vcpu using the xm command

I did testing with a RHEL-5 x86_64 PV guest with one VCPU, with the following results:

1. xm create rhel5-64pv
2. xm vcpu-list rhel5-64pv
   - domain was showing affinity 'any cpu'
3. xm vcpu-pin rhel5-64pv 0 0
4. xm vcpu-list rhel5-64pv
   - domain was showing affinity '0'
5. xm vcpu-pin rhel5-64pv all all
6. xm vcpu-list rhel5-64pv
   - domain was showing affinity 'any cpu'

So, since the affinity output was the same in step 2 (before pinning) and step 6 (after unpinning), as tested on my x86_64 dom0, it's working fine.

As correctly stated, it's basically a backport of upstream c/s 14364 and 15264, but most of the code from 15264 was already in our codebase, which is why it was working fine with the libvirt daemon.

Michal

Comment 3 Andrew Jones 2010-04-22 12:31:13 UTC
It's actually not impossible with the xm command; it's just not documented. To unpin with xm you can do "xm vcpu-pin <dom> all 0-<(nr_cpus-1)>", or, if you don't want to check how many cpus you have, just use some large number, e.g. "xm vcpu-pin <dom> all 0-255".
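
For example (untested sketch, assuming a bash shell in dom0 and that "xm info" reports the host cpu count in an nr_cpus line):

  nr_cpus=$(xm info | awk '/^nr_cpus/ {print $3}')
  xm vcpu-pin <dom> all 0-$((nr_cpus - 1))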

I recommend we close this as NOTABUG, and then we can consider improving the interface and documentation for libvirt's virsh under a different bug.

Comment 4 Paolo Bonzini 2010-04-22 12:46:17 UTC
I changed the subject, but I still think it's worth fixing this.  We can also clone it for libvirt.

Comment 5 Michal Novotny 2010-04-22 13:49:30 UTC
(In reply to comment #4)
> I changed the subject, but I still think it's worth fixing this.  We can also
> clone it for libvirt.    

Actually, since I have already done the backport for this one, I think it's good to have. The patch is done and has already been sent to the list, so I agree with Paolo; it's no big undertaking to apply it since it's already done and tested. Basically, since the Xen HV supports a maximum of 64 VCPUs, with this patch applied "xm vcpu-pin <dom> all all" effectively does "xm vcpu-pin <dom> all 0-63", which is no problem here since, as far as I know and as already stated in this comment, the Xen HV supports only 64 VCPUs.

Michal

Comment 6 Andrew Jones 2010-04-22 14:06:17 UTC
This patch addresses pcpus, not vcpus. So having 'all' hard coded to 64 is incorrect since we support up to 256 pcpus on x86_64.  To illustrate, if you did

"xm vcpu-pin <dom> all all" 

then with this patch that translates to

"xm vcpu-pin <dom> all 0-63"

If your system had, for example, 128 pcpus and you ran this "all all" command on your VMs under the assumption that afterwards no vcpu would be affiliated to any particular cpu, then your assumption would be wrong.  You would in fact be only using the first 64 pcpus of your system for those VMs.  Therefore the patch is wrong as it stands, and I've already NACKed it as such on the list.
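
To make that concrete with hypothetical numbers, on a 128-pcpu host:

  xm vcpu-pin <dom> all all     # with the patch, expands to "all 0-63"
  xm vcpu-pin <dom> all 0-127   # what an unpin actually needs on that host

so after the "all all" command the guest's vcpus could still only run on pcpus 0-63.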

Comment 7 Michal Novotny 2010-04-22 14:38:53 UTC
(In reply to comment #6)
> This patch addresses pcpus, not vcpus. So having 'all' hard coded to 64 is
> incorrect since we support up to 256 pcpus on x86_64.  To illustrate, if you
> did
> 
> "xm vcpu-pin <dom> all all" 
> 
> then with this patch that translates to
> 
> "xm vcpu-pin <dom> all 0-63"
> 
> If your system had, for example, 128 pcpus and you ran this "all all" command on
> your VMs under the assumption that afterwards no vcpu would be affiliated to
> any particular cpu, then your assumption would be wrong.  You would in fact be
> only using the first 64 pcpus of your system for those VMs.  Therefore the
> patch is wrong as it stands, and I've already NACKed it as such on the list.    

Right, ok. In fact it was a feature request, but we support up to 256 PCPUs on x86_64 (I was thinking we support just 64 PCPUs), so the patch is wrong as it stands. Closing as NOTABUG.

Michal