Bug 653530
Summary: | virsh freecell does not offer a way to list free memory per node
---|---
Product: | Red Hat Enterprise Linux 6
Reporter: | Jes Sorensen <Jes.Sorensen>
Component: | libvirt
Assignee: | Michal Privoznik <mprivozn>
Status: | CLOSED ERRATA
QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | medium
Priority: | low
Version: | 6.0
CC: | ccui, dallan, dyuan, eblake, jdenemar, llim, mzhan, syeghiay, veillard, xen-maint
Target Milestone: | rc
Hardware: | Unspecified
OS: | Unspecified
Fixed In Version: | libvirt-0.8.7-8.el6
Doc Type: | Bug Fix
Doc Text: | An "--all" option has been added to the "virsh freecell" command to allow the command to iterate across all nodes instead of forcing users to run the command manually on each node. "virsh freecell --all" will list the free memory on all available nodes.
Last Closed: | 2011-05-19 13:24:09 UTC
Bug Blocks: | 693963
Description
Jes Sorensen
2010-11-15 16:54:46 UTC
The upstream community has the final say, but I favor the approach of adding a --all flag so that we don't change the existing default behavior.

Pushed into upstream:

commit 30e21374ea30b5b70fdc0a101e3002a8c78498c9
Author: Michal Privoznik <mprivozn>
Date:   Fri Jan 28 19:21:57 2011 +0100

    virsh: added --all flag to freecell command

    This will iterate over all NUMA nodes, showing free memory for each
    and sum at the end. Existing default behavior is not changed.

v0.8.7-148-g30e2137

Moving into POST:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2011-February/msg00408.html

Test environment: libvirt-0.8.7-6.el6

Steps:

1. Check the help for "freecell". The "--all" option has been added.

# virsh help freecell
  NAME
    freecell - NUMA free memory

  SYNOPSIS
    freecell [--cellno <number>] [--all]

  DESCRIPTION
    display available free memory for the NUMA cell.

  OPTIONS
    --cellno <number>  NUMA cell number
    --all              show free memory for all NUMA cells

2. There are 3 cells in the testing machine.

# virsh capabilities
............
    <topology>
      <cells num='3'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/>
            <cpu id='4'/>
            <cpu id='8'/>
            <cpu id='12'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='1'/>
            <cpu id='5'/>
            <cpu id='9'/>
            <cpu id='13'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='8'>
            <cpu id='2'/>
            <cpu id='3'/>
            <cpu id='6'/>
            <cpu id='7'/>
            <cpu id='10'/>
            <cpu id='11'/>
            <cpu id='14'/>
            <cpu id='15'/>
          </cpus>
        </cell>
      </cells>
    </topology>
...........

3. Check with a specified cell number.

# virsh freecell --cellno 0
0: 32164532 kB
# virsh freecell --cellno 1
1: 32372352 kB
# virsh freecell --cellno 2
2: 32387296 kB

4. Check the "--all" option.

# virsh freecell --all
0: 32164556 kB
--------------------
Total: 32164556 kB

The question in step 4: only cell 0 is listed. Is that right?

No, it is not. It seems like --all is getting the wrong number of NUMA cells. Which driver are you using? qemu?

Packages info:

qemu:
qemu-img-0.12.1.2-2.144.el6.x86_64
gpxe-roms-qemu-0.9.7-6.3.el6.noarch
qemu-kvm-0.12.1.2-2.144.el6.x86_64

libvirt:
libvirt-0.8.7-6.el6.x86_64
libvirt-client-0.8.7-6.el6.x86_64
libvirt-python-0.8.7-6.el6.x86_64
libvirt-devel-0.8.7-6.el6.x86_64

kernel:
kernel-2.6.32-113.el6.x86_64

Hi Michal, any update on this bug? I rechecked it with qemu-kvm-0.12.1.2-2.145.el6.x86_64 and libvirt-0.8.7-6.el6.x86_64. The result of "--all" is the same as in comment 7.

# virsh freecell --all
0: 32162144 kB
--------------------
Total: 32162144 kB

Rechecked on the following environment, following the steps in comment 7; it passed.

Test environment:
libvirt-0.8.7-8.el6
qemu-kvm-0.12.1.2-2.147.el6
kernel-2.6.32-117.el6

# virsh freecell --all
0: 32015228 kB
1: 32409956 kB
2: 32387124 kB
--------------------
Total: 96812308 kB
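For reference, the same per-cell listing can be reproduced programmatically through the libvirt Python binding (libvirt-python is among the packages listed above). This is a minimal sketch, assuming a local qemu:///system connection; it mirrors what "virsh freecell --all" prints rather than reproducing virsh's own C implementation:

import libvirt  # from the libvirt-python package listed in this report

# Assumption: a local QEMU/KVM host; use a different URI for other drivers.
conn = libvirt.open("qemu:///system")

# getInfo() wraps virNodeGetInfo; index 4 of the returned list is the
# number of NUMA cells. The broken output earlier in this report is
# consistent with virsh seeing a wrong cell count at this step.
ncells = conn.getInfo()[4]

# getCellsFreeMemory(startCell, maxCells) wraps virNodeGetCellsFreeMemory
# and returns free memory in bytes for each cell in the requested range.
free_per_cell = conn.getCellsFreeMemory(0, ncells)

total = 0
for cell, free_bytes in enumerate(free_per_cell):
    print("%d: %d kB" % (cell, free_bytes // 1024))  # virsh prints kB
    total += free_bytes

print("--------------------")
print("Total: %d kB" % (total // 1024))

conn.close()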
The two patches pushed for this bug introduced a regression, which is tracked in bug 693963.

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
Cause: missing virsh functionality
Consequence: virsh does not offer a way to list free memory per node
Fix: created new option to virsh command (freecell --all)
Result: virsh does list free memory per node

Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1,8 +1,8 @@
 Cause:
-    missing virsh functionality
+    users want to list free memory per node
 Consequence:
-    virsh does not offer a way to list free memory per node
-Fix:
+    because of missing feature in virsh, users were unable to list it
+Change:
     created new option to virsh command (freecell --all)
 Result:
     virsh does list free memory per node

Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1,8 +1 @@
-Cause:
-    users want to list free memory per node
-Consequence:
-    because of missing feature in virsh, users were unable to list it
-Change:
-    created new option to virsh command (freecell --all)
-Result:
-    virsh does list free memory per node
+An "--all" option has been added to the "virsh freecell" command to allow the command to iterate across all nodes instead of forcing users to run the command manually on each node. "virsh freecell --all" will list the free memory on all available nodes.

* We didn't offer a way to list free memory per node at once, because the code was missing.
* So people needed a workaround: manually run the command against each node and compose the output themselves.
* We've created an option which allows this.
* And thus people can list the summary as requested.

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0596.html