Bug 1070695
Summary: | dominfo gets wrong info after setting memory on an LXC guest | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Shanzhi Yu <shyu> |
Component: | libvirt | Assignee: | John Ferlan <jferlan> |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.0 | CC: | dyuan, jdenemar, lsu, mzhan, rbalakri, shyu, xuzhang |
Target Milestone: | rc | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | libvirt-1.2.14-1.el7 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2015-11-19 05:44:46 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Shanzhi Yu
2014-02-27 11:20:45 UTC
It's "Max memory" that should have been changed; "Used memory" gives the amount of memory really consumed by the domain.

(In reply to Jiri Denemark from comment #2)
> It's "Max memory" that should have been changed, "Used memory" gives the
> amount of memory really consumed by the domain.

Hi Jiri, thanks for your quick response. setmaxmem will change the Max memory, and for a qemu-kvm guest, setmem works fine (it really changes Used memory).

Hmm, right, so it's likely not a bug at all then. LXC is a bit different from KVM: there is no hypervisor process that would allocate the memory from the host and use it for guest processes. In LXC, guest processes allocate memory directly from the host via the host kernel; they are just limited so that the memory used by all guest processes does not exceed the limit.

'setmem' sets 'memory.limit_in_bytes', or "the maximum amount of user memory (including file cache)". 'dominfo' fetches 'memory.usage_in_bytes', or "the total current memory usage by processes in the cgroup". So they clearly are not the same thing. Although I suppose if the query were:

```
sh-4.2# cat /proc/meminfo | grep -i mem
MemTotal:         500000 kB
MemFree:          498744 kB
MemAvailable:    4621284 kB
Shmem:            361028 kB
```

From the host, do the setmem command:

```
sh-4.2# cat /proc/meminfo | grep -i mem
MemTotal:         100000 kB
MemFree:           98744 kB
MemAvailable:    4618716 kB
Shmem:            361012 kB
```

So when you 'setmem' on the container (or a VM in general), it balloons the guest memory (up or down). This equates to the <currentMemory> XML value. It will not allow you to set beyond the "maximum memory" assigned to the VM (e.g., the <memory> value from the XML, set while the VM is down). What seems to be desired, perhaps, is a way to see the "max max" (e.g. what setmaxmem would allow), the current max/limit/size one could get to (or currentMemory), and the currently used value, but I'm not sure/clear if that's the case.
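The split between what setmem writes and what dominfo reads can be sketched with the two cgroup v1 files named above. This is a minimal stand-in: the files are simulated in a temp directory (the real ones live under the container's /sys/fs/cgroup/memory hierarchy, and the exact libvirt path layout is not shown here), with the values taken from the report.

```shell
# Simulated stand-ins for the container's cgroup v1 memory files.
# Real paths differ; this temp-dir layout is only an illustration.
cg=$(mktemp -d)
echo $((500000 * 1024)) > "$cg/memory.limit_in_bytes"   # configured limit
echo $((1256 * 1024))   > "$cg/memory.usage_in_bytes"   # what processes actually use

# 'virsh setmem vm1 100000' effectively rewrites the limit file:
echo $((100000 * 1024)) > "$cg/memory.limit_in_bytes"

# 'dominfo' reports "Used memory" from usage_in_bytes (in KiB), so it
# does not move just because the limit changed:
used_kib=$(( $(cat "$cg/memory.usage_in_bytes") / 1024 ))
limit_kib=$(( $(cat "$cg/memory.limit_in_bytes") / 1024 ))
echo "limit: ${limit_kib} kB, used: ${used_kib} kB"
rm -r "$cg"
```

Inside the container, the limit is what shows up as MemTotal in /proc/meminfo, which is why the transcript above drops from 500000 kB to 100000 kB after the setmem.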
Right now it's hypervisor-dependent whether you see the maxmax or the current value in the "Max memory" output from 'dominfo'. For qemu it also depends on whether the balloon is enabled or not. For lxc you see the maxmax, although I suppose it could show the current max instead. So while it seems the belief was that the "Used" field in dominfo was wrong, perhaps the "Max memory" field is wrong... So is there a preference? See all 3 values, or change lxc to return the current max from the setmem (or whatever the container was started with)?

FWIW: memtune will show the 100K value:

```
# virsh -c lxc:/// memtune vm1
hard_limit     : 100000
soft_limit     : unlimited
swap_hard_limit: unlimited
```

A bit more investigation has found that other hypervisors set/return maxmax for "Max memory", while "Used memory" varies between what's currently set/used for the guest, the currentMemory value, and the maximum value. Given that Max memory is maxmax, I don't see changing that for lxc as a good thing. And given that Used memory is actually what is in use, it shouldn't be changed either. That leaves displaying the current value, which can already be seen with memtune; so is there a need to repeat it, and/or would doing so create confusion?

The more I think about it, the better solution to this issue is improved documentation of 'setmem' and 'dominfo' to describe/set expectations on the result. Whether that belongs in the 'virsh' docs or the driver-specific docs is still to be decided.

Yeah, I agree that there's nothing to be really fixed for this bug other than improving the documentation.

Posted patch upstream: http://www.redhat.com/archives/libvir-list/2015-February/msg01195.html

Also removing the needinfo.

Fixes pushed upstream. Commit id: 69db32f93d7a22e7be04c2e9dc41a357f9811404

```
$ git describe 69db32f93d7a22e7be04c2e9dc41a357f9811404
v1.2.13-29-g69db32f
$
```

Verified this bug with build libvirt-1.2.17-2.el7.x86_64. Check man virsh:

```
setmem domain size [[--config] [--live] | [--current]]
.....
```
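For scripted verification, the memtune output shown above can be parsed to confirm that hard_limit reflects the last setmem value. A small sketch; the `out` here-string stands in for a real `virsh -c lxc:/// memtune vm1` call, with values copied from the report:

```shell
# Stand-in for `virsh -c lxc:/// memtune vm1` output (values from the
# report above); in a real check you would capture the command itself.
out='hard_limit     : 100000
soft_limit     : unlimited
swap_hard_limit: unlimited'

# Extract the hard_limit number (the value a prior setmem established).
hard=$(printf '%s\n' "$out" | awk -F': *' '/^hard_limit/ {print $2}')
echo "hard_limit = ${hard} kB"
```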
```
For LXC, the value being set is the cgroups value for limit_in_bytes, or
the maximum amount of user memory (including file cache). When viewing
memory inside the container, this is the /proc/meminfo "MemTotal" value.
When viewing the value from the host, use the virsh memtune command. In
order to view the current memory in use and the maximum value allowed
for setting memory, use the virsh dominfo command.
...
memtune domain [--hard-limit size] [--soft-limit size]
    [--swap-hard-limit size] [--min-guarantee size]
    [[--config] [--live] | [--current]]
...
For LXC, the displayed hard_limit value is the current memory setting
from the XML or the result of a virsh setmem command.

--hard-limit       The maximum memory the guest can use.
--soft-limit       The memory limit to enforce during memory contention.
--swap-hard-limit  The maximum memory plus swap the guest can use. This
                   has to be more than the hard-limit value provided.
--min-guarantee    The guaranteed minimum memory allocation for the guest.

Specifying -1 as a value for these limits is interpreted as unlimited.
```

Move to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
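The two rules the man text states for memtune values, that swap_hard_limit "has to be more than" the hard-limit, and that -1 means unlimited, can be sketched as a tiny validator. `check_limits` is a hypothetical helper for illustration, not part of virsh:

```shell
# Hypothetical validator for the memtune constraint described in the
# man excerpt: swap_hard_limit must exceed hard_limit, and a value of
# -1 is treated as unlimited (so it always passes).
check_limits() {
    hard=$1
    swap=$2
    if [ "$swap" -eq -1 ] || [ "$swap" -gt "$hard" ]; then
        echo valid
    else
        echo invalid
    fi
}

check_limits 100000 150000   # swap limit above hard limit
check_limits 100000 -1       # unlimited swap
check_limits 100000 100000   # equal values are rejected
```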