Bug 1380194
| Summary: | [scale] - wrong memory utilization for a host | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | Eldad Marciano <emarcian> |
| Component: | Backend.Core | Assignee: | Andrej Krejcir <akrejcir> |
| Status: | CLOSED NOTABUG | QA Contact: | Eldad Marciano <emarcian> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.1.0 | CC: | bugs, dfediuck, emarcian, mgoldboi, rgolan |
| Target Milestone: | ovirt-4.1.0-beta | Keywords: | Reopened |
| Target Release: | --- | Flags: | dfediuck: ovirt-4.1? rule-engine: planning_ack? rule-engine: devel_ack? rule-engine: testing_ack? |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-01-19 16:08:15 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | SLA | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Eldad Marciano
2016-09-28 22:06:51 UTC
Created attachment 1205702 [details]
engine logs
Andrej Krejcir (comment 3)

Could you upload debug engine logs too?

Eldad Marciano

(In reply to Andrej Krejcir from comment #3)
> Could you upload debug engine logs too?

Already attached.

Andrej Krejcir (comment 5)

How much memory is assigned to a VM?

It may be possible that when a VM is running it only consumes the memory it actually uses, so the host reports unused memory as free even if it is assigned to the VM. The scheduler considers the full assigned memory, not only the used portion.

The attached logs have INFO level. DEBUG level would be useful to see the details of scheduling.

Eldad Marciano

(In reply to Andrej Krejcir from comment #5)
> How much memory is assigned to a VM?

512 MB.

Martin Sivák (comment 7)

111 * (512 MiB + 64 MiB) = 63,936 MiB

This looks like not a bug: the number of VMs plus the default expected overhead per VM adds up to almost all of the host's available memory.

We do not use the actual physical free memory for this check. We are trying to guarantee that all VMs can use all of their assigned memory at the same time when no over-commit is defined.

Eldad, attach an engine log with DEBUG level enabled if you want to reopen this, so we can see all the numbers that went into the equation.

Eldad Marciano

(In reply to Martin Sivák from comment #7)

Martin, in the description I mention that the host had 14 GB available when the VM failed to start.
https://bugzilla.redhat.com/show_bug.cgi?id=1380194#c0

Martin Sivák (comment 10)

And I am telling you that the engine does not care about physical memory. The host has 14 GiB available because the VMs are not fully using their allocated memory, but we count them as if they were.

Attach the debug log; there is no bug right now. (The fact that we only allow 110 VMs to start instead of 111 is interesting, but not important enough by itself.)

Eldad Marciano

(In reply to Martin Sivák from comment #10)

Please raise the priority if needed.

Martin Sivák

Well, I am closing this again until you convince me we have a bug. All the information attached to this bug so far shows correct and expected behaviour.
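For illustration, here is a minimal sketch of the memory accounting described in comment 7. It is not the actual ovirt-engine scheduler code; the function names and the host budget parameter are hypothetical, and only the 512 MiB per-VM assignment and the 64 MiB per-VM overhead figure come from the thread. The point it shows is that the check is made against the committed (guaranteed + overhead) memory, not against the physically free memory reported by the host.

```python
VM_GUARANTEED_MIB = 512   # per-VM assigned memory, as stated in the thread
VM_OVERHEAD_MIB = 64      # default per-VM overhead figure used in comment 7


def committed_memory_mib(vm_count: int) -> int:
    """Memory the scheduler budgets for vm_count VMs when no over-commit is defined."""
    return vm_count * (VM_GUARANTEED_MIB + VM_OVERHEAD_MIB)


def can_start_another_vm(running_vms: int, host_budget_mib: int) -> bool:
    # Hypothetical admission check: compare the committed budget for one more VM
    # against the host's schedulable memory, not against its physically free memory.
    # This is why a host can refuse a VM while still reporting ~14 GiB of unused RAM:
    # the running VMs simply have not touched all of their guaranteed pages yet.
    return committed_memory_mib(running_vms + 1) <= host_budget_mib


print(committed_memory_mib(111))  # 63936 MiB, the figure quoted in comment 7
```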