Bug 1044853
| Summary: | Migration sometimes failed on the destination host side | | |
| --- | --- | --- | --- |
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Qunfang Zhang <qzhang> |
| Component: | qemu-kvm | Assignee: | Hai Huang <hhuang> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.0 | CC: | acathrow, hhuang, juzhang, michen, owasserm, quintela, qzhang, virt-maint |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-01-03 13:19:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
**Description** (Qunfang Zhang, 2013-12-19 05:58:17 UTC)
**Comment from Orit Wasserman:**

Hi,

I suspect this is related to the high memory usage of the XBZRLE feature.

What is the amount of memory the hosts have?
Can you print the memory usage when migration fails?
Does it happen when you set the cache size to a smaller value (migrate_set_cache_size)?

Thanks,
Orit

**Comment from Qunfang Zhang (in reply to Orit Wasserman from comment #2):**

> What is the amount of memory the hosts have?

The host has 8G of memory.

> Can you print the memory usage when migration fails?

Below is the host memory usage when migration fails. I re-tested and reproduced the failure again, still with a 2G migration cache size.

    # cat /proc/meminfo
    MemTotal:        7911636 kB
    MemFree:         4280912 kB
    Buffers:              36 kB
    Cached:          1333200 kB
    SwapCached:          968 kB
    Active:          2421036 kB
    Inactive:        1050756 kB
    Active(anon):    2069740 kB
    Inactive(anon):    75688 kB
    Active(file):     351296 kB
    Inactive(file):   975068 kB
    Unevictable:           0 kB
    Mlocked:               0 kB
    SwapTotal:       8273916 kB
    SwapFree:        8272240 kB
    Dirty:                88 kB
    Writeback:             0 kB
    AnonPages:       2138048 kB
    Mapped:            21272 kB
    Shmem:              6852 kB
    Slab:              54464 kB
    SReclaimable:      19512 kB
    SUnreclaim:        34952 kB
    KernelStack:        1560 kB
    PageTables:        11228 kB
    NFS_Unstable:          0 kB
    Bounce:                0 kB
    WritebackTmp:          0 kB
    CommitLimit:    12229732 kB
    Committed_AS:    2567384 kB
    VmallocTotal:   34359738367 kB
    VmallocUsed:      150644 kB
    VmallocChunk:   34359584748 kB
    HardwareCorrupted:     0 kB
    AnonHugePages:   1992704 kB
    HugePages_Total:       0
    HugePages_Free:        0
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    DirectMap4k:       98072 kB
    DirectMap2M:     4091904 kB
    DirectMap1G:     4194304 kB

    # free -m
                 total       used       free     shared    buffers     cached
    Mem:          7726       3545       4180          6          0       1301
    -/+ buffers/cache:       2243       5482
    Swap:         8079          1       8078

> does it happen when you set the cache size to a smaller value (migrate_set_cache_size)?

I have not reproduced the bug with a smaller value so far (I used a 512M cache size and have tried 5 times already).

**Follow-up comment (in reply to Qunfang Zhang from comment #0):**

> How reproducible:
> Always

Sometimes (as in the summary).
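For context on the cache-size workaround discussed in the comments above, the following is a minimal sketch of how the XBZRLE cache can be lowered from the QEMU monitor before starting migration. It is not taken from this bug report: the 512M value mirrors the size Qunfang used, while the destination address `tcp:destination-host:4444` is purely hypothetical.

    # HMP (human monitor) on the source host, before issuing "migrate":
    (qemu) migrate_set_capability xbzrle on
    (qemu) migrate_set_cache_size 512M
    (qemu) migrate -d tcp:destination-host:4444

    # Roughly equivalent QMP commands (the cache value is given in bytes, 512M = 536870912):
    # { "execute": "migrate-set-capabilities",
    #   "arguments": { "capabilities": [ { "capability": "xbzrle", "state": true } ] } }
    # { "execute": "migrate-set-cache-size", "arguments": { "value": 536870912 } }

A smaller cache bounds the extra host memory XBZRLE consumes on both sides of the migration, which is consistent with the observation that the failure was not reproduced with a 512M cache.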