Bug 2017672 - VM running on Hyper-V 2019 freezes up with hv_balloon messages
Summary: VM running on Hyper-V 2019 freezes up with hv_balloon messages
Keywords:
Status: CLOSED EOL
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 34
Hardware: x86_64
OS: Windows
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-27 07:55 UTC by Fernando Viñan-Cano
Modified: 2022-06-07 22:49 UTC
CC List: 21 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-07 22:49:49 UTC
Type: Bug
Embargoed:


Attachments
Screen shot from VM console (256.95 KB, image/png)
2021-10-27 07:55 UTC, Fernando Viñan-Cano

Description Fernando Viñan-Cano 2021-10-27 07:55:52 UTC
Created attachment 1837522 [details]
Screen shot from VM console

Description of problem:

After boot, and after a random amount of time, the machine starts reporting repeated "hv_balloon: Unhandled message: type: nnnn" messages. The type number (nnnn) differs between reboots but repeats within a single incident; observed values include 1940, 20736, 1754, 12651, 58826 and 53345. This continues until the VM eventually hangs and must be hard reset to reboot: no SSH is possible and the console is unresponsive, although Hyper-V still reports CPU activity.
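For anyone triaging a similar hang: these messages land in the kernel ring buffer, so they can be captured with `journalctl -k` or `dmesg` on an affected guest. The sketch below runs against simulated log lines (the real command would pipe live journal output), just to show isolating and counting the balloon-driver noise.

```shell
# Simulated kernel log excerpt standing in for `journalctl -k` output
# on an affected guest; the message text matches the report above.
printf '%s\n' \
  'hv_balloon: Unhandled message: type: 20736' \
  'systemd-journald[613]: Missed 12 kernel messages' \
  'hv_balloon: Unhandled message: type: 20736' \
  'hv_balloon: Unhandled message: type: 20736' |
  grep -c '^hv_balloon: Unhandled message'
# prints 3 (the count of unhandled-message lines in the sample)
```

On a live guest, `journalctl -k --grep hv_balloon` (or `dmesg | grep hv_balloon`) gives the same filtering; persisting the journal to disk helps the evidence survive the hard reset.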

How reproducible: Always

Steps to Reproduce:
1. Boot VM
2. Wait

Additional info:

VM is running on Hyper-V 2019 (a fully updated Server 2019 host) and is configured for Dynamic Memory: 4096 MB at startup, 4096 MB minimum, 16384 MB maximum; assigned memory never exceeds 8192 MB.
Guest is Fedora Server 34 (kernel 5.14.13-200) with the latest updates I can currently download. Tuned is installed and set to VM.

I am going to switch the VM to fixed RAM just to keep the machine operational. This only started happening in the past few weeks; I cannot be more specific than that.
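An untested guest-side alternative to reconfiguring the host (assumption: hv_balloon is built as a module, as it is in stock Fedora kernel configs, and whether booting without the balloon driver actually avoids the hang is unverified) would be to blacklist the module so Dynamic Memory ballooning never starts in the guest:

```shell
# Hypothetical guest-side mitigation: keep the Hyper-V balloon driver
# from loading. The blacklist file normally goes in /etc/modprobe.d;
# the directory is parameterized here so the sketch is safe to run.
conf_dir="${MODPROBE_D:-/etc/modprobe.d}"
printf 'blacklist hv_balloon\n' > "$conf_dir/hv_balloon-blacklist.conf"
# Then rebuild the initramfs so the blacklist applies at early boot:
#   dracut -f
```

This only stops the guest from participating in ballooning; the host would still believe Dynamic Memory is enabled, so switching the VM to fixed RAM on the host remains the cleaner workaround.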

Comment 1 Vitaly Kuznetsov 2021-10-27 11:01:22 UTC
This seems to be an upstream kernel issue; the hyperv-daemons package has nothing to do with dynamic memory/ballooning.

Comment 2 David Taylor 2022-02-10 17:30:04 UTC
Just to add another data point: CentOS Stream 8 with kernel 4.18.0-358.el8.x86_64 is doing the same thing. It started happening in mid-January 2022 on one of my VMs; others have not been affected. It is also running on Server 2019 Hyper-V. Memory was dynamic, 2048 MB to 8192 MB, and the VM usually died when it had ballooned up to about 5 GB of used memory. In addition to the hv_balloon unhandled messages, systemd-journald was (not surprisingly) reporting missed kernel messages shortly after the hv_balloon messages started spewing.

Comment 3 Ben Cotton 2022-05-12 15:55:58 UTC
This message is a reminder that Fedora Linux 34 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 34 on 2022-06-07.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
'version' of '34'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version' 
to a later Fedora Linux version.

Thank you for reporting this issue and we are sorry that we were not
able to fix it before Fedora Linux 34 reached end of life. If you would still like
to see this bug fixed and are able to reproduce it against a later version
of Fedora Linux, you are encouraged to change the 'version' to a later version
before this bug is closed.

Comment 4 Ben Cotton 2022-06-07 22:49:49 UTC
Fedora Linux 34 entered end-of-life (EOL) status on 2022-06-07.

Fedora Linux 34 is no longer maintained, which means that it
will not receive any further security or bug fix updates. As a result we
are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release.

Thank you for reporting this bug and we are sorry it could not be fixed.

