Bug 1361245 - [Hyper-V][RHEL 7.2] VMs panic when configured with Dynamic Memory as opposed to Static Memory
Summary: [Hyper-V][RHEL 7.2] VMs panic when configured with Dynamic Memory as opposed ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Vitaly Kuznetsov
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1298243 1373818
 
Reported: 2016-07-28 15:21 UTC by Simon Sekidde
Modified: 2019-12-16 06:13 UTC (History)
32 users

Fixed In Version: kernel-3.10.0-505.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1373818 1381617 (view as bug list)
Environment:
Last Closed: 2016-11-03 17:18:33 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Console log from Kernel 3.10.0-481.el7_bug1361245_2.x86_64.debug (41.43 KB, text/plain)
2016-08-02 03:15 UTC, xuli
vmstat from Kernel 3.10.0-481.el7_bug1361245_2.x86_64 before hung for reference (11.13 KB, text/plain)
2016-08-02 11:44 UTC, xuli
add_memory() alloc failure call trace (2.18 KB, text/plain)
2016-08-10 17:04 UTC, Alex Ng


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2574 0 normal SHIPPED_LIVE Important: kernel security, bug fix, and enhancement update 2016-11-03 12:06:10 UTC

Description Simon Sekidde 2016-07-28 15:21:49 UTC
Description of problem:

RHEL VMs intermittently hang and cannot be recovered except via a hard power off/on through their respective management UIs (e.g. vSphere/VMM). In some cases the hang can be triggered manually once a specific use case has been isolated. This is a high-impact issue as clients are ramping up their installs and the number of Linux VMs deployed.

Symptoms:

    Users operating in an SSH session to the server would simply have their terminals 'freeze' up and disconnect. No messages would be written to the screen.
    Logging into the VM management console and opening a console session to the VM would reveal a message similar to this:

           "INFO: task <process>:<pid> blocked for more than 120 seconds."

    VM was completely unresponsive; the only solution was to hard reset (Power Off/Power On) the VM.
    Mostly happens in the Hyper-V environment on RHEL 7.2 VMs. Happened only once on VMware.
    Issue appears as if the VM is 'starving' for resources w.r.t. I/O. Uncertain whether it is a memory I/O or a storage I/O issue.
    McAfee has been disabled to isolate the problem but the hanging still persists.
    Why it might be a memory issue:
        One of the RHEL 7.2 VMs was changed, with the client's permission, from dynamic memory to static 4 GB of memory, with McAfee left running.
        This appeared to resolve the issue. Statically assigned memory is not ideal for managing limited hypervisor resources.

Version-Release number of selected component (if applicable):

kernel-3.10.0-327.22.2.el7.x86_64 

How reproducible:
100% 

Steps to Reproduce:
1. https://blogs.technet.microsoft.com/server-cloud/2015/10/07/microsoft-loves-linux-deep-dive-3-linux-dynamic-memory-and-live-backup/

Actual results:

RHEL 7.2 VM hangs

Expected results:

RHEL 7.2 VM does not hang/freeze

Additional info:

https://technet.microsoft.com/windows-server-docs/compute/hyper-v/supported-centos-and-red-hat-enterprise-linux-virtual-machines-on-hyper-v

Comment 5 Vitaly Kuznetsov 2016-07-29 08:39:35 UTC
A bunch of questions:

1) What's the Windows/Hyper-V version?
2) Are we sure that the customer didn't install so-called "LIS" drivers from Microsoft?
3) It is unclear to me what the dynamic memory settings are:
Startup memory - ?
Minimum memory - ?
Maximum memory - ?
I see that the change "from dynamic to static 4GB of memory" helped which probably means that the maximum memory = 4G but we need to know the rest.
4) In case startup memory != maximum memory, can we ask to set startup memory = maximum memory and see what happens?
5) Is it possible to ask the customer to test current RHEL-7.3 kernel?

To do further investigation we'll have to ask the customer to set up a serial console for the VM and provide the output. It can be done by using a virtual pipe.
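For reference, a minimal sketch of the guest-side half of that setup on RHEL 7; the console= parameters and pipe name below are only illustrative examples, not values from this case:

    # Guest side: send the kernel console to the first serial port, then rebuild the grub config.
    sed -i 's/^GRUB_CMDLINE_LINUX="/&console=ttyS0,115200 /' /etc/default/grub
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot

    # Host side (configured in Hyper-V, not in the guest), for example from PowerShell:
    #   Set-VMComPort -VMName <vm> -Number 1 -Path \\.\pipe\<vm>-com1
    # The console output can then be read from that named pipe on the Hyper-V host.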

Comment 6 Vitaly Kuznetsov 2016-07-29 08:49:11 UTC
The second customer case attached to the issue is probably unrelated. I'm not sure what "RHEL 7.5" means, and hangs on VMware probably have different causes; the ballooning drivers differ from Hyper-V's. I suggest we treat these issues separately for now.

Comment 7 Yaju Cao 2016-07-29 09:58:21 UTC
I have tried to test with a RHEL 7.2 Gen1 guest on Hyper-V 2012 R2, using eatmemory to put the guest under memory pressure, but could not reproduce this issue. We will keep researching.

My reproduction steps:

1. Guest enable Dynamic Memory, and setting as below:
Startup Memory 	4G
Minimum 		4G
Maximum		16G

2. Install eatmemory and run 'eatmemory 30G'
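As a rough sketch (not part of the original steps) of watching Dynamic Memory hot-add while applying that pressure:

    # Watch MemTotal in the background; it grows as Hyper-V hot-adds memory.
    while true; do grep MemTotal /proc/meminfo; sleep 5; done &

    # Generate pressure well above the startup size, as in step 2 above.
    ./eatmemory 30G

    # Stop the monitor afterwards.
    kill %1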

Comment 9 Patrick M 2016-07-29 17:37:51 UTC
Hi,

Q1) What's the Windows/Hyper-V version?
A1) Windows server 2012 with Hyper-V 

Q2) Are we sure that the customer didn't install so-called "LIS" drivers from Microsoft?
A2) We used Red Hat's core drivers and attempted MS's LIS drivers to see if that would alleviate the problem.  No go.

Q3) It is unclear to me what are the dynamic memory settings
Startup memory - ? MB
Minimum memory - ? MB
Maximum memory - ? MB
I see that the change "from dynamic to static 4GB of memory" helped which probably means that the maximum memory = 4G but we need to know the rest.

A3) One of our servers had this set when we experienced the issue:
Startup memory - 4096 MB
Minimum memory - 1024 MB
Maximum memory - 16384 MB

Q4) In case startup memory != maximum memory, can we ask to set startup memory = maximum memory and see what happens?
A4) This also alleviated the issue just like setting static memory did.

Q5) Is it possible to ask the customer to test current RHEL-7.3 kernel?
A5) Sorry, that's not possible at our site.  Even if it worked, the effort it would take to stage and roll that out is not trivial here.

Comment 10 Karen Noel 2016-07-30 19:50:40 UTC
(In reply to Patrick M from comment #9)
> Hi,
> 
> Q1) What's the Windows/Hyper-V version?
> A1) Windows server 2012 with Hyper-V 
> 
> Q2) Are we sure that the customer didn't install so-called "LIS" drivers
> from Microsoft?
> A2) We used Red Hat's core drivers and attempted MS's LIS drivers to see if
> that would alleviate the problem.  No go.
> 
> Q3) It is unclear to me what are the dynamic memory settings
> Startup memory - ? MB
> Minimum memory - ? MB
> Maximum memory - ? MB
> I see that the change "from dynamic to static 4GB of memory" helped which
> probably means that the maximum memory = 4G but we need to know the rest.
> 
> A3) One of our servers had this set when we experienced the issue:
> Startup memory - 4096 MB
> Minimum memory - 1024 MB
> Maximum memory - 16384 MB

How about setting min memory to a higher value? 

Startup=4G and min=2G?

> 
> Q4) In case startup memory != maximum memory, can we ask to set startup
> memory = maximum memory and see what happens?
> A4) This also alleviated the issue just like setting static memory did.
> 
> Q5) Is it possible to ask the customer to test current RHEL-7.3 kernel?
> A5) Sorry, that's not possible at our site.  Even if it worked, the effort
> it would take to stage and roll that out is not trivial here.

Is it possible to force a crash dump when the guest freezes? Then we will want access to the dump. I have asked some RHEL mm/kernel experts for help. Thanks.
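For anyone following along, a minimal sketch of capturing such a dump on RHEL 7, assuming kexec-tools is installed and a crashkernel= reservation is configured (both assumptions, not details from this case):

    # Confirm the crash kernel reservation and enable kdump.
    grep -o 'crashkernel=[^ ]*' /proc/cmdline
    systemctl enable kdump
    systemctl start kdump

    # When the guest hangs, force a panic so kdump writes a vmcore under /var/crash/:
    echo 1 > /proc/sys/kernel/sysrq      # allow all SysRq functions
    echo c > /proc/sysrq-trigger         # or send Alt+SysRq+c from the VM console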

Comment 19 xuli 2016-08-02 03:15:56 UTC
Created attachment 1186636 [details]
Console log from Kernel 3.10.0-481.el7_bug1361245_2.x86_64.debug

Comment 21 xuli 2016-08-02 11:44:20 UTC
Created attachment 1186763 [details]
vmstat from Kernel 3.10.0-481.el7_bug1361245_2.x86_64 before hung for reference

Comment 31 Vitaly Kuznetsov 2016-08-05 10:51:59 UTC
K. Y., Alex,

I just sent the "[PATCH 0/4] Drivers: hv: balloon: fix WS2012 memory hotplug issues and do some cleanup" patch series trying to address issues we see here.

Please take a look. Thanks!

Comment 33 Alex Ng 2016-08-05 19:09:49 UTC
Thanks Vitaly! I will help take a look and do some internal testing as well.

Comment 35 Alex Ng 2016-08-09 01:16:26 UTC
One other thing worth asking is whether the sudden memory increase is happening within the first 40-45 seconds after booting up.

The balloon driver does not send pressure reports to the host within the first 40-45 seconds, in order to avoid reporting unstable memory pressure (which tends to happen after a boot).

During this time, the host will not send any hot-add requests for additional memory (since it hasn't received any pressure reports from the guest yet). So that may lead to all sorts of out-of-memory errors.

Comment 36 Patrick M 2016-08-09 01:31:42 UTC
No, that's not the case. Sometimes the VM will run for days, even weeks, without a hang.

We had a client that could not get past an Oracle installation; it hung invoking the UI, but that was done at varying times after a reboot.  Sometimes minutes, hours, or days.

Only after we set the memory to static or made the startup memory=max memory did the problem appear to go away.  However, for effective VM memory management, you can see neither is a solution.

Thanks!

Comment 37 Patrick M 2016-08-09 01:53:39 UTC
I know these things take time, but how are we looking on timing? I had another VM hang today. Fortunately the client wasn't doing anything on it. Thanks so much

Comment 41 dyuen 2016-08-09 18:03:01 UTC
Patrick, I just sent you the rpm package.  Please install and test it on your non-production systems.


Thanks.
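For completeness, a generic sketch of installing and booting a test kernel of this kind; the package name is a placeholder (the actual build names appear in later comments), and such kernels belong on non-production systems only:

    rpm -ivh kernel-<test-build>.x86_64.rpm
    grub2-set-default 0        # the newest installed kernel is usually entry 0
    reboot
    uname -r                   # confirm the test kernel is the one running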

Comment 42 Alex Ng 2016-08-09 18:31:28 UTC
(In reply to Vitaly Kuznetsov from comment #31)
> K. Y., Alex,
> 
> I just sent the "[PATCH 0/4] Drivers: hv: balloon: fix WS2012 memory hotplug
> issues and do some cleanup" patch series trying to address issues we see
> here.
> 
> Please take a look. Thanks!

Vitaly, I tried testing the patches and noticed that running "eatmemory 4G" would die due to an OOM condition pretty frequently (eventually it succeeds). Without the patches, it's harder to hit this issue.

Starting memory = 2048MB
Minimum memory = 512MB
Max = 1048576 MB

I bisected this to the patch "Drivers: hv: balloon: replace ha_region_mutex with spinlock". Perhaps this is introducing some slow down in hot-add processing. I'll need to take a further look to confirm.

I'll attach a call trace as well with the out-of-memory condition (seems hot-add is failing because we are adding too quickly).

Comment 43 Patrick M 2016-08-09 20:12:28 UTC
Hi, 

We just installed kernel-3.10.0-485.el7_bug1361245_8_1.x86_64.rpm to test on our systems. 

Results are promising; we were not able to hang the VMs the way we did before. Previously we could hang them with 100% repeatability; now we can't. This is goodness. :)

Just thought you'd like the feedback from the floor.

Comment 47 Vitaly Kuznetsov 2016-08-10 09:28:24 UTC
(In reply to Patrick M from comment #43)
> Hi, 
> 
> We just installed kernel-3.10.0-485.el7_bug1361245_8_1.x86_64.rpm to test on
> our systems. 
> 
> Results are promising; we were not able to hang the VMs the way we did
> before.  Before we could hang them with 100% repeatability. Now we can't. 
> This is goodness. :)
> 
> Just thought you'd like the feedback from the floor.

Thanks for the confirmation!

Comment 48 Vitaly Kuznetsov 2016-08-10 09:33:23 UTC
(In reply to Alex Ng from comment #42)
> (In reply to Vitaly Kuznetsov from comment #31)
> > K. Y., Alex,
> > 
> > I just sent the "[PATCH 0/4] Drivers: hv: balloon: fix WS2012 memory hotplug
> > issues and do some cleanup" patch series trying to address issues we see
> > here.
> > 
> > Please take a look. Thanks!
> 
> Vitaly, I tried testing the patches and noticed that running "eatmemory 4G"
> would die due to OOM condition pretty frequently (eventually it succeeds).
> Without the patches, it's harder to hit this issue.
> 
> Starting memory = 2048MB
> Minimum memory = 512MB
> Max = 1048576 MB
> 
> I bisected this to the patch "Drivers: hv: balloon: replace ha_region_mutex
> with spinlock". Perhaps this is introducing some slow down in hot-add
> processing. I'll need to take a further look to confirm.

It should rather introduce a speed-up :-) But yes, please send me the details and I'll take a look.

> 
> I'll attach a call trace as well with the out-of-memory condition (seems
> hot-add is failing because we are adding too quickly).

OOM is always possible with the current protocol: the guest just sends pressure reports to the host, and the host decides when to add memory (or even when to bring ballooned-out pages back). To avoid OOM, swap space should be sufficient to accommodate the load.
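A minimal sketch of adding swap for that purpose; the 4 GB size and the /swapfile path are arbitrary examples, and the right amount depends on the workload:

    dd if=/dev/zero of=/swapfile bs=1M count=4096          # 4 GB swap file
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap defaults 0 0' >> /etc/fstab  # make it persistent
    swapon -s                                              # confirm the swap is active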

Comment 49 Patrick M 2016-08-10 14:54:18 UTC
There's one thing I noticed during the install. The patched kernel had a dependency conflict:

error: Failed dependencies:
        xfsprogs < 4.3.0 conflicts with kernel-3.10.0-485.el7_bug1361245_8_1.x86_64

We installed it via: 

# rpm -i --nodeps kernel-3.10.0-485.el7_bug1361245_8_1.x86_64.rpm

Comment 50 Vitaly Kuznetsov 2016-08-10 15:09:59 UTC
(In reply to Patrick M from comment #49)
> there's one thing I noticed on the install.  The patched kernel had a
> dependency: 
> 
> error: Failed dependencies:
>         xfsprogs < 4.3.0 conflicts with
> kernel-3.10.0-485.el7_bug1361245_8_1.x86_64
> 
> We installed it via: 
> 
> # rpm -i --nodeps kernel-3.10.0-485.el7_bug1361245_8_1.x86_64.rpm

This is OK; the kernel is based on the 7.3 repo, and the new xfsprogs will be there as well.

Comment 51 Alex Ng 2016-08-10 17:04:27 UTC
Created attachment 1189740 [details]
add_memory() alloc failure call trace

Comment 52 Alex Ng 2016-08-10 17:25:27 UTC
(In reply to Vitaly Kuznetsov from comment #48)
> > I bisected this to the patch "Drivers: hv: balloon: replace ha_region_mutex
> > with spinlock". Perhaps this is introducing some slow down in hot-add
> > processing. I'll need to take a further look to confirm.
> 
> It should rather introduce a speed-up :-) But yes, please send me the
> details and I'll take a look.

Hi Vitaly, 

I attached the call trace for hot-add failure.

Is it necessary to remove the ol_waitevent in "Drivers: hv: balloon: get rid of ol_waitevent"? If we respond to the host too quickly, then the next hot-add request may not see the new pages come online and could fail to alloc memory, as seen in the call trace.

Thoughts?

Comment 53 Patrick M 2016-08-10 17:30:05 UTC
Is there a timeline for the RHEL 7.3 release and will this patch be incorporated there?  Will it be backported to 7.2?

From Charles Haithcock's notes:

"The problem with the fix, however, is the fix is a set of proposed patches which have only been proposed in upstream. This means the fixes are not integrated upstream and therefore not eligible for backports or hotfixes at this point in time. To that end, as the upstream grows and modifies, the proposed patches may end up changing or altogether being dropped in favor of "better" patches (whatever upstream determines as "better"). The above prevents us from providing any sort of timeline on when a patch will be pulled to the RHEL kernel for General Availability. If the patches were already accepted and merged upstream, we would absolutely be able to follow the backport workflow to pull down the patch and integrate the patch into an errata. As such, unfortunately, I can not provide a timeline on which a patch will be available to resolve the issue in question. 

Fortunately, however, a workaround was discovered early on to actually work around the issue. If the environment currently being used will ultimately be the environment pushed to production, the workaround should absolutely be implemented while the issue is continually worked on by both us at Red Hat and Microsoft. As I am sure you are aware of at this point in time, the test kernel, while showing promising results, is ultimately a test kernel and absolutely not supported, should not be deployed in any environment other than for testing purposes only, and should not be considered a solution or workaround. Deploying the kernel will result in the systems with the test kernel being in a completely unsupported state. 

One thing I would like to point out, however, is the actual nature of the proposed patches in the test kernel; the patches ultimately help fix up some of the issues within the code paths which allow the kernel to work within the limitations of the Hyper-V platform and its hotplugging behavior. As such, the limitations will need to be worked on within the Microsoft side of this issue (which, fortunately, sounds like their engagement is confirmed so they can investigate this matter). Ultimately, this problem is certainly multifaceted, but the appropriate people are already engaged with this matter to actually work through this issue, assuring the issue is being worked on and that you all are on a path to resolution. 

Please let us know if we can be of any further assistance in the meantime."

Can you please comment?

Comment 54 Vitaly Kuznetsov 2016-08-10 17:45:50 UTC
(In reply to Alex Ng from comment #52)
> (In reply to Vitaly Kuznetsov from comment #48)
> > > I bisected this to the patch "Drivers: hv: balloon: replace ha_region_mutex
> > > with spinlock". Perhaps this is introducing some slow down in hot-add
> > > processing. I'll need to take a further look to confirm.
> > 
> > It should rather introduce a speed-up :-) But yes, please send me the
> > details and I'll take a look.
> 
> Hi Vitaly, 
> 
> I attached the call trace for hot-add failure.
> 
> Is it necessary to remove the ol_waitevent in "Drivers: hv: balloon: get rid
> on ol_waitevent"? If we respond to the host too quickly, then the next
> hot-add request may not see the new pages come online and could fail to
> alloc memory as seen in the call trace.
> 
> Thoughts?

This should not be an issue with CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE: we online pages when we add them (add_memory()), so by the time we reply to the host these pages are already online. But in case the onlining is done by an external tool (e.g. udev), this wait helps (though not always: if someone eats all the memory before the next add_memory() call, we're still in trouble).
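To illustrate what onlining by an external tool looks like (the block number and rule filename below are examples only):

    # Hot-added memory blocks appear under sysfs; without
    # CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE they start out offline.
    grep -l offline /sys/devices/system/memory/memory*/state

    # Online one block by hand:
    echo online > /sys/devices/system/memory/memory42/state

    # Or let udev do it as blocks are added, e.g. via
    # /etc/udev/rules.d/99-memory-hotplug.rules containing:
    #   SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"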

Comment 55 Patrick M 2016-08-10 18:38:48 UTC
Hi Folks, 

Was the fix you did implemented in the kernel or in the balloon driver?  The reason I'm asking is this:

If it is in the driver, perhaps code can be shared between Red Hat and Microsoft so MS can implement the fix in their LIS drivers and release that?

In this way, we can build our Linux VMs (6.8 and 7.2) with the supported drivers from MS and use those until RH incorporates these changes in their kernel.

Can you provide info here?

Comment 58 Vitaly Kuznetsov 2016-08-11 13:28:37 UTC
(In reply to Patrick M from comment #55)
> Hi Folks, 
> 
> Was the fix you did implemented in the kernel or in the balloon driver?  The
> reason I'm asking is this:
> 
> If it is in the driver, perhaps code can be shared between RedHat and
> Microsoft so MS can implement the fix in their LIS drivers and release that?
> 
> In this way, we can build our Linux VMs (6.8 and 7.2) with the supported
> drivers from MS and use those until RH incorporates these changes in their
> kernel.
> 
> Can you provide info here?

Hi Patrick,

all currently discussed changes are for the balloon driver. We, however, can't say for sure when these patches will finally land upstream and will appear in RHEL. I can't comment for Microsoft LIS but I guess they'll also take these patches as soon as we have them accepted upstream.

Thank you for understanding.

Comment 63 dyuen 2016-08-18 14:14:04 UTC
Patrick, I attached the temporary fix file to the case. You can test it on a non-production system.



Thanks,

David.

Comment 78 Rafael Aquini 2016-09-06 16:31:28 UTC
Patch(es) committed on kernel repository and an interim kernel build is undergoing testing

Comment 80 Patrick M 2016-09-06 18:09:50 UTC
Thanks David, 

The kernel you shipped me unfortunately does not resolve the issue on Hyper-V servers configured with Dynamic Memory. All the sosreports and the procedure I executed are attached to the case.

Thanks
-Patrick

Comment 81 Marc Milgram 2016-09-06 19:02:27 UTC
(In reply to Patrick M from comment #80)
> Thanks David, 
> 
> The kernel you shipped me unfortunately does not resolve the issue on Hyper-V
> servers configured with Dynamic memory.  All the sosreports and procedure I
> executed are attached to the case.
> 
> Thanks
> -Patrick

Patrick, you were handed 3 builds: 1 built against rhel-7.3.z and 2 against rhel-7.2.z. The latest one you tested, kernel-3.10.0-327.37.1.el7.sfdc01669041.x86_64, was missing some patches, so it did not work.

The other 2 kernels should work.

Comment 83 Rafael Aquini 2016-09-07 12:50:41 UTC
Patch(es) available on kernel-3.10.0-505.el7

Comment 85 Patrick M 2016-09-08 13:58:48 UTC
Is there an ETA for that release (kernel-3.10.0-505.el7)?

Comment 86 xuli 2016-09-12 09:25:43 UTC
Verification passed on a Hyper-V Server 2012 host with kernel 3.10.0-505.el7.x86_64.

Steps:
1) Set up dynamic memory as below:
Startup memory - 4096 MB
Minimum memory - 1024 MB
Maximum memory - 16384 MB
2) Run: # while true; do ./eatmemory 16G; done
3) After waiting for more than 30 minutes, the VM keeps running with no hang.

Tested basic balloon functionality with no new issues found; we are currently running the full regression test of the balloon feature on the 2012 R2 and 2016 RTM builds.

Also tested on a Hyper-V Server 2012 R2 host and the 2016 RTM build; sometimes a timeout occurs after running "while true; do ./eatmemory 16G" for a long time, but it is hard to reproduce. Please refer to https://bugzilla.redhat.com/show_bug.cgi?id=1375117.

Per Vitaly's comment at https://bugzilla.redhat.com/show_bug.cgi?id=1375117#c5: "I checked kernel 3.10.0-504.el7.x86_64 and it seems the behavior is the same, so it is not a regression brought by the https://bugzilla.redhat.com/show_bug.cgi?id=1361245 fix. From the traces I was able to get, I'm not convinced the hang is even Hyper-V related; it can also be OOM-related stuff. I'll be investigating the issue, but this is likely to be 7.4 material (with a possible z-stream fix, but we need to find the root cause first)." Since it is not a regression introduced by kernel 3.10.0-505.el7.x86_64, this bug is closed as verified.

Comment 88 errata-xmlrpc 2016-11-03 17:18:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2574.html

