Bug 1284778 - packaging of glibc and malloc patches for madvise into RHEL 7
Status: CLOSED DUPLICATE of bug 1284959
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: glibc
Version: 7.2
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assigned To: Carlos O'Donell
QA Contact: qe-baseos-tools
Reported: 2015-11-24 03:35 EST by cmilsted
Modified: 2015-11-26 05:58 EST
CC List: 5 users

Doc Type: Bug Fix
Last Closed: 2015-11-26 05:58:21 EST
Type: Bug

Attachments: None
Description cmilsted 2015-11-24 03:35:25 EST
Description of problem:

Bugzilla to track two glibc malloc issues and the packaging of their fixes into RHEL:

- Malloc free-list cyclic fix (and a follow-up race-condition fix).
- Consistent malloc trimming fix for all arenas.


Version-Release number of selected component (if applicable):

7.2+


How reproducible:

Note that this behaviour is triggered only by certain applications and use cases; it has been seen by only a couple of customers running specific applications.

This performance loss is generally due to application design issues. It triggers when the application allocates large amounts of memory, does little work, and then frees the memory, so that allocation and deallocation dominate the total run time. In an attempt to minimize the memory load on the system, the glibc allocator trims the large deallocations and returns the memory to the kernel via madvise(MADV_DONTNEED). In these cases it is useful to increase the trim threshold to avoid the deallocation, but because trimming is applied only to the main memory arena, this cannot be done for threads using non-main arenas.
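As an illustration only (not part of the original report), the trim threshold can be raised with mallopt(); a minimal sketch, where the 128 MiB value is an arbitrary example, and on an unpatched glibc the setting is honoured only by the main arena:

#include <malloc.h>
#include <stdio.h>

int main(void)
{
    /* Raise the trim threshold so that freed heap memory below this size
     * stays in the arena instead of being returned to the kernel via
     * madvise(MADV_DONTNEED).  On an unpatched glibc this applies only to
     * the main arena; the fix tracked here makes trimming consistent
     * across all arenas.  mallopt() returns 1 on success, 0 on error. */
    if (mallopt(M_TRIM_THRESHOLD, 128 * 1024 * 1024) != 1)
        fprintf(stderr, "mallopt(M_TRIM_THRESHOLD) failed\n");

    /* ... application workload ... */
    return 0;
}

The same threshold can also be set without recompiling via the MALLOC_TRIM_THRESHOLD_ environment variable.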


Steps to Reproduce:
1. Application runs under load.
2. High sys% compared to usr% observed.
3. When tracing (e.g. with strace), a large number of madvise() calls originating from glibc heap management is observed (a reproducer sketch follows below).
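
A hypothetical reproducer sketch of the allocation pattern described above (thread count and block sizes are illustrative, not from the report); compile with -pthread and run under "strace -f -e madvise" on an affected glibc to observe the madvise() traffic:

#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 8
#define NBLOCKS  64
#define BLKSIZE  (64 * 1024)  /* below the default 128 KiB mmap threshold,
                                 so blocks come from the arena heap */

static void *worker(void *arg)
{
    void *blocks[NBLOCKS];
    for (long iter = 0; iter < 100000; iter++) {
        /* Allocate a large amount of memory... */
        for (int i = 0; i < NBLOCKS; i++)
            blocks[i] = malloc(BLKSIZE);
        /* ...do little work, then free it all, so each thread's arena
         * shrinks and glibc may trim it with madvise(MADV_DONTNEED). */
        for (int i = 0; i < NBLOCKS; i++)
            free(blocks[i]);
    }
    return arg;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}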

Actual results:

Before the patch, sys% dominates; after the patch, usr% dominates.

Expected results:

Sys% drops back down and usr% from the application dominates.

Additional info:
Comment 2 cmilsted 2015-11-26 05:58:21 EST

*** This bug has been marked as a duplicate of bug 1284959 ***
