Bug 843478

Summary: glibc: memalign allocations are often not reused after free
Product: Red Hat Enterprise Linux 7
Component: glibc
Version: 7.6
Hardware: Unspecified
OS: Linux
Status: CLOSED UPSTREAM
Severity: high
Priority: high
Keywords: Reopened
Target Milestone: alpha
Target Release: 7.7
Reporter: Kirill Korotaev <dev>
Assignee: glibc team <glibc-bugzilla>
QA Contact: qe-baseos-tools-bugs
CC: 4951713, ashankar, bjwangxd, codonell, cww, dinghailong, dj, fernando, fweimer, jimtong, mfranc, mnewsome, nh2-redhatbugzilla, onestero, pfrankli, ppostler, spoyarek, sunweili, zpytela
Doc Type: Bug Fix
Type: Bug
Last Closed: 2020-04-28 16:03:43 UTC
Bug Blocks: 1594286

Description Kirill Korotaev 2012-07-26 12:43:01 UTC
Description of problem:
It was observed that an application using the malloc/free interfaces was growing in RSS over time. The application was shown not to be leaking memory via accounting and Valgrind checks. Finally we were able to craft an artificial test demonstrating that after glibc free() the memory is not reused for further allocations.

See test.c below for the reproducer.

Steps to Reproduce:
Just run the test and compare the ps output with the output of glibc's malloc_stats().

Actual results:
This application never allocates more than ~64MB of RAM in total. However, its RSS grows up to ~700MB in our tests on RHEL6.3 and FC16.

Expected results:
RSS stays at ~64MB over time, as it does with other allocators such as TCMalloc that we tested.



----------------------- cut test.c -----------------------
#include <stdio.h>
#include <stdlib.h>   /* system() */
#include <string.h>   /* memset() */
#include <malloc.h>   /* memalign(), malloc_stats() */

#define LSIZE (2*65536+4096)
#define SSIZE 556
#define NL 500
#define NS 70000

int main(int argc, char **argv)
{
        void * bigalloc[NL];
        void * smallalloc[NS];
        int i;

        memset(bigalloc, 0, sizeof(bigalloc));
        memset(smallalloc, 0, sizeof(smallalloc));

        for (i = 0; i < (16*1024*1024*1024ULL)/65536; i++) {
                free(bigalloc[i % NL]);
                free(smallalloc[i % NS]);
                smallalloc[i % NS] = malloc(SSIZE);
                bigalloc[i % NL] = memalign(4096, LSIZE);
                memset(smallalloc[i % NS], 0, SSIZE);
                memset(bigalloc[i % NL], 0, LSIZE);

        }
        malloc_stats();

        system("ps axv|fgrep stressalloc");
}
------------------ cut ------------------

Comment 2 Jeff Law 2012-07-26 18:08:37 UTC
This test leaks significant amounts of memory:

==13419== HEAP SUMMARY:
==13419==     in use at exit: 106,504,000 bytes in 70,500 blocks
==13419==   total heap usage: 524,288 allocs, 453,788 frees, 35,579,232,256 bytes allocated
==13419== 
==13419== LEAK SUMMARY:
==13419==    definitely lost: 106,368,276 bytes in 70,498 blocks
==13419==    indirectly lost: 0 bytes in 0 blocks
==13419==      possibly lost: 135,724 bytes in 2 blocks
==13419==    still reachable: 0 bytes in 0 blocks
==13419==         suppressed: 0 bytes in 0 blocks
==13419== Rerun with --leak-check=full to see details of leaked memory


Furthermore, when memory is leaked, it is often the case that the heap becomes more fragmented than normal, and as a result memory cannot be released back to the system.  Heap-based allocators can only return free memory at the top of the heap back to the system.

For your application, I suggest you look very carefully at heap fragmentation.  Just because you don't leak memory doesn't mean the allocator is able to return memory to the system.  I would also suggest you look carefully at changing the sbrk/mmap threshold, which controls whether an allocation is served from the heap or via mmap.  An mmap allocation can always be returned to the system when it is freed.
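For reference, that threshold can also be pinned programmatically. The following is only a minimal sketch: mallopt() and M_MMAP_THRESHOLD are documented glibc interfaces, but the 128 KiB value is purely illustrative, not a recommendation.

----------------------- cut mmap-threshold.c -----------------------
#include <malloc.h>   /* mallopt(), malloc_stats() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        /* Requests at or above this size are served by mmap and are
         * returned to the kernel as soon as they are freed. */
        if (mallopt(M_MMAP_THRESHOLD, 128 * 1024) == 0) {
                fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
                return 1;
        }

        void *p = malloc(256 * 1024);   /* above the threshold -> mmap'ed */
        memset(p, 0, 256 * 1024);
        free(p);                        /* unmapped immediately */

        malloc_stats();
        return 0;
}
------------------ cut ------------------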

Comment 3 Kirill Korotaev 2012-07-27 02:19:42 UTC
Jeff, have you looked at the code itself before closing the bug?
The test operates on a working set of at *most* 70500 objects allocated simultaneously, so it is no wonder the valgrind output reports those objects as "leaked" at exit.

If you look at the code you will find that each loop iteration frees 2 objects and allocates 2 new ones. Yet over time the application's RSS grows without bound.
If you increase the number of loop iterations 10x, you get ~7GB of RSS, while all these 70500 objects effectively use only ~64MB of RAM.

Still not a bug?

Comment 4 Kirill Korotaev 2012-08-15 13:21:58 UTC
Any comments on this bug? This is a really severe issue. Some of our applications simply trigger the OOM killer after some time.

Comment 5 Jeff Law 2012-08-15 13:30:10 UTC
Refer to comment #2, particularly my comments about the testcase leaking objects and heap fragmentation.

If you're a Red Hat customer, please get in touch with your support contact if you need further help understanding heap-based allocators, memory fragmentation, or debugging your OOM issues.

Comment 6 Kirill Korotaev 2012-08-15 13:35:07 UTC
I repeat once again: the test DOES NOT LEAK OBJECTS! Please open your eyes!
It keeps a constant number of objects in memory, all bounded in size, yet its RSS grows without bound.

Comment 7 Kirill Korotaev 2012-08-15 13:38:02 UTC
Just run this modified test and you will see how, over time, your node goes into swap. OBJECTS ARE NOT LEAKING. And please do not tell me about memory fragmentation either - it has nothing to do with this particular case.

----------------------- cut test.c -----------------------
#include <stdio.h>
#include <string.h>   /* memset() */
#include <malloc.h>   /* memalign() */

#define LSIZE (2*65536+4096)
#define SSIZE 556
#define NL 500
#define NS 70000

int main(int argc, char **argv)
{
        void * bigalloc[NL];
        void * smallalloc[NS];
        unsigned int i;

        memset(bigalloc, 0, sizeof(bigalloc));
        memset(smallalloc, 0, sizeof(smallalloc));

        for (i = 0; 1; i++) {
                free(bigalloc[i % NL]);
                free(smallalloc[i % NS]);
                smallalloc[i % NS] = malloc(SSIZE);
                bigalloc[i % NL] = memalign(4096, LSIZE);
                memset(smallalloc[i % NS], 0, SSIZE);
                memset(bigalloc[i % NL], 0, LSIZE);
        }
}
----------------------- cut test.c -----------------------

Comment 8 Oleg Nesterov 2012-08-15 14:12:24 UTC
(In reply to comment #7)
>
> OBJECTS ARE NOT LEAKING.

I tend to agree.

>         for (i = 0; 1; i++) {
>                 free(bigalloc[i % NL]);
>                 free(smallalloc[i % NS]);
>                 smallalloc[i % NS] = malloc(SSIZE);
>                 bigalloc[i % NL] = memalign(4096, LSIZE);
>                 memset(smallalloc[i % NS], 0, SSIZE);
>                 memset(bigalloc[i % NL], 0, LSIZE);
>         }

IOW, this does

         free(ptr);
         ptr = malloc(...);

in a loop, just "ptr" changes. After NS iterations the number
of allocated objects should not grow.

Comment 9 Siddhesh Poyarekar 2012-09-12 14:24:25 UTC
I can't reproduce an unlimited RSS or even an unlimited VSZ with your reproducer. It stays at about 100MB, which is what I expected it to do. I've got glibc-2.12-1.80.el6_3.5.x86_64. What are you trying this on?

Comment 10 Kirill Korotaev 2012-09-13 09:51:21 UTC
Siddhesh, thanks for looking into this. It looks like the original test case is indeed no longer reproducible on the latest glibc-2.12-1.80.el6_3.5.x86_64. However, I've slightly modified it (added a small amount of randomness to the object sizes) and the effect is even more horrible: 1.8GB of RSS is consumed while the application has only ~90MB of allocated objects.

To clarify, and to stay in sync with you, I now test everything on glibc-2.12-1.80.el6_3.5.x86_64 and compile with:
# gcc -m64 -O2 test2.c -o test2

Note the result of the test run in my case, using the code below:
----------------------
Compare 'system bytes' and 'in use bytes':
Arena 0:
system bytes     = 1857011712
in use bytes     =   93592688
Total (incl. mmap):
system bytes     = 1857011712
in use bytes     =   93592688
max mmap regions =        490
max mmap bytes   =   67297280
----------------------
i.e. glibc reports 1.8GB allocated from the system, while only 93MB is really in use. This is confirmed by the kernel and the RSS column in `top`.

The test code is:
---------------------------------------
#include <stdio.h>
#include <stdlib.h>   /* random(), system() */
#include <malloc.h>   /* memalign(), malloc_stats() */
#include <string.h>   /* memset() */

#define LSIZE (2*65536-4096)
#define SSIZE 257
#define NL 500
#define NS 70000

void * bigalloc[NL], * smallalloc[NS];

int main(int argc, char **argv)
{
        int i;

        for (i = 0; i < (16*1024*1024*1024ULL)/65536; i++) {
                int bidx = i % NL, sidx = i % NS;
                int ssz = SSIZE + random() % SSIZE;   /* obj size: SSIZE .. 2*SSIZE */
                int bsz = LSIZE + random() % 8192;    /* obj size: LSIZE .. LSIZE + 8192 */
                /* total objs size < 2*SSIZE*NS + (LSIZE+8192)*NL = 103564000 = ~100MB */

                /* free current small and big object ... */
                free(bigalloc[bidx]);
                free(smallalloc[sidx]);

                /* ... and allocate a new one */
                smallalloc[sidx] = malloc(ssz);
                bigalloc[bidx] = memalign(4096, bsz);
                memset(smallalloc[sidx], 0, ssz);
                memset(bigalloc[bidx], 0, bsz);
        }
        printf("Compare 'system bytes' and 'in use bytes':\n");
        malloc_stats();

        system("ps axv|fgrep stressalloc");
}
---------------------------------------

Comment 11 Siddhesh Poyarekar 2012-09-13 11:33:37 UTC
That is due to heap fragmentation. The glibc malloc has a threshold that helps decide which memory blocks are allocated on the heap and which ones are allocated with mmap. The threshold is dynamic and starts out at 128k, with smaller requests served from the heap and larger ones allocated using mmap. As the program frees larger blocks of memory (that were allocated using mmap), this threshold is increased to allow such larger blocks to be allocated on the heap. The threshold may go up to 32MB in this manner.

This has the downside of heap fragmentation, but it gives very good results in terms of speed. If the large footprint is a problem, you can make this threshold static by exporting the MALLOC_MMAP_THRESHOLD_ environment variable set to the desired value in bytes - note the underscore at the end of the variable name.

So if I run your program above as follows:

MALLOC_MMAP_THRESHOLD_=131072 ./stressalloc

I get a much better utilization of the heap:

Arena 0:
system bytes     =   28114944
in use bytes     =   28039360
Total (incl. mmap):
system bytes     =   96731136
in use bytes     =   96655552
max mmap regions =        500
max mmap bytes   =   68812800
23034 pts/18   S+     0:17      0     2 99129 95304  2.4 ./stressalloc
23055 pts/18   S+     0:00      0   869 113278 1252  0.0 sh -c ps axv|fgrep stressalloc
23057 pts/18   S+     0:00      0    81 106794  728  0.0 fgrep stressalloc

So at this point I'm inclined to close this out as WONTFIX since there is a tunable to tweak this behaviour - unless you have another take on the reproducer that can demonstrate a leak in glibc malloc that is not a result of heap fragmentation.

The glibc malloc source code has good documentation on its behaviour (malloc/malloc.c) and the libc manual has information on all the tunables available to tweak glibc malloc behaviour for specific allocation patterns. There are a few relevant knowledge articles and documents in the Red Hat Customer Portal as well if you have an active RHEL subscription.

Comment 12 Kirill Korotaev 2012-09-13 11:43:54 UTC
Siddhesh, tunables are great. However, what should I set it to if I don't know the object sizes in advance and have a fairly uniform distribution of sizes? Set it to some very small number?

Well, to me this looks like a huge issue, because you can never predict when a server application will start to grow in RSS like this. And things like that force people to switch to TCMalloc and other libc implementations :(

Comment 13 Siddhesh Poyarekar 2012-09-13 12:04:06 UTC
Allocation patterns are in fact quite predictable for most applications, which is why the current glibc scheme works well in the general case. It is for these special cases where the allocator itself cannot adjust well (or cases where the allocator does not know whether the user expects speed or low fragmentation) that user intervention is required to tune its behaviour.

Typically something small enough is sufficient for the mmap threshold. For example, if your program worked well with the RHEL-4 glibc, then you can rest assured that RHEL-5, RHEL-6, etc. will work well with a 128K limit for MALLOC_MMAP_THRESHOLD_. If you don't have such information, then set a small enough size, like 64K or 32K, and you should be good to go. It all comes down to how much wastage you're willing to bear. Stick to a value above 8k though, or else you'll see another kind of fragmentation - in the anonymous memory maps.

Anyway, this is more of a support question than a bug, so I'd suggest either filing a support case to continue the discussion with our support techs or starting a discussion on the upstream libc-help mailing list:

http://www.gnu.org/software/libc/development.html

Closing this as NOTABUG since the behaviour is expected and tunable.

Comment 14 Kirill Korotaev 2012-09-14 07:42:35 UTC
Another simple example with CONSTANT object sizes which still reproduces the issue on the latest RHEL6.3 (glibc-2.12-1.80.el6_3.5) and on FC16 (glibc-2.14.90-24.fc16.9), demonstrating that glibc DOES NOT adapt well and that this happens with the simplest applications. Please note, we hit this issue in REAL LIFE applications and only narrowed the problem down to these examples:

--------------------------------------
#include <stdio.h>
#include <stdlib.h>   /* system() */
#include <string.h>   /* memset() */
#include <malloc.h>   /* memalign(), malloc_stats() */

#define LSIZE (65536+4096)
#define SSIZE 556
#define NL 500
#define NS 70000

int main(int argc, char **argv)
{
        void * bigalloc[NL];
        void * smallalloc[NS];
        int bptr = 0;
        int sptr = 0;
        int i;

        memset(bigalloc, 0, sizeof(bigalloc));
        memset(smallalloc, 0, sizeof(smallalloc));

        for (i = 0; i < (16*1024*1024*1024ULL)/65536; i++) {
                free(bigalloc[i % NL]);
                free(smallalloc[i % NS]);
                smallalloc[i % NS] = malloc(SSIZE);
                bigalloc[i % NL] = memalign(4096, LSIZE);
        }

        malloc_stats();
        system("ps axv|fgrep stressalloc");
}

--------------------------------------

Comment 15 Kirill Korotaev 2012-09-14 07:44:15 UTC
You are also not right about performance and tuning. Regardless of the tuning you suggested, these tests run about 1.5x faster with TCMalloc, and performance does not depend on the MALLOC_MMAP_THRESHOLD_ value.

Comment 16 Kirill Korotaev 2012-09-14 07:53:15 UTC
And here is the last argument about fragmentation, which makes me believe this is a REAL BUG and I have to reopen it:
IT IS NOT HEAP FRAGMENTATION.
Look at the last example - OBJECT SIZES ARE CONSTANT. THERE ARE ONLY 2 SIZE VARIANTS IN TOTAL!

Each iteration does:
- 2 objects are freed
- 2 objects of the same sizes are allocated

Do you think this is something unusual for real-life apps? Or are 556-byte and 68KB allocation sizes unusual?

Comment 17 Kirill Korotaev 2012-09-14 10:23:00 UTC
New info:
One more finding: I can't reproduce the bug if I replace memalign() with malloc().

Comment from ANK on this:
Interesting.

memalign() in glibc is very simple: it is just malloc(size + alignment + some_small_number),
after which the head and tail are returned to the pool.

Probably, this is the culprit. Long shot: when the result of memalign is freed, malloc
does not merge the fragments, so the hole consists of three free parts - head, body, tail -
and is unusable for future memalign calls.
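
To illustrate the idea being described, here is a purely conceptual sketch (NOT glibc's actual memalign implementation; in particular, the pointer returned here could not be passed to free(), whereas glibc arranges its chunk headers so that it can). naive_memalign() is a hypothetical name used only for this sketch.

----------------------- cut naive-memalign.c -----------------------
#include <stdint.h>
#include <stdlib.h>

/* Conceptual sketch of "over-allocate, then round up to the alignment".
 * 'alignment' is assumed to be a power of two.  A real allocator would
 * also track the head and tail slack so it can be reused. */
static void *naive_memalign(size_t alignment, size_t size)
{
        uintptr_t raw = (uintptr_t)malloc(size + alignment);
        if (raw == 0)
                return NULL;
        /* Round up to the next multiple of 'alignment'; the bytes before
         * (head) and after (tail) the aligned block are the slack that,
         * per the comment above, goes back to the pool. */
        return (void *)((raw + alignment - 1) & ~(uintptr_t)(alignment - 1));
}
------------------ cut ------------------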

Comment 18 Jeff Law 2012-10-11 19:14:16 UTC
As has been explained multiple times by multiple engineers, this isn't a bug in glibc's malloc.  Your examples create significant fragmentation problems in the heap.

Heap fragmentation is inherent in all heap based allocators and certain allocation patterns simply aren't going to perform well in heap allocation schemes.

If you want to use glibc's allocator with these kinds of memory access patterns, I would strongly recommend you look at lowering the mmap threshold and increasing the mmap max tunables.
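
As a purely illustrative sketch of that suggestion, both knobs can be set either through mallopt() - M_MMAP_THRESHOLD and M_MMAP_MAX are documented parameters - or through the MALLOC_MMAP_THRESHOLD_ and MALLOC_MMAP_MAX_ environment variables. The values below are examples, not recommendations, and tune_malloc() is just a hypothetical helper name.

----------------------- cut malloc-tuning.c -----------------------
#include <malloc.h>

/* Call early in main(), before the allocation-heavy phase starts. */
static void tune_malloc(void)
{
        mallopt(M_MMAP_THRESHOLD, 64 * 1024); /* serve requests >= 64 KiB via mmap */
        mallopt(M_MMAP_MAX, 262144);          /* raise the cap on concurrent mmap'ed chunks */
}
------------------ cut ------------------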

Comment 19 Florian Weimer 2016-06-03 11:57:28 UTC
*** Bug 1342482 has been marked as a duplicate of this bug. ***

Comment 20 Will.Sun 2016-06-13 06:52:21 UTC
Reopen the ticket.

Comment 21 Will.Sun 2016-06-13 07:54:00 UTC
Hi Red Hat Support,

We have been working with Red Hat support and were advised to continue the discussion on this Bugzilla ticket. We have made a small change to the sample code. Please check the sample code, the Valgrind report, and the "smaps" output, which behave the same way as our real application. We believe this is memory fragmentation related to glibc's allocator.

Help needed:
1. Is this memory fragmentation, based on the sample code provided? If not, why?
2. Any detailed information you can share about tuning the 2 parameters below?
   - mmap threshold
   - mmap max
 
PART 1: TEST CASE:
1. Leave the last small allocation unfreed.
    for (i=0; i < NL; i++) {
       free(bigalloc[i]);
   }
   for (i=0; i < (NS-1) ; i++) {
       free(smallalloc[i]);
   }
2. Memory used by this sample code is 1.7G. 

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                   
1873490 root      20   0 1752m 1.7g 1024 S  0.0  2.7   0:10.52 a     

3. The Valgrind report shows only ~5KB of memory still allocated.

==1854852== HEAP SUMMARY:
==1854852==     in use at exit: 5,671 bytes in 3 blocks
==1854852==   total heap usage: 524,290 allocs, 524,287 frees, 34,460,219,009 bytes allocated
==1854852== LEAK SUMMARY:
==1854852==    definitely lost: 0 bytes in 0 blocks
==1854852==    indirectly lost: 0 bytes in 0 blocks
==1854852==      possibly lost: 288 bytes in 1 blocks
==1854852==    still reachable: 5,383 bytes in 2 blocks
==1854852==         suppressed: 0 bytes in 0 blocks
 
4. "smaps" output showing small heap usage.
 
PART 2: Sample Code:
 
=========================
#include <stdio.h>       /* standard I/O routines                 */
#include <pthread.h>     /* pthread functions and data structures */
#include <stdlib.h>
#include <string.h>
#include <iostream>
#include <malloc.h>
#include <sstream>
#include <time.h>
#include <sys/time.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <vector>
#include <list>
#include <algorithm>
#include <fstream>

#include <pwd.h>
#include <shadow.h>
#include <unistd.h>      /* sleep() */

using namespace std;

#define LSIZE (2*65536-4096)
#define SSIZE 257
#define NL 500
#define NS 70000
void * bigalloc[NL], * smallalloc[NS];

struct stru
{
int a;
int b;
char c[500];
};

/* function to be executed by the new thread */
void*
do_loop(void* data)
{
    char aa[5000];
    char bb[5000];
    char * cc = new char[5000];
    struct stru dd;

    int i;

    for (i = 0; i < (16*1024*1024*1024ULL)/65536; i++) {
        int bidx = i % NL, sidx = i % NS;
        int ssz = SSIZE + random() % SSIZE;   /* obj size: SSIZE .. 2*SSIZE */
        int bsz = LSIZE + random() % 8192;    /* obj size: LSIZE .. LSIZE + 8192 */
        /* total objs size < 2*SSIZE*NS + (LSIZE+8192)*NL = 103564000 = ~100MB */

        /* free current small and big object ... */
        free(bigalloc[bidx]);
        free(smallalloc[sidx]);

        /* ... and allocate a new one */
        smallalloc[sidx] = malloc(ssz);
        bigalloc[bidx] = memalign(4096, bsz);
        memset(smallalloc[sidx], 0, ssz);
        memset(bigalloc[bidx], 0, bsz);
    }

    cout << "finished" << endl;  /* terminate the thread */
    sleep(5);
    
    for (i=0; i < NL; i++) {
        free(bigalloc[i]);
    }
    int j = NS-1;  /* leave the last allocation*/
    //int j = NS;
    for (i=0; i < j; i++) {
        free(smallalloc[i]);
    }

    cout << "released" << endl;

    sleep(20);
    pthread_exit(NULL);
}

/* like any C program, program's execution begins in main */
int
main(int argc, char* argv[])
{
    int        thr_id;         /* thread ID for the newly created thread */
    pthread_t  p_thread;       /* thread's structure                     */
    int        a         = 1;  /* thread 1 identifying number            */
    int        b         = 2;  /* thread 2 identifying number            */

    /* create a new thread that will execute 'do_loop()' */
    thr_id = pthread_create(&p_thread, NULL, do_loop, (void*)&a);
    /* run 'do_loop()' in the main thread as well */
    //do_loop((void*)&b);
    sleep(80);
    
    /* reached after the worker thread has finished */
    return 0;
}
=========================
 

PART 3: SMAPS output

=========================
00400000-00402000 r-xp 00000000 00:1a 7296472                            /home/ghsui/test/qinna/a
Size:                  8 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         4 kB
Private_Dirty:         0 kB
Referenced:            4 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
00601000-00602000 rw-p 00001000 00:1a 7296472                            /home/ghsui/test/qinna/a
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
00602000-0068b000 rw-p 00000000 00:00 0 
Size:                548 kB
Rss:                 548 kB
Pss:                 548 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:       548 kB
Referenced:          548 kB
Anonymous:           548 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
01c76000-01c97000 rw-p 00000000 00:00 0                                  [heap]
Size:                132 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32e9e00000-32e9e20000 r-xp 00000000 fd:00 2228632                        /lib64/ld-2.12.so
Size:                128 kB
Rss:                 108 kB
Pss:                   0 kB
Shared_Clean:        108 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:          108 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea01f000-32ea020000 r--p 0001f000 fd:00 2228632                        /lib64/ld-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea020000-32ea021000 rw-p 00020000 fd:00 2228632                        /lib64/ld-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea021000-32ea022000 rw-p 00000000 00:00 0 
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea600000-32ea789000 r-xp 00000000 fd:00 2228633                        /lib64/libc-2.12.so
Size:               1572 kB
Rss:                 316 kB
Pss:                   1 kB
Shared_Clean:        316 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:          316 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea789000-32ea988000 ---p 00189000 fd:00 2228633                        /lib64/libc-2.12.so
Size:               2044 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea988000-32ea98c000 r--p 00188000 fd:00 2228633                        /lib64/libc-2.12.so
Size:                 16 kB
Rss:                  16 kB
Pss:                   8 kB
Shared_Clean:          8 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         8 kB
Referenced:           16 kB
Anonymous:             8 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea98c000-32ea98d000 rw-p 0018c000 fd:00 2228633                        /lib64/libc-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32ea98d000-32ea992000 rw-p 00000000 00:00 0 
Size:                 20 kB
Rss:                  16 kB
Pss:                  16 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        16 kB
Referenced:           16 kB
Anonymous:            16 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eaa00000-32eaa17000 r-xp 00000000 fd:00 2228635                        /lib64/libpthread-2.12.so
Size:                 92 kB
Rss:                  68 kB
Pss:                   0 kB
Shared_Clean:         68 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:           68 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eaa17000-32eac17000 ---p 00017000 fd:00 2228635                        /lib64/libpthread-2.12.so
Size:               2048 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eac17000-32eac18000 r--p 00017000 fd:00 2228635                        /lib64/libpthread-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eac18000-32eac19000 rw-p 00018000 fd:00 2228635                        /lib64/libpthread-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eac19000-32eac1d000 rw-p 00000000 00:00 0 
Size:                 16 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eb200000-32eb283000 r-xp 00000000 fd:00 2228638                        /lib64/libm-2.12.so
Size:                524 kB
Rss:                  20 kB
Pss:                   0 kB
Shared_Clean:         20 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:           20 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eb283000-32eb482000 ---p 00083000 fd:00 2228638                        /lib64/libm-2.12.so
Size:               2044 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eb482000-32eb483000 r--p 00082000 fd:00 2228638                        /lib64/libm-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
32eb483000-32eb484000 rw-p 00083000 fd:00 2228638                        /lib64/libm-2.12.so
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5400000-3ed5416000 r-xp 00000000 fd:00 2228227                        /lib64/libgcc_s-4.4.7-20120601.so.1
Size:                 88 kB
Rss:                  44 kB
Pss:                   0 kB
Shared_Clean:         44 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:           44 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5416000-3ed5615000 ---p 00016000 fd:00 2228227                        /lib64/libgcc_s-4.4.7-20120601.so.1
Size:               2044 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5615000-3ed5616000 rw-p 00015000 fd:00 2228227                        /lib64/libgcc_s-4.4.7-20120601.so.1
Size:                  4 kB
Rss:                   4 kB
Pss:                   4 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         4 kB
Referenced:            4 kB
Anonymous:             4 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5800000-3ed58e8000 r-xp 00000000 fd:00 3145733                        /usr/lib64/libstdc++.so.6.0.13
Size:                928 kB
Rss:                 452 kB
Pss:                   7 kB
Shared_Clean:        452 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:          452 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed58e8000-3ed5ae8000 ---p 000e8000 fd:00 3145733                        /usr/lib64/libstdc++.so.6.0.13
Size:               2048 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5ae8000-3ed5aef000 r--p 000e8000 fd:00 3145733                        /usr/lib64/libstdc++.so.6.0.13
Size:                 28 kB
Rss:                  28 kB
Pss:                  28 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        28 kB
Referenced:           28 kB
Anonymous:            28 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5aef000-3ed5af1000 rw-p 000ef000 fd:00 3145733                        /usr/lib64/libstdc++.so.6.0.13
Size:                  8 kB
Rss:                   8 kB
Pss:                   8 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         8 kB
Referenced:            8 kB
Anonymous:             8 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
3ed5af1000-3ed5b06000 rw-p 00000000 00:00 0 
Size:                 84 kB
Rss:                  12 kB
Pss:                  12 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        12 kB
Referenced:           12 kB
Anonymous:            12 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a50000000-7f5a53ff2000 rw-p 00000000 00:00 0 
Size:              65480 kB
Rss:               65480 kB
Pss:               65480 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65480 kB
Referenced:        65480 kB
Anonymous:         65480 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a53ff2000-7f5a54000000 ---p 00000000 00:00 0 
Size:                 56 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a54000000-7f5a57ff2000 rw-p 00000000 00:00 0 
Size:              65480 kB
Rss:               54120 kB
Pss:               54120 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     54120 kB
Referenced:        54120 kB
Anonymous:         54120 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a57ff2000-7f5a58000000 ---p 00000000 00:00 0 
Size:                 56 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a58000000-7f5a5bff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a5bff1000-7f5a5c000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a5c000000-7f5a5fff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a5fff1000-7f5a60000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a60000000-7f5a63ff2000 rw-p 00000000 00:00 0 
Size:              65480 kB
Rss:               65480 kB
Pss:               65480 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65480 kB
Referenced:        65480 kB
Anonymous:         65480 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a63ff2000-7f5a64000000 ---p 00000000 00:00 0 
Size:                 56 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a64000000-7f5a67ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a67ff0000-7f5a68000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a68000000-7f5a6bff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a6bff1000-7f5a6c000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a6c000000-7f5a6fff2000 rw-p 00000000 00:00 0 
Size:              65480 kB
Rss:               65480 kB
Pss:               65480 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65480 kB
Referenced:        65480 kB
Anonymous:         65480 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a6fff2000-7f5a70000000 ---p 00000000 00:00 0 
Size:                 56 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a70000000-7f5a73ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a73ff0000-7f5a74000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a74000000-7f5a77ff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a77ff1000-7f5a78000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a78000000-7f5a7bff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a7bff1000-7f5a7c000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a7c000000-7f5a7ffee000 rw-p 00000000 00:00 0 
Size:              65464 kB
Rss:               65464 kB
Pss:               65464 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65464 kB
Referenced:        65464 kB
Anonymous:         65464 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a7ffee000-7f5a80000000 ---p 00000000 00:00 0 
Size:                 72 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a80000000-7f5a83ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a83ff0000-7f5a84000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a84000000-7f5a87ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a87ff0000-7f5a88000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a88000000-7f5a8bff2000 rw-p 00000000 00:00 0 
Size:              65480 kB
Rss:               65480 kB
Pss:               65480 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65480 kB
Referenced:        65480 kB
Anonymous:         65480 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a8bff2000-7f5a8c000000 ---p 00000000 00:00 0 
Size:                 56 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a8c000000-7f5a8fff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a8fff1000-7f5a90000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a90000000-7f5a93ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a93ff0000-7f5a94000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a94000000-7f5a97fef000 rw-p 00000000 00:00 0 
Size:              65468 kB
Rss:               65468 kB
Pss:               65468 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65468 kB
Referenced:        65468 kB
Anonymous:         65468 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a97fef000-7f5a98000000 ---p 00000000 00:00 0 
Size:                 68 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a98000000-7f5a9bfed000 rw-p 00000000 00:00 0 
Size:              65460 kB
Rss:               65460 kB
Pss:               65460 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65460 kB
Referenced:        65460 kB
Anonymous:         65460 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a9bfed000-7f5a9c000000 ---p 00000000 00:00 0 
Size:                 76 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a9c000000-7f5a9ffef000 rw-p 00000000 00:00 0 
Size:              65468 kB
Rss:               65468 kB
Pss:               65468 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65468 kB
Referenced:        65468 kB
Anonymous:         65468 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5a9ffef000-7f5aa0000000 ---p 00000000 00:00 0 
Size:                 68 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aa0000000-7f5aa3ff0000 rw-p 00000000 00:00 0 
Size:              65472 kB
Rss:               65472 kB
Pss:               65472 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65472 kB
Referenced:        65472 kB
Anonymous:         65472 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aa3ff0000-7f5aa4000000 ---p 00000000 00:00 0 
Size:                 64 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aa4000000-7f5aa7ff1000 rw-p 00000000 00:00 0 
Size:              65476 kB
Rss:               65476 kB
Pss:               65476 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65476 kB
Referenced:        65476 kB
Anonymous:         65476 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aa7ff1000-7f5aa8000000 ---p 00000000 00:00 0 
Size:                 60 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aa8000000-7f5aabfef000 rw-p 00000000 00:00 0 
Size:              65468 kB
Rss:               65468 kB
Pss:               65468 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65468 kB
Referenced:        65468 kB
Anonymous:         65468 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aabfef000-7f5aac000000 ---p 00000000 00:00 0 
Size:                 68 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aac000000-7f5aaffed000 rw-p 00000000 00:00 0 
Size:              65460 kB
Rss:               65460 kB
Pss:               65460 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65460 kB
Referenced:        65460 kB
Anonymous:         65460 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5aaffed000-7f5ab0000000 ---p 00000000 00:00 0 
Size:                 76 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5ab0000000-7f5ab3ff5000 rw-p 00000000 00:00 0 
Size:              65492 kB
Rss:               65492 kB
Pss:               65492 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65492 kB
Referenced:        65492 kB
Anonymous:         65492 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5ab3ff5000-7f5ab4000000 ---p 00000000 00:00 0 
Size:                 44 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5ab4000000-7f5ab7feb000 rw-p 00000000 00:00 0 
Size:              65452 kB
Rss:               65452 kB
Pss:               65452 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65452 kB
Referenced:        65452 kB
Anonymous:         65452 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5ab7feb000-7f5ab8000000 ---p 00000000 00:00 0 
Size:                 84 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5ab8000000-7f5abbfe8000 rw-p 00000000 00:00 0 
Size:              65440 kB
Rss:               65440 kB
Pss:               65440 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:     65440 kB
Referenced:        65440 kB
Anonymous:         65440 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5abbfe8000-7f5abc000000 ---p 00000000 00:00 0 
Size:                 96 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5abf422000-7f5abf423000 ---p 00000000 00:00 0 
Size:                  4 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5abf423000-7f5abfe28000 rw-p 00000000 00:00 0 
Size:              10260 kB
Rss:                  36 kB
Pss:                  36 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:        36 kB
Referenced:           36 kB
Anonymous:            36 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7f5abfe36000-7f5abfe38000 rw-p 00000000 00:00 0 
Size:                  8 kB
Rss:                   8 kB
Pss:                   8 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         8 kB
Referenced:            8 kB
Anonymous:             8 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7fffeeea9000-7fffeeebe000 rw-p 00000000 00:00 0                          [stack]
Size:                 88 kB
Rss:                   8 kB
Pss:                   8 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         8 kB
Referenced:            8 kB
Anonymous:             8 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
7fffeeed9000-7fffeeeda000 r-xp 00000000 00:00 0                          [vdso]
Size:                  4 kB
Rss:                   4 kB
Pss:                   0 kB
Shared_Clean:          4 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            4 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]
Size:                  4 kB
Rss:                   0 kB
Pss:                   0 kB
Shared_Clean:          0 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:            0 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB

=========================

PART 4: Valgrind Report

=========================

==1854852== Memcheck, a memory error detector
==1854852== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==1854852== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==1854852== Command: ./a
==1854852== Parent PID: 1010185
==1854852== 
==1854852== 
==1854852== HEAP SUMMARY:
==1854852==     in use at exit: 5,671 bytes in 3 blocks
==1854852==   total heap usage: 524,290 allocs, 524,287 frees, 34,460,219,009 bytes allocated
==1854852== 
==1854852== 288 bytes in 1 blocks are possibly lost in loss record 1 of 3
==1854852==    at 0x4A05FEF: calloc (vg_replace_malloc.c:711)
==1854852==    by 0x32E9E11812: allocate_dtv (dl-tls.c:300)
==1854852==    by 0x32E9E11812: _dl_allocate_tls (dl-tls.c:466)
==1854852==    by 0x32EAA07068: allocate_stack (allocatestack.c:571)
==1854852==    by 0x32EAA07068: pthread_create@@GLIBC_2.2.5 (pthread_create.c:453)
==1854852==    by 0x400D6D: main (in /home/ghsui/test/qinna/a)
==1854852== 
==1854852== 383 bytes in 1 blocks are still reachable in loss record 2 of 3
==1854852==    at 0x4A0728A: malloc (vg_replace_malloc.c:299)
==1854852==    by 0x400BEF: do_loop(void*) (in /home/ghsui/test/qinna/a)
==1854852==    by 0x32EAA07850: start_thread (pthread_create.c:301)
==1854852==    by 0x32EA6E767C: clone (in /lib64/libc-2.12.so)
==1854852== 
==1854852== 5,000 bytes in 1 blocks are still reachable in loss record 3 of 3
==1854852==    at 0x4A07A02: operator new[](unsigned long) (vg_replace_malloc.c:422)
==1854852==    by 0x400AF0: do_loop(void*) (in /home/ghsui/test/qinna/a)
==1854852==    by 0x32EAA07850: start_thread (pthread_create.c:301)
==1854852==    by 0x32EA6E767C: clone (in /lib64/libc-2.12.so)
==1854852== 
==1854852== LEAK SUMMARY:
==1854852==    definitely lost: 0 bytes in 0 blocks
==1854852==    indirectly lost: 0 bytes in 0 blocks
==1854852==      possibly lost: 288 bytes in 1 blocks
==1854852==    still reachable: 5,383 bytes in 2 blocks
==1854852==         suppressed: 0 bytes in 0 blocks
==1854852== 
==1854852== For counts of detected and suppressed errors, rerun with: -v
==1854852== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 5 from 5)


=========================

Comment 22 Florian Weimer 2016-06-13 08:29:43 UTC
(In reply to Will.Sun from comment #21)
> Hi RedHat Support:
>  
> We have been working with RedHat support and been suggest to communicate at
> Bugzilla on this ticket. We have made a small change to the sample code.
> Please check the sample code, Valgrind report and the "smaps" output which
> is acting the same way with our real application. We believe this is the
> memory fragmentation related with glibc's allocator. 

Before we start, can you please indicate how you came up with the reproducer?

Does it match what your code does, or did you just look for a similar-looking bug report in Bugzilla and submit the test case attached to that bug to support?

We want to make sure that the reproducer matches what your software is doing.

Comment 23 Will.Sun 2016-06-13 15:24:08 UTC
Hi Florian 

Because we've been informed by RedHat support that they will not investigate 3rd-party applications. Our application (SSM) is more complicated than the sample code, but the sample code does demonstrate the issue our application (SSM) is facing: it does not release freed memory back to the OS.

This has two consequences:
1. Our end users complain that our application (SSM) doesn't release the memory, when in fact it is glibc that doesn't release it.
2. We have a mechanism to prevent our application (SSM) from using all of the physical memory on the server. The glibc behavior described above causes this mechanism to trigger prematurely.

Our concern is not only the memory fragmentation itself but also the behavior after the memory becomes fragmented. The test case I mentioned in my last update shows that we freed most of the memory after it became fragmented, yet the process memory stays high and the free memory is not returned to the system.

Thanks
Will
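
A minimal sketch, not part of Will's report, of how one might ask glibc to return free heap pages to the kernel with malloc_trim(); how much it can actually release depends on how fragmented the heap is, which is exactly the concern raised above:

----------------------- cut trim-sketch.c -----------------------
#include <malloc.h>
#include <stdlib.h>

int main(void)
{
        enum { N = 10000, SZ = 4096 };
        void *p[N];
        int i;

        for (i = 0; i < N; i++)
                p[i] = malloc(SZ);
        for (i = 0; i < N; i++)
                free(p[i]);

        malloc_stats();     /* heap statistics before trimming */
        malloc_trim(0);     /* pad of 0: release as much free space as possible */
        malloc_stats();     /* compare how much was returned to the kernel */

        return 0;
}
------------------ cut ------------------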

Comment 27 Will.Sun 2016-06-14 05:37:38 UTC
Hi Florian

As requested by Pavel.P I just registered a new account with IBM's email address. Let me know if you have any questions regarding my comment 21 and comment 23.

Thanks
Will

Comment 28 Will.Sun 2016-06-14 05:59:50 UTC
Hi Florian

We asked for the information below during the weekly sync-up call with Pavel Postler from RedHat. We have not received the requested information yet; please help follow up on the two parts below:

1. Red Hat will check internally and provide the documentation that describes the very specific memory allocation behavior: "memory still assigned to the application although the function free() is called"

We quickly went through the documentation link provided by Pavel Postler but still have not found the behavior documented. Can you point it out directly? The behavior is: "The big blocks stay assigned to the application although free() has been called. When the application tries to request more memory, those big free blocks cannot be used; instead, additional memory is allocated from the OS."

2. Red Hat will provide some guidelines for tuning the two parameters, MMAP Max/Threshold.

Thanks
Will.Sun
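
While waiting for that guidance, a minimal sketch of the two knobs in question, using glibc's mallopt() interface; the values below are placeholders, not recommendations, and the same settings can also be applied from the environment via MALLOC_MMAP_THRESHOLD_ and MALLOC_MMAP_MAX_:

----------------------- cut mmap-tuning-sketch.c -----------------------
#include <malloc.h>
#include <stdio.h>

int main(void)
{
        /* Requests at or above the threshold are served by mmap() and are
           unmapped again on free(), so they cannot pin the heap the way
           fragmented sbrk'd memory can.  Values here are illustrative only. */
        if (mallopt(M_MMAP_THRESHOLD, 128 * 1024) == 0)
                fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
        if (mallopt(M_MMAP_MAX, 65536) == 0)
                fprintf(stderr, "mallopt(M_MMAP_MAX) failed\n");

        /* ... application allocations happen after this point ... */

        return 0;
}
------------------ cut ------------------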

Comment 29 Kirill Korotaev 2016-06-14 07:11:48 UTC
Come on, guys, rather than providing guidelines to individual people here and there, please take care of and fix the root cause of the issue. The stock libc allocator simply doesn't work properly. I wouldn't believe it myself if I hadn't hit it so many times, in different companies, with different apps.

My current summary is the following:

1. Use of memalign() leads to huge heap growth over time. This is what this bug is about. Funnily enough, replacing memalign() with malloc() helps.

2. Heavily multi-threaded apps with thousands of threads consume >10x more RAM than expected, and usage grows uncontrollably over time. If this is your case, you are forced to use something like TCMalloc rather than libc.

Comment 32 Hailong Ding 2016-07-10 14:55:01 UTC
Hi all,

I ran into the same problem a few years ago, and I fixed it by using the Google performance tools (https://github.com/gperftools/gperftools). It's very simple: no need to modify your source code, just link the library, and remember to call "MallocExtension::instance()->ReleaseFreeMemory()" frequently.
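
For reference, a minimal sketch of the call Hailong mentions, written here against the C shim header that gperftools ships alongside its C++ API (the header name and linking with -ltcmalloc are assumptions about the gperftools installation, not something stated in this bug); calling it periodically hands tcmalloc's free spans back to the kernel:

----------------------- cut tcmalloc-release-sketch.c -----------------------
#include <gperftools/malloc_extension_c.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        int i;

        for (i = 0; i < 10; i++) {
                void *p = malloc(1 << 20);   /* stand-in for real application work */
                free(p);

                MallocExtension_ReleaseFreeMemory();   /* return free spans to the OS */
                sleep(1);
        }
        return 0;
}
------------------ cut ------------------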

Comment 33 Jim Tong 2017-09-06 18:41:41 UTC
Hi Hailong, I'm not sure I follow: you say there is no need to modify the source code, but then you write "remember to call MallocExtension::instance()->ReleaseFreeMemory() frequently". Is that change necessary or not?

Comment 35 Florian Weimer 2018-07-30 08:14:28 UTC
(In reply to Kirill Korotaev from comment #17)
> memalign() in glibc is very simple: it is just malloc(size + alignment +
> some_small_number) then tail and head returned to pool.

Head and tail are put in the tcache or in a fastbin, so a subsequent free of the aligned block will not lead to immediate coalescing because from the point of view of the lower-level allocator, adjacent blocks are still in use.  (All this is not what happens in simple cases.)

> Probably, this is the culprit. Long shot: when result of memalign is freed,
> malloc
> does not merge fragments, so hole will be of three free parts: head, body,
> tail and it is unusable for future memalign.

The problem seems to be that a memalign cannot find an already-aligned block which has just been freed (or redo the allocation split used to create the aligned block in the first place), so another block has to be split.

The Ruby allocator uses memalign, so we really need to fix this.
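
A minimal sketch, separate from the reporter's test, that makes the effect Florian describes easy to observe: free an aligned block and immediately request the same size and alignment again. On an affected glibc the second call often does not hand back the block that was just freed; the exact outcome depends on the glibc version and tunables, so this is illustrative only:

----------------------- cut memalign-reuse-sketch.c -----------------------
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        const size_t align = 4096;
        const size_t size  = 2 * 65536 + 4096;
        void *a, *b;

        a = memalign(align, size);
        printf("first  memalign: %p\n", a);
        free(a);

        b = memalign(align, size);
        /* Comparing against the stale value of 'a' is for illustration only. */
        printf("second memalign: %p (%s)\n", b,
               (b == a) ? "reused the freed block" : "split a new block");
        free(b);

        return 0;
}
------------------ cut ------------------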

Comment 40 Carlos O'Donell 2020-04-28 16:03:43 UTC
We are going to be tracking this work in upstream here:
https://sourceware.org/bugzilla/show_bug.cgi?id=14581

We are going to be marking this bug as CLOSED/UPSTREAM, and when the upstream bug is fixed we may consider this for backport into RHEL7 and RHEL8.

We are already looking at testing memalign fixes in Fedora and will continue to pursue that integration work to ensure that future RHEL releases take advantage of these aligned regions.

Comment 41 A. Fernando 2020-07-06 14:28:19 UTC
Hello,

I am using RH Workstation 7.6 (64-bit). In the 'tcsh' shell I used "setenv MALLOC_ARENA_MAX 4" before launching the applications suspected of causing problems with memory/CPU utilization.
My observation is that this MALLOC_ARENA_MAX glibc tuning helped in my case to avoid excessive CPU load.
Following are my glibc rpms:

rpm -qa | grep glibc
glibc-headers-2.17-260.el7_6.3.x86_64
glibc-2.17-260.el7_6.3.x86_64
glibc-devel-2.17-260.el7_6.3.i686
compat-glibc-headers-2.12-4.el7.x86_64
glibc-common-2.17-260.el7_6.3.x86_64
glibc-devel-2.17-260.el7_6.3.x86_64
compat-glibc-2.12-4.el7.x86_64
glibc-2.17-260.el7_6.3.i686

Thanks
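
The same arena cap can also be applied from inside the program; a minimal sketch assuming glibc's M_ARENA_MAX mallopt parameter (present in the glibc 2.17 builds listed above), which must run before the process creates additional arenas:

----------------------- cut arena-max-sketch.c -----------------------
#include <malloc.h>
#include <stdio.h>

int main(void)
{
        /* Programmatic equivalent of "setenv MALLOC_ARENA_MAX 4".
           mallopt() returns 1 on success and 0 on failure. */
        if (mallopt(M_ARENA_MAX, 4) == 0)
                fprintf(stderr, "mallopt(M_ARENA_MAX) failed\n");

        /* ... start worker threads / run the real workload here ... */

        return 0;
}
------------------ cut ------------------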