The following has been reported by IBM LTC: Problem with malloc()

Hardware Environment: zSeries

Software Environment: RHEL 3 RC 2

Steps to Reproduce:
1. Use malloc()
2.
3.

Actual Results:
Basically, you can't malloc an amount of memory that is approximately >= 1024 MB (I say approximately because there may be some rounding issues; in testing, 1022*1024*1024 succeeded but 1023*1024*1024 did not). Red Hat can test this either by writing a simple program that calls malloc() and checks the return value, or by using LTP and running the mem01 test with the -m option. This was on a system with enough free memory that the call should have succeeded. Output of free -tm is:

[root@osatest1 root]# free -tm
             total       used       free     shared    buffers     cached
Mem:          1889        245       1643          0         41        118
-/+ buffers/cache:         85       1804
Swap:          247          0        247
Total:        2136        245       1891

On 64-bit, this test succeeded even when a value large enough to consume all available memory was requested. The 31-bit system had enough memory to satisfy the request with free memory left over.

Expected Results:

Additional Information:
Why do you expect something else? s390 has 31-bit addressing, so the total virtual address space is 2 GB.

asm-s390/processor.h:#define TASK_SIZE (0x80000000)
asm-s390/processor.h:#define TASK_UNMAPPED_BASE (TASK_SIZE / 2)

TASK_UNMAPPED_BASE is where mmap with a NULL first argument starts to allocate memory. You tried a dynamically linked program, which means that by the time malloc runs you already have ld.so, libc.so and maybe a few other pages mapped at 0x40000000 .. (0x40000000 + a few MB), so there is no longer a contiguous 2 GB region of address space. If you link your program statically, you should be able to use up to 1.9 GB of memory in one chunk; otherwise you can allocate that much memory only in smaller chunks.
------- Additional Comments From billgo.com 2003-28-10 17:29 -------
Subject: [Bug 106708] LTC4859-Problem with malloc()