Bug 176650 - Out of memory handler kills oracle process when creating a large(ish) tablespace over nfs
Summary: Out of memory handler kills oracle process when creating a large(ish) tablespace over nfs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Version: 4.0
Hardware: i386
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Larry Woodman
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks: 198694 422551 430698
 
Reported: 2005-12-28 16:09 UTC by Adrian Worthington
Modified: 2018-10-20 00:51 UTC (History)
8 users

Fixed In Version: RHEL4.6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-02-28 20:20:07 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vmstat output during creation of tablespace (6.46 KB, text/plain)
2005-12-28 16:11 UTC, Adrian Worthington
no flags Details
output from top -b during tablespace creation (160.17 KB, text/plain)
2005-12-28 16:12 UTC, Adrian Worthington
no flags Details
kern.log file during tablespace creation. (283.06 KB, application/x-gzip)
2006-01-24 17:47 UTC, Adrian Worthington
no flags Details
contents of /proc/slabinfo following oom-killer. (2.21 KB, application/x-gzip)
2006-01-24 17:49 UTC, Adrian Worthington
no flags Details
OOM-kill log output (9.66 KB, text/plain)
2006-12-04 16:25 UTC, Guy Streeter
no flags Details

Description Adrian Worthington 2005-12-28 16:09:18 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8) Gecko/20051202 Fedora/1.5-0.5.0 Firefox/1.5

Description of problem:
Using oracle 9.2.0.7, when attempting to create a large tablespace (e.g. 5Gb) on a partition mounted via NFS onto a NetApp Filer, the process consumes all the available memory and is killed by the kernel.
The hardware is an HP BL25p dual-processor Opteron 280 dual-core server with 4Gb RAM and 8Gb swap, running 32-bit RHEL 4 (Nahant Update 2) with all updates applied and oracle 9.2.0.7. The disks are hardware-mirrored 36Gb SCSI (using a Smart Array controller) and are partitioned using lvm. There are two partitions dedicated to oracle, /ora00 and /ora01. /ora00 is a 4Gb local filesystem used to mirror online redo logs. /ora01 is an NFS-mounted partition on a NetApp filer: a 574Gb partition used for all oracle datafiles (on various servers), of which ~200Gb is unused. It is mounted using the following options:
filer:/vol/ora01 /ora01 nfs hard,intr,bg,vers=3,proto=tcp,wsize=8192,rsize=8192
Immediately following a reboot an attempt to create a large tablespace on the /ora01 partition aborts and the following message can be found in the syslog.

===============================================================================

Dec 28 14:57:55 uord4 kernel: oom-killer: gfp_mask=0xd0
Dec 28 14:57:55 uord4 kernel: Mem-info:
Dec 28 14:57:55 uord4 kernel: DMA per-cpu:
Dec 28 14:57:55 uord4 kernel: cpu 0 hot: low 2, high 6, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 0 cold: low 0, high 2, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 1 hot: low 2, high 6, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 1 cold: low 0, high 2, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 2 hot: low 2, high 6, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 2 cold: low 0, high 2, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 3 hot: low 2, high 6, batch 1
Dec 28 14:57:55 uord4 kernel: cpu 3 cold: low 0, high 2, batch 1
Dec 28 14:57:55 uord4 kernel: Normal per-cpu:
Dec 28 14:57:55 uord4 kernel: cpu 0 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 0 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 1 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 1 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 2 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 2 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 3 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 3 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: HighMem per-cpu:
Dec 28 14:57:56 uord4 kernel: cpu 0 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 0 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 1 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 1 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 2 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 2 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 3 hot: low 32, high 96, batch 16
Dec 28 14:57:56 uord4 kernel: cpu 3 cold: low 0, high 32, batch 16
Dec 28 14:57:56 uord4 kernel: 
Dec 28 14:57:56 uord4 kernel: Free pages:       15060kB (1600kB HighMem)
Dec 28 14:57:56 uord4 kernel: Active:118363 inactive:834629 dirty:1565 writeback:342868 unstable:0 free:3765 slab:34843 mapped:35615 pagetables:930
Dec 28 14:57:56 uord4 kernel: DMA free:12588kB min:16kB low:32kB high:48kB active:0kB inactive:0kB present:16384kB pages_scanned:174 all_unreclaimable? yes
Dec 28 14:57:56 uord4 kernel: protections[]: 0 0 0
Dec 28 14:57:56 uord4 kernel: Normal free:872kB min:928kB low:1856kB high:2784kB active:0kB inactive:720116kB present:901120kB pages_scanned:1492029 all_unreclaimable? yes
Dec 28 14:57:56 uord4 kernel: protections[]: 0 0 0
Dec 28 14:57:56 uord4 kernel: HighMem free:1600kB min:512kB low:1024kB high:1536kB active:473452kB inactive:2618400kB present:3104728kB pages_scanned:0 all_unreclaimable? no
Dec 28 14:57:56 uord4 kernel: protections[]: 0 0 0
Dec 28 14:57:56 uord4 kernel: DMA: 3*4kB 4*8kB 4*16kB 2*32kB 4*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 2*4096kB = 12588kB
Dec 28 14:57:56 uord4 kernel: Normal: 0*4kB 1*8kB 0*16kB 1*32kB 1*64kB 0*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 872kB
Dec 28 14:57:56 uord4 kernel: HighMem: 2*4kB 3*8kB 2*16kB 2*32kB 3*64kB 0*128kB 1*256kB 2*512kB 0*1024kB 0*2048kB 0*4096kB = 1600kB
Dec 28 14:57:56 uord4 kernel: Swap cache: add 0, delete 0, find 0/0, race 0+0
Dec 28 14:57:56 uord4 kernel: 0 bounce buffer pages
Dec 28 14:57:56 uord4 kernel: Free swap:       8388600kB
Dec 28 14:57:56 uord4 kernel: 1005558 pages of RAM
Dec 28 14:57:56 uord4 kernel: 776182 pages of HIGHMEM
Dec 28 14:57:56 uord4 kernel: 9317 reserved pages
Dec 28 14:57:56 uord4 kernel: 742211 pages shared
Dec 28 14:57:56 uord4 kernel: 0 pages swap cached
Dec 28 14:57:56 uord4 kernel: Out of Memory: Killed process 3349 (oracle)

===============================================================================

I can reproduce this result each time, although depending upon when the oom-killer kicks in, the point at which the oracle process is killed varies: sometimes running the database creation script fails early, other times it gets as far as creating the larger tablespaces.
I looked in bugzilla for similar issues and thought that #156437 might be the cause; however, using the test kernel from http://people.redhat.com/sct/.private/test-kernels/kernel-2.6.9-22.EL.sct.1/ did not fix the issue. (Obviously the noreservation mount flag is not applicable for NFS.)

I have captured some vmstat and top traces while attempting this operation, and they are attached to this report.

Version-Release number of selected component (if applicable):
kernel-smp-2.6.9-22.0.1.EL

How reproducible:
Always

Steps to Reproduce:
1. create tablespace CDB_DATA
        datafile '/ora01/oradata/cdbdm/cdbdata01.dbf'
        size 5120M
        autoextend on
        next 256M
        extent management local;
2. crash.

Actual Results:  see above for syslog output

Expected Results:  successful creation of tablespace

Additional info:

Comment 1 Adrian Worthington 2005-12-28 16:11:55 UTC
Created attachment 122615 [details]
vmstat output during creation of tablespace

Comment 2 Adrian Worthington 2005-12-28 16:12:49 UTC
Created attachment 122616 [details]
output from top -b during tablespace creation

Comment 4 Larry Woodman 2006-01-19 19:02:45 UTC
Is there any way to remove NFS from the picture and use only local file systems
and devices, in order to see if this is an NFS problem or a generic memory
reclaiming problem?

Larry Woodman


Comment 6 Adrian Worthington 2006-01-24 15:09:11 UTC
Larry,

this does seem to be a problem with NFS; I have successfully created the
tablespace on a local ext3 filesystem.
Using the same layout I created a local 6G (all the space left) fs, mounted
it as /ora01, and ran the following script
--
create tablespace cdb_data
datafile '/ora01/oradata/cdbdm/cdbdata01.dbf'
size 5120M
autoextend on
next 256M
extent management local
/
--
which completed successfully.

I have vmstat logs taken during this tablespace create operation if
they are useful, and a snapshot of the slabinfo after the operation
completed.

Can you let me know what information you want me to capture during the
creation of the tablespace over NFS in order to diagnose the problem?

thanks

-- 
adrian


Comment 7 Larry Woodman 2006-01-24 15:57:29 UTC
Adrian, can you reproduce this problem and collect AltSysrq-P, AltSysrq-W
and AltSysrq-T outputs for me, so I can see exactly what the various processes
and kernel threads are doing and waiting on?

Thanks, Larry


Comment 8 Adrian Worthington 2006-01-24 17:44:36 UTC
Larry,

I have reproduced the crash with the outputs you asked for. I modified syslog to
output kernel messages to kern.log, which is attached.

I ran the following script to create the tablespace
create tablespace CDB_DATA
datafile '/ora01/oradata/cdbdm/cdbdata01.dbf'
size 8192M
autoextend on
next 256M
extent management local
/

and to get the traces you asked for I ran this command on the console:

while /bin/true; do
    perl -e 'print "=" x 80' | tee -a /var/log/kern.log
    echo p > /proc/sysrq-trigger
    echo w > /proc/sysrq-trigger
    echo t > /proc/sysrq-trigger
    sleep 5
done

The first tablespace command completed; this accounts for all the logging before
17:30.

I then created a new tablespace (again 8G) using
create tablespace CDB_DATA2
datafile '/ora01/oradata/cdbdm/cdbdata02.dbf'
size 8192M
autoextend on
next 256M
extent management local
/

and changed the sysrq sh command to log every 15 seconds. This time the
oom-killer killed the oracle processes quite quickly. This is the situation
occurring after 17:30, delimited in the kern.log file by two rows of "==="s
(line 29356).

Also attached is the contents of /proc/slabinfo after the crash occurred.

-- 
adrian

Comment 9 Adrian Worthington 2006-01-24 17:47:34 UTC
Created attachment 123620 [details]
kern.log file during tablespace creation.

Comment 10 Adrian Worthington 2006-01-24 17:49:15 UTC
Created attachment 123621 [details]
contents of /proc/slabinfo following oom-killer.

Comment 14 Bob Johnson 2006-04-11 16:14:09 UTC
This issue is on Red Hat Engineering's list of planned work items 
for the upcoming Red Hat Enterprise Linux 4.4 release.  Engineering 
resources have been assigned and barring unforeseen circumstances, Red 
Hat intends to include this item in the 4.4 release.

Comment 15 Larry Woodman 2006-04-18 20:40:02 UTC
The problem here is that the entire Normal zone is in writeback state.  All pages
have been queued up for IO but the completion has not occurred.

Active:32046 inactive:937141 dirty:3 writeback:348575


Larry Woodman
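
The writeback backlog described above can be watched live while reproducing; a minimal sketch (the Dirty and Writeback fields are as reported by 2.6-era /proc/meminfo, and the 5-second interval is arbitrary):

```shell
# Sample the dirty/writeback page counters every 5 seconds while the
# tablespace is being created. "Dirty" = pages queued for write,
# "Writeback" = pages with IO actually in flight.
while true; do
    date
    grep -E '^(Dirty|Writeback)' /proc/meminfo
    sleep 5
done
```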


Comment 16 Steve Dickson 2006-04-19 14:34:45 UTC
When the system is in this state, is there any over-the-wire traffic?
If so, what is the traffic?
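
One way to check (a sketch; the interface name eth0 and the hostname "filer" are placeholders for this setup, and port 2049 assumes standard NFS):

```shell
# Watch for NFS traffic between the client and the filer
tcpdump -i eth0 -n host filer and port 2049

# Client-side RPC statistics; a climbing "retrans" count suggests
# the server has stopped responding
nfsstat -rc
```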

Comment 21 RHEL Program Management 2006-09-07 19:35:00 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.


Comment 26 Alexander N. Spitzer 2006-10-05 13:46:10 UTC
I opened bug 163555 about this same thing...
The problem is when you are writing a file larger than the physical memory that
you have in the machine...
I.E. you have 1GB of mem, and you try to right a 4GB file over NFS.

This solves the problem:

echo 100 > /proc/sys/vm/lower_zone_protection
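
A sketch of the scenario and workaround together (the dd target path is a placeholder, and the lower_zone_protection tunable exists only on 32-bit kernels of this era):

```shell
# Reproducer sketch: write a file larger than RAM to the NFS mount
dd if=/dev/zero of=/ora01/bigfile bs=1M count=8192

# Workaround: reserve more low memory, applied immediately...
echo 100 > /proc/sys/vm/lower_zone_protection
# ...and persisted across reboots
echo "vm.lower_zone_protection = 100" >> /etc/sysctl.conf
```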


Comment 27 Alexander N. Spitzer 2006-10-05 13:48:47 UTC
I meant "write a 4GB file" not "right a 4GB file"

Comment 29 Guy Streeter 2006-12-04 16:25:03 UTC
Created attachment 142745 [details]
OOM-kill log output

Comment 30 Larry Woodman 2006-12-04 16:36:46 UTC
Is the last output from an x86 system?  It doesn't look like it to me; there is
no highmem, so you need to increase /proc/sys/vm/min_free_kbytes to 4 times the
default value.  This is what was done upstream and what we are considering for
RHEL4.  Can someone try this and see if it works?  It should yield the same
results as increasing lower_zone_protection on an x86 machine.

Larry Woodman
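
A sketch of the suggested change (the default value of min_free_kbytes varies with installed RAM, so it is read back first rather than hard-coded):

```shell
# Read the current (default) value and raise it to 4x
default=$(cat /proc/sys/vm/min_free_kbytes)
echo $((default * 4)) > /proc/sys/vm/min_free_kbytes
# Persist across reboots
echo "vm.min_free_kbytes = $((default * 4))" >> /etc/sysctl.conf
```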

Comment 32 Guy Streeter 2006-12-08 15:45:03 UTC
Larry,
 Should I create a separate bugzilla entry for this x86 issue?


Comment 35 Larry Woodman 2007-02-12 17:07:33 UTC
This is an 8GB/2-million page system, and at the time of the OOM kill
significantly less than 1.6GB/400-thousand of those pages (20%) are accounted
for.  In addition, almost half of that 1.6GB/400-thousand pages of memory is in
writeback or unstable state, indicating that the NFS server just stopped talking.
-------------------------------------------------------------------------------
Active:369 inactive:172724 dirty:0 writeback:85330 unstable:87150 free:889
slab:12512 mapped:29 pagetables:397
-------------------------------------------------------------------------------
We need to figure out 1) who allocated 80% of the system memory and why, and
2) why the NFS server is no longer responding.

Larry Woodman


Comment 37 RHEL Program Management 2007-04-17 21:22:15 UTC
Although this bugzilla was approved for RHEL 4.5, we were unable
to resolve it in time to be included in the release.  Therefore
it is now proposed for RHEL 4.6.

Comment 38 RHEL Program Management 2007-04-17 21:26:35 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.

Comment 41 Larry Woodman 2007-09-04 14:05:07 UTC
At this point I am still waiting to hear whether increasing
/proc/sys/vm/min_free_kbytes resolves this issue for the customer.  I have
verified that it does fix the problem internally, with the limited amount of
testing I have been able to do myself.  I did increase the default value of
min_free_kbytes by a factor of 4, as that is what was done for the upstream
kernel.  Please have someone try to reproduce this issue with the latest
RHEL4-U6 kernel.

Larry Woodman
 

Comment 42 Issue Tracker 2007-09-04 15:36:31 UTC
increasing min_free_kbytes by 4x didn't help for LLNL, nor did it appear
to resolve the issue for another customer in comment #31

Product changed from 'Red Hat Enterprise Linux 4 U2' to 'Red Hat
Enterprise Linux'
Internal Status set to 'Waiting on Engineering'
Version set to: '4 U2'
Ticket type set to: 'Problem'

This event sent from IssueTracker by kbaxley 
 issue 82347

Comment 43 Larry Woodman 2007-09-04 16:03:14 UTC

Ken, can you try to reproduce this issue with the latest RHEL4-U6 kernel?
Several changes have been made to the VM: OOM killer, x86 and x86_64 zones,
paging thresholds and bounce buffer control.

Thanks, Larry



Comment 44 Issue Tracker 2007-09-14 13:18:16 UTC
Larry, 

I ran LLNL's reproducer several times over the last couple of days with
the latest RHEL4.6 kernels, and the OOM-killer issues in this particular
instance were not reproduced.  

In the past, on x86 nodes with 3-4GB of memory, the OOM killer would
launch and, over the course of a few minutes, kill every userspace process
on the system until the system panicked.

With the latest kernels, I can no longer reproduce this problem.

Internal Status set to 'Waiting on Engineering'

This event sent from IssueTracker by kbaxley 
 issue 82347

Comment 45 Alexander N. Spitzer 2007-09-14 13:48:44 UTC
I opened bug 163555 about this same thing...
The problem is when you are writing a file larger that the physical memory that
you have in the machine...
I.E. you have 1GB of mem, and you try to right a 4GB file over NFS.

This solves the problem:
echo 100 > /proc/sys/vm/lower_zone_protection


(In reply to comment #0)
