Bug 86064 - Kernel oops with very large number of software RAID arrays
Status: CLOSED ERRATA
Product: Red Hat Linux
Classification: Retired
Component: kernel
Version: 7.3
Hardware: i686 Linux
Priority: medium
Severity: high
Assigned To: Arjan van de Ven
QA Contact: Brian Brock
Reported: 2003-03-13 08:37 EST by Andrew Rechenberg
Modified: 2007-04-18 12:51 EDT

Doc Type: Bug Fix
Last Closed: 2004-01-30 19:59:37 EST

Attachments
Patch to convert /proc/mdstat to use seq_file (14.82 KB, patch)
2003-03-13 08:41 EST, Andrew Rechenberg

Description Andrew Rechenberg 2003-03-13 08:37:17 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)

Description of problem:
When using a large number of Linux software RAID arrays I receive a kernel
OOPS.  With some help from users on the kernel mailing list and the linux-raid
list, we have determined that /proc/mdstat is overflowing its 4 KB page and
overwriting some other part of kernel memory, causing the OOPS.

This problem appears to manifest itself around 24-27 RAID1 arrays.

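For context, this is the classic failure mode of the pre-seq_file /proc read
interface, where the handler is handed a single page-sized buffer. A minimal
illustrative sketch (not code from this kernel; nr_arrays and the output
format are hypothetical stand-ins) of how unbounded sprintf() output walks
off the end of that page:

#include <linux/kernel.h>     /* sprintf() */
#include <linux/proc_fs.h>

static int nr_arrays;         /* hypothetical count of configured md arrays */

/* Classic 2.4-era read_proc handler: 'page' is one page-sized buffer
 * and nothing in this pattern bounds the writes into it. */
static int md_read_proc(char *page, char **start, off_t off,
                        int count, int *eof, void *data)
{
        int len = 0;
        int i;

        for (i = 0; i < nr_arrays; i++)
                len += sprintf(page + len, "md%d : active raid1 ...\n", i);

        /* With roughly 25 or more arrays, len exceeds the 4 KB page and
         * the sprintf() calls above have already written past the buffer,
         * corrupting whatever followed it in kernel memory. */
        *eof = 1;
        return len;
}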
Version-Release number of selected component (if applicable):
kernel-2.4.18-26.7.x

How reproducible:
Always

Steps to Reproduce:
1. Boot with 2.4.18-26.7.x
2. Create a large number of software RAID1 arrays.
3. Watch the kernel go bye-bye
    

Actual Results:  Kernel OOPS in do_try_to_free_pages

Expected Results:  No OOPS

Additional info:
Comment 1 Andrew Rechenberg 2003-03-13 08:41:00 EST
Created attachment 90582 [details]
Patch to convert /proc/mdstat to use seq_file

The attached patch seems to resolve the OOPS I was seeing.  I currently have 52
SCSI disks in 26 RAID1 arrays and one RAID0 stripe across those.  It has been
running successfully on test hardware under load for approximately 24 hours.
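For reference, the seq_file conversion eliminates the fixed buffer by emitting
one record per iterator step, with the seq_file core growing its buffer as
needed. A minimal sketch of that pattern, written against the modern seq_file
helpers rather than the code in the attached patch (md_entry and md_list are
hypothetical stand-ins for the real md device list):

#include <linux/list.h>
#include <linux/seq_file.h>

struct md_entry {                 /* hypothetical per-array record */
        struct list_head list;
        int unit;
};

static LIST_HEAD(md_list);        /* stands in for the real md device list */

static void *md_seq_start(struct seq_file *m, loff_t *pos)
{
        return seq_list_start(&md_list, *pos);
}

static void *md_seq_next(struct seq_file *m, void *v, loff_t *pos)
{
        return seq_list_next(v, &md_list, pos);
}

static void md_seq_stop(struct seq_file *m, void *v)
{
}

static int md_seq_show(struct seq_file *m, void *v)
{
        struct md_entry *e = list_entry(v, struct md_entry, list);

        /* seq_printf() appends to a buffer the seq_file core resizes on
         * demand, so output is no longer capped at one 4 KB page. */
        seq_printf(m, "md%d : active\n", e->unit);
        return 0;
}

static const struct seq_operations md_seq_ops = {
        .start = md_seq_start,
        .next  = md_seq_next,
        .stop  = md_seq_stop,
        .show  = md_seq_show,
};

Because show() is called once per record, a read of /proc/mdstat scales with
the number of arrays instead of silently overflowing a fixed buffer.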
Comment 2 Andrew Rechenberg 2004-01-30 19:59:37 EST
As an FYI, the 2.4.20-x series of errata kernels seems to have switched
to seq_file for md.
