Bug 129727 - Can't install Opteron with ICP GDT8114RZ RAID with > 4 GB
Alias: None
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Version: 3.0
Hardware: x86_64 Linux
Target Milestone: ---
Assignee: Doug Ledford
QA Contact: Brian Brock
Depends On:
Blocks: 132991
Reported: 2004-08-12 08:56 UTC by Daniel Riek
Modified: 2007-11-30 22:07 UTC (History)
6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2007-06-07 16:24:47 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
U3_install_max_ids-3_syslog (17.82 KB, text/plain)
2004-08-12 08:58 UTC, Daniel Riek
U3_install_max_ids-3_anacondadump (50.80 KB, text/plain)
2004-08-12 08:59 UTC, Daniel Riek
Messages File with failed gdth driver attempts (63.59 KB, text/plain)
2004-08-17 10:56 UTC, Need Real Name

Description Daniel Riek 2004-08-12 08:56:30 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040124

Description of problem:
System has 2 CPUs (AMD Opteron 248), TYAN Thunder K8W (S2885) mainboard,
4 GB of RAM total (2 x 1 GB per CPU, TRS21192 modules), PNY Quadro FX
3000 (NVidia), and an ICP GDT8114RZ RAID controller.

The disk layout is:
2 disks in a RAID 0, 70 GB, IDs 0 and 1
1 disk with 36 GB ID 2
Supermicro SCA backplane ID 6

During the installation process it seems that anaconda tries to
partition the SCSI hotplug backplane:
<4>Attached scsi disk sdc at scsi0, channel 2, id 6, lun 0
<4> SCSI device sda: 143347995 512-byte hdwr sectors (73394MB)
<6> Partition check:
<6> sda: unknown partition table
<4> SCSI device sdb: 71665965 512-byte hdwr sectors (36699MB)
<6> sdb: unknown partition table
<6>scsi: device set offline - not ready or command retry failed after
bus reset:
 host 0 channel 2 id 6 lun 0
<4>sdc : READ CAPACITY failed.
<4>sdc : status = 1, message = 00, host = 0, driver = 00
<4>sdc : sense not available.
<4>sdc : block size assumed to be 512 bytes, disk size 1GB..
<6> sdc: I/O error: dev 08:20, sector 0
<4> I/O error: dev 08:20, sector 0
<4> unable to read partition table

With U3 the customer gets a different error when loading the gdth
driver with the option max_ids=3 to prevent access to the backplane.
In that case the install fails after starting LVM (see attached logs).
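For reference, on an already-installed 2.4-era system the workaround mentioned above would typically be set in /etc/modules.conf. This is only a sketch following the usual 2.4 convention; the max_ids value comes from this report, and the exact option syntax should be checked against the gdth driver documentation:

```
# /etc/modules.conf (2.4 kernels) -- sketch only; max_ids value taken from this report
alias scsi_hostadapter gdth
# Limit probed SCSI IDs to 0-2 so the SCA backplane at ID 6 is never touched
options gdth max_ids=3
```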

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Use a system with ICP RAID controller similar to the description above.
2. Try to install RHEL3U2 x86_64


Actual Results:  Installation fails after trying to partition the SCA
hotplug backplane.

Expected Results:  Partition discs and install OS

Additional info:

Comment 1 Daniel Riek 2004-08-12 08:58:37 UTC
Created attachment 102640 [details]

Syslog from U3 installation with gdth option max_ids=3

Comment 2 Daniel Riek 2004-08-12 08:59:51 UTC
Created attachment 102641 [details]

Anaconda dump from U3 installation attempt with gdth option max_ids=3

Comment 6 Need Real Name 2004-08-13 09:32:16 UTC
We are the ones with the problem and the aforementioned system.
If there are any suggestions or things to try out (patches, special boot
switches, or the like), we are more than happy to try them.


Roman Gugganig

Comment 9 Need Real Name 2004-08-17 10:56:55 UTC
Created attachment 102784 [details]
Messages File with failed gdth driver attempts

Comment 10 Need Real Name 2004-08-17 10:57:54 UTC
FYI, we have now tried an installation on a different SCSI controller,
putting in the ICP afterwards. Now I am able to load the gdth module,
but it returns weird values for the backplane. Sometimes it's
recognized as a separate hard disk, sometimes as a generic SCSI device.
When it's recognized as a hard disk, the first attempt to "fdisk -l /dev/sdb"
freezes the machine immediately.
Recognized as a generic device, I get the chance to partition and mkfs
the disk, but after several I/O operations on it (cp) the system
freezes shortly after producing a bunch of errors, all looking like

Aug 17 12:22:15 localhost kernel: EXT3-fs error (device sd(8,18)):
ext3_new_block: Allocating block in system zone - block = 1409024
Aug 17 12:22:15 localhost kernel: EXT3-fs error (device sd(8,18)):
ext3_new_block: Allocating block in system zone - block = 1409025
Aug 17 12:22:15 localhost kernel: EXT3-fs error (device sd(8,18)):
ext3_new_block: Allocating block in system zone - block = 1409026

and so on and so on.
It looks like the culprit IS the gdth driver itself.

Attached messages-file should demonstrate what I mean.

Comment 13 Need Real Name 2004-08-17 15:24:12 UTC
After reviewing the first post I have a correction to add:
the system has a total of 8 GB of RAM.
When I remove 4 GB and install with only 4 GB present, everything works fine.

I also tried to compile the driver from ICP, but the compilation step
fails with errors complaining about various mismatches in the
include files.

Comment 15 Bastien Nocera 2004-09-27 17:51:59 UTC
Using version 3.04 of the gdth driver (as available from the
manufacturer) fixes the issues on machines with more than 4 GB of RAM.

The driver is available from:

The current 2.6 upstream driver might even be better.

Comment 21 Doug Ledford 2005-03-01 08:12:56 UTC
This driver has bugs in its handling of highmem pages (memory above the
4 GB limit).  It attempts to copy from command buffers in highmem to
locally allocated memory, but it fails to first kmap the address in
case it lies outside the 4 GB address space.  There is a fix for this
in the 2.6 upstream kernel, but unfortunately that driver is markedly
different from the one in the 2.4.21 kernel.  It will require
hand-porting not only the upstream 2.6 fix but also additional fixes to
solve the problem.

Deferred until U6.

Comment 23 Doug Ledford 2007-06-07 16:24:47 UTC
RHEL3 is in deep maintenance mode at this point and no further updates, other
than security issues, are planned.  Closing this bug out.
