Bug 66331

Summary: installer crashes when progressing past disk partitioning window
Product: [Retired] Red Hat Linux
Reporter: Dave Larter <dave>
Component: kernel
Assignee: Arjan van de Ven <arjanv>
Status: CLOSED CURRENTRELEASE
QA Contact: Brock Organ <borgan>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.3
Target Milestone: ---
Target Release: ---
Hardware: i686
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2004-09-30 15:39:39 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments: anaconda dump

Description Dave Larter 2002-06-07 23:05:09 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)

Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. straight install of dist CD

Actual Results:  after selecting any of the 3 options on the disk partitioning
window, the installer crashes; no floppy drive was used during the install process.

Expected Results:  install should continue

Additional info:

Comment 1 Dave Larter 2002-06-07 23:08:42 UTC
Created attachment 60124 [details]
anaconda dump

Comment 2 Michael Fulbright 2002-06-10 19:06:50 UTC
How large is the disk that is causing the problem?

Comment 3 Dave Larter 2002-06-10 19:20:18 UTC
3ware 7850 w/ 8 160g

ran across this today and I'm splitting the array in two...4x160's and will try again...thx

Hello,

I have run into a problem with software raid with large filesystems.
I am not sure if this is a software raid limitation or a filesystem
limitation.  I currently have two 3ware raid controllers that have
1.1TB (yes, terabyte) of space on each.  I want to stripe the two
hardware raid controllers using software raid but when I try to make
the raid using the command mkraid /dev/md0 it pukes out this:

DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 1120597978kB, raid superblock at 1120597888kB
disk 1: /dev/sdc1, 1120597978kB, raid superblock at 1120597888kB
raid0: looking at sdb1
raid0:   comparing sdb1(1120597888) with sdb1(1120597888)
raid0:   END
raid0:   ==> UNIQUE
raid0: 1 zones
raid0: looking at sdc1
raid0:   comparing sdc1(1120597888) with sdb1(1120597888)
raid0:   EQUAL
raid0: FINAL 1 zones
raid0: zone 0
raid0: checking sdb1 ... contained as device 0
  (1120597888) is smallest!.
raid0: checking sdc1 ... contained as device 1
raid0: zone->nb_dev: 2, size: -2053771520
raid0: current zone offset: 1120597888
raid0: done.
raid0 : md_size is -2053771520 blocks.
raid0 : conf->smallest->size is -2053771520 blocks.
raid0 : nb_zone is 1.
raid0 : Allocating 8 bytes for hash.


Notice the huge negative numbers (-2053771520).  I have tried this with
a couple different block sizes but it doesn't help.  When I build a
filesystem on it (in this case I am trying reiserfs), it builds, but in
the message log I have these errors:

09:00: rw=0, want=134217729, limit=-2053771520
attempt to access beyond end of device
09:00: rw=0, want=134217730, limit=-2053771520
attempt to access beyond end of device
09:00: rw=0, want=134217731, limit=-2053771520
attempt to access beyond end of device

There are quite a few; I didn't post them all.  Then when I mount the fs
it only reports about a 93GB filesystem instead of the 2.2TB it should
be.  I did this same configuration in RAID 1 (mirror) and it all worked
well: no errors, and the reported filesystem size was correct.  So I guess
my question is: am I hitting a kernel limit here, a filesystem limit, or
is this just a plain ol' bug?  What is the current maximum filesystem
size?  If I need to I can try ext3, but it always takes a bit to format.
This is on Red Hat 7.3 with the stock 2.4.18-4smp kernel.  I also tried it
with the standard 2.4.18 kernel with the same results.  Any
suggestions/information would be appreciated.  Let me know if any more
info would help; I usually forget info that is important.

Here is a copy of my /etc/raidtab (currently set up for mirroring, not
striping):

raiddev /dev/md0
        raid-level      1
        nr-raid-disks   2
        persistent-superblock   1
        chunk-size      4
        device  /dev/sdb1
        raid-disk      0
        device  /dev/sdc1
        raid-disk       1
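For the striped attempt described above, the raid-level and chunk-size would change while the device list stays the same. A sketch of what the corresponding /etc/raidtab would presumably look like (this is an assumed configuration, not from the original report; chunk-size 64 is a common raid0 choice, not the reporter's actual value):

```
raiddev /dev/md0
        raid-level      0
        nr-raid-disks   2
        persistent-superblock   1
        chunk-size      64
        device  /dev/sdb1
        raid-disk       0
        device  /dev/sdc1
        raid-disk       1
```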

Thanks,
Steven
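The negative sizes in the mkraid output above are consistent with the summed array size wrapping around a signed 32-bit counter. That the 2.4 md driver stores the size in such a field is an assumption here, but the arithmetic reproduces the logged value exactly; a quick sketch in Python:

```python
# Usable size of each raid member in 1 KB blocks, taken from the
# mkraid output above.
MEMBER_KB = 1120597888

# raid0 sums the member sizes; two members exceed the signed
# 32-bit maximum of 2**31 - 1 = 2147483647.
total_kb = 2 * MEMBER_KB  # 2241195776

# Reduce modulo 2**32 and reinterpret as a signed 32-bit value.
wrapped = total_kb % 2**32
if wrapped >= 2**31:
    wrapped -= 2**32

print(wrapped)  # -2053771520, exactly the md_size reported in the log
```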




-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Message 2 in thread
From: Neil Brown (neilb.edu.au)
Subject: Re: Software RAID/Filesystem problems?
Newsgroups: mlist.linux.kernel
Date: 2002-05-28 18:22:59 PST

On Tuesday May 28, kwijibo wrote:
> Hello,
> 
> I have run into a problem with software raid with large filesystems.
> I am not sure if this is a software raid limitation or a filesystem
> limitation.  I currently have two 3ware raid controllers that have
> 1.1TB (yes, terabyte) of space on each.  I want to stripe the two
> hardware raid controllers using software raid but when I try to make
> the raid using the command mkraid /dev/md0 it pukes out this:

Sorry.  No can do.  Not in 2.4 anyway.  Maybe in 2.6...

2 times 1.1TB > 2TB.

2TB is the most you can access with 32-bit addressing of 512-byte
sectors.

You are hitting an error at (1K) block 134217729, which is 1 block past
2TB.

If you make the sdb1 and sdc1 partitions less than 1TB you have a
better chance of it working.

NeilBrown
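Neil's arithmetic can be checked directly: a 32-bit sector index over 512-byte sectors tops out at exactly 2 TiB, and the two members from the mkraid output above together exceed it. A sketch (the member size is taken from the log earlier in this report):

```python
SECTOR_BYTES = 512

# Largest capacity reachable with a 32-bit sector index (the 2.4 limit
# Neil describes): 2**32 sectors of 512 bytes each.
max_bytes = 2**32 * SECTOR_BYTES
print(max_bytes)  # 2199023255552 bytes == 2.0 TiB

# Each member is 1120597978 KB (from the mkraid output); two of them
# striped together overflow the addressable range.
member_bytes = 1120597978 * 1024
print(2 * member_bytes > max_bytes)  # True
```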

 



Comment 4 Jeremy Katz 2002-07-10 18:23:56 UTC
Two parts -- the first is the anaconda traceback, which is fixed in CVS with the
move to python2.  The second is that the driver for the 3ware cards doesn't
support devices greater than 1TB.

Comment 5 Bugzilla owner 2004-09-30 15:39:39 UTC
Thanks for the bug report. However, Red Hat no longer maintains this version of
the product. Please upgrade to the latest version and open a new bug if the problem
persists.

The Fedora Legacy project (http://fedoralegacy.org/) maintains some older releases, 
and if you believe this bug is interesting to them, please report the problem in
the bug tracker at: http://bugzilla.fedora.us/