Bug 90871

Summary: Anaconda crash when trying to create >2TB device
Product: [Retired] Red Hat Linux
Reporter: Jason Tibbitts <j>
Component: anaconda
Assignee: Mike McLean <mikem>
Status: CLOSED RAWHIDE
QA Contact: Mike McLean <mikem>
Severity: low
Docs Contact:
Priority: low
Version: 9
CC: stk
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2003-09-25 23:47:38 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Jason Tibbitts 2003-05-14 21:54:30 UTC
I tried to set up the following partitioning scheme:

part /boot  --size=512 --ondisk=sda --asprimary
part /      --size=512 --ondisk=sdb --asprimary
part /usr   --size=4096 --ondisk=sda
part /var   --size=4096 --ondisk=sdb
part /tmp   --size=8192 --ondisk=sda
part /cache --size=4096 --ondisk=sdb
part swap   --size=4096 --ondisk=sdb
 
part raid.00 --size=1 --ondisk=sda --grow
part raid.01 --size=1 --ondisk=sdb --grow

raid pv.00 --level=0 --device=md0 raid.00 raid.01

The caveat is that sda and sdb are each 1.1TB in size.  Anaconda fails with the
somewhat cryptic message:

"Error informing the kernel about modifications to /dev/sda2 - Invalid argument.
 This means that Linux won't know about any modifications made to /dev/sda2
until you reboot - so you shouldn't mount it or use it in any way before rebooting."

Removing --grow and setting --size to something "reasonable" like 1000000 makes
things work properly.  I'm not sure what limit is coming into play here.  It
doesn't seem to be the kernel's 2TB limit because the RAID array hasn't been set
up.  Running sfdisk -l on VT2 doesn't show any changes to the partition table at
all.
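The workaround described above can be written out as a kickstart fragment. This is only a sketch: the 1000000 figure (in MB, roughly 1TB) is the "reasonable" --size from the report, and the rest mirrors the original scheme.

```
# Workaround sketch: replace --grow with a fixed --size on the RAID members.
# 1000000 MB is the "reasonable" value mentioned above; adjust per disk.
part raid.00 --size=1000000 --ondisk=sda
part raid.01 --size=1000000 --ondisk=sdb

raid pv.00 --level=0 --device=md0 raid.00 raid.01
```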

Comment 1 Michael Fulbright 2003-05-16 16:12:37 UTC
Mike can you test this on any of our available hardware?

Comment 2 Jeremy Katz 2003-09-25 23:47:38 UTC
We set limits to 1TB now. Anything else is a crap shoot depending on what
hardware is being used and a few other factors.
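For context on the 2TB figure discussed in the report, here is a quick sanity check of the classic 32-bit LBA arithmetic (my own illustration; the report never pins down which limit was actually hit):

```python
# Older kernels and the MS-DOS partition label address storage with 32-bit
# sector counts. With 512-byte sectors, that caps addressable capacity at 2 TiB.
SECTOR_SIZE = 512        # bytes per sector on typical disks of the era
MAX_SECTORS = 2 ** 32    # 32-bit sector-count field

limit_bytes = SECTOR_SIZE * MAX_SECTORS
limit_tib = limit_bytes / 2 ** 40
print(limit_bytes, limit_tib)  # 2199023255552 bytes, exactly 2.0 TiB
```

Capping partitions at 1TB, as Comment 2 describes, stays comfortably below this ceiling regardless of the hardware in play.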