Bug 90871 - Anaconda crash when trying to create >2TB device
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Mike McLean
Reported: 2003-05-14 17:54 EDT by Jason Tibbitts
Modified: 2005-10-31 17:00 EST
CC List: 1 user

Doc Type: Bug Fix
Last Closed: 2003-09-25 19:47:38 EDT

Attachments: None
Description Jason Tibbitts 2003-05-14 17:54:30 EDT
I tried to set up the following partitioning scheme:

part /boot  --size=512 --ondisk=sda --asprimary
part /      --size=512 --ondisk=sdb --asprimary
part /usr   --size=4096 --ondisk=sda
part /var   --size=4096 --ondisk=sdb
part /tmp   --size=8192 --ondisk=sda
part /cache --size=4096 --ondisk=sdb
part swap   --size=4096 --ondisk=sdb
part raid.00 --size=1 --ondisk=sda --grow
part raid.01 --size=1 --ondisk=sdb --grow

raid pv.00 --level=0 --device=md0 raid.00 raid.01

The caveat is that sda and sdb are each 1.1TB in size.  Anaconda fails with the
somewhat cryptic message:

"Error informing the kernel about modifications to /dev/sda2 - Invalid argument.
 This means that Linux won't know about any modifications made to /dev/sda2
until you reboot - so you shouldn't mount it or use it in any way before rebooting."

Removing --grow and setting --size to something "reasonable" like 1000000 makes
things work properly.  I'm not sure what limit is coming into play here.  It
doesn't seem to be the kernel's 2TB limit because the RAID array hasn't been set
up.  Running sfdisk -l on VT2 doesn't show any changes to the partition table at
that point.
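For reference, a sketch of the same scheme with the workaround applied. Kickstart
--size values are in MB, so 1000000 is roughly 1TB per RAID member; that figure is
the reporter's example value, not a verified ceiling:

part /boot   --size=512  --ondisk=sda --asprimary
part /       --size=512  --ondisk=sdb --asprimary
part /usr    --size=4096 --ondisk=sda
part /var    --size=4096 --ondisk=sdb
part /tmp    --size=8192 --ondisk=sda
part /cache  --size=4096 --ondisk=sdb
part swap    --size=4096 --ondisk=sdb
# fixed size (MB) instead of --grow; each member stays under the
# apparent per-partition limit the reporter ran into
part raid.00 --size=1000000 --ondisk=sda
part raid.01 --size=1000000 --ondisk=sdb

raid pv.00 --level=0 --device=md0 raid.00 raid.01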
Comment 1 Michael Fulbright 2003-05-16 12:12:37 EDT
Mike, can you test this on any of our available hardware?
Comment 2 Jeremy Katz 2003-09-25 19:47:38 EDT
We set the limit to 1TB now. Anything larger is a crapshoot depending on what
hardware is being used and a few other factors.
