Bug 90871 - Anaconda crash when trying to create >2TB device
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Version: 9
Hardware: All Linux
Target Milestone: ---
Assignee: Mike McLean
QA Contact: Mike McLean
Reported: 2003-05-14 21:54 UTC by Jason Tibbitts
Modified: 2005-10-31 22:00 UTC

Doc Type: Bug Fix
Last Closed: 2003-09-25 23:47:38 UTC


Description Jason Tibbitts 2003-05-14 21:54:30 UTC
I tried to set up the following partitioning scheme:

part /boot  --size=512 --ondisk=sda --asprimary
part /      --size=512 --ondisk=sdb --asprimary
part /usr   --size=4096 --ondisk=sda
part /var   --size=4096 --ondisk=sdb
part /tmp   --size=8192 --ondisk=sda
part /cache --size=4096 --ondisk=sdb
part swap   --size=4096 --ondisk=sdb
part raid.00 --size=1 --ondisk=sda --grow
part raid.01 --size=1 --ondisk=sdb --grow

raid pv.00 --level=0 --device=md0 raid.00 raid.01

The caveat is that sda and sdb are each 1.1TB in size.  Anaconda fails with the
somewhat cryptic message:

"Error informing the kernel about modifications to /dev/sda2 - Invalid argument.
 This means that Linux won't know about any modifications made to /dev/sda2
until you reboot - so you shouldn't mount it or use it in any way before rebooting."

Removing --grow and setting --size to something "reasonable" like 1000000 makes
things work properly.  I'm not sure what limit is coming into play here.  It
doesn't seem to be the kernel's 2TB limit, because the RAID array hasn't been set
up yet.  Running sfdisk -l on VT2 doesn't show any changes to the partition table
at all.

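For context on the 2TB figure the reporter mentions: the classic limit comes from storing sector counts in 32-bit fields (as MBR partition tables and older block-layer code do) with 512-byte sectors. A quick sketch of the arithmetic, assuming those traditional parameters:

```python
# Sketch of where the classic 2TB block-device limit comes from,
# assuming 512-byte sectors and a 32-bit sector-count field.
SECTOR_SIZE = 512      # bytes per sector (traditional value)
MAX_SECTORS = 2 ** 32  # largest count a 32-bit field can hold

max_bytes = SECTOR_SIZE * MAX_SECTORS
print(max_bytes)                 # 2199023255552 bytes
print(max_bytes / 10 ** 12)      # ~2.2 decimal TB, i.e. exactly 2 TiB
```

Two 1.1TB members in a RAID 0 would put md0 just past that boundary, which is why the report title says ">2TB device" even though each underlying disk is well under the limit.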
Comment 1 Michael Fulbright 2003-05-16 16:12:37 UTC
Mike can you test this on any of our available hardware?

Comment 2 Jeremy Katz 2003-09-25 23:47:38 UTC
We set limits to 1TB now. Anything else is a crap shoot depending on what
hardware is being used and a few other factors.
