Bug 284481 - Anaconda fails to create big (8TB) partitions correctly during install
Status: CLOSED DUPLICATE of bug 447768
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: anaconda
x86_64 Linux
medium Severity high
Assigned To: Anaconda Maintenance Team
Reported: 2007-09-10 07:31 EDT by dijuremo
Modified: 2008-06-12 07:26 EDT (History)

Doc Type: Bug Fix
Last Closed: 2008-06-12 07:26:11 EDT

Attachments: None
Description dijuremo 2007-09-10 07:31:15 EDT
Description of problem:
Using the graphical installer it is possible to install the operating system and
create big partitions up to the maximum allowed size of 8 TB (supposedly RHEL 5
supports up to 16 TB, but according to the release notes this will be enabled in
a future update). However, after the installation is finished and the system
boots up for the first time, it drops directly into single-user mode and asks
to perform an fsck of the 8 TB partition.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Install the OS using the GUI installer and create the regular system partitions.
2. Create at least one partition that is 8 TB.
3. Finish the installation; on first boot the problem is evident when you are
prompted to run fsck.

Actual results:
An fsck is required and the system never boots correctly.

Expected results:
System should boot up and have one usable 8TB partition

The bug may be due to the fact that anaconda does not know that it should use
GPT disk labels instead of msdos labels when creating partitions that are over
2 TB. So far, I have not been able to find any place in the graphical installer
where I can force GPT partitions. In any case, regardless of whether such an
option exists, the anaconda installer should check whether any partition is
over 2 TB and, if so, change the disk label from msdos to GPT to avoid the
problem of corrupted file systems.
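To see why the 2 TB boundary matters, here is a small illustrative sketch (not anaconda or parted code; the function names are made up): an msdos (MBR) partition entry stores its start and length as 32-bit sector counts, so with 512-byte sectors a partition cannot extend past roughly 2 TiB, and anything larger needs a GPT label.

```python
# Illustrative only: why an msdos (MBR) label cannot describe an 8 TB partition.
SECTOR_SIZE = 512        # bytes; the classic sector size assumed by MBR tooling
MAX_SECTORS = 2**32 - 1  # MBR partition entries use 32-bit LBA sector fields

def max_msdos_partition_bytes(sector_size=SECTOR_SIZE):
    """Largest partition an msdos label can describe, in bytes (~2 TiB)."""
    return MAX_SECTORS * sector_size

def needed_label(partition_bytes, sector_size=SECTOR_SIZE):
    """Pick a disk label capable of holding the requested partition size."""
    if partition_bytes <= max_msdos_partition_bytes(sector_size):
        return "msdos"
    return "gpt"

print(max_msdos_partition_bytes())   # just under 2 TiB in bytes
print(needed_label(8 * 10**12))      # an 8 TB partition requires 'gpt'
```

This is exactly the kind of check the reporter is asking the installer to perform automatically before writing the disk label.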

This may not have been a problem in the past, since few computers had internal
arrays with such large storage. However, nowadays it is easy to go over 2 TB.
My two servers currently have 16 750 GB HDDs in RAID 5, and I am using one hot
spare, so the total usable space is 9750 GB. I was planning to partition as
follows:
1GB /boot 
50GB /
100GB /var
2GB swap
2GB drbd raw partition
All-space-left /export 

However, I have hit two problems. The first is the 8 TB limit in ext3, as
apparently there is still no support for file systems up to 16 TB. The second
is the problem described in this bug.
Comment 1 dijuremo 2007-09-10 07:33:48 EDT
One minor correction... I have 16 HDDs in RAID 6, with one hot spare, so
(16 - 1 spare - 2 parity) = 13 disks x 750 GB = 9750 GB total usable space.
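The corrected arithmetic above can be made explicit with a small sketch (the helper name is hypothetical, not from any RAID tool): RAID 6 spends two disks' worth of capacity on parity, and a hot spare contributes nothing to usable space.

```python
# Hypothetical helper illustrating the usable-capacity math from the comment.
def raid6_usable_gb(total_disks, disk_gb, hot_spares=0):
    """Usable capacity of a RAID 6 array: subtract spares and 2 parity disks."""
    data_disks = total_disks - hot_spares - 2
    return data_disks * disk_gb

print(raid6_usable_gb(16, 750, hot_spares=1))  # 13 * 750 = 9750 GB
```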
Comment 2 Joel Andres Granados 2008-06-12 07:26:11 EDT
This is not an anaconda bug; it is a parted bug. It happens because parted
creates an msdos partition table by default. This will be fixed for 5.3.
Duping...

*** This bug has been marked as a duplicate of 447768 ***
