Bug 602628
Summary: Installation fails with --fsprofile largefile
Product: Red Hat Enterprise Linux 6
Component: anaconda
Version: 6.0
Status: CLOSED CANTFIX
Severity: medium
Priority: low
Target Milestone: rc
Hardware: All
OS: Linux
Reporter: Alexander Todorov <atodorov>
Assignee: Anaconda Maintenance Team <anaconda-maint-list>
QA Contact: Release Test Team <release-test-team-automation>
CC: davejohansen, dlj, mbanas
Doc Type: Bug Fix
Bug Blocks: 582286
Last Closed: 2010-06-10 14:49:00 UTC
Description
Alexander Todorov
2010-06-10 11:05:22 UTC
Created attachment 422868 [details]
tarball of all logs from stage2
program.log says:
ERROR : mdadm: largest drive (/dev/vda2) exceeds size (10484636K) by more than 1%
What does this mean?
Hi Alex, I also found this problem on PPC64, but it seems to be correct behaviour. For example, if you create a 200 MB filesystem with --fsprofile largefile, the inode_ratio is 1048576, so the maximum number of inodes is 200:

Max Inodes = (FS size in MB x 1024 x 1024) / inode_ratio

I think you reached the maximum inode count for the filesystem (which was about 8000).

Correction: about 10000 inodes.

This all sounds very plausible. I'd like to hear from the anaconda developers whether there's a way to detect this and show a proper error message, or at least improve the logging: for example, log the number of inodes versus the number of files installed on the system.

Unlikely. This happens so far down in the stack that we're never going to hear about it. What will happen is that some write fails deep in the kernel; that error gets transmitted up a layer to whatever was doing the write, handled and converted into some other error, passed up another layer, and so on, until anaconda reports it as some generic error message. The loss of information is simply too great.

Could a check at least be made on the root partition to output a warning/error about the number of inodes being small, so that when this happens it would be easier to debug/diagnose?

(In reply to Dave Johansen from comment #6)
> Could a check at least be made on the root partition to output a
> warning/error about the number of inodes being small so that when this
> happens it would be easier to debug/diagnose?

No, this is not at all practical. We don't need to know the various profiles, nor do we need to know so much about them that we can warn in advance if their use is ill-advised. This is advanced functionality; it expects the user to do the required research before using it.

(In reply to David Lehman from comment #7)
> No, this is not at all practical. We don't need to know the various profiles,
> nor do we need to know so much about them that we can warn in advance if their
> use is ill-advised. This is advanced functionality, and expects that the user do
> the required research before using it.

I agree that knowledge of the specific profiles and other such issues is impractical, but couldn't some simple checks be done for things like the number of inodes or available disk space? Right now the installation fails part way through with no indication of what the problem is; a simple warning beforehand, or a meaningful error message when it happens, would be very helpful in diagnosing the actual problem.
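The inode arithmetic from the discussion above can be sketched as follows. This is only an illustration: the inode_ratio of 1048576 for the largefile profile is the value quoted in the thread (it comes from the mke2fs configuration and could differ between releases).

```python
# Sketch of the inode arithmetic discussed in this thread.
# Assumption: the "largefile" mke2fs profile uses inode_ratio = 1048576,
# i.e. one inode per 1 MiB of filesystem space, as stated above.

def max_inodes(fs_size_mb, inode_ratio=1048576):
    """Approximate maximum inode count for an ext filesystem of the given size."""
    return (fs_size_mb * 1024 * 1024) // inode_ratio

# A 200 MB filesystem with --fsprofile largefile gets only ~200 inodes,
# far fewer than the tens of thousands of files a distribution install creates.
print(max_inodes(200))    # -> 200

# A ~10 GB root partition gets ~10240 inodes, roughly matching the
# "about 10000 inodes" figure mentioned in the thread.
print(max_inodes(10240))  # -> 10240
```

This also shows why the install fails mid-copy rather than up front: the filesystem has plenty of free blocks, but runs out of inodes once the package payload exceeds the inode count.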
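A pre-flight check of the kind suggested in the last comments could, in principle, look like the sketch below. This is a hypothetical helper, not anaconda code; it assumes the target filesystem is already mounted and uses os.statvfs (Unix only), and the file-count threshold is an invented example value.

```python
import os

def inode_headroom(mount_point, files_needed):
    """Return (free_inodes, enough) for an already-mounted filesystem.

    files_needed is an estimate of how many files the install will create;
    the caller-supplied threshold is illustrative, not anaconda behaviour.
    """
    st = os.statvfs(mount_point)   # f_favail: free inodes available to unprivileged users
    free_inodes = st.f_favail
    return free_inodes, free_inodes >= files_needed

# Hypothetical usage: warn before copying packages to the target root.
free, ok = inode_headroom("/", 100000)
if not ok:
    print(f"warning: only {free} free inodes; the install may fail mid-copy")
```

A check like this would not need any knowledge of mkfs profiles: it only compares the filesystem's advertised free inode count against the installer's own package file count, which addresses the objection about having to understand every profile in advance.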