| Summary: | error message not informative when trying to convert partition to raid with full disks | ||
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | Alexey Torkhov <atorkhov> |
| Component: | anaconda | Assignee: | Anaconda Maintenance Team <anaconda-maint-list> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | low | Docs Contact: | |
| Priority: | low | ||
| Version: | 22 | CC: | anaconda-maint-list, awilliam, g.kaviyarasu, jonathan, robatino, vanmeeuwen+fedora |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-04-24 19:17:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Alexey Torkhov
2013-12-12 09:20:11 UTC
Unable to perform the same with 2 devices.

Should be a blocker according to the criteria: "The installer must be able to create and install to any workable partition layout using any file system and/or container format combination offered in a default installer configuration."

i'd vote -1, IIRC /boot on raid isn't supported. should be rejected with a proper error, but meh.

F-19 certainly allowed using /boot or / on RAID-1. It sets it up with 1.0 metadata (which goes at the end of the partition) so the bootloader can access the filesystem.

hum, i may be misremembering then.

This seems to be an issue with this particular partitioning workflow, so it's probably not really a blocker. RAID setup works when done in other ways, and those also allow /boot (or a / partition without /boot) on RAID-1.

it would be great if you could specify precisely what workflow works, and what fails. thanks!

(In reply to Adam Williamson from comment #7)
> it would be great if you could specify precisely what workflow works, and
> what fails. thanks!

Are the steps to reproduce and actual results from comment 0 not precise enough?

Looks OK for the 'not working' case -- what case works? Manually creating mount points one by one?

I didn't see this bug in other situations: with a full layout of regular partitions, when assigning LVM to RAID, when doing some mangling with sizes first, or when manually creating mount points one by one. The only situation where I've seen it is the one described in comment 0. I have seen the same error when trying to reproduce bug 1021507 with a complex RAID layout.

The interactive partitioning does not reallocate all devices from scratch each time you change one of them. That means that after asking for the automatic partitioning layout, the sizes of the PVs are fixed at whatever size they ended up at. This is why your attempt to change /boot to RAID failed -- there was only one disk with enough space at that point.

You can achieve /boot on a RAID across all three disks, but you'll have to pay attention and ensure that there's enough space on each disk. I'd recommend you create /boot while the disks are empty and then add the LVM afterwards. I have no intention of changing the basic way this works, before you ask.

(In reply to David Lehman from comment #12)
> The interactive partitioning does not reallocate all devices from scratch
> each time you change one of them. That means that after asking for the
> automatic partitioning layout the sizes of the PVs are fixed at whatever
> size they ended up at. This is why your attempt to change /boot to RAID
> failed -- there was only one disk with enough space at that point. You can
> achieve /boot on a RAID across all three disks, but you'll have to pay
> attention and ensure that there's enough space on each disk. I'd recommend
> you create /boot while the disks are empty and then add the LVM afterwards.
> I have no intention of changing the basic way this works, before you ask.

Ah, true, the second disk is full at that moment. But why does it reset the /boot size to 1 MB in this case?

dlehman: is it reasonable to keep the bug open as low priority, to see if anaconda can possibly provide better feedback in this case on that glorious day in 2063 when you'll have time to worry about low-priority issues? :)

It would be better if we could note that disks were removed from the specified set due to lack of space, and somehow reflect that in the error message.

This bug appears to have been reported against 'rawhide' during the Fedora 22 development cycle. Changing version to '22'. More information and the reason for this action are here: https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora22

Seems to work fine in F22.
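For readers hitting the same wall, the workaround described in the thread (create /boot on RAID-1 first, while the disks are still empty, then add the LVM) can be sketched as a kickstart fragment. This is an illustrative sketch only, not taken from the report -- the disk names (sda, sdb, sdc) and sizes are assumptions:

```kickstart
# Illustrative sketch -- disk names and sizes are assumptions.
# Create the /boot RAID members FIRST, while the disks are empty,
# so every disk still has room for a member partition.
part raid.boot1 --size=500 --ondisk=sda
part raid.boot2 --size=500 --ondisk=sdb
part raid.boot3 --size=500 --ondisk=sdc
# Per the thread, anaconda uses 1.0 metadata for /boot arrays
# (superblock at the END of the members), so the bootloader
# sees an ordinary filesystem at the start of the partition.
raid /boot --level=RAID1 --device=md0 raid.boot1 raid.boot2 raid.boot3

# Only then hand the remaining space on each disk to LVM.
part pv.01 --size=1 --grow --ondisk=sda
part pv.02 --size=1 --grow --ondisk=sdb
part pv.03 --size=1 --grow --ondisk=sdc
volgroup vg00 pv.01 pv.02 pv.03
logvol / --vgname=vg00 --name=root --size=1 --grow
```

The same ordering applies in the interactive custom-partitioning spoke: add the /boot mount point before the LVM volumes, so the size of /boot is fixed before the PVs consume the rest of each disk.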