Description of problem:
On MD arrays with metadata version 1.0 (i.e. with the MD metadata at the end of the device), parted detects the GPT partition table on the underlying disks, and users are prompted to "fix" the GPT, destroying the RAID metadata in the process. This is especially a problem when running the installer on such a system, because during installation we let parted automatically fix all fixable issues.

Steps to Reproduce:
1. Create an MD RAID with metadata version 1.0 directly on top of the disks:
   sudo mdadm --create gpttest --run --level=1 --raid-devices=2 --metadata=1.0 /dev/vda /dev/vdb
2. Create a GPT on the array
3. Run parted -l

Actual results:
parted shows a warning about the secondary GPT not being at the end of the disk and offers to fix it:

Warning: Not all of the space available to /dev/vda appears to be used, you can fix the GPT to use all of the space (an extra 32 blocks) or continue with the current setting?

Expected results:
parted doesn't detect GPT on vda, only on the MD array.
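The "extra 32 blocks" in the warning follows from the array being slightly smaller than its member disks. A back-of-the-envelope sketch (assumptions: 512-byte sectors, and a 32-sector tail reserved for MD metadata, taken from the warning text rather than the v1.0 on-disk layout):

```python
# Sketch: why parted finds the backup GPT "32 blocks" before the end of /dev/vda.
# Assumptions: 512-byte sectors; MD v1.0 reserves space at the END of each
# member, so the array is a few sectors smaller than the disk.

SECTOR = 512
disk_sectors = 20 * 1024**3 // SECTOR      # 20 GiB member disk (/dev/vda)
reserved = 32                              # sectors kept for MD metadata (per the warning)
array_sectors = disk_sectors - reserved    # usable size of /dev/md127

# GPT writes its backup header in the LAST sector of the device it was
# created on -- here, the array.
backup_header_lba = array_sectors - 1

# Scanning the raw member disk, parted expects the backup header in the
# disk's last sector and instead finds it early:
expected_lba = disk_sectors - 1
gap = expected_lba - backup_header_lba
print(f"backup GPT found {gap} sectors before end of disk")  # -> 32
```

Offering to "fix" this moves the backup GPT into the disk's last sectors, i.e. on top of the MD superblock, which is exactly how the RAID metadata gets destroyed.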
My initial thought is that this is working as intended, and that the installer will need to change how it uses parted when it knows it is dealing with an MD RAID setup, e.g. turn off automatic fixing. Also, why haven't we run into this before? Something must have changed -- do you know what? I know that in the past I've experimented with using mdraid and UEFI and didn't hit this, but that was quite a while ago.
This gets more complicated every time I try to reproduce the original issue. It looks like these steps do not actually reproduce the original installer issue: `parted -l` offers to fix GPT on the RAID member disks, but that doesn't happen with just blivet -- blivet doesn't detect GPT on the disks and parted doesn't offer to fix it there. So the installer error seems to be triggered by something else. But the system is still very confused in this situation, thinking the partition table (and the partitions) are on both the disk and the array:

```
$ lsblk /dev/vda
NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINTS
vda         252:0    0  20G  0 disk
├─md127       9:127  0  20G  0 raid1
│ └─md127p1 259:1    0   1G  0 part
└─vda1      252:1    0   1G  0 part

$ sudo wipefs /dev/vda
DEVICE OFFSET      TYPE              UUID                                 LABEL
vda    0x4ffffe000 linux_raid_member 52af8b57-57aa-bc58-81f2-16dced9ed8c2 fedora:gpttest
vda    0x1000      gpt
vda    0x1fe       PMBR

$ sudo wipefs /dev/md127
DEVICE OFFSET      TYPE UUID LABEL
md127  0x1000      gpt
md127  0x4fffdf000 gpt
md127  0x1fe       PMBR
```

I'll continue playing with this to see if I'm able to reproduce the original installer issue, but even with this setup I'd consider the `parted -l` behaviour, if not a bug, then at least potentially dangerous. I think it might be a good idea to change `parted -l` to show just the warning without the fix/ignore prompt, and to only offer to fix the partition table when the user explicitly runs `parted /dev/vda`.
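The double detection is expected once you remember that with metadata 1.0 the array's data starts at byte 0 of each member, so the disk and the array are byte-identical except for the reserved tail. A minimal simulation with an ordinary buffer standing in for the disk (the `EFI PART` signature and its LBA 1 location are real GPT constants; the sizes and the fake superblock bytes are made up for illustration):

```python
SECTOR = 512
GPT_SIG = b"EFI PART"   # real GPT header signature, located at LBA 1

# Simulate a 1 MiB "member disk" with a GPT header at LBA 1 and 16 KiB of
# pretend MD v1.0 metadata at the very end.
disk = bytearray(1024 * 1024)
disk[SECTOR:SECTOR + len(GPT_SIG)] = GPT_SIG
disk[-16384:] = b"\xaa" * 16384            # stand-in for the MD superblock

# The "array" that MD exposes is just the disk minus the reserved tail.
array = disk[:-16384]

# Any signature scanner (parted, blkid, wipefs) probing LBA 1 reads the
# same bytes on both devices -- hence GPT is reported twice.
def has_gpt(dev) -> bool:
    return bytes(dev[SECTOR:SECTOR + len(GPT_SIG)]) == GPT_SIG

print(has_gpt(disk), has_gpt(array))   # -> True True
```

This is also why the signatures in the `wipefs` output above line up at identical offsets (`0x1000`, `0x1fe`) on both `vda` and `md127`: they are literally the same bytes, seen through two device nodes.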
No, parted is working just fine, and the prompt is there for a good reason. The problem is that this setup mixes partitioning and mdraid. I know you can't avoid that if you want to boot from the array, but this also isn't a new situation.