From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.8b4) Gecko/20050915 Fedora/1.5-0.5.0.beta1 Firefox/1.4

Description of problem:
mkinitrd recently started experimenting with loading only the modules, and starting only the raid devices, that are minimally sufficient to bring up the PVs containing the root filesystem. Unfortunately, that approach was a major failure, and we had to go back to bringing up all PVs in the volume group. Still, the experiment exposed a few serious problems and some room for improvement.

First, using vgscan -P and vgchange -P in initrd.img's init script allowed the root device to be mounted read-only, but only if no ext3 journal replay was necessary; if one was, mounting failed because DM made the LV read-only. After mounting the root filesystem read-only and switchrooting into it, the system would bring up all the other disk and raid devices and re-run vgscan. That, however, would not turn the root LV read-write in the DM layer. Worse, other LVs that had been brought up with size 0 because their PVs were missing were not affected by this second vgscan; they remained unusable.

I'm not sure I understand why DM forces the partial LVs read-only, nor whether there is any reason not to revise the mapping of LVs whose PVs were missing at the time of the first vgscan. If these two issues could be improved, not only would mkinitrd be able to bring up fewer devices, but it would also become much safer to keep the root filesystem on LVM alongside other LVs in a large VG.
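The init-script sequence described above looks roughly like the sketch below. This is illustrative only: the VG and device names are hypothetical, and it is wrapped in a function so the privileged lvm commands are not executed when the file is sourced.

```shell
# Sketch of the experimental initrd bring-up sequence (hypothetical names;
# requires root and lvm2; -P is lvm2's partial-activation flag).
initrd_lvm_bringup() {
  vgscan -P                          # scan, tolerating missing PVs
  vgchange -P -ay vg0                # activate what can be activated
  mount -o ro /dev/vg0/root /sysroot # works only if no ext3 journal replay is needed

  # ...switchroot into /sysroot; later, once all PVs are present:
  vgscan
  vgchange -ay vg0                   # observed: root LV stays read-only in DM,
                                     # and size-0 partial LVs stay unusable
}
```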
Version-Release number of selected component (if applicable): lvm2-2.01.14-2

How reproducible: Always

Steps to Reproduce:
1. Create a VG on, say, two or three raid or loop devices
2. Create an LV on one of these devices, and another LV on the other devices
3. Create a filesystem on the first LV
4. De-activate the VG
5. Stop all but one of its PVs
6. vgscan -P
7. vgchange -P -ay the newly-created VG
8. Mount the newly-created filesystem read-only
9. Bring up the remaining PVs
10. vgscan; vgchange -ay
11. Try to remount the filesystem read-write
12. Try to access the other LV

Actual Results: Step 10 doesn't change the active VG in any way, so steps 11 and 12 fail.

Expected Results: Ideally, step 10 should turn the partial VG into a full one, completing the missing parts of the LVs. LVs that have no parts on missing PVs could be made read-write, and there should probably be an option to make LVs writable even if parts are missing.

Additional info:
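The steps above can be sketched with loop devices as follows. All names (testvg, lv0, lv1, the loop devices and image paths) are hypothetical; the commands require root and a working lvm2, so they are wrapped in a function rather than run directly.

```shell
# Sketch of the reproduction using loop devices instead of raid (hypothetical
# names; requires root and lvm2; not executed when this file is sourced).
repro_partial_vg() {
  # Steps 1-3: a VG on two loop devices, one LV per device, fs on the first
  dd if=/dev/zero of=/tmp/pv0.img bs=1M count=64
  dd if=/dev/zero of=/tmp/pv1.img bs=1M count=64
  losetup /dev/loop0 /tmp/pv0.img
  losetup /dev/loop1 /tmp/pv1.img
  pvcreate /dev/loop0 /dev/loop1
  vgcreate testvg /dev/loop0 /dev/loop1
  lvcreate -n lv0 -L 32M testvg /dev/loop0
  lvcreate -n lv1 -L 32M testvg /dev/loop1
  mkfs.ext3 /dev/testvg/lv0

  # Steps 4-5: deactivate the VG and take one PV away
  vgchange -an testvg
  losetup -d /dev/loop1

  # Steps 6-8: partial activation; mount the surviving LV read-only
  vgscan -P
  vgchange -P -ay testvg
  mount -o ro /dev/testvg/lv0 /mnt

  # Steps 9-10: bring the missing PV back and rescan
  losetup /dev/loop1 /tmp/pv1.img
  vgscan
  vgchange -ay testvg

  # Steps 11-12: these fail as reported - the remount stays read-only
  # and lv1 remains unusable
  mount -o remount,rw /mnt
  ls -l /dev/testvg/lv1
}
```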
The partial VG flags are there to help recovery - they should not be used routinely!

"Unfortunately, that approach was a major failure, and we had to go back to bringing up all PVs in the volume group."

Correct. lvm2 considers the VG to be the unit, and you must bring up all the PVs in a VG before using lvm2 commands on it. If you don't want to bring up all the PVs initially, then split them off into a different VG.

"mkinitrd has recently started experimenting with loading modules and starting raid devices that are minimally sufficient to get the PVs containing the root filesystem up and running."

If anyone had asked about this, here is how to do it in a way that *is* supportable by lvm2 (the current cvs version allows creating PVs on LVs):

vg0 consists of real disks pv0 and pv1 - the minimum requirement for the root filesystem. vg0 contains rootlv with the root filesystem and freespacelv with all the remaining space. pvcreate is run on freespacelv to create pv2. vg1 is created as pv2 + the remaining PVs and is used to hold the remaining disk volumes.

[Over time this will get nicer as the definition of a VG evolves and stacked VGs become commonplace.]

Supporting (semi-)automatic recovery when PVs have disappeared permanently is a separate issue, which 'vgck' needs to be written to address.
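The stacked-VG layout described above can be sketched as below. The disk names are hypothetical, the sizes are placeholders, and the pvcreate-on-an-LV step assumes an lvm2 version that permits it; the commands are wrapped in a function so they are not run on sourcing.

```shell
# Sketch of the supportable stacked-VG layout (hypothetical disk names and
# sizes; requires root and an lvm2 that allows PVs on LVs).
stacked_vg_layout() {
  # vg0: only the real disks needed for the root filesystem (pv0, pv1)
  pvcreate /dev/sda1 /dev/sdb1
  vgcreate vg0 /dev/sda1 /dev/sdb1
  lvcreate -n rootlv -L 8G vg0             # holds the root filesystem
  lvcreate -n freespacelv -l 100%FREE vg0  # all the remaining space

  # Turn the leftover space into a PV (pv2) for a second VG
  pvcreate /dev/vg0/freespacelv

  # vg1: pv2 plus the PVs not needed at early boot; holds everything else
  vgcreate vg1 /dev/vg0/freespacelv /dev/sdc1 /dev/sdd1
}
```

With this split, the initrd only ever has to bring up vg0's two disks, and vg1 can be activated later without any partial-mode flags.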
Based on the date this bug was created, it appears to have been reported against rawhide during the development of a Fedora release that is no longer maintained. In order to refocus our efforts as a project, we are flagging all of the open bugs for releases which are no longer maintained. If this bug remains in NEEDINFO thirty (30) days from now, we will automatically close it.

If you can reproduce this bug in a maintained Fedora version (7, 8, or rawhide), please change this bug to the respective version and change the status to ASSIGNED. (If you're unable to change the bug's version or status, add a comment to the bug and someone will change it for you.)

Thanks for your help, and we apologize again that we haven't handled these issues to this point. The process we're following is outlined here: http://fedoraproject.org/wiki/BugZappers/F9CleanUp We will be following the process here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping to ensure this doesn't happen again.
Inappropriate bug zapper comment: there is work in progress helping to improve the automated recovery options.
Adding "bug zapper repellent" keyword of FutureFeature, as it sounds like that is what this is :)
Seems that this old BZ covers no special "feature", so closing it. (vgck implementation is tracked by bug #185526, use of the partial flag is explained in comment #1, and moreover this situation is now solved in dracut, somehow ;-) Also, lvm should now auto-recover re-appeared PVs with old metadata.