Red Hat Bugzilla – Bug 723303
device /dev/mapper/tmp--VolGroup01-xxxx does not exist
Last modified: 2011-09-15 13:21:03 EDT
Created attachment 513841 [details]
logs from install and error msgs from two attempts (logs from 1st attempt only)
Description of problem:
Cannot mount an existing LV at /mnt/<any name>
Version-Release number of selected component (if applicable):
generic installer 16.12 in desktop spin 20110717
Steps to Reproduce:
1. boot desktop spin 20110717
2. select install to hard drive
3. select custom layout
4. after selecting / and other ordinary partitions, select an existing LV to mount as /home or any point under /mnt
5. click Next, answer yes to format, etc.
After formatting, the following error appears:
An error occurred mounting device /dev/mapper/VolGroup01-clydehome as /home: device /dev/mapper/tmp--VolGroup01-clydehome does not exist. This is a fatal error and the install cannot continue.
Press <Enter> to exit the installer.
By not selecting any existing LVs to mount (except for /), installation proceeds normally (except for a GRUB 2 error and an unbootable system at the end).
This is fallout from commit 2e73cc57a79d23b23e. The solution is probably to coerce the format's device attribute after setting up the device, to ensure the two stay in sync.
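The proposed fix is described only in outline; a minimal sketch of what "coercing the format's device attribute" could look like follows (all class and attribute names here are illustrative, not anaconda's actual storage API):

```python
class Format(object):
    """Stand-in for a filesystem-format object (hypothetical)."""
    def __init__(self, device=None):
        self.device = device  # path the format believes it lives on


class Device(object):
    """Stand-in for a storage device (hypothetical)."""
    def __init__(self, path, fmt):
        self.path = path
        self.format = fmt

    def setup(self):
        # ... activate the device; its /dev/mapper path may differ from
        # whatever the format object recorded earlier ...
        # Coerce the format's device attribute so the two cannot drift
        # apart, which is roughly what the comment above proposes.
        self.format.device = self.path


# The format still holds the stale tmp-- path from the error message;
# setup() forces it back in line with the device's real path.
fmt = Format(device="/dev/mapper/tmp--VolGroup01-clydehome")
dev = Device("/dev/mapper/VolGroup01-clydehome", fmt)
dev.setup()
assert fmt.device == dev.path
```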
Issue is still in 16.13.
Also affects F16 Alpha TC1 in exactly the same way. Anaconda 16.14 (afaik).
Yep, still there. Skipping the mount of any LV other than the root LV works, but now I'm running into a new error dialogue that seems similar:
Unable to Mount Filesystem
An error occurred mounting device /dev/sdf1 as /mnt/masterboot: mount failed: (30, Read-only file system). This is a fatal error and the install cannot continue.
Press <Enter> to exit the installer.
Is this related or should I enter a new bz?
Looks like a new bug to me.
The failure occurs whenever you try to assign a mount point to an existing LV without reformatting it. Without having read the criteria, this seems like at least a Beta blocker.
I understand. My workaround is to modify fstab after installation completes and before first boot. I have /home on an LV and reuse it from Fedora version to Fedora version.
(Haven't seen a repeat of the 30, Read-only file system.)
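The workaround above can be sketched as follows (paths are illustrative: on a real install the new root is typically mounted at /mnt/sysimage, and the LV name is the one from this report; a temporary directory stands in here so the sketch is runnable anywhere):

```python
import os
import tempfile

# Stand-in for the installed system's root; on a real install this
# would be /mnt/sysimage (or wherever anaconda mounted the new root).
newroot = tempfile.mkdtemp()
os.makedirs(os.path.join(newroot, "etc"), exist_ok=True)

# Append the reused /home LV to the new system's fstab by hand,
# since the installer cannot currently be told to mount it.
entry = "/dev/mapper/VolGroup01-clydehome  /home  ext4  defaults  1 2\n"
with open(os.path.join(newroot, "etc", "fstab"), "a") as fstab:
    fstab.write(entry)
```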
Created attachment 518735 [details]
/tmp files from mount error
Contents of /tmp from anaconda 16.14.6. Included is a file (devmapper) showing the contents of /dev/mapper at the time the error dialogue (not a traceback) appeared. There is no /dev/mapper/tmp--VolGroup01-clydehome in /dev/mapper, but plenty of other incantations of clydehome.
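For context on the odd tmp-- in the missing name: device-mapper derives /dev/mapper node names by doubling any hyphen that appears inside a VG or LV name and then joining the two with a single hyphen, so tmp--VolGroup01-clydehome would be LV clydehome in a VG literally named tmp-VolGroup01, presumably a temporary VG the installer expected but never created. A quick sketch of the mangling rule (the function name is mine):

```python
def dm_name(vg, lv):
    """Return the /dev/mapper node name for an LVM logical volume.

    device-mapper escapes each '-' inside the VG or LV name as '--',
    then joins the two with a single '-'.
    """
    return "{}-{}".format(vg.replace("-", "--"), lv.replace("-", "--"))


# The name from the error message corresponds to a VG called
# "tmp-VolGroup01", not to VG "VolGroup01" with a "tmp" prefix:
print(dm_name("tmp-VolGroup01", "clydehome"))  # tmp--VolGroup01-clydehome
print(dm_name("VolGroup01", "clydehome"))      # VolGroup01-clydehome
```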
Can't work around this error by specifying only the / and /boot partitions; then I run into bug 731480, which appears to be the same as bug 727966.
This is fixed by commit 12f5c4a9b234605b920145f4f3f90287b185c98c, which is on Rawhide and will be in the first post-alpha F16 composes.
Discussed in the 2011-08-26 blocker review meeting. Rejected as a Fedora 16 Beta blocker because it involves custom layouts, which aren't part of the Beta release criteria.
However, it was accepted as a Fedora 16 Beta NTH since a fix is already in.
should be a final blocker rather than a rejectedblocker, I believe.
(In reply to comment #12)
> should be a final blocker rather than a rejectedblocker, I believe.
Whoops, I was reading the minutes a little too fast. Thanks for catching that.
Accepted as a Fedora 16 final blocker as it violates the following final release criterion:
The installer must be able to create and install to any workable partition layout using any file system offered in a default installer configuration, LVM, software, hardware or BIOS RAID, or combination of the above.
This should be fixed in Beta TC1, as I read #10: clyde, can you verify? Thanks.
(In reply to comment #14)
> This should be fixed in Beta TC1, as I read #10: clyde, can you verify? Thanks.
Can't. I run into:
tried the boot.iso at:
but I get a GPF on mdadm each time (x2).
I understand that there are a couple of raid issues with raid1 and 10 that need fixing (and fixes are in the pipeline?). I do believe I am seeing them with this kernel.
(In reply to comment #16)
> tried the boot.iso at:
> but I get a GPF on mdadm each time (x2).
> I understand that there are a couple of raid issues with raid1 and 10 that need
> fixing (and fixes are in the pipeline?). I do believe I am seeing them with
> this kernel.
I didn't include the new kernel or mdadm builds in that boot.iso but I can build a new one if you want to try it.
(In reply to comment #17)
> I didn't include the new kernel or mdadm builds in that boot.iso but I can
> build a new one if you want to try it.
Hmmmm...strange, then this might be a regression due to something else.
However, build away. We're having serious flooding here at present (not affecting me directly, but others I support), but I should be able to test something as long as the build is done within the next 30 minutes.
This is probably resolved, but the definitive test has to wait for resolution of 737278. Was able to do an install by specifying the devices to use and not checking the raid component devices. Installer allowed specifying the home LV without barfing.
(In reply to comment #19)
> This is probably resolved, but the definitive test has to wait for resolution
> of 737278. Was able to do an install by specifying the devices to use and not
> checking the raid component devices. Installer allowed specifying the home LV
> without barfing.
Fix confirmed with Mr. Flink's latest boot.iso:
Great, closing. Thanks!
*** Bug 736412 has been marked as a duplicate of this bug. ***