From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040217
Description of problem:
Unless RAID 1 member devices are explicitly filtered out of scanning in
/etc/lvm/lvm.conf, lvm vgscan fails with messages such as:
Found duplicate PV o4Icc0yt8OloT89tnJ5xq156XCwYEczT: using /dev/sda3
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a RAID 1 device out of two disk partitions.
2. Add it to the volume group holding your root filesystem, and make
sure /etc/lvm/lvm.conf does NOT exclude the raid members.
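The steps above might look like the following on the command line. This
is only a sketch: the device and VG names (/dev/md0, /dev/sda3,
/dev/sdb3, rootvg) are examples, and these commands are destructive.

```shell
# Illustrative reproduction sketch -- device/VG names are examples only.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md0           # label the new array as an LVM physical volume
vgextend rootvg /dev/md0    # add it to the VG holding the root filesystem
```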
Actual Results: The machine fails to boot because vgscan fails.
Expected Results: Ideally one shouldn't have to exclude raid members
in lvm.conf at all; requiring it is very impractical, particularly for
rescue disks.
lvm vgscan fails the same way after the boot completes, given a
working lvm.conf at boot time, so the failure isn't just because /sys
might not be mounted by the initrd (I haven't checked).
Users are required to set up an appropriate device-name filter when
MD RAID 1 sits underneath device-mapper.
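For reference, a minimal sketch of such a filter; the device names are
examples only and must match the actual RAID members on your system:

```
# /etc/lvm/lvm.conf (illustrative fragment)
devices {
    # Reject the raw RAID 1 members so only the md device is scanned
    # as a PV; /dev/sda3 and /dev/sdb3 are example names.
    filter = [ "r|^/dev/sda3$|", "r|^/dev/sdb3$|", "a|.*|" ]
}
```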
FYI: we're working on the lvm2 tools and device-mapper to support RAID.
How do you expect users to do that in a rescue CD environment?
If you're really not going to fix this, then reassign the bug to
anaconda, because anaconda is going to have to create lvm.conf at
install time. If the root filesystem is in a logical volume, it has
to be done before mkinitrd. And, again, this will make the rescue CD
useless for LVM systems using RAID. Please reconsider. Why is it so
hard to enumerate raid components and automatically skip them?
Failing that, consider a plug-in system (e.g., vgscan runs a script)
that generates the skip list on the fly. Then I can write a script or
program that scans /proc/mdstat and outputs a filter list of raid
members to skip. And then you can integrate that into lvm because it
really doesn't make sense for lvm to intentionally do something that
is obviously wrong. :-/
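A sketch of the helper proposed above, assuming a function that reads
/proc/mdstat-formatted text on stdin and prints one lvm.conf-style
reject pattern per RAID member; the function name and exact output
format are assumptions, not an existing LVM interface:

```shell
# Hypothetical helper: read mdstat-formatted text on stdin and print an
# lvm.conf reject entry for each device that is a member of an active array.
mdstat_filter() {
    awk '/^md[0-9]+ : active/ {
        for (i = 5; i <= NF; i++) {
            dev = $i
            sub(/\[.*$/, "", dev)    # strip the "[0]" member index
            printf "\"r|^/dev/%s$|\"\n", dev
        }
    }'
}
```

Run as `mdstat_filter < /proc/mdstat`; the output lines could then be
spliced into the filter setting in lvm.conf.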
Agreed, we really can't simply walk away from this and decide that
rescue mode won't work on lvm-on-raid1.
I don't think that it is hard to filter out the underlying devices
making up an md RAID 1. Our recommendation so far has been an
'external' setup: configure a filter in lvm.conf before running vgscan.
We're reconsidering the issue.
Firstly, the LVM2 installer and the RPM need to be more helpful and
install the default /etc/lvm/lvm.conf file if none exists.
Secondly, yes, we need to fix the md detection.
2.00.12 installs the default lvm.conf if none exists
I've not managed to reproduce this yet in my test environments.
LVM2 correctly ignores constituent md devices for me when the raid
device is active.
When it's inactive, I get the warning messages, but this doesn't cause
problems - vgscan still exits with success.
There must be some other factor at play here that I'm missing.
> 2.00.12 installs the default lvm.conf if none exists
overwriting my lvm.conf that actually worked, replacing it with one
that fails to exclude the raid 1 members, which then breaks the next
time I update the kernel :-(
> There must be some other factor at play here that I'm missing.
This is with lvm vgscan as started within initrd. Did you actually
set up the root filesystem on LVM on raid 1, as described in the
original bug report? Are you using the default lvm.conf, or something
else that would exclude the raid components?
lvm.conf was recently marked %config(noreplace), so hopefully that
particular issue won't be a problem in the future.
Ah - I didn't realise you actually had rootfs on raid1: I read step 2
as adding raid components to the VG *after* the rootfs was created :-(
It's probably vgchange that's failing rather than vgscan.
vgchange -ay must not be run before mdadm in the boot sequence, or
there'll be problems. And it's horrible to require that ordering,
because some people will need it the other way around.
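The required ordering can be sketched as an init-script fragment; this
is illustrative only, not the actual Fedora initrd:

```shell
# Illustrative initrd ordering sketch -- RAID arrays must be assembled
# before LVM activation, or vgchange sees the bare member partitions as
# duplicate PVs.
mdadm --assemble --scan    # assemble md devices from their members
lvm vgscan                 # scan block devices for volume groups
lvm vgchange -ay           # activate all logical volumes found
```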
OK, now I understand what's going on, I'll sort out a patch.
Hopefully fixed now in lvm2-2.00.14-1.1 (submitted to dist-fc2-HEAD).
Yay! Confirmed, thanks. For the first time, I can boot my desktop
without a custom lvm.conf.