rc.sysinit runs 'vgchange -ay --ignorelockingfailure', which attempts to activate every visible LV. It should only be activating the LVs required for local filesystems such as /usr or swap; clustered logical volumes are then activated later by the clvmd init script.
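For reference, the activation step in rc.sysinit looks roughly like this (a paraphrased sketch, not a verbatim copy; the /sbin/lvm.static path and the surrounding test are assumptions based on the RHEL initscripts):

  # Paraphrased fragment of the LVM activation step in rc.sysinit:
  if [ -x /sbin/lvm.static ]; then
      action $"Setting up Logical Volume Management:" \
          /sbin/lvm.static vgchange -a y --ignorelockingfailure
  fi

Because locking failures are deliberately ignored at this point (clvmd is not running yet), any cluster-wide locks held by other nodes are simply not seen.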
Follows on from: [Bug 151172] Updating kernel on node with lvm root vols renders machine unbootable. That one stops you booting; this one means the wrong volume groups might be activated on the wrong machines. For example, if you activated an LV exclusively on one node, this bug means another node would pay no regard to that during its boot sequence and would happily mount it concurrently (see the sketch below). This bug has no impact if every VG is meant to be activated on every node, or if the configuration is completely determined by the tagging and config files - you're just activating the LVs too soon, and when clvmd comes along it will acquire the locks that should have been taken earlier.
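A minimal sketch of that exclusive-activation case (the VG name and node prompts are hypothetical, for illustration only):

  nodeA# vgchange -aey vg_shared
  # node A now holds an exclusive activation lock via clvmd

  # node B reboots; rc.sysinit runs before clvmd is started:
  nodeB# vgchange -ay --ignorelockingfailure
  # locking is bypassed, so vg_shared is activated here too

Once both nodes have the LV active, nothing prevents both from mounting it concurrently.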
I'm investigating fixing this by adding a 'clustered' flag and ensuring that the initscripts ignore LVs that inherit this flag.
Flag added in 2.01.08:
- VGs that are meant to be clustered have a 'CLUSTERED' flag added to them.
- vgcreate adds the flag automatically if you're using clustered locking; otherwise it doesn't.
- The flag can be changed with vgchange -c.
- vgchange -ay --ignorelockingfailure (used by the initscripts and the new mkinitrd) now ignores LVs in VGs that have the CLUSTERED flag set.
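A quick sketch of how the flag behaves from the command line (VG names are hypothetical and output details approximate):

  # With clustered locking configured (locking_type = 3 in lvm.conf),
  # vgcreate sets the CLUSTERED flag automatically:
  vgcreate vg_shared /dev/sdb1

  # The flag can be toggled on an existing VG:
  vgchange -cy vg_other    # mark as clustered; skipped by 'vgchange -ay --ignorelockingfailure'
  vgchange -cn vg_shared   # mark as local; the initscripts will activate it again

  # Clustered VGs show a 'c' in the VG attribute field:
  vgs -o vg_name,vg_attr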
An advisory has been issued which addresses the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2005-192.html