Description of problem:
We started a new test with cmirror activated (it was not activated earlier); the LVM configuration was scratched!

Version-Release number of selected component (if applicable):
RHEL 5 RHCS beta1

How reproducible:
Follow the steps below.

Steps to Reproduce:
1. Start the cluster on both nodes:
     service ccsd start
     service cman start
     service cmirror start
     service clvmd start
     service fenced start
     service rgmanager start
   OK - no problems!
2. Create a new VG:
     vgcreate testvg1 /dev/emcpowera /dev/emcpowerc
   OK - no problems!
3. Create a new mirrored LV:
     lvcreate -L 500M -m 1 --corelog -n testlv1 testvg1
   OK - no problems!
4. Create an ext3 filesystem on the volume:
     mke2fs -j /dev/testvg1/testlv1
   OK - no problems!
5. Configure the cluster with the LVM volume and the filesystem as resources, and add them to a failover service.
   OK - no problems!
6. With the cluster active and a job writing to the filesystem, remove one of the disks in the mirrored volume from the SAN side (/dev/emcpowerc).
   OK - the volume is automatically converted to a linear volume without downtime. The job writing to the filesystem continues without problems. LVM status is OK.
7. Repeat the test above, but in addition force a power-off on the active cluster node (with the writing job still active).
   OK - the volume behaves as above, and in addition the cluster fails over to the second node. LVM status is OK. The write job is forced to halt, as expected ;-)

-> Problems from this point!

We then bring up the node that was forced down by the power-off and join it back into the cluster. The SAN disk is also reactivated. From this point on, the cluster service fails to handle the filesystem/volume.
As an example, we get the following message when trying to activate the volume (on both nodes):

[root@tnscl02cn001 ~]# vgchange -a y
  Volume group "testvg1" inconsistent
  Inconsistent metadata copies found - updating to use version 188
  Error locking on node tnscl02cn001: Volume group for uuid not found: kigNllj6NfwPVqvTyihozk2MBX2Z3hqNyXXyTC4s5jR8RJVo18CqqCgqkyJCiCWn
  Error locking on node tnscl02cn002: Volume group for uuid not found: kigNllj6NfwPVqvTyihozk2MBX2Z3hqNyXXyTC4s5jR8RJVo18CqqCgqkyJCiCWn
  0 logical volume(s) in volume group "testvg1" now active

So at the moment we are not able to activate the volumes... any tips (cleaning etc.)?

Actual results:
Unable to activate the volume on the host that rejoined after the power failure.

Expected results:
Able to activate the volume on the host that rejoined after the power failure.

Additional info:
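For reference, a possible cleanup sequence is sketched below. This assumes the stale metadata on the re-attached SAN disk is what confuses activation; these are standard LVM2 commands, not a fix confirmed for this bug, and `testvg1` is the volume group from the steps above.

```shell
# Hedged sketch: standard LVM2 cleanup after losing a mirror leg, assuming
# the inconsistency comes from stale metadata on the returned disk.
# Run as root on one node with clvmd running.

# Rescan physical volumes so LVM notices the re-attached SAN disk.
pvscan

# Drop references to any PVs whose metadata is missing or inconsistent.
vgreduce --removemissing testvg1

# Try activating the volume group again.
vgchange -a y testvg1
```

Whether `vgreduce --removemissing` is safe here depends on the state of the mirror; the bug itself was resolved as a duplicate rather than by this workaround.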
*** This bug has been marked as a duplicate of 237175 ***