Description of problem:
Unable to activate newly created clvm mirrored volumes. The system previously ran the RHEL 4.5 beta. We wanted to test beta1 to see whether the problems in BZ 235948 persisted and, if so, gather more information about them, as requested.

Version-Release number of selected component (if applicable):
RHEL 4.5 beta1

How reproducible:
Every time.

Steps to Reproduce:
1. Stopped the cluster and removed the LVM and filesystem definitions from the cluster configuration:
   service rgmanager stop
   service fenced stop
   service clvmd stop
   service cman stop
   service ccsd stop
2. Upgraded to beta1 (rpm -Fvh *).
3. Removed the LVM configuration (LVs and VGs).
4. Started the cluster:
   service ccsd start
   service cman start
   service clvmd start
   service fenced start
   service rgmanager start
5. Redefined all SAN disks with pvcreate.
6. Created two VGs:
   vgcreate testvg1 /dev/emcpowera /dev/emcpowerc
   vgcreate testvg2 /dev/emcpoweri /dev/emcpowerj
   From this we can see that the VGs automatically get the cluster flag set.
7. Created two mirrored LVs:
   lvcreate -L 500M -m 1 --corelog -n testlv1 testvg1
   lvcreate -L 500M -m 1 --corelog -n testlv2 testvg2
   When creating the LVs we get error messages regarding cluster locking etc., but the LVs are created.

From this point we are unable to continue the test. We can see the LVM information about the VGs and LVs, but we are not able to activate them, and therefore cannot set up a filesystem on them or add them to the cluster configuration.

Actual results:
Unable to activate the volumes.

Expected results:
Volume activation returns OK.

Additional info:
The customer wants to upload sysreports for his servers but needs an FTP location from us to do that.
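The cluster flag mentioned in step 6 can also be checked mechanically rather than by eyeballing vgdisplay output. A minimal sketch, assuming lvm2's six-character vg_attr layout, where the sixth character is 'c' for a clustered VG (the helper name is hypothetical):

```shell
# Sketch: succeed if a vg_attr string marks the VG as clustered.
# Assumes the vg_attr layout documented in vgs(8): 6th char is 'c' when clustered.
vg_is_clustered() {
    # $1 is a vg_attr string such as "wz--nc"
    case "$1" in
        ?????c) return 0 ;;
        *)      return 1 ;;
    esac
}

# Typical live use on a cluster node (hypothetical):
#   vg_is_clustered "$(vgs --noheadings -o vg_attr testvg1 | tr -d ' ')"
```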
Some output:

[root@tnscl02cn001 ~]# lvs -a -o +devices; dmsetup status; dmsetup table
  LV                 VG      Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  testlv1            testvg1 mwi-d- 500.00M                          0.00 testlv1_mimage_0(0),testlv1_mimage_1(0)
  [testlv1_mimage_0] testvg1 iwi-a- 500.00M                               /dev/emcpowera(0)
  [testlv1_mimage_1] testvg1 iwi-a- 500.00M                               /dev/emcpowerc(0)
  testlv2            testvg1 mwi-d- 500.00M                          0.00 testlv2_mimage_0(0),testlv2_mimage_1(0)
  [testlv2_mimage_0] testvg1 iwi-a- 500.00M                               /dev/emcpowera(125)
  [testlv2_mimage_1] testvg1 iwi-a- 500.00M                               /dev/emcpowerc(125)
testvg1-testlv2_mimage_1: 0 1024000 linear
testvg1-testlv2_mimage_0: 0 1024000 linear
testvg1-testlv1_mimage_1: 0 1024000 linear
testvg1-testlv2:
testvg1-testlv1_mimage_0: 0 1024000 linear
testvg1-testlv1:
testvg1-testlv2_mimage_1: 0 1024000 linear 120:32 1024384
testvg1-testlv2_mimage_0: 0 1024000 linear 120:0 1024384
testvg1-testlv1_mimage_1: 0 1024000 linear 120:32 384
testvg1-testlv2:
testvg1-testlv1_mimage_0: 0 1024000 linear 120:0 384
testvg1-testlv1:

[root@tnscl02cn002 ~]# lvs -a -o +devices; dmsetup status; dmsetup table
  LV                 VG      Attr   LSize   Origin Snap%  Move Log Copy%  Devices
  testlv1            testvg1 mwi-d- 500.00M                          0.00 testlv1_mimage_0(0),testlv1_mimage_1(0)
  [testlv1_mimage_0] testvg1 iwi-a- 500.00M                               /dev/emcpowera(0)
  [testlv1_mimage_1] testvg1 iwi-a- 500.00M                               /dev/emcpoweri(0)
  testlv2            testvg1 mwi-d- 500.00M                          0.00 testlv2_mimage_0(0),testlv2_mimage_1(0)
  [testlv2_mimage_0] testvg1 iwi-a- 500.00M                               /dev/emcpowera(125)
  [testlv2_mimage_1] testvg1 iwi-a- 500.00M                               /dev/emcpoweri(125)
testvg1-testlv2_mimage_1: 0 1024000 linear
testvg1-testlv2_mimage_0: 0 1024000 linear
testvg1-testlv1_mimage_1: 0 1024000 linear
testvg1-testlv2:
testvg1-testlv1_mimage_0: 0 1024000 linear
testvg1-testlv1:
testvg1-testlv2_mimage_1: 0 1024000 linear 120:128 1024384
testvg1-testlv2_mimage_0: 0 1024000 linear 120:0 1024384
testvg1-testlv1_mimage_1: 0 1024000 linear 120:128 384
testvg1-testlv2:
testvg1-testlv1_mimage_0: 0 1024000 linear 120:0 384
testvg1-testlv1:
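For reference, the 'd' in the mirror LVs' Attr field ("mwi-d-") is the lv_attr state character; per lvs(8) it means a mapped device is present without tables, which matches the empty dmsetup table entries for testlv1 and testlv2 above. A small sketch decoding that character (mapping abbreviated to the common values; the helper name is hypothetical):

```shell
# Sketch: decode the state character (5th) of an lv_attr string such as "mwi-d-".
# Assumes the lv_attr layout documented in lvs(8).
lv_state() {
    case "$(printf '%s' "$1" | cut -c5)" in
        a) echo "active" ;;
        s) echo "suspended" ;;
        d) echo "device present without tables" ;;
        i) echo "device present with inactive table" ;;
        -) echo "not present" ;;
        *) echo "unknown" ;;
    esac
}
```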
Created attachment 152534 [details] lvmdump of tnscl02cn001
Created attachment 152535 [details] lvmdump of tnscl02cn002
Could you wipe out the PVs, reboot the systems, and try again? Is this reproducible from a clean start? I'm also wondering whether steps 1, 2, 3 should have been done in the order 3, 1, 2.
"How reproducible: Every time."
Which part of the process is reproducible every time?

"When creating the LVs we get error messages regarding cluster locking etc., but the LVs are created."
What are the error messages? Is there anything in /var/log/messages?
Apr 13 10:04:13 tnscl02cn001 kernel: device-mapper: dm-mirror: Error creating mirror dirty log
Apr 13 10:04:13 tnscl02cn001 kernel: device-mapper: error adding target to table

Did you load the cmirror module? You can do that with either 'service cmirror [re]start' or 'modprobe dm-cmirror'. You can check for it with 'lsmod'.
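That check can be scripted. A minimal sketch, assuming the module appears as "dm_cmirror" in lsmod output (the kernel reports dashes in module names as underscores); the helper name is hypothetical and reads lsmod-style text on stdin:

```shell
# Sketch: succeed if lsmod-style input shows the cluster mirror log module.
# Assumption: the module loaded by 'modprobe dm-cmirror' lists as "dm_cmirror".
cmirror_loaded() {
    grep -q '^dm_cmirror'
}

# Typical live use on a cluster node (hypothetical):
#   lsmod | cmirror_loaded || modprobe dm-cmirror
```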
OK, I'm going to assume that you haven't [re]loaded the dm-cmirror module. (This module should be loaded by the cmirror init script.) Marking as NOTABUG. Please feel free to reopen if this is not the case.
This is really a dup of bz 182432. *** This bug has been marked as a duplicate of 182432 ***
Yes, this was caused by the cmirror module not being loaded.