Description of problem:
This issue is already described in:
https://bugzilla.redhat.com/show_bug.cgi?id=676909#c5.

[root@taft-01 ~]# lvs -a -o +devices
  LV                        VG   Attr   LSize   Log                 Copy%  Devices
  cmirror-origin            taft mwi-a- 100.00m cmirror-origin_mlog 100.00 cmirror-origin_mimage_0(0),cmirror-origin_mimage_1(0)
  [cmirror-origin_mimage_0] taft iwi-ao 100.00m                            /dev/sdb1(0)
  [cmirror-origin_mimage_1] taft iwi-ao 100.00m                            /dev/sdc1(0)
  [cmirror-origin_mlog]     taft lwi-ao   4.00m                            /dev/sdh1(0)

[root@taft-01 ~]# lvchange -an taft/cmirror-origin
[root@taft-01 ~]# lvchange -aye taft/cmirror-origin

[root@taft-01 ~]# lvcreate -s taft/cmirror-origin -L 50M -n snap1
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "snap1" created
[root@taft-01 ~]# lvcreate -s taft/cmirror-origin -L 50M -n snap2
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "snap2" created
[root@taft-01 ~]# lvcreate -s taft/cmirror-origin -L 50M -n snap3
  Rounding up size to full physical extent 52.00 MiB
  Logical volume "snap3" created

[root@taft-01 ~]# lvs
  LV             VG   Attr   LSize   Origin         Snap%  Log                 Copy%
  cmirror-origin taft owi-a- 100.00m                       cmirror-origin_mlog 100.00
  snap1          taft swi-a-  52.00m cmirror-origin   0.00
  snap2          taft swi-a-  52.00m cmirror-origin   0.00
  snap3          taft swi-a-  52.00m cmirror-origin   0.00

[root@taft-01 ~]# lvremove taft/snap1
Do you really want to remove active clustered logical volume snap1? [y/n]: y
  Logical volume "snap1" successfully removed
[root@taft-01 ~]# lvremove taft/snap2
Do you really want to remove active clustered logical volume snap2? [y/n]: y
  Logical volume "snap2" successfully removed
[root@taft-01 ~]# lvremove taft/snap3
Do you really want to remove active clustered logical volume snap3? [y/n]: y
  Error locking on node taft-01: Command timed out
  Failed to resume cmirror-origin.
  Error locking on node taft-01: Command timed out

[DEADLOCK]

Version-Release number of selected component (if applicable):
2.6.32-94.el6.x86_64

lvm2-2.02.83-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
lvm2-libs-2.02.83-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
lvm2-cluster-2.02.83-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
udev-147-2.31.el6    BUILT: Wed Jan 26 05:39:15 CST 2011
device-mapper-1.02.62-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
device-mapper-libs-1.02.62-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
device-mapper-event-1.02.62-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
device-mapper-event-libs-1.02.62-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011
cmirror-2.02.83-2.el6    BUILT: Tue Feb 8 10:10:57 CST 2011

How reproducible:
Every time
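Condensed for reuse, the session above amounts to the following reproducer sketch. It assumes a clustered VG named taft with enough free space for the mirror and snapshots; the mirror-creation command is not shown in the transcript above, so the lvcreate -m 1 line here is an assumption based on the test scenario later in this report.

#!/bin/bash
# Reproducer sketch for the exclusive-mirror snapshot deadlock.
# Assumed names: clustered VG "taft", mirror LV "cmirror-origin".
set -x

# Create a 2-way mirror in the clustered VG (assumed setup step).
lvcreate -m 1 -L 100M -n cmirror-origin taft

# Deactivate cluster-wide, then activate exclusively on this node.
lvchange -an  taft/cmirror-origin
lvchange -aye taft/cmirror-origin

# Take several snapshots of the exclusively activated mirror.
for i in 1 2 3; do
    lvcreate -s taft/cmirror-origin -L 50M -n snap$i
done

# With the affected packages, removing the snapshots hangs on the last one
# ("Error locking on node ...: Command timed out").
for i in 1 2 3; do
    lvremove -f taft/snap$i
done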
I verified that this only happens with snapshots of exclusively activated cluster mirrors, not with exclusively activated cluster linear volumes.
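For reference, the linear control case looks roughly like the sketch below (the LV name linear-origin is made up for illustration; the same taft VG is assumed). The identical sequence completes without the lock timeout:

# Control case: exclusively activated cluster *linear* LV.
lvcreate -L 100M -n linear-origin taft
lvchange -an  taft/linear-origin
lvchange -aye taft/linear-origin
lvcreate -s taft/linear-origin -L 50M -n linsnap1
lvremove -f taft/linsnap1        # completes; no lock timeout, no deadlock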
We must either find a solution or disallow snapshots of mirrors in a cluster...
Creating the snapshot preserves the exclusive lock on the origin LV, but it triggers a table reload, and the reloaded table uses the cluster log. So we go from a single-machine mirror to a cluster mirror running on one node...
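A sketch of one way to observe this, assuming your device-mapper table wording matches (the "mirror <log-type>" keywords are the part I would expect to be stable across versions):

# Dump all device-mapper tables and pick out the mirror target line; the word
# after "mirror" names the log type.  With a plain exclusive activation this
# should be a local "disk" (or "core") log; after the snapshot forces the
# reload it becomes the clustered "userspace" log, even though the LV is only
# active on this one node.
dmsetup table | grep ' mirror '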
Patch posted to lvm-devel... waiting for feedback before checking in.
Fixed upstream in version 2.02.85.
This test case now works with the latest rpms.

2.6.32-94.el6.x86_64

lvm2-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-libs-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
lvm2-cluster-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
udev-147-2.31.el6    BUILT: Wed Jan 26 05:39:15 CST 2011
device-mapper-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
device-mapper-event-libs-1.02.62-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011
cmirror-2.02.83-3.el6    BUILT: Fri Mar 18 09:31:10 CDT 2011

SCENARIO - [snaphot_exclusive_mirror]
Snapshot an exclusively activated mirror
taft-03: lvcreate -m 1 -n exclusive_origin -L 100M mirror_sanity
Deactivate and then exclusively activate mirror
Taking multiple snapshots of exclusive mirror
1 2 3 4 5
Removing snapshots of exclusive mirror
1 2 3 4 5
Deactivating mirror exclusive_origin... and removing
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0772.html