Description of problem:
Splitting a mirror only works when the PV backing the last raid leg is chosen; specifying any other PV fails and leaves devices suspended.

[root@taft-01 ~]# lvs -a -o +devices
  LV                                VG          Attr     LSize   Copy%  Devices
  split_pvs_sequentially            split_image rwi-a-m- 300.00m 100.00 split_pvs_sequentially_rimage_0(0),split_pvs_sequentially_rimage_1(0),split_pvs_sequentially_rimage_2(0),split_pvs_sequentially_rimage_3(0)
  [split_pvs_sequentially_rimage_0] split_image iwi-aor- 300.00m        /dev/sdh1(1)
  [split_pvs_sequentially_rimage_1] split_image iwi-aor- 300.00m        /dev/sdg1(1)
  [split_pvs_sequentially_rimage_2] split_image iwi-aor- 300.00m        /dev/sdf1(1)
  [split_pvs_sequentially_rimage_3] split_image iwi-aor- 300.00m        /dev/sde1(1)
  [split_pvs_sequentially_rmeta_0]  split_image ewi-aor-   4.00m        /dev/sdh1(0)
  [split_pvs_sequentially_rmeta_1]  split_image ewi-aor-   4.00m        /dev/sdg1(0)
  [split_pvs_sequentially_rmeta_2]  split_image ewi-aor-   4.00m        /dev/sdf1(0)
  [split_pvs_sequentially_rmeta_3]  split_image ewi-aor-   4.00m        /dev/sde1(0)

[root@taft-01 ~]# lvconvert --splitmirrors 1 --name new1 split_image/split_pvs_sequentially /dev/sdg1
  device-mapper: rename ioctl on split_image-split_pvs_sequentially_rimage_3 failed: Device or resource busy
  Failed to rename split_image-split_pvs_sequentially_rimage_3 (253:9) to split_image-split_pvs_sequentially_rimage_2
  Failed to resume split_image/split_pvs_sequentially after committing changes
  libdevmapper exiting with 9 device(s) still suspended.
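Until this is fixed, the split succeeds only for the PV backing the highest-numbered rimage sub-LV. As an illustrative sketch (not part of the original report), a small helper that parses `lvs -a -o name,devices --noheadings` output to identify that last-leg PV might look like this; the function name and the sample text are based on the transcript above, not on any stable LVM interface:

```python
import re

def last_leg_pv(lvs_output, lv_name):
    """Return the PV backing the highest-numbered rimage of lv_name.

    Parses output in the shape produced by:
        lvs -a -o name,devices --noheadings <vg>
    Illustrative only; the format follows the transcript in this report.
    """
    pat = re.compile(rf"\s*\[{re.escape(lv_name)}_rimage_(\d+)\]\s+(\S+?)\(\d+\)")
    best_idx, best_pv = -1, None
    for line in lvs_output.splitlines():
        m = pat.match(line)
        if m and int(m.group(1)) > best_idx:
            best_idx, best_pv = int(m.group(1)), m.group(2)
    return best_pv

# Sample taken from the lvs output shown in comment 1 below.
sample = """\
  my_lv            my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
  [my_lv_rimage_0] /dev/vda(1)
  [my_lv_rimage_1] /dev/vdb(1)
  [my_lv_rimage_2] /dev/vdc(1)
  [my_lv_rmeta_0]  /dev/vda(0)
"""
print(last_leg_pv(sample, "my_lv"))  # prints /dev/vdc
```

Passing the returned PV to `lvconvert --splitmirrors 1` avoids the rename failure, since no remaining rimage needs to be renumbered.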
Version-Release number of selected component (if applicable):
2.6.32-220.4.2.el6.x86_64
lvm2-2.02.95-2.el6                       BUILT: Fri Mar 16 08:39:54 CDT 2012
lvm2-libs-2.02.95-2.el6                  BUILT: Fri Mar 16 08:39:54 CDT 2012
lvm2-cluster-2.02.95-2.el6               BUILT: Fri Mar 16 08:39:54 CDT 2012
udev-147-2.40.el6                        BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.74-2.el6              BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-libs-1.02.74-2.el6         BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-event-1.02.74-2.el6        BUILT: Fri Mar 16 08:39:54 CDT 2012
device-mapper-event-libs-1.02.74-2.el6   BUILT: Fri Mar 16 08:39:54 CDT 2012
cmirror-2.02.95-2.el6                    BUILT: Fri Mar 16 08:39:54 CDT 2012

How reproducible:
Every time
I came across this same issue when I was trying to put together an example for the documentation that specified which PV to remove from a RAID volume. This was the volume I started with:

[root@doc-04 lvm]# lvs -a -o name,copy_percent,devices vg001
  LV               Copy%  Devices
  my_lv            100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0)
  [my_lv_rimage_0]        /dev/vda(1)
  [my_lv_rimage_1]        /dev/vdb(1)
  [my_lv_rimage_2]        /dev/vdc(1)
  [my_lv_rmeta_0]         /dev/vda(0)
  [my_lv_rmeta_1]         /dev/vdb(0)
  [my_lv_rmeta_2]         /dev/vdc(0)

When I executed the splitmirror command and specified the middle PV, this was the result:

[root@doc-04 lvm]# lvconvert --splitmirror 1 -n new vg001/my_lv /dev/vdb
  device-mapper: rename ioctl on vg001-my_lv_rimage_2 failed: Device or resource busy
  Failed to rename vg001-my_lv_rimage_2 (253:7) to vg001-my_lv_rimage_1
  Failed to resume vg001/my_lv after committing changes
  libdevmapper exiting with 7 device(s) still suspended.

This left the system frozen. It's a virtual system, and it wouldn't shut down -- I had to destroy it and reboot it. This was the state when I brought things back up:

[root@doc-04 ~]# lvs
  LV                      VG       Attr     LSize   Pool Origin Data% Move Log Copy%  Convert
  my_lv                   vg001    rwi-a-m- 100.00m                            100.00
  my_lv_rmeta_1_extracted vg001    -wi-a---   4.00m
  new                     vg001    -wi-a--- 100.00m
  lv_root                 vg_doc04 -wi-ao--   2.54g
  lv_swap                 vg_doc04 -wi-ao--   1.97g

[root@doc-04 ~]# lvs -a -o name,copy_percent,devices vg001
  LV                      Copy%  Devices
  my_lv                   100.00 my_lv_rimage_0(0),my_lv_rimage_1(0)
  [my_lv_rimage_0]               /dev/vda(1)
  [my_lv_rimage_1]               /dev/vdc(1)
  [my_lv_rmeta_0]                /dev/vda(0)
  [my_lv_rmeta_1]                /dev/vdc(0)
  my_lv_rmeta_1_extracted        /dev/vdb(0)
  new                            /dev/vdb(1)

I had to lvremove the volume "new" and the volume "my_lv_rmeta_1_extracted".

I'm adding myself to the CC list so I can see updates to this. For the moment I won't be documenting an example of specifying which PV to remove with a splitmirror command.
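For anyone hitting the same state, the manual recovery described above amounts to removing the two leftover volumes. A sketch of the cleanup (destructive, run as root; the volume names come from the transcript above and will differ on other systems):

```shell
# Remove the half-split image volume and the orphaned metadata sub-LV
# left behind by the failed lvconvert --splitmirror run.
lvremove -f vg001/new
lvremove -f vg001/my_lv_rmeta_1_extracted
```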
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team. New Contents: New feature in rhel6.3. No release notes necessary.
Fix verified in the latest rpms.

2.6.32-262.el6.x86_64
lvm2-2.02.95-5.el6                       BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-libs-2.02.95-5.el6                  BUILT: Thu Apr 19 10:29:01 CDT 2012
lvm2-cluster-2.02.95-5.el6               BUILT: Thu Apr 19 10:29:01 CDT 2012
udev-147-2.41.el6                        BUILT: Thu Mar  1 13:01:08 CST 2012
device-mapper-1.02.74-5.el6              BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-libs-1.02.74-5.el6         BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-1.02.74-5.el6        BUILT: Thu Apr 19 10:29:01 CDT 2012
device-mapper-event-libs-1.02.74-5.el6   BUILT: Thu Apr 19 10:29:01 CDT 2012
cmirror-2.02.95-5.el6                    BUILT: Thu Apr 19 10:29:01 CDT 2012

============================================================
Iteration 10 of 10 started at Thu Apr 19 13:59:07 CDT 2012
============================================================
SCENARIO - [sequentially_split_off_all_raid1_pvs]
Create a raid1 with many legs and then sequentially split off each one of the PVs
taft-01: lvcreate --type raid1 -m 4 -n split_pvs_sequentially -L 300M split_image
Waiting until all mirror|raid volumes become fully syncd...
   0/1 mirror(s) are fully synced: ( 86.25% )
   1/1 mirror(s) are fully synced: ( 100.00% )
splitting off legs: /dev/sde1 /dev/sdh1 /dev/sdd1 /dev/sdf1
Deactivating mirror new0... and removing
Deactivating mirror new1... and removing
Deactivating mirror new2... and removing
Deactivating mirror new3... and removing
Deactivating mirror split_pvs_sequentially... and removing
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2012-0962.html