Description of problem:
When you specify too few mimages to remove during a mirror down-convert, LVM picks the remaining images to remove on its own in order to complete the requested down-convert. However, attempting the same thing in the RAID world just produces an error stating there are not enough rimages to satisfy the request. This isn't really a big deal, other than that the behavior should probably be the same for both volume types.

# MIRROR DOWN CONVERT
[root@taft-01 ~]# lvcreate -m 2 -n mirror -L 500M taft
  Logical volume "mirror" created

[root@taft-01 ~]# lvs -a -o +devices
  LV                VG   Attr     LSize   Log         Copy%  Devices
  mirror            taft mwi-a-m- 500.00m mirror_mlog 100.00 mirror_mimage_0(0),mirror_mimage_1(0),mirror_mimage_2(0)
  [mirror_mimage_0] taft iwi-aom- 500.00m                    /dev/sdb1(0)
  [mirror_mimage_1] taft iwi-aom- 500.00m                    /dev/sdc1(0)
  [mirror_mimage_2] taft iwi-aom- 500.00m                    /dev/sdd1(0)
  [mirror_mlog]     taft lwi-aom- 4.00m                      /dev/sdh1(0)

[root@taft-01 ~]# lvconvert -m 0 taft/mirror /dev/sdd1
  Logical volume mirror converted.

[root@taft-01 ~]# lvs -a -o +devices
  LV     VG   Attr     LSize   Log Copy% Devices
  mirror taft -wi-a--- 500.00m           /dev/sdb1(0)

# RAID DOWN CONVERT
[root@taft-01 ~]# lvcreate -m 2 --type raid1 -n raid -L 500M taft
  Logical volume "raid" created

[root@taft-01 ~]# lvs -a -o +devices
  LV              VG   Attr     LSize   Log Copy%  Devices
  raid            taft rwi-a-m- 500.00m     100.00 raid_rimage_0(0),raid_rimage_1(0),raid_rimage_2(0)
  [raid_rimage_0] taft iwi-aor- 500.00m            /dev/sdb1(1)
  [raid_rimage_1] taft iwi-aor- 500.00m            /dev/sdc1(1)
  [raid_rimage_2] taft iwi-aor- 500.00m            /dev/sdd1(1)
  [raid_rmeta_0]  taft ewi-aor- 4.00m              /dev/sdb1(0)
  [raid_rmeta_1]  taft ewi-aor- 4.00m              /dev/sdc1(0)
  [raid_rmeta_2]  taft ewi-aor- 4.00m              /dev/sdd1(0)

[root@taft-01 ~]# lvconvert -m 0 taft/raid /dev/sdd1
  Unable to extract enough images to satisfy request
  Failed to extract images from taft/raid

Version-Release number of selected component (if applicable):
2.6.32-251.el6.x86_64

lvm2-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-libs-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
lvm2-cluster-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
udev-147-2.40.el6 BUILT: Fri Sep 23 07:51:13 CDT 2011
device-mapper-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-libs-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
device-mapper-event-libs-1.02.74-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
cmirror-2.02.95-3.el6 BUILT: Fri Mar 30 09:54:10 CDT 2012
Not for 6.3. For now, users will either have to specify a complete listing or none - no partial listings. (I actually prefer this method and would rather make mirroring behave like RAID...)
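As a concrete illustration of the "complete listing" workaround (a sketch only; it assumes the same taft VG and device layout as in the report above), naming every PV whose image is to be removed lets the RAID down-convert proceed:

[root@taft-01 ~]# lvconvert -m 0 taft/raid /dev/sdc1 /dev/sdd1

Here both images being dropped (on /dev/sdc1 and /dev/sdd1) are listed explicitly, leaving the image on /dev/sdb1 as the surviving linear volume.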
We'll have to decide which way to go on this issue before fixing this bug. I prefer the way RAID handles it, but the way mirror does it might be the way we are forced to go due to history.
Alternatively, just leave things as they are. That said, I don't find "Unable to extract enough images to satisfy request" particularly helpful.
I don't mind the difference in behavior, but agk is right that the message should be better. Perhaps we could count the PV list ahead of time and then tell the user they haven't specified enough PVs. I'll make this bug about the message then, not the behavior.
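To illustrate the idea (a hedged sketch; at this point in the thread the exact wording is an assumption, not implemented), counting the PV list up front would let the tool fail with something like:

[root@taft-01 ~]# lvconvert -m 0 taft/raid /dev/sdd1
  Unable to remove 2 images: Only 1 device given.
  Failed to extract images from taft/raid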
Fix posted upstream:

commit 6c6468f91d9b7a93a85726dfb1b397b555502c1c
Author: Jonathan Brassow <jbrassow>
Date:   Thu Apr 3 16:57:41 2014 -0500

    RAID: Improve an error message

    When down-converting a RAID1 LV, if the user specifies too few devices,
    they will get a confusing message.

    Ex:
    [root]# lvcreate -m 2 --type raid1 -n raid -L 500M taft
      Logical volume "raid" created

    [root]# lvconvert -m 0 taft/raid /dev/sdd1
      Unable to extract enough images to satisfy request
      Failed to extract images from taft/raid

    This patch makes the error message a bit clearer by telling the user
    the count they are trying to remove and the number of devices they
    supplied.

    [root@bp-01 lvm2]# lvcreate --type raid1 -m 3 -L 200M -n lv vg
      Logical volume "lv" created

    [root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sdb1
      Unable to remove 3 images: Only 1 device given.
      Failed to extract images from vg/lv

    [root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bc]1
      Unable to remove 3 images: Only 2 devices given.
      Failed to extract images from vg/lv

    [root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bcd]1

    [root@bp-01 lvm2]# lvs -a -o name,attr,devices vg
      LV   Attr       Devices
      lv   -wi-a----- /dev/sde1(1)

    This patch doesn't work in all cases. The user can specify the right
    number of devices, but not a sufficient number of devices from the LV.
    This will produce the old error message:

    [root@bp-01 lvm2]# lvconvert -m -3 vg/lv /dev/sd[bcf]1
      Unable to extract enough images to satisfy request
      Failed to extract images from vg/lv

    However, I think this error message is sufficient for this case.
Tested with:

lvm2-2.02.107-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
lvm2-libs-2.02.107-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
lvm2-cluster-2.02.107-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
udev-147-2.55.el6 BUILT: Wed Jun 18 13:30:21 CEST 2014
device-mapper-1.02.86-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-libs-1.02.86-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-event-1.02.86-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-event-libs-1.02.86-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6 BUILT: Fri Apr 4 15:43:06 CEST 2014
cmirror-2.02.107-1.el6 BUILT: Mon Jun 23 16:44:45 CEST 2014

The error messages make more sense and behave as described in the last comment (Comment #11). Marking VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-1387.html