Description of problem:
This may be related to bz 207204. I created two very segmented LVs using lvextend and then attempted a pvmove on that volume.

[root@link-08 ~]# lvs -a -o +devices
  LV     VG  Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  test1  V   -wi-a- 46.00G                               /dev/sda1(0)
  test1  V   -wi-a- 46.00G                               /dev/sda1(2560)
  test1  V   -wi-a- 46.00G                               /dev/sda1(3584)
  test1  V   -wi-a- 46.00G                               /dev/sda1(4608)
  test1  V   -wi-a- 46.00G                               /dev/sda1(5632)
  test1  V   -wi-a- 46.00G                               /dev/sda1(6656)
  test1  V   -wi-a- 46.00G                               /dev/sda1(7680)
  test1  V   -wi-a- 46.00G                               /dev/sda1(8704)
  test1  V   -wi-a- 46.00G                               /dev/sda1(9728)
  test1  V   -wi-a- 46.00G                               /dev/sda1(10752)
  test1  V   -wi-a- 46.00G                               /dev/sda1(12800)
  test1  V   -wi-a- 46.00G                               /dev/sda1(23296)
  test2  V   -wi-a- 47.00G                               /dev/sda1(1280)
  test2  V   -wi-a- 47.00G                               /dev/sda1(3072)
  test2  V   -wi-a- 47.00G                               /dev/sda1(4096)
  test2  V   -wi-a- 47.00G                               /dev/sda1(5120)
  test2  V   -wi-a- 47.00G                               /dev/sda1(6144)
  test2  V   -wi-a- 47.00G                               /dev/sda1(7168)
  test2  V   -wi-a- 47.00G                               /dev/sda1(8192)
  test2  V   -wi-a- 47.00G                               /dev/sda1(9216)
  test2  V   -wi-a- 47.00G                               /dev/sda1(10240)
  test2  V   -wi-a- 47.00G                               /dev/sda1(11776)
  test2  V   -wi-a- 47.00G                               /dev/sda1(17920)
  test2  V   -wi-a- 47.00G                               /dev/sda1(23552)

[root@link-08 ~]# pvmove -n test1 /dev/sda1
  Error locking on node link-08: Volume is busy on another node
  Failed to activate test1

[root@link-08 ~]# vgchange -an
  0 logical volume(s) in volume group "V" now active

[root@link-08 ~]# pvmove -n test1 /dev/sda1
  /dev/sda1: Moved: 0.6%
  /dev/sda1: Moved: 1.2%
  /dev/sda1: Moved: 1.8%
  /dev/sda1: Moved: 2.4%
  /dev/sda1: Moved: 3.1%
  /dev/sda1: Moved: 3.7%
  /dev/sda1: Moved: 4.3%
  /dev/sda1: Moved: 4.9%
  /dev/sda1: Moved: 5.5%
  /dev/sda1: Moved: 6.1%
  /dev/sda1: Moved: 6.7%
  /dev/sda1: Moved: 7.3%
  /dev/sda1: Moved: 8.0%
  /dev/sda1: Moved: 8.6%
  /dev/sda1: Moved: 9.3%
  /dev/sda1: Moved: 9.9%
  /dev/sda1: Moved: 10.6%
  /dev/sda1: Moved: 10.9%
  Error locking on node link-08: device-mapper: reload ioctl failed: Invalid argument
  Unable to reactivate logical volume "pvmove0"
  ABORTING: Segment progression failed.
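For reference, a fragmented layout like the one above can be produced by growing the two LVs alternately, so each lvextend grabs the next free extents and the two volumes end up interleaved on the PV. A minimal sketch of that pattern (names and sizes are illustrative; it is written as a dry run that only prints the commands, so nothing is touched until you swap echo for real execution):

```shell
#!/bin/sh
# Dry-run sketch: print an alternating lvcreate/lvextend sequence that
# interleaves the extents of two LVs on one PV. The VG name "V", the LV
# names, and the 4G step size are illustrative, not from the bug report.
run() { echo "$@"; }   # replace 'echo' with actual execution when ready

run lvcreate -n test1 -L 4G V /dev/sda1
run lvcreate -n test2 -L 4G V /dev/sda1
for i in 1 2 3 4 5; do
    run lvextend -L +4G V/test1 /dev/sda1   # next free extents go to test1
    run lvextend -L +4G V/test2 /dev/sda1   # then to test2, fragmenting both
done
```

Each extension claims the lowest free extents on /dev/sda1, so after a few rounds both LVs consist of many small, non-contiguous segments, similar to the lvs output shown above.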
[root@link-08 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "V" using metadata type lvm2

[root@link-08 ~]# dmsetup ls | grep V
  V-test1                 (253, 2)
  VolGroup00-LogVol01     (253, 1)
  VolGroup00-LogVol00     (253, 0)

[root@link-08 ~]# dmsetup table | grep V
  V-test1: 0 10485760 linear 8:17 384
  V-test1: 10485760 4194304 linear 8:1 20971904
  V-test1: 14680064 4194304 linear 8:1 29360512
  V-test1: 18874368 4194304 linear 8:1 37749120
  V-test1: 23068672 4194304 linear 8:1 46137728
  V-test1: 27262976 4194304 linear 8:1 54526336
  V-test1: 31457280 4194304 linear 8:1 62914944
  V-test1: 35651584 4194304 linear 8:1 71303552
  V-test1: 39845888 4194304 linear 8:1 79692160
  V-test1: 44040192 8388608 linear 8:1 88080768
  V-test1: 52428800 41943040 linear 8:1 104857984
  V-test1: 94371840 2097152 linear 8:1 190841216
  VolGroup00-LogVol01: 0 4063232 linear 3:2 151912832
  VolGroup00-LogVol00: 0 151912448 linear 3:2 384

Version-Release number of selected component (if applicable):

[root@link-08 ~]# rpm -qa | grep lvm2
  lvm2-cluster-2.02.13-1
  lvm2-cluster-debuginfo-2.02.06-7.0.RHEL4
  lvm2-2.02.13-1

[root@link-08 ~]# rpm -qa | grep device-mapper
  device-mapper-debuginfo-1.02.07-4.0.RHEL4
  device-mapper-1.02.12-3
I will try this on single-node lvm as well.
As pvmove isn't even slightly cluster-aware, I would be very surprised if it behaved differently on a single node.
This is most likely a single-node lvm2 bug.
This is reproducible.
*** Bug 207204 has been marked as a duplicate of this bug. ***
Key information is missing: complete 'dmsetup info -c' and 'dmsetup table' output, the kernel message log, the kernel version, etc. The 'lvmdump' script now exists to help you gather all this info.
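On a system where the lvmdump script is not available, the same information can be gathered by hand. A minimal sketch (the output file names and directory layout here are illustrative, not the lvmdump format; only standard tools such as uname, dmsetup, and dmesg are used):

```shell
#!/bin/sh
# Illustrative manual collection of the diagnostics requested above.
# Directory and file names are made up for this sketch.
out="lvm-debug-$(uname -n)-$(date +%Y%m%d)"
mkdir -p "$out"
uname -a > "$out/uname.txt"                          # kernel version
if command -v dmsetup >/dev/null 2>&1; then
    dmsetup info -c > "$out/dmsetup_info.txt" 2>&1   # complete dm device info
    dmsetup table   > "$out/dmsetup_table.txt" 2>&1  # full table output
fi
dmesg > "$out/dmesg.txt" 2>&1 || true                # kernel message log
echo "collected into $out"
```

Attaching the resulting directory (tarred) to the bug would cover the missing pieces listed above.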
I just ran pvmove tests with upstream code on very segmented volumes and it works (though the memory requirements during monitoring are too high, IMHO). Is this still reproducible with the current lvm2 package? If so, please attach an lvmdump from the failing system.
This bug hasn't been seen in over a year, and this test case is run quite often as part of mirror sanity testing. Closing...