Bug 213887 - pvmove of very segmented lv fails
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Milan Broz
QA Contact: Cluster QE
Duplicates: 207204
Reported: 2006-11-03 11:47 EST by Corey Marthaler
Modified: 2013-02-28 23:04 EST
CC List: 5 users

Doc Type: Bug Fix
Last Closed: 2007-12-19 15:48:45 EST

Description Corey Marthaler 2006-11-03 11:47:16 EST
Description of problem:
This may be related to bz 207204.

I created two very segmented LVs using lvextend and then attempted a pvmove on
one of them.
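
For reference, a fragmented layout like the one shown below can be produced by
growing two LVs alternately so their extents interleave on the same PV. This is
an editor's sketch, not the reporter's exact procedure: it assumes a VG named
"V" with free space on /dev/sda1, must be run as root on a real LVM system, and
the sizes and extent placement will differ from this report.

```shell
# Sketch: build deliberately segmented LVs by extending two LVs in
# alternation, so each one's extents interleave on the same PV.
# Assumes VG "V" exists with free space on /dev/sda1; run as root.
lvcreate -n test1 -L 4G V /dev/sda1
lvcreate -n test2 -L 4G V /dev/sda1
for i in 1 2 3 4 5; do
    lvextend -L +4G /dev/V/test1 /dev/sda1   # grow test1; lands after test2's last extent
    lvextend -L +4G /dev/V/test2 /dev/sda1   # grow test2; lands after test1's new extent
done
lvs -a -o +devices V   # each LV should now show many segments
```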

[root@link-08 ~]# lvs -a -o +devices
  LV    VG   Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  test1 V    -wi-a- 46.00G                               /dev/sda1(0)
  test1 V    -wi-a- 46.00G                               /dev/sda1(2560)
  test1 V    -wi-a- 46.00G                               /dev/sda1(3584)
  test1 V    -wi-a- 46.00G                               /dev/sda1(4608)
  test1 V    -wi-a- 46.00G                               /dev/sda1(5632)
  test1 V    -wi-a- 46.00G                               /dev/sda1(6656)
  test1 V    -wi-a- 46.00G                               /dev/sda1(7680)
  test1 V    -wi-a- 46.00G                               /dev/sda1(8704)
  test1 V    -wi-a- 46.00G                               /dev/sda1(9728)
  test1 V    -wi-a- 46.00G                               /dev/sda1(10752)
  test1 V    -wi-a- 46.00G                               /dev/sda1(12800)
  test1 V    -wi-a- 46.00G                               /dev/sda1(23296)
  test2 V    -wi-a- 47.00G                               /dev/sda1(1280)
  test2 V    -wi-a- 47.00G                               /dev/sda1(3072)
  test2 V    -wi-a- 47.00G                               /dev/sda1(4096)
  test2 V    -wi-a- 47.00G                               /dev/sda1(5120)
  test2 V    -wi-a- 47.00G                               /dev/sda1(6144)
  test2 V    -wi-a- 47.00G                               /dev/sda1(7168)
  test2 V    -wi-a- 47.00G                               /dev/sda1(8192)
  test2 V    -wi-a- 47.00G                               /dev/sda1(9216)
  test2 V    -wi-a- 47.00G                               /dev/sda1(10240)
  test2 V    -wi-a- 47.00G                               /dev/sda1(11776)
  test2 V    -wi-a- 47.00G                               /dev/sda1(17920)
  test2 V    -wi-a- 47.00G                               /dev/sda1(23552)

[root@link-08 ~]# pvmove -n test1 /dev/sda1
  Error locking on node link-08: Volume is busy on another node
  Failed to activate test1
[root@link-08 ~]# vgchange -an
  0 logical volume(s) in volume group "V" now active
[root@link-08 ~]# pvmove -n test1 /dev/sda1
  /dev/sda1: Moved: 0.6%
  /dev/sda1: Moved: 1.2%
  /dev/sda1: Moved: 1.8%
  /dev/sda1: Moved: 2.4%
  /dev/sda1: Moved: 3.1%
  /dev/sda1: Moved: 3.7%
  /dev/sda1: Moved: 4.3%
  /dev/sda1: Moved: 4.9%
  /dev/sda1: Moved: 5.5%
  /dev/sda1: Moved: 6.1%
  /dev/sda1: Moved: 6.7%
  /dev/sda1: Moved: 7.3%
  /dev/sda1: Moved: 8.0%
  /dev/sda1: Moved: 8.6%
  /dev/sda1: Moved: 9.3%
  /dev/sda1: Moved: 9.9%
  /dev/sda1: Moved: 10.6%
  /dev/sda1: Moved: 10.9%
  Error locking on node link-08: device-mapper: reload ioctl failed: Invalid
  Unable to reactivate logical volume "pvmove0"
  ABORTING: Segment progression failed.

[root@link-08 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "V" using metadata type lvm2
[root@link-08 ~]# dmsetup ls | grep V
V-test1 (253, 2)
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-08 ~]# dmsetup table | grep V
V-test1: 0 10485760 linear 8:17 384
V-test1: 10485760 4194304 linear 8:1 20971904
V-test1: 14680064 4194304 linear 8:1 29360512
V-test1: 18874368 4194304 linear 8:1 37749120
V-test1: 23068672 4194304 linear 8:1 46137728
V-test1: 27262976 4194304 linear 8:1 54526336
V-test1: 31457280 4194304 linear 8:1 62914944
V-test1: 35651584 4194304 linear 8:1 71303552
V-test1: 39845888 4194304 linear 8:1 79692160
V-test1: 44040192 8388608 linear 8:1 88080768
V-test1: 52428800 41943040 linear 8:1 104857984
V-test1: 94371840 2097152 linear 8:1 190841216
VolGroup00-LogVol01: 0 4063232 linear 3:2 151912832
VolGroup00-LogVol00: 0 151912448 linear 3:2 384

Version-Release number of selected component (if applicable):
[root@link-08 ~]# rpm -qa | grep lvm2
[root@link-08 ~]# rpm -qa | grep device-mapper
Comment 1 Corey Marthaler 2006-11-03 11:48:58 EST
I will try this on single node lvm as well.
Comment 2 Christine Caulfield 2006-11-06 05:56:25 EST
As pvmove isn't even slightly cluster-aware, I would be very surprised if it
behaved differently on a single node.
Comment 3 Corey Marthaler 2006-11-07 18:54:07 EST
This is most likely a single-node lvm2 bug.
Comment 4 Corey Marthaler 2006-11-13 16:24:35 EST
This is reproducible.

Comment 5 Jonathan Earl Brassow 2007-02-27 12:41:10 EST
*** Bug 207204 has been marked as a duplicate of this bug. ***
Comment 6 Alasdair Kergon 2007-04-20 16:47:20 EDT
Key information is missing: complete "dmsetup info -c" and "dmsetup table"
output, the kernel message log, the kernel version, etc.

The 'lvmdump' script now exists to help you gather all this info.
Comment 7 Milan Broz 2007-12-04 16:36:59 EST
I just ran pvmove tests with upstream code on very segmented volumes and it works.
(Though memory requirements during monitoring are too high, IMHO.)

Is this still reproducible with the current lvm2 package?

If so, please attach lvmdump from failing system.
Comment 8 Corey Marthaler 2007-12-19 15:48:45 EST
This bug hasn't been seen in over a year, and this test case is run quite often
as part of mirror sanity testing. Closing...
