Bug 213887 - pvmove of very segmented lv fails
Summary: pvmove of very segmented lv fails
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Milan Broz
QA Contact: Cluster QE
URL:
Whiteboard:
Duplicates: 207204
Depends On:
Blocks:
 
Reported: 2006-11-03 16:47 UTC by Corey Marthaler
Modified: 2013-03-01 04:04 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-12-19 20:48:45 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2006-11-03 16:47:16 UTC
Description of problem:
This may be related to bz 207204.

I created two very segmented LVs using lvextend and then attempted a pvmove on
one of those volumes.
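A layout this fragmented can be reproduced by creating both LVs small and growing them in alternating steps, so each lvextend lands in a new, non-contiguous extent range on the same PV. A minimal sketch (the device name, VG name, and sizes are illustrative, not the exact commands used in this report; requires root and a spare disk):

```shell
# Sketch only: /dev/sda1 and the sizes below are illustrative.
pvcreate /dev/sda1
vgcreate V /dev/sda1

# Start both LVs small on the same PV.
lvcreate -n test1 -L 4G V
lvcreate -n test2 -L 4G V

# Grow them in alternating steps; each lvextend allocates extents
# after the other LV's latest segment, producing interleaved,
# non-contiguous segments on the PV.
for i in $(seq 1 10); do
    lvextend -L +4G /dev/V/test1
    lvextend -L +4G /dev/V/test2
done

# Verify the segmentation before attempting the pvmove:
lvs -a -o +devices V
```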

[root@link-08 ~]# lvs -a -o +devices
  LV    VG   Attr   LSize  Origin Snap%  Move Log Copy%  Devices
  test1 V    -wi-a- 46.00G                               /dev/sda1(0)
  test1 V    -wi-a- 46.00G                               /dev/sda1(2560)
  test1 V    -wi-a- 46.00G                               /dev/sda1(3584)
  test1 V    -wi-a- 46.00G                               /dev/sda1(4608)
  test1 V    -wi-a- 46.00G                               /dev/sda1(5632)
  test1 V    -wi-a- 46.00G                               /dev/sda1(6656)
  test1 V    -wi-a- 46.00G                               /dev/sda1(7680)
  test1 V    -wi-a- 46.00G                               /dev/sda1(8704)
  test1 V    -wi-a- 46.00G                               /dev/sda1(9728)
  test1 V    -wi-a- 46.00G                               /dev/sda1(10752)
  test1 V    -wi-a- 46.00G                               /dev/sda1(12800)
  test1 V    -wi-a- 46.00G                               /dev/sda1(23296)
  test2 V    -wi-a- 47.00G                               /dev/sda1(1280)
  test2 V    -wi-a- 47.00G                               /dev/sda1(3072)
  test2 V    -wi-a- 47.00G                               /dev/sda1(4096)
  test2 V    -wi-a- 47.00G                               /dev/sda1(5120)
  test2 V    -wi-a- 47.00G                               /dev/sda1(6144)
  test2 V    -wi-a- 47.00G                               /dev/sda1(7168)
  test2 V    -wi-a- 47.00G                               /dev/sda1(8192)
  test2 V    -wi-a- 47.00G                               /dev/sda1(9216)
  test2 V    -wi-a- 47.00G                               /dev/sda1(10240)
  test2 V    -wi-a- 47.00G                               /dev/sda1(11776)
  test2 V    -wi-a- 47.00G                               /dev/sda1(17920)
  test2 V    -wi-a- 47.00G                               /dev/sda1(23552)

[root@link-08 ~]# pvmove -n test1 /dev/sda1
  Error locking on node link-08: Volume is busy on another node
  Failed to activate test1
[root@link-08 ~]# vgchange -an
  0 logical volume(s) in volume group "V" now active
[root@link-08 ~]# pvmove -n test1 /dev/sda1
  /dev/sda1: Moved: 0.6%
  /dev/sda1: Moved: 1.2%
  /dev/sda1: Moved: 1.8%
  /dev/sda1: Moved: 2.4%
  /dev/sda1: Moved: 3.1%
  /dev/sda1: Moved: 3.7%
  /dev/sda1: Moved: 4.3%
  /dev/sda1: Moved: 4.9%
  /dev/sda1: Moved: 5.5%
  /dev/sda1: Moved: 6.1%
  /dev/sda1: Moved: 6.7%
  /dev/sda1: Moved: 7.3%
  /dev/sda1: Moved: 8.0%
  /dev/sda1: Moved: 8.6%
  /dev/sda1: Moved: 9.3%
  /dev/sda1: Moved: 9.9%
  /dev/sda1: Moved: 10.6%
  /dev/sda1: Moved: 10.9%
  Error locking on node link-08: device-mapper: reload ioctl failed: Invalid
argument
  Unable to reactivate logical volume "pvmove0"
  ABORTING: Segment progression failed.


[root@link-08 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "V" using metadata type lvm2
[root@link-08 ~]# dmsetup ls | grep V
V-test1 (253, 2)
VolGroup00-LogVol01     (253, 1)
VolGroup00-LogVol00     (253, 0)
[root@link-08 ~]# dmsetup table | grep V
V-test1: 0 10485760 linear 8:17 384
V-test1: 10485760 4194304 linear 8:1 20971904
V-test1: 14680064 4194304 linear 8:1 29360512
V-test1: 18874368 4194304 linear 8:1 37749120
V-test1: 23068672 4194304 linear 8:1 46137728
V-test1: 27262976 4194304 linear 8:1 54526336
V-test1: 31457280 4194304 linear 8:1 62914944
V-test1: 35651584 4194304 linear 8:1 71303552
V-test1: 39845888 4194304 linear 8:1 79692160
V-test1: 44040192 8388608 linear 8:1 88080768
V-test1: 52428800 41943040 linear 8:1 104857984
V-test1: 94371840 2097152 linear 8:1 190841216
VolGroup00-LogVol01: 0 4063232 linear 3:2 151912832
VolGroup00-LogVol00: 0 151912448 linear 3:2 384



Version-Release number of selected component (if applicable):
[root@link-08 ~]# rpm -qa | grep lvm2
lvm2-cluster-2.02.13-1
lvm2-cluster-debuginfo-2.02.06-7.0.RHEL4
lvm2-2.02.13-1
[root@link-08 ~]# rpm -qa | grep device-mapper
device-mapper-debuginfo-1.02.07-4.0.RHEL4
device-mapper-1.02.12-3

Comment 1 Corey Marthaler 2006-11-03 16:48:58 UTC
I will try this on single node lvm as well.

Comment 2 Christine Caulfield 2006-11-06 10:56:25 UTC
As pvmove isn't even slightly cluster-aware, I would be very surprised if it
behaved differently on a single node.

Comment 3 Corey Marthaler 2006-11-07 23:54:07 UTC
This is most likely a single-node lvm2 bug.

Comment 4 Corey Marthaler 2006-11-13 21:24:35 UTC
This is reproducible.



Comment 5 Jonathan Earl Brassow 2007-02-27 17:41:10 UTC
*** Bug 207204 has been marked as a duplicate of this bug. ***

Comment 6 Alasdair Kergon 2007-04-20 20:47:20 UTC
Key information is missing: complete dmsetup info -c and dmsetup table output,
the kernel message log, the kernel version, etc.

The 'lvmdump' script now exists to help you gather all this info.
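For instance, something like the following collects that information in one archive (run as root; the exact option set depends on the installed lvmdump version, so check lvmdump(8) first):

```shell
# Gather LVM state (dmsetup info/table, lvm config, logs) into a
# tarball in the current directory; -m also dumps on-disk metadata.
lvmdump -m
```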

Comment 7 Milan Broz 2007-12-04 21:36:59 UTC
I just ran pvmove tests with upstream code on very segmented volumes and it
works (though memory requirements during monitoring are too high, IMHO).

Is this still reproducible with the current lvm2 package?

If so, please attach lvmdump from failing system.


Comment 8 Corey Marthaler 2007-12-19 20:48:45 UTC
This bug hasn't been seen in over a year, and this test case is run quite often
as part of mirror sanity testing. Closing...

