Bug 839796 - pvmove is inconsistent and gives misleading message for raid
Summary: pvmove is inconsistent and gives misleading message for raid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 886216
 
Reported: 2012-07-12 21:02 UTC by benscott
Modified: 2013-02-21 08:11 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.98-4.el6
Doc Type: Bug Fix
Doc Text:
'pvmove' has been disallowed from operating on RAID logical volumes due to incorrect handling of their sub-LVs. If it is necessary to move a RAID logical volume's components from one device to another, 'lvconvert --replace <old_pv> <vg>/<lv> <new_pv>' should be used.
Clone Of:
Environment:
  lvs --version
    LVM version:     2.02.96(2)-cvs (2012-03-06)
    Library version: 1.02.75-cvs (2012-03-06)
    Driver version:  4.22.0
Last Closed: 2013-02-21 08:11:36 UTC
Target Upstream Version:
Embargoed:




Links
  Red Hat Product Errata RHBA-2013:0501 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update, last updated 2013-02-20 21:30:45 UTC

Comment 2 benscott 2012-07-12 21:59:38 UTC
Well, apparently Bugzilla lost my description of the bug, so here goes again:


  LVM version:     2.02.96(2)-cvs (2012-03-06)
  Library version: 1.02.75-cvs (2012-03-06)
  Driver version:  4.22.0

1. Create a RAID 1 mirror like this one:

  LV               VG    Attr     LSize Copy%  Devices
  lvol0            NewVg rwi-a-m- 2.00g 8.79   lvol0_rimage_0(0),lvol0_rimage_1(0)
  [lvol0_rimage_0] NewVg Iwi-aor- 2.00g        /dev/sdg(1)
  [lvol0_rimage_1] NewVg Iwi-aor- 2.00g        /dev/sdh(1)
  [lvol0_rmeta_0]  NewVg ewi-aor- 4.00m        /dev/sdg(0)
  [lvol0_rmeta_1]  NewVg ewi-aor- 4.00m        /dev/sdh(0)
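
For reference, a single lvcreate invocation can produce this layout; the size, LV name, VG name, and devices below are taken from the listing above, and any equivalent values work:

# assumption: 2G two-way RAID 1 LV named lvol0 in VG NewVg on /dev/sdg and /dev/sdh
lvcreate --type raid1 -m 1 -L 2G -n lvol0 NewVg /dev/sdg /dev/sdh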


2. Run pvmove with the volume name:

pvmove --name lvol0  /dev/sdh /dev/sdi
  Skipping mirror LV lvol0
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for NewVg

3. Run pvmove without the volume name:

pvmove   /dev/sdh /dev/sdi
  Skipping mirror LV lvol0
  /dev/sdh: Moved: 0.2%
  /dev/sdh: Moved: 17.7%
  /dev/sdh: Moved: 35.1%
  /dev/sdh: Moved: 53.0%
  /dev/sdh: Moved: 70.4%
  /dev/sdh: Moved: 87.7%
  /dev/sdh: Moved: 99.8%
  /dev/sdh: Moved: 100.0%

The warning is wrong; the data really was moved:

  LV               VG    Attr     LSize Copy% Devices
  lvol0            NewVg rwi-a-m- 2.00g 100   lvol0_rimage_0(0),lvol0_rimage_1(0)
  [lvol0_rimage_0] NewVg iwi-aor- 2.00g       /dev/sdg(1)
  [lvol0_rimage_1] NewVg iwi-aor- 2.00g       /dev/sdi(0)
  [lvol0_rmeta_0]  NewVg ewi-aor- 4.00m       /dev/sdg(0)
  [lvol0_rmeta_1]  NewVg ewi-aor- 4.00m       /dev/sdi(512)


4. The same results occur with a RAID stripe set, except that the
warning about skipping a mirror is not printed; a sample striped reproducer is sketched below.
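
To reproduce the striped case, something along these lines should work; using raid5 here is an assumption (any striped RAID segment type should behave the same way), and the LV name and PVs are illustrative:

# assumption: raid5 LV with 2 stripes (needs 3 PVs: 2 data + 1 parity)
lvcreate --type raid5 -i 2 -L 2G -n lvol1 NewVg /dev/sdg /dev/sdh /dev/sdi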

Comment 3 Corey Marthaler 2012-07-12 23:29:59 UTC
The misleading message part has come up before. :)

https://bugzilla.redhat.com/show_bug.cgi?id=500899#c6

Comment 4 Alasdair Kergon 2012-07-12 23:32:13 UTC
Well, from its point of view, it didn't move lvol0 at all - it moved the lvol0_* volumes!  So the output you quote is perfectly logical!

We'll sort out what the capabilities of these commands really are now and get them fixed.

Comment 7 Alasdair Kergon 2012-07-31 13:56:19 UTC
Corey: chicken and egg. :)

The purpose of the bugzilla *is* to work out what the correct behaviour of the tools should be in all interactions between pvmove and the different types of raid devices, and then to implement that.

Comment 9 Jonathan Earl Brassow 2012-12-04 23:38:55 UTC
If you look at the device-mapper mapping tables for the result of the RAID LV after the move, you find it horribly disfigured.  This should not be allowed.  I am disallowing 'pvmove' of RAID LVs for now.  The feature can be requested for RHEL6.5+.

For now, users wishing to move a particular leg of a RAID LV can use:
  'lvconvert --replace old_pv vg/lv new_pv'
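
With the device names from the reproducer in comment 2 (an assumption; substitute your own PVs), that looks like:

# move the RAID image and metadata sub-LVs on /dev/sdh to space allocated from /dev/sdi
lvconvert --replace /dev/sdh NewVg/lvol0 /dev/sdi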

Comment 10 Jonathan Earl Brassow 2012-12-04 23:56:04 UTC
commit 383575525916d4cafb1c8396c95a40be539d1451
Author: Jonathan Brassow <jbrassow>
Date:   Tue Dec 4 17:47:47 2012 -0600

    pvmove/RAID:  Disallow pvmove on RAID LVs until properly handled
    
    Attempting pvmove on RAID LVs replaces the kernel RAID target with
    a temporary pvmove target, ultimately destroying the RAID LV.  pvmove
    must be prevented on RAID LVs for now.
    
    Use 'lvconvert --replace old_pv vg/lv new_pv' if you want to move
    an image of the RAID LV.

Comment 11 Jonathan Earl Brassow 2012-12-04 23:59:12 UTC
Results:

[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices                      
  lv              100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sdb1(1)                 
  [lv_rimage_1]          /dev/sdc1(1)                 
  [lv_rmeta_0]           /dev/sdb1(0)                 
  [lv_rmeta_1]           /dev/sdc1(0)                 
[root@bp-01 lvm2]# pvmove --name lv /dev/sdb1 /dev/sdd1
  Skipping raid1 LV lv
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for vg
[root@bp-01 lvm2]# pvmove /dev/sdb1 /dev/sdd1
  Skipping raid1 LV lv
  Skipping RAID sub-LV lv_rimage_0
  Skipping RAID sub-LV lv_rmeta_0
  Skipping RAID sub-LV lv_rimage_1
  Skipping RAID sub-LV lv_rmeta_1
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for vg
[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices                      
  lv              100.00 lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sdb1(1)                 
  [lv_rimage_1]          /dev/sdc1(1)                 
  [lv_rmeta_0]           /dev/sdb1(0)                 
  [lv_rmeta_1]           /dev/sdc1(0)

Comment 12 Nenad Peric 2012-12-05 11:31:19 UTC
Adding QA ack based on the last comment (#11).

Comment 15 Corey Marthaler 2013-01-07 22:11:24 UTC
Verified that pvmove is now disallowed on raid volumes.


2.6.32-348.el6.x86_64
lvm2-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-libs-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-cluster-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-libs-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-libs-1.02.77-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012
cmirror-2.02.98-6.el6    BUILT: Thu Dec 20 07:00:04 CST 2012


[root@hayes-01 ~]# lvs -a -o +devices
  LV                        VG          Attr      LSize Cpy%Sync Devices
  move_during_io            raid_sanity rwi-a-r-- 2.00g    38.28 move_during_io_rimage_0(0),move_during_io_rimage_1(0)
  [move_during_io_rimage_0] raid_sanity Iwi-aor-- 2.00g          /dev/etherd/e1.1p9(1)
  [move_during_io_rimage_1] raid_sanity Iwi-aor-- 2.00g          /dev/etherd/e1.1p8(1)
  [move_during_io_rmeta_0]  raid_sanity ewi-aor-- 4.00m          /dev/etherd/e1.1p9(0)
  [move_during_io_rmeta_1]  raid_sanity ewi-aor-- 4.00m          /dev/etherd/e1.1p8(0)

[root@hayes-01 ~]# pvmove -v /dev/etherd/e1.1p9 /dev/etherd/e1.1p10
    Finding volume group "raid_sanity"
    Archiving volume group "raid_sanity" metadata (seqno 3).
    Creating logical volume pvmove0
  Skipping raid1 LV move_during_io
  Skipping RAID sub-LV move_during_io_rimage_0
  Skipping RAID sub-LV move_during_io_rmeta_0
  Skipping RAID sub-LV move_during_io_rimage_1
  Skipping RAID sub-LV move_during_io_rmeta_1
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for raid_sanity

Comment 16 errata-xmlrpc 2013-02-21 08:11:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html

