Bug 991232 - LVM RAID: Make pvmove work with RAID LVs
Summary: LVM RAID: Make pvmove work with RAID LVs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-01 22:38 UTC by Jonathan Earl Brassow
Modified: 2021-09-08 18:55 UTC (History)
10 users

Fixed In Version: lvm2-2.02.102-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-13 11:46:44 UTC
Target Upstream Version:
Embargoed:



Description Jonathan Earl Brassow 2013-08-01 22:38:31 UTC
pvmove currently skips RAID LVs - fix that.
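For context, a minimal reproducer of the problem might look like the following (the VG name "vg" and the PV names /dev/sdb1-/dev/sdd1 are illustrative assumptions, not taken from this report):

    # Set up a small VG and a RAID1 LV on the first two PVs.
    pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
    vgcreate vg /dev/sdb1 /dev/sdc1 /dev/sdd1
    lvcreate --type raid1 -m 1 -L 500M -n raidlv vg /dev/sdb1 /dev/sdc1

    # Try to empty the first PV.  Before the fix, pvmove skips the RAID
    # sub-LVs (raidlv_rimage_0, raidlv_rmeta_0), so their extents stay
    # on /dev/sdb1.
    pvmove /dev/sdb1

    # Show which devices the (sub-)LVs actually occupy.
    lvs -a -o name,devices vg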

Comment 1 Jonathan Earl Brassow 2013-08-26 21:48:47 UTC
There are several upstream commits that went into fixing this bug.  The list below also includes patches that enable pvmove on non-RAID LVs, such as thin.  However, the patches are somewhat intertwined, and the final commit, which includes the testsuite updates, is useful as well.


commit 0799e81ee08a76c877d774b2f7df75ad3ec29897
Author: Jonathan Brassow <jbrassow>
Date:   Mon Aug 26 16:38:54 2013 -0500

    test: pvmove tests for all the different segment types.
    
    Test moving linear, mirror, snapshot, RAID1,5,10, thinpool, thin
    and thin on RAID.  Perform the moves along with a dummy LV and
    also without the dummy LV by specifying a logical volume name as
    an argument to pvmove.
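A rough sketch of the two modes this test exercises (LV and device names are assumptions for illustration, not taken from the testsuite):

    # Move everything off a PV, with an unrelated "dummy" linear LV also on it.
    lvcreate -L 100M -n dummy vg /dev/sdb1
    pvmove /dev/sdb1

    # Move only the extents that belong to one named LV.
    pvmove -n raidlv /dev/sdb1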

commit 2ef48b91ed74f732b6150a9492da624d204b331d
Author: Jonathan Brassow <jbrassow>
Date:   Mon Aug 26 16:36:30 2013 -0500

    pvmove:  Allow moving snapshot/origin.  Disallow converting and merging LVs
    
    The patch allows the user to also pvmove snapshots and origin logical
    volumes.  This means pvmove should be able to move all segment types.
    I have, however, disallowed moving logical volumes that are being converted or merged.
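A hedged example of the newly allowed snapshot/origin case (names are illustrative):

    # Origin and snapshot LVs can now be moved like any other LV.
    lvcreate -L 200M -n origin vg /dev/sdb1
    lvcreate -s -L 64M -n snap vg/origin /dev/sdb1
    pvmove -n origin /dev/sdb1
    pvmove -n snap /dev/sdb1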

commit caa77b33f2d5e59f2906b9f08f59ac2e64b14682
Author: Jonathan Brassow <jbrassow>
Date:   Mon Aug 26 14:12:31 2013 -0500

    pvmove: Fix inability to specify LV name when moving RAID, mirror, or thin LV
    
    Top-level LVs (like RAID, mirror or thin) are ignored when determining which
    portions of an LV to pvmove.  If the user specified the name of an LV to
    move and it was one of the above types, it would be skipped.  The code would
    never move on to check whether its sub-LVs needed moving because their names
    did not match what the user specified.
    
    The solution is to check whether a sub-LV is part of the LV whose name was
    specified by the user - not just if there was a name match.
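In user-visible terms, the fix means naming a top-level LV now also picks up its sub-LVs even though their names differ (a sketch; names and devices are assumptions):

    # "raidlv" itself holds no extents; its data lives in sub-LVs such as
    # raidlv_rimage_0 and raidlv_rmeta_0.  After this fix, naming the
    # top-level LV is enough for pvmove to relocate those sub-LVs as well.
    pvmove -n raidlv /dev/sdb1 /dev/sdd1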

commit 72d6bdd6b960d946039818684c125c3c31b7ae5e
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 23 11:03:28 2013 -0500

    misc: make lv_is_on_pv use for_each_sub_lv to walk LV tree
    
    Make lv_is_on_pv use for_each_sub_lv to walk the LV tree.  This
    reduces code duplication.

commit 448ff0119fc0f4983917e10b663d9db896f8c5db
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 23 09:13:14 2013 -0500

    pvmove: Ability to move thin volumes
    
    The previous commit was missing the code to allow moving thin
    volumes.

commit c59167ec132071d6ab53f928b0775c36a704fe7c
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 23 08:57:16 2013 -0500

    pvmove: Add support for RAID, mirror, and thin
    
    This patch allows pvmove to operate on RAID, mirror and thin LVs.
    The key component is the ability to avoid moving a RAID or mirror
    sub-LV onto a PV that already has another RAID sub-LV on it.
    (e.g. Avoid placing both images of a RAID1 LV on the same PV.)
    
    Top-level LVs are processed to determine which PVs to avoid for
    the sake of redundancy, while bottom-level LVs are processed
    to determine which segments/extents to move.
    
    This approach does have some drawbacks.  By eliminating whole PVs
    from the allocation list, we might miss the opportunity to perform
    pvmove in some scenarios.  For example, if we have 3 devices and
    a linear uses half of the first, a RAID1 uses half of the first and
    half of the second, and a linear uses half of the third (FIGURE 1);
    we should be able to pvmove the first device (FIGURE 2).
        FIGURE 1:
            [ linear ] [ -RAID- ] [ linear ]
            [ -RAID- ] [        ] [        ]
    
        FIGURE 2:
            [  moved ] [ -RAID- ] [ linear ]
            [  moved ] [ linear ] [ -RAID- ]
    However, the approach we are using would eliminate the second
    device from consideration and would leave us with too little space
    for allocation.  In these situations, the user does have the ability
    to specify LVs and move them one at a time.
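A sketch of the redundancy guarantee and of the suggested workaround (devices are illustrative):

    # With raidlv's two images on /dev/sdb1 and /dev/sdc1, emptying /dev/sdb1
    # must not land the moved image on /dev/sdc1, or both copies would end up
    # on one device; pvmove therefore excludes /dev/sdc1 when allocating.
    pvmove /dev/sdb1
    lvs -a -o name,devices vg   # the rimage sub-LVs should be on different PVs

    # If excluding whole PVs leaves too little free space (the FIGURE 1 case),
    # move the affected LVs one at a time instead:
    pvmove -n raidlv /dev/sdb1
    pvmove -n linearlv /dev/sdb1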

commit e5c021316843a3b08e4f6d12ec27f06c20ded7da
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 23 08:49:16 2013 -0500

    Thin: Make 'lv_is_on_pv(s)' work with thin types
    
    The pool metadata LV must be accounted for when determining what PVs
    are in a thin-pool.  The pool LV must also be accounted for when
    checking thin volumes.
    
    This is a prerequisite for pvmove working with thin types.
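The user-visible consequence is that a thin pool's hidden data and metadata sub-LVs both count toward the PVs an LV is considered to be on, for example (names assumed for illustration):

    # A thin pool is backed by hidden sub-LVs: pool_tdata and pool_tmeta.
    lvcreate --type thin-pool -L 200M -n pool vg /dev/sdb1
    lvcreate --thin -V 100M -n thinlv vg/pool

    # Both sub-LVs (and hence their PVs) must be considered when deciding
    # whether the pool, or a thin volume in it, sits on a given PV.
    lvs -a -o name,devices vg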

commit f1e3640df31d0593e47ed82f3bb2f7e976b6569c
Author: Jonathan Brassow <jbrassow>
Date:   Fri Aug 23 08:40:13 2013 -0500

    Misc: Make get_pv_list_for_lv() available to more than just RAID
    
    The function 'get_pv_list_for_lv' will assemble all the PVs that are
    used by the specified LV.  It uses 'for_each_sub_lv' to traverse all
    of the sub-lvs which may compose it.

Comment 3 Corey Marthaler 2014-02-20 23:52:06 UTC
Verified in the latest rpms.

3.10.0-84.el7.x86_64

lvm2-2.02.105-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
lvm2-libs-2.02.105-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
lvm2-cluster-2.02.105-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
device-mapper-1.02.84-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
device-mapper-libs-1.02.84-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
device-mapper-event-1.02.84-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
device-mapper-event-libs-1.02.84-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014
device-mapper-persistent-data-0.2.8-4.el7    BUILT: Fri Jan 24 14:28:55 CST 2014
cmirror-2.02.105-4.el7    BUILT: Wed Feb 19 09:19:54 CST 2014

Comment 5 Ludek Smid 2014-06-13 11:46:44 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

