Bug 500899 - RFE: give better message when pvmove is already in progress on requested VG
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: lvm2
Version: 5.3
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: rc
Assignee: Peter Rajnoha
QA Contact: Cluster QE
URL:
Whiteboard: FailsQA
Depends On: 500898
Blocks:
 
Reported: 2009-05-14 19:05 UTC by Corey Marthaler
Modified: 2010-03-30 09:02 UTC
CC: 9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 500898
Environment:
Last Closed: 2010-03-30 09:02:00 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2010:0298 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2010-03-29 15:16:34 UTC

Description Corey Marthaler 2009-05-14 19:05:48 UTC
+++ This bug was initially created as a clone of Bug #500898 +++

Description of problem:
If a pvmove is already running on a VG and you attempt another one from a different device within the same VG, lvm reports that there is nothing to move, which in reality is not true. lvm simply does not allow more than one pvmove at a time within a single VG.

[root@grant-02 ~]# pvscan | grep ONE
  PV /dev/sdb1   VG ONE          lvm2 [45.41 GB / 5.41 GB free]
  PV /dev/sdb3   VG ONE          lvm2 [45.41 GB / 5.41 GB free]
  PV /dev/sdb6   VG ONE          lvm2 [45.41 GB / 25.41 GB free]
  PV /dev/sdc1   VG ONE          lvm2 [45.41 GB / 45.41 GB free]
  PV /dev/sdc3   VG ONE          lvm2 [45.41 GB / 5.41 GB free]
  PV /dev/sdc6   VG ONE          lvm2 [45.41 GB / 45.41 GB free]

[root@grant-01 ~]# pvmove -v /dev/sdb1 /dev/sdb3
    Finding volume group "ONE"
    Executing: /sbin/modprobe dm-cmirror  
    Archiving volume group "ONE" metadata (seqno 17).
    Creating logical volume pvmove0
    Moving 5120 extents of logical volume ONE/lv_one_1
    Moving 5120 extents of logical volume ONE/lv_one_2
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/ONE" (seqno 18).
    Checking progress every 15 seconds
  /dev/sdb1: Moved: 2.3%
  /dev/sdb1: Moved: 4.2%
  /dev/sdb1: Moved: 6.2%
  /dev/sdb1: Moved: 8.2%
  /dev/sdb1: Moved: 10.1%
  [...]
  /dev/sdb1: Moved: 96.3%
  /dev/sdb1: Moved: 98.2%
  /dev/sdb1: Moved: 100.0%
    Removing temporary pvmove LV
    Writing out final volume group after pvmove
    Creating volume group backup "/etc/lvm/backup/ONE" (seqno 23).

# Before the above finishes
[root@grant-02 ~]# pvmove -v /dev/sdb6 /dev/sdc1
    Finding volume group "ONE"
    Executing: /sbin/modprobe dm-cmirror  
    Archiving volume group "ONE" metadata (seqno 18).
    Creating logical volume pvmove1
  Skipping locked LV lv_one_1
  Skipping locked LV lv_one_2
  Skipping mirror LV pvmove0
  No data to move for ONE

The "Skipping mirror LV pvmove0" is a clue that one is already in progress. It would be nice if lvm just said something to that effect.
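Until the message is improved, the hidden pvmove LV itself can serve as the check. A minimal sketch of that test, run against canned sample output so it can be shown offline (the three-line listing below is assumed sample data in the shape of `lvs -a --noheadings -o vg_name,lv_name` output, not captured from this system):

```shell
# An in-flight pvmove appears as a hidden LV named "pvmove<N>" in `lvs -a`.
# On a live system you would pipe the real command output instead of this
# canned sample.
sample_output='
  ONE  lv_one_1
  ONE  lv_one_2
  ONE  [pvmove0]
'
vg=ONE
if printf '%s\n' "$sample_output" | grep -q "^ *$vg *\[pvmove"; then
  echo "pvmove already in progress on VG $vg"
else
  echo "no pvmove in progress on VG $vg"
fi
```

With the sample data above the check reports a pvmove in progress, since the hidden pvmove0 LV is present in the listing.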

Once the first one finishes, I'm able to do the pvmove that had just reported "No data to move":

[root@grant-02 ~]# pvmove -v /dev/sdb6 /dev/sdc1
    Finding volume group "ONE"
    Executing: /sbin/modprobe dm-cmirror  
    Archiving volume group "ONE" metadata (seqno 23).
    Creating logical volume pvmove0
    Moving 2560 extents of logical volume ONE/lv_one_1
    Moving 2560 extents of logical volume ONE/lv_one_2
    Updating volume group metadata
    Creating volume group backup "/etc/lvm/backup/ONE" (seqno 24).
    Checking progress every 15 seconds
  /dev/sdb6: Moved: 5.1%
  /dev/sdb6: Moved: 9.7%


Version-Release number of selected component (if applicable):
lvm2-2.02.37-3.el4

How reproducible:
Every time

Comment 1 Tom Coughlan 2009-10-19 21:33:18 UTC
From  Bug #500898:

> Sure, I think we could change the message quite easily for that specific
> situation (all PEs skipped on the PV because of the LV locks).

So I will set devel_ack for 5.5. Peter, it is not a high priority, so feel free to change it if you will not have time.

Comment 3 Milan Broz 2009-12-10 15:34:01 UTC
In lvm2-2_02_56-2_el5.

Comment 6 Corey Marthaler 2010-02-01 23:32:05 UTC
I thought that we were going to remove the "No data to move for $vg" message because that's not true. There *is* data to move, but we can't move it due to another currently running pvmove process.

"All data on source PV skipped. It contains locked, hidden or non-top level LVs only."

Is that really the best we can do? Can't we check for another running pvmove in that VG and state the obvious? Instead we give three possibilities that may be causing this command to fail?


[root@hayes-02 ~]# pvmove -v /dev/etherd/e1.1p2 /dev/etherd/e1.1p4
    Finding volume group "ONE"
    Executing: /sbin/modprobe dm-log-clustered 
    Archiving volume group "ONE" metadata (seqno 6).
    Creating logical volume pvmove1
  Skipping locked LV stripe
  Skipping mirror LV pvmove0
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for ONE

Comment 7 Peter Rajnoha 2010-02-02 12:23:47 UTC
I kept the "No data to move" message even when all LVs are skipped, because it seemed to me to be the main reason the operation fails, and it is a good reason to give the user: the operation as a whole failed because there is "No data to move for <VG>", and the cause is either that there really is no data to move (no extra message is shown then) or that everything has been skipped (in which case we print the extra message).

As for giving an even more detailed message about why an LV is really skipped - hmm, well, the question is whether it's worth adding more complexity to the code just to assemble a more detailed message. Of course, it should be possible to detect that the LV is locked because it is part of an ongoing pvmove involving the same LV... But frankly, I wouldn't complicate it, since this is a corner case only :)

Comment 8 Corey Marthaler 2010-02-03 15:41:55 UTC
Although I continue to disagree with your "fixed" message, it's not a big enough deal to delay RHEL5.5, so I'll mark it verified and move on...

Comment 10 errata-xmlrpc 2010-03-30 09:02:00 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0298.html

