Bug 1535012 - pvmove fails with non-exclusive lvs in clvmd cluster
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Keywords: Regression, TestBlocker
Depends On:
Blocks:
Reported: 2018-01-16 08:15 EST by Roman Bednář
Modified: 2018-04-10 11:24 EDT
CC List: 11 users

See Also:
Fixed In Version: lvm2-2.02.177-4.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-10 11:23:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
pvmove_vvvv (142.18 KB, text/plain), 2018-01-16 08:15 EST, Roman Bednář


External Trackers
Red Hat Product Errata RHEA-2018:0853 (last updated 2018-04-10 11:24 EDT)

Description Roman Bednář 2018-01-16 08:15:24 EST
Created attachment 1382001 [details]
pvmove_vvvv

A pvmove attempt with non-exclusively active LVs no longer works. There is also a separate bug for pvmove not working without the background (-b) flag (BZ#1476408).
Attaching verbose output of the pvmove attempt.

See also BZ#1476408, but this BZ should track a different issue.

Odds are this is not a supported feature, although it seems functional in RHEL 7.4, thus adding the regression flag.

Always reproducible.
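For reference, a minimal reproduction sketch, assuming a running clvmd cluster with shared disks /dev/sda1 and /dev/sdb1 (the VG/LV names match the output below; the create commands are illustrative and not taken from the original test run):

# vgcreate -cy vg /dev/sda1 /dev/sdb1       <- clustered VG
# lvcreate -L 1G -n lv1 vg /dev/sda1
# lvcreate -L 1G -n lv2 vg /dev/sda1
# lvcreate -L 1G -n lv3 vg /dev/sda1
# vgchange -ay vg                           <- plain (non-exclusive) activation across the cluster
# pvmove -v -b /dev/sda1 /dev/sdb1          <- succeeds on 7.4, fails on 7.5 as shown below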

=================================
RHEL 7.4:
# lvs -a -o +devices
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  root rhel_virt-170 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap rhel_virt-170 -wi-ao---- 820.00m                                                     /dev/vda2(0)  
  lv   vg            -wi-a-----   1.00g                                                     /dev/sda1(0)  
  lv2  vg            -wi-a-----   1.00g                                                     /dev/sda1(256)
  lv3  vg            -wi-a-----   1.00g                                                     /dev/sda1(512)

# pvmove -v -b /dev/sda1 /dev/sdb1
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Archiving volume group "vg" metadata (seqno 16).
    Creating logical volume pvmove0
    Moving 256 extents of logical volume vg/lv
    Moving 256 extents of logical volume vg/lv2
    Moving 256 extents of logical volume vg/lv3
  Increasing mirror region size from 0    to 2.00 KiB
    Setting up pvmove in on-disk volume group metadata.
    Creating volume group backup "/etc/lvm/backup/vg" (seqno 17).
    Checking progress before waiting every 15 seconds.

# echo $?
0

# lvs -a -o +devices
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  root rhel_virt-170 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap rhel_virt-170 -wi-ao---- 820.00m                                                     /dev/vda2(0)  
  lv   vg            -wi-a-----   1.00g                                                     /dev/sdb1(0)  
  lv2  vg            -wi-a-----   1.00g                                                     /dev/sdb1(256)
  lv3  vg            -wi-a-----   1.00g                                                     /dev/sdb1(512)

=================================
RHEL 7.5:

# lvs -a -o +devices
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  root rhel_virt-387 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap rhel_virt-387 -wi-ao---- 820.00m                                                     /dev/vda2(0)  
  lv1  vg            -wi-a-----   1.00g                                                     /dev/sda1(0)  
  lv2  vg            -wi-a-----   1.00g                                                     /dev/sda1(256)
  lv3  vg            -wi-a-----   1.00g                                                     /dev/sda1(512)

# pvmove -v -b /dev/sda1 /dev/sdb1
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Archiving volume group "vg" metadata (seqno 4).
    Creating logical volume pvmove0
    Moving 256 extents of logical volume vg/lv1.
    Moving 256 extents of logical volume vg/lv2.
    Moving 256 extents of logical volume vg/lv3.
  Increasing mirror region size from 0    to 2.00 KiB
  Error locking on node 1: Device or resource busy
  Failed to activate vg/lv1

# echo $?
5

# lvs -a -o +devices
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices       
  root rhel_virt-387 -wi-ao----  <6.20g                                                     /dev/vda2(205)
  swap rhel_virt-387 -wi-ao---- 820.00m                                                     /dev/vda2(0)  
  lv1  vg            -wi-a-----   1.00g                                                     /dev/sda1(0)  
  lv2  vg            -wi-a-----   1.00g                                                     /dev/sda1(256)
  lv3  vg            -wi-a-----   1.00g                                                     /dev/sda1(512)



3.10.0-826.el7.x86_64

lvm2-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-libs-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-cluster-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
lvm2-python-boom-0.8.1-5.el7    BUILT: Wed Dec  6 11:15:40 CET 2017
cmirror-2.02.176-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-libs-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-event-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-event-libs-1.02.145-5.el7    BUILT: Wed Dec  6 11:13:07 CET 2017
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 12:07:18 CET 2017
vdo-6.1.0.106-13    BUILT: Thu Dec 21 16:00:07 CET 2017
kmod-kvdo-6.1.0.106-11.el7    BUILT: Thu Dec 21 17:09:12 CET 2017
Comment 4 Zdenek Kabelac 2018-01-30 05:46:06 EST
These issues are already fixed (assuming the minor fixes for activation during conversion are included in the last build).

The basic logic is that lvm2 cannot pvmove LVs that are active on multiple nodes, thus LVs that are part of a pvmove must always be activated exclusively.

There are 2 issues:

1.) lvm2 does not support 'lock upgrade', so whenever the user has just 'local' activation, the LV has to be deactivated and then activated locally exclusively (see the sketch after this list).


2.) lvm2 does not support 'clustered' mirroring with pvmove (which would in fact require the rather obscure cmirrord).
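In practice that lock upgrade has to be done by hand, roughly like this (a sketch only, reusing the VG and device names from the report above; depending on the setup the deactivation may need to be issued on every node that currently has the LVs active):

# vgchange -an vg                    <- drop the plain, shared activation
# vgchange -aey vg                   <- re-activate exclusively on the node that will run the move
# pvmove -v -b /dev/sda1 /dev/sdb1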


Thus from the lvm2 perspective the existing behavior is more correct than the previous state, so it's not a regression but rather a bugfix.

It's unclear whether scheduling support for those 2 issues is worth working on ATM.
Comment 5 Jonathan Earl Brassow 2018-01-30 08:36:18 EST
> 2.) lvm2 does not support 'clustered' mirroring with pvmove (which would in
> fact required rather obscure cmirrord)

Pretty sure that cmirrord /was/ used in pvmove in a cluster. Are you sure about that?
Comment 6 Jonathan Earl Brassow 2018-01-30 11:23:05 EST
Moving back to ASSIGNED. I've tested this on 7.4 and found it working just fine.
Comment 8 Zdenek Kabelac 2018-02-01 17:39:37 EST
Two patches try to address the issue:

https://www.redhat.com/archives/lvm-devel/2018-February/msg00001.html
https://www.redhat.com/archives/lvm-devel/2018-February/msg00000.html

(may need some slight changes if the sources for 7.5 have already diverged)
Comment 12 Roman Bednář 2018-02-13 09:16:40 EST
Verified.


3.10.0-847.el7.x86_64

lvm2-2.02.177-2.el7    
lvm2-libs-2.02.177-2.el7    
lvm2-cluster-2.02.177-2.el7  



SCENARIO - [segmented_pvmove]
Create a couple small segmented lvs and then pvmove them
 Running lv_seg on virt-364 to create the segmented volumes
 virt-364: /usr/tests/sts-rhel7.5/lvm2/bin/lv_seg -v mirror_sanity -n segment
 Deactivating segment0 mirror
 Moving data from pv /dev/sdc1 (-n mirror_sanity/segment0) on virt-364
     Executing: /usr/sbin/modprobe dm-log-userspace
     Wiping internal VG cache
     Wiping cache of LVM-capable devices
   Increasing mirror region size from 0    to 512 B
     Archiving volume group "mirror_sanity" metadata (seqno 27).
     Creating logical volume pvmove0
     Moving 27 extents of logical volume mirror_sanity/segment0.
     Creating volume group backup "/etc/lvm/backup/mirror_sanity" (seqno 28).
     Checking progress before waiting every 15 seconds.
   /dev/sdc1: Moved: 25.93%
   /dev/sdc1: Moved: 100.00%
 Device does not exist.
 Command failed
 Unable to get copy percent, pvmove most likely finished.
 Ignore 'Command failed' messages ;)
     Polling finished successfully.
 Device does not exist.
 Command failed
 Device does not exist.
 Command failed
 Device does not exist.
 Command failed
 Device does not exist.
 Command failed
 Device does not exist.
 Command failed
 Device does not exist.
 Command failed
 Quick verification that pvmove is finished on other nodes as well
 Deactivating mirror segment0... and removing
 Deactivating mirror segment1... and removing
Comment 14 Marian Csontos 2018-02-16 03:18:36 EST
Kabi, is this complete?

a2d2fe3a8cf840fcfcd23fb0e706c3699b79b5fa
552e60b3a1e35329a47d6112c548ada124b5a4e3

I would do the build today or Monday, so QE can get their hands on it ASAP.
Comment 16 Roman Bednář 2018-02-22 05:54:41 EST
Verified.

lvm2-2.02.177-4.el7.x86_64
kernel-3.10.0-854.el7.x86_64


SCENARIO - [segmented_pvmove]
Create a couple small segmented lvs and then pvmove them
Running lv_seg on virt-386 to create the segmented volumes
virt-386: /usr/tests/sts-rhel7.5/lvm2/bin/lv_seg -v mirror_sanity -n segment
Deactivating segment0 mirror
Moving data from pv /dev/sdb1 (-n mirror_sanity/segment0) on virt-386
    Executing: /usr/sbin/modprobe dm-log-userspace
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Archiving volume group "mirror_sanity" metadata (seqno 27).
    Creating logical volume pvmove0
  Increasing mirror region size from 0    to 512 B
    Moving 105 extents of logical volume mirror_sanity/segment0.
    Creating volume group backup "/etc/lvm/backup/mirror_sanity" (seqno 28).
    Checking progress before waiting every 15 seconds.
  /dev/sdb1: Moved: 7.62%
  /dev/sdb1: Moved: 77.14%
  /dev/sdb1: Moved: 80.95%
  /dev/sdb1: Moved: 84.76%
  /dev/sdb1: Moved: 88.57%
  /dev/sdb1: Moved: 92.38%
  /dev/sdb1: Moved: 96.19%
  /dev/sdb1: Moved: 100.00%
    Polling finished successfully.
Device does not exist.
Command failed
Unable to get copy percent, pvmove most likely finished.
Ignore 'Command failed' messages ;)
Device does not exist.
Command failed
Device does not exist.
Command failed
Device does not exist.
Command failed
Device does not exist.
Command failed
Device does not exist.
Command failed
Device does not exist.
Command failed
Quick verification that pvmove is finished on other nodes as well
Deactivating mirror segment0... and removing
Deactivating mirror segment1... and removing
Comment 19 errata-xmlrpc 2018-04-10 11:23:49 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853
