Bug 1242671 - if given a pv, shouldn't 'lvconvert --repair' place the new meta data device on it
Summary: if given a pv, shouldn't 'lvconvert --repair' place the new meta data device on it
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-13 22:37 UTC by Corey Marthaler
Modified: 2023-03-08 07:27 UTC
CC List: 9 users

Fixed In Version: lvm2-2.02.175-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 15:16:02 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  System ID: Red Hat Product Errata RHEA-2018:0853
  Private: 0
  Priority: None
  Status: None
  Summary: None
  Last Updated: 2018-04-10 15:17:48 UTC

Description Corey Marthaler 2015-07-13 22:37:46 UTC
Description of problem:
[root@host-110 ~]# lvs -a -o +devices
  LV              Attr       LSize   Pool Origin Data%  Meta% Devices
  POOL            twi---tz--   1.00g                          POOL_tdata(0) 
  POOL_meta0      -wi-------   4.00m                          /dev/sdc1(0)
  POOL_meta1      -wi-------   4.00m                          /dev/sdc1(257)
  POOL_meta2      -wi-------   4.00m                          /dev/sdc1(258)
  POOL_meta3      -wi-------   4.00m                          /dev/sdc1(259)
  POOL_meta4      -wi-------   4.00m                          /dev/sdc1(260)
  POOL_meta5      -wi-------   4.00m                          /dev/sdc1(261)
  [POOL_tdata]    Twi-------   1.00g                          /dev/sdc1(1)
  [POOL_tmeta]    ewi-------   4.00m                          /dev/sdc1(262)
  [lvol7_pmspare] ewi-------   4.00m                          /dev/sdc1(263)
  origin          Vwi---tz--   1.00g POOL
  other1          Vwi---tz--   1.00g POOL
  other2          Vwi---tz--   1.00g POOL
  other3          Vwi---tz--   1.00g POOL
  other4          Vwi---tz--   1.00g POOL
  other5          Vwi---tz--   1.00g POOL
  snap            Vwi---tz-k   1.00g POOL origin

[root@host-110 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdd1
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  WARNING: If everything works, remove "snapper_thinp/POOL_meta6".
  WARNING: Use pvmove command to move "snapper_thinp/POOL_tmeta" on the best fitting PV.

[root@host-110 ~]# lvs -a -o +devices
  LV              Attr       LSize   Pool Origin Data%  Meta% Devices
  POOL            twi---tz--   1.00g                          POOL_tdata(0) 
  POOL_meta0      -wi-------   4.00m                          /dev/sdc1(0)
  POOL_meta1      -wi-------   4.00m                          /dev/sdc1(257)
  POOL_meta2      -wi-------   4.00m                          /dev/sdc1(258)
  POOL_meta3      -wi-------   4.00m                          /dev/sdc1(259)
  POOL_meta4      -wi-------   4.00m                          /dev/sdc1(260)
  POOL_meta5      -wi-------   4.00m                          /dev/sdc1(261)
  POOL_meta6      -wi-------   4.00m                          /dev/sdc1(262)
  [POOL_tdata]    Twi-------   1.00g                          /dev/sdc1(1)
  [POOL_tmeta]    ewi-------   4.00m                          /dev/sdc1(263)
  [lvol8_pmspare] ewi-------   4.00m                          /dev/sdc1(264)
  origin          Vwi---tz--   1.00g POOL
  other1          Vwi---tz--   1.00g POOL
  other2          Vwi---tz--   1.00g POOL
  other3          Vwi---tz--   1.00g POOL
  other4          Vwi---tz--   1.00g POOL
  other5          Vwi---tz--   1.00g POOL
  snap            Vwi---tz-k   1.00g POOL origin


Version-Release number of selected component (if applicable):
3.10.0-290.el7.x86_64

lvm2-2.02.125-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
lvm2-libs-2.02.125-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
lvm2-cluster-2.02.125-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-1.02.102-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-libs-1.02.102-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-event-1.02.102-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-event-libs-1.02.102-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
device-mapper-persistent-data-0.5.3-1.el7    BUILT: Tue Jul  7 08:41:42 CDT 2015
cmirror-2.02.125-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.125-2.el7    BUILT: Fri Jul 10 03:42:29 CDT 2015

Comment 2 Zdenek Kabelac 2017-09-13 09:15:22 UTC
This is a bit more sophisticated. With the current upstream release of lvm2 (2.02.173):

'lvconvert --repair' uses the existing _pmspare (which normally already exists) for the thin_repair execution, and only the newly allocated _pmspare is placed on the PV given on the command line.

But there is an immediately visible problem with the existing command line: 'lvconvert --repair' does not support '--poolmetadataspare n' - though that is a slightly different issue.
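
A minimal sketch of the behavior described above, reusing the reporter's VG name snapper_thinp with /dev/sdd1 as the PV given on the command line; the placement noted in the comments follows this comment's description rather than a verified run:

  # Repair the pool, naming /dev/sdd1 as the PV for newly allocated space.
  lvconvert --yes --repair snapper_thinp/POOL /dev/sdd1
  # thin_repair writes into the existing lvolN_pmspare, which is then swapped
  # in as the new POOL_tmeta; only the freshly allocated replacement _pmspare
  # lands on /dev/sdd1.
  lvs -a -o +devices snapper_thinp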

Comment 3 Zdenek Kabelac 2017-09-20 14:31:04 UTC
So with recent upstream changes:

https://www.redhat.com/archives/lvm-devel/2017-September/msg00025.html

The user can use '--poolmetadataspare n' to avoid the automatic creation of the spare device - in that case the devices created for the repair are placed on the provided storage area.

So for testing this means: create the thin pool without a spare and --repair it also without a spare.
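
A minimal sketch of that test flow (the VG name snapper_thinp and PVs /dev/sdg1 and /dev/sdb1 are placeholders for illustration; per the upstream change above, --poolmetadataspare n is accepted by both lvcreate and lvconvert --repair):

  # Create the thin pool without a metadata spare LV.
  lvcreate --thinpool POOL -L 1G --poolmetadataspare n snapper_thinp /dev/sdg1
  lvchange -an snapper_thinp/POOL
  # Repair it, again without a spare, naming the PV that should receive the
  # repaired metadata.
  lvconvert --yes --repair --poolmetadataspare n snapper_thinp/POOL /dev/sdb1
  # The new POOL_tmeta is expected on /dev/sdb1.
  lvs -a -o +devices snapper_thinp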

Comment 8 Corey Marthaler 2017-11-14 19:02:21 UTC
Based on comment #3, there are two different scenarios here with different expectations for the specified repair device.


## Scenario 1: No pmspare volume (--poolmetadataspare n)
Here both the new _pmspare *and* the _tmeta volume will be placed on the PV specified to --repair.

[root@host-116 ~]# lvcreate  --thinpool POOL -L 1G  --zero y --poolmetadataspare n snapper_thinp
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: recovery of pools without pool metadata spare LV is not automated.
  Logical volume "POOL" created.
[root@host-116 ~]# lvs -a -o +devices
  LV           VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL         snapper_thinp twi-a-tz-- 1.00g             0.00   0.98   POOL_tdata(0)
  [POOL_tdata] snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(0)
  [POOL_tmeta] snapper_thinp ewi-ao---- 4.00m                           /dev/sdh1(0)

[root@host-116 ~]# lvchange -an snapper_thinp/POOL
[root@host-116 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs.  Move it with pvmove if required.

[root@host-116 ~]# lvchange -ay snapper_thinp/POOL
  WARNING: Not using lvmetad because a repair command was run.
[root@host-116 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-a-tz-- 1.00g             0.00   1.07   POOL_tdata(0)
  POOL_meta0      snapper_thinp -wi------- 4.00m                           /dev/sdh1(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(0)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdb1(0)
  [lvol1_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdb1(1)




## Scenario 2: Pmspare volume present (created by default; pool made with --poolmetadatasize 4M)
Here only the new _pmspare volume will be placed on the PV specified to --repair; the _tmeta will not.

[root@host-116 ~]# lvcreate  --thinpool POOL -L 1G  --zero y --poolmetadatasize 4M  snapper_thinp
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "POOL" created.
[root@host-116 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-a-tz-- 1.00g             0.00   0.98   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdh1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdg1(0)

[root@host-116 ~]# lvchange -an snapper_thinp/POOL
[root@host-116 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
  WARNING: Disabling lvmetad cache for repair command.
  WARNING: Not using lvmetad because of repair.
  WARNING: LV snapper_thinp/POOL_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV snapper_thinp/POOL_tmeta might use different PVs.  Move it with pvmove if required.

[root@host-116 ~]# lvchange -ay snapper_thinp/POOL
  WARNING: Not using lvmetad because a repair command was run.
[root@host-116 ~]# lvs -a -o +devices
  WARNING: Not using lvmetad because a repair command was run.
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-a-tz-- 1.00g             0.00   1.07   POOL_tdata(0)
  POOL_meta0      snapper_thinp -wi------- 4.00m                           /dev/sdh1(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 1.00g                           /dev/sdg1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdg1(0)
  [lvol1_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdb1(0)



3.10.0-772.el7.x86_64

lvm2-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-libs-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-cluster-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-lockd-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
lvm2-python-boom-0.8-3.el7    BUILT: Fri Nov 10 07:16:45 CST 2017
cmirror-2.02.176-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-event-libs-1.02.145-3.el7    BUILT: Fri Nov 10 07:12:10 CST 2017
device-mapper-persistent-data-0.7.3-2.el7    BUILT: Tue Oct 10 04:00:07 CDT 2017

Comment 11 errata-xmlrpc 2018-04-10 15:16:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0853

