Bug 1255925 - 'lvconvert --repair' doesn't work for thin pools on top of raid (Cannot rename "lvol0_pmspare")
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-08-21 21:55 UTC by Corey Marthaler
Modified: 2023-03-08 07:27 UTC
CC: 9 users

Fixed In Version: lvm2-2.02.130-2.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-11-19 12:47:43 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2147 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2015-11-19 11:11:07 UTC

Description Corey Marthaler 2015-08-21 21:55:32 UTC
Description of problem:
lvcreate  --type raid1  -m 1 --profile thin-performance --zero y -L 4M -n meta snapper_thinp                                                                                                     
lvcreate  --type raid1  -m 1 --profile thin-performance --zero y -L 1G -n POOL snapper_thinp                                                                                                     
Waiting until all mirror|raid volumes become fully syncd...                                                                                                                                      
   1/2 mirror(s) are fully synced: ( 50.82% 100.00% )                                                                                                                                            
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )

lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)

Making origin volume
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate  -V 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
Making snapshot of origin volume
lvcreate  -k n -s /dev/snapper_thinp/origin -n snap

[root@host-109 ~]# lvs -a -o +devices
  LV                    Attr       LSize   Pool Origin Data%  Meta% Devices
  POOL                  twi---tz--   1.00g                          POOL_tdata(0)
  [POOL_tdata]          rwi---r---   1.00g                          POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] Iwi---r---   1.00g                          /dev/sda1(3)
  [POOL_tdata_rimage_1] Iwi---r---   1.00g                          /dev/sde1(3)
  [POOL_tdata_rmeta_0]  ewi---r---   4.00m                          /dev/sda1(2)
  [POOL_tdata_rmeta_1]  ewi---r---   4.00m                          /dev/sde1(2)
  [POOL_tmeta]          ewi---r---   4.00m                          POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] Iwi---r---   4.00m                          /dev/sda1(1)
  [POOL_tmeta_rimage_1] Iwi---r---   4.00m                          /dev/sde1(1)
  [POOL_tmeta_rmeta_0]  ewi---r---   4.00m                          /dev/sda1(0)
  [POOL_tmeta_rmeta_1]  ewi---r---   4.00m                          /dev/sde1(0)
  [lvol0_pmspare]       ewi-------   4.00m                          /dev/sda1(259)
  origin                Vwi---tz--   1.00g POOL
  other1                Vwi---tz--   1.00g POOL
  other2                Vwi---tz--   1.00g POOL
  other3                Vwi---tz--   1.00g POOL
  other4                Vwi---tz--   1.00g POOL
  other5                Vwi---tz--   1.00g POOL
  snap                  Vwi---tz--   1.00g POOL origin

# Swap in new _tmeta device using lvconvert --repair

[root@host-109 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (1.00 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Cannot rename "lvol0_pmspare": name format not recognized for internal LV "POOL_tmeta_rimage_0"
[root@host-109 ~]# echo $?
5
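The transcript above can be collapsed into one reproduction sketch. This is a destructive sketch, not a turnkey script: it assumes root, the lvm2 tools, and scratch partitions named as in the report (/dev/sda1, /dev/sde1, /dev/sdb1), and it skips itself when those prerequisites are missing.

```shell
#!/bin/sh
# Sketch of the reproduction from the description above.
# DESTRUCTIVE: wipes the named partitions. Device names are assumptions
# taken from this report; adjust for your machine.

reproduce() {
    set -e
    vgcreate snapper_thinp /dev/sda1 /dev/sde1 /dev/sdb1
    # raid1 LVs that will become the pool's metadata and data volumes
    lvcreate --type raid1 -m 1 --profile thin-performance --zero y \
             -L 4M -n meta snapper_thinp
    lvcreate --type raid1 -m 1 --profile thin-performance --zero y \
             -L 1G -n POOL snapper_thinp
    # wait for the initial raid sync to finish before converting
    until lvs --noheadings -o copy_percent snapper_thinp/POOL \
            | grep -q '100\.00'; do sleep 1; done
    lvconvert --yes --thinpool snapper_thinp/POOL --poolmetadata meta
    lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
    lvcreate -k n -s /dev/snapper_thinp/origin -n snap
    # on the affected build this step fails with exit status 5:
    #   Cannot rename "lvol0_pmspare": name format not recognized ...
    lvconvert --yes --repair snapper_thinp/POOL /dev/sdb1
}

if [ "$(id -u)" = 0 ] && command -v lvconvert >/dev/null 2>&1; then
    reproduce
else
    echo "SKIP: needs root and the lvm2 tools"
fi
```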


Version-Release number of selected component (if applicable):
3.10.0-306.el7.x86_64

lvm2-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
lvm2-libs-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
lvm2-cluster-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-libs-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-event-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-event-libs-1.02.105-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.128-1.el7    BUILT: Tue Aug 18 03:45:17 CDT 2015


How reproducible:
Every time

Comment 1 Zdenek Kabelac 2015-09-11 20:28:59 UTC
Improved handling of the identifier swap fixes this issue; upstream commit:

https://www.redhat.com/archives/lvm-devel/2015-September/msg00099.html
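The error text suggests the repair path derives an internal LV's base name by stripping known sub-LV suffixes, and that the nested raid sub-LV names (e.g. POOL_tmeta_rimage_0, a raid image under the pool's metadata LV) were not recognized. The following is a toy illustration of that kind of layered suffix stripping; it is not the lvm2 implementation, only a sketch of the naming scheme involved.

```shell
#!/bin/sh
# Toy sketch, NOT lvm2 code: peel stacked sub-LV suffixes layer by layer.
# Internal names stack like: POOL -> POOL_tmeta -> POOL_tmeta_rimage_0.
base_lv_name() {
    name=$1
    while :; do
        case $name in
            *_tdata|*_tmeta)                name=${name%_*} ;;    # thin-pool layer
            *_rimage_[0-9]*|*_rmeta_[0-9]*) name=${name%_*_*} ;;  # raid layer
            *) break ;;
        esac
    done
    printf '%s\n' "$name"
}

base_lv_name POOL_tmeta_rimage_0   # -> POOL
base_lv_name POOL_tdata_rmeta_1    # -> POOL
base_lv_name lvol0_pmspare         # -> lvol0_pmspare (no known suffix)
```

A parser that only knows the single-layer suffixes would stop at POOL_tmeta_rimage_0 and report exactly the "name format not recognized" failure seen above; handling each layer in a loop resolves the nested raid-on-thin case.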

Comment 3 Corey Marthaler 2015-09-23 17:04:49 UTC
This test case now works again with raid volumes. Marking verified in the latest rpms.


3.10.0-313.el7.x86_64
lvm2-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
lvm2-libs-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
lvm2-cluster-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-libs-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-event-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-event-libs-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015



[root@host-109 ~]# vgcreate snapper_thinp /dev/sd[abcdefgh]1
  Volume group "snapper_thinp" successfully created
[root@host-109 ~]# pvscan
  PV /dev/sda1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdb1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdc1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdd1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sde1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdf1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdg1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdh1   VG snapper_thinp   lvm2 [24.99 GiB / 24.99 GiB free]
[root@host-109 ~]# lvcreate  --type raid1  -m 1 --profile thin-performance --zero y -L 4M -n meta snapper_thinp
  Logical volume "meta" created.
[root@host-109 ~]# lvcreate  --type raid1  -m 1 --profile thin-performance --zero y -L 1G -n POOL snapper_thinp
  Logical volume "POOL" created.
[root@host-109 ~]# lvs -a -o +devices
  LV              Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  POOL            rwi-a-r--- 1.00g                           54.69    POOL_rimage_0(0),POOL_rimage_1(0)
  [POOL_rimage_0] Iwi-aor--- 1.00g                                    /dev/sda1(3)
  [POOL_rimage_1] Iwi-aor--- 1.00g                                    /dev/sdb1(3)
  [POOL_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(2)
  [POOL_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(2)
  meta            rwi-a-r--- 4.00m                           100.00   meta_rimage_0(0),meta_rimage_1(0)
  [meta_rimage_0] iwi-aor--- 4.00m                                    /dev/sda1(1)
  [meta_rimage_1] iwi-aor--- 4.00m                                    /dev/sdb1(1)
  [meta_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(0)
  [meta_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(0)
[root@host-109 ~]# lvs -a -o +devices
  LV              Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  POOL            rwi-a-r--- 1.00g                           67.19    POOL_rimage_0(0),POOL_rimage_1(0)
  [POOL_rimage_0] Iwi-aor--- 1.00g                                    /dev/sda1(3)
  [POOL_rimage_1] Iwi-aor--- 1.00g                                    /dev/sdb1(3)
  [POOL_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(2)
  [POOL_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(2)
  meta            rwi-a-r--- 4.00m                           100.00   meta_rimage_0(0),meta_rimage_1(0)
  [meta_rimage_0] iwi-aor--- 4.00m                                    /dev/sda1(1)
  [meta_rimage_1] iwi-aor--- 4.00m                                    /dev/sdb1(1)
  [meta_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(0)
  [meta_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(0)

[root@host-109 ~]# lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted snapper_thinp/POOL to thin pool.
[root@host-109 ~]# lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
  Logical volume "origin" created.
[root@host-109 ~]# lvcreate  -V 1G -T snapper_thinp/POOL -n other1
  Logical volume "other1" created.
[root@host-109 ~]# lvcreate  -k n -s /dev/snapper_thinp/origin -n snap
  Logical volume "snap" created.
[root@host-109 ~]# lvs -a -o +devices
  LV                    Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  POOL                  twi-aotz-- 1.00g             0.00   1.17            POOL_tdata(0)
  [POOL_tdata]          rwi-aor--- 1.00g                           100.00   POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] iwi-aor--- 1.00g                                    /dev/sda1(3)
  [POOL_tdata_rimage_1] iwi-aor--- 1.00g                                    /dev/sdb1(3)
  [POOL_tdata_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(2)
  [POOL_tdata_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(2)
  [POOL_tmeta]          ewi-aor--- 4.00m                           100.00   POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] iwi-aor--- 4.00m                                    /dev/sda1(1)
  [POOL_tmeta_rimage_1] iwi-aor--- 4.00m                                    /dev/sdb1(1)
  [POOL_tmeta_rmeta_0]  ewi-aor--- 4.00m                                    /dev/sda1(0)
  [POOL_tmeta_rmeta_1]  ewi-aor--- 4.00m                                    /dev/sdb1(0)
  [lvol0_pmspare]       ewi------- 4.00m                                    /dev/sda1(259)
  origin                Vwi-a-tz-- 1.00g POOL        0.00
  other1                Vwi-a-tz-- 1.00g POOL        0.00
  snap                  Vwi-a-tz-- 1.00g POOL origin 0.00

[root@host-109 ~]# vgchange -an snapper_thinp
  0 logical volume(s) in volume group "snapper_thinp" now active
[root@host-109 ~]# lvconvert --yes --repair snapper_thinp/POOL /dev/sde1
  WARNING: If everything works, remove "snapper_thinp/POOL_meta0".
  WARNING: Use pvmove command to move "snapper_thinp/POOL_tmeta" on the best fitting PV.
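The two WARNING lines ask for manual follow-up: verify the repaired pool, drop the preserved copy of the old metadata LV, and optionally move the swapped-in _tmeta onto a better-placed PV. A sketch of that follow-up, assuming the names from this transcript and a healthy pool; the pvmove source/target PVs are assumptions, and the script skips itself without root and the lvm2 tools.

```shell
#!/bin/sh
# Post-repair follow-up suggested by the WARNINGs above. LV names come
# from this transcript; the pvmove source/target PVs are assumptions.

post_repair_cleanup() {
    set -e
    # reactivate and sanity-check the repaired pool first
    vgchange -ay snapper_thinp
    lvs -a snapper_thinp
    # once satisfied the pool is healthy, drop the old metadata backup
    lvremove --yes snapper_thinp/POOL_meta0
    # optionally relocate the new _tmeta onto a preferred PV
    pvmove -n POOL_tmeta /dev/sda1 /dev/sdb1
}

if [ "$(id -u)" = 0 ] && command -v lvremove >/dev/null 2>&1; then
    post_repair_cleanup
else
    echo "SKIP: needs root and the lvm2 tools"
fi
```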

Comment 4 errata-xmlrpc 2015-11-19 12:47:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2147.html

