Bug 1365286 - Converting thin pool_tdata containing virt LVs to/from raid1|10 results in failure
Summary: Converting thin pool_tdata containing virt LVs to/from raid1|10 results in failure
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Alasdair Kergon
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1380840
 
Reported: 2016-08-08 19:59 UTC by Corey Marthaler
Modified: 2021-09-03 12:37 UTC
8 users

Fixed In Version: lvm2-2.02.166-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1380840
Environment:
Last Closed: 2016-11-04 04:16:34 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHBA-2016:1445
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2016-08-08 19:59:39 UTC
Description of problem:

# Down conversion works fine w/o virt LVs
[root@host-078 ~]# lvcreate  --thinpool POOL -L 4G  --zero n --poolmetadatasize 4M test
  Logical volume "POOL" created.
[root@host-078 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta% Devices       
  POOL            test          twi-a-t--- 4.00g             0.00   1.27  POOL_tdata(0) 
  [POOL_tdata]    test          Twi-ao---- 4.00g                          /dev/sda1(1)  
  [POOL_tmeta]    test          ewi-ao---- 4.00m                          /dev/sdh1(0)  
  [lvol0_pmspare] test          ewi------- 4.00m                          /dev/sda1(0)  

[root@host-078 ~]# lvconvert --type raid1 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert --type raid1 -m 0 test/POOL_tdata
[root@host-078 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta% Devices
  POOL            test twi-a-t--- 4.00g             0.00   1.27  POOL_tdata(0)
  [POOL_tdata]    test Twi-ao---- 4.00g                          /dev/sda1(1)
  [POOL_tmeta]    test ewi-ao---- 4.00m                          /dev/sdh1(0)
  [lvol0_pmspare] test ewi------- 4.00m                          /dev/sda1(0)



# This time create virt LVs
[root@host-078 ~]# lvcreate  --thinpool POOL -L 4G  --zero n --poolmetadatasize 4M test
  Logical volume "POOL" created.
[root@host-078 ~]# lvcreate  --virtualsize 1G -T test/POOL -n origin
  Logical volume "origin" created.
[root@host-078 ~]# lvcreate  -k n -s /dev/test/origin -n pool_convert
  Logical volume "pool_convert" created.

[root@host-078 ~]# lvconvert --type raid1 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert --type raid1 -m 0 test/POOL_tdata
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:8) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:9) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:10) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:11) 

[root@host-078 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Pool Origin Data%  Meta% Devices
  POOL            test twi-aot--- 4.00g             0.00   1.37  POOL_tdata(0)
  [POOL_tdata]    test Twi-ao---- 4.00g                          /dev/sda1(1)
  [POOL_tmeta]    test ewi-ao---- 4.00m                          /dev/sdh1(0)
  [lvol0_pmspare] test ewi------- 4.00m                          /dev/sda1(0)
  origin          test Vwi-a-t--- 1.00g POOL        0.01
  pool_convert    test Vwi-a-t--- 1.00g POOL origin 0.01





# Same thing w/ RAID10
[root@host-078 ~]# lvconvert --type raid10 -m 1 test/POOL_tdata
[root@host-078 ~]# lvconvert -m 0 test/POOL_tdata
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:8) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:9) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:10) 
  Internal error: Performing unsafe table load while 1 device(s) are known to be suspended:  (253:11) 



Version-Release number of selected component (if applicable):
3.10.0-480.el7.x86_64

lvm2-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.161-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016


How reproducible:
Every time
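
Condensed reproducer (the same commands as the transcript above; assumes a VG named "test" with at least two PVs, device names will differ per host):

  lvcreate --thinpool POOL -L 4G --zero n --poolmetadatasize 4M test
  lvcreate --virtualsize 1G -T test/POOL -n origin
  lvcreate -k n -s /dev/test/origin -n pool_convert    # snapshot; later noted (comment 6) as unnecessary to hit the bug
  lvconvert --type raid1 -m 1 test/POOL_tdata          # up-conversion succeeds
  lvconvert --type raid1 -m 0 test/POOL_tdata          # down-conversion emits the "unsafe table load" internal errors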

Comment 1 Corey Marthaler 2016-08-08 20:09:23 UTC
This may just be a dup of bug 1347048.

Comment 4 Alasdair Kergon 2016-08-23 18:12:49 UTC
Reproduced.

Comment 5 Alasdair Kergon 2016-09-20 15:03:58 UTC
Also getting 'Number of segments in active LV vg99/pool_tdata does not match metadata.'

Comment 6 Alasdair Kergon 2016-09-20 15:12:16 UTC
The second (snapshot) LV is unnecessary.

Comment 7 Alasdair Kergon 2016-09-22 13:43:20 UTC
The primary cause is that:
  lvconvert --type raid10 -m 1 test/POOL_tdata
is not actually completing the conversion and leaves the on-disk metadata inconsistent with what's live in the kernel.

A further 
  lvchange --refresh
is required to make it work.

Additionally, if LVs are inactive, we see messages such as:
  Unable to determine sync status of vg99/lvol2.
and the code proceeds regardless.
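
Spelled out as a command sequence (a sketch only, using the VG/LV names from the description; the comment above does not name the exact LV to refresh, so the pool LV is assumed here):

  lvconvert --type raid10 -m 1 test/POOL_tdata   # reports success but leaves the kernel out of sync with the on-disk metadata
  lvchange --refresh test/POOL                   # assumed refresh target; reloads the live tables from the metadata
  lvconvert -m 0 test/POOL_tdata                 # the down-conversion can then be retried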

Comment 8 Alasdair Kergon 2016-09-27 15:04:14 UTC
We are missing some code in _add_lv_to_dtree to make sure that the underlying raid devices get added to the dtree when they are present in the metadata but not in the kernel.  (It walks through and skips them.)

For now, we will try to disable the lvconvert commands that do not work correctly.
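
With those conversions rejected on an active pool, the remaining path is to deactivate first (a sketch; VG/LV names follow the verification run below, reactivation added for completeness):

  vgchange -an snapper_thinp                             # deactivate the VG holding the thin pool
  lvconvert --type raid1 -m 1 snapper_thinp/POOL_tdata   # up-conversion of the inactive sub-LV now succeeds
  vgchange -ay snapper_thinp                             # reactivate when the conversion is done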

Comment 11 Corey Marthaler 2016-09-28 23:14:21 UTC
It appears these operations are locked down now when the pool is active. Marking verified in the latest rpms.

3.10.0-510.el7.x86_64

lvm2-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-cluster-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016



# Attempt raid up conversion of active pool sub volumes

[root@host-116 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta% Devices
  POOL            snapper_thinp twi-aot--- 4.00g             0.00   1.37  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 4.00g                          /dev/sdf1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                          /dev/sdh1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                          /dev/sdf1(0)
  origin          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  pool_convert    snapper_thinp Vwi-a-t--- 1.00g POOL origin 0.01

[root@host-116 ~]# lvconvert --type raid1 -m 1 snapper_thinp/POOL_tdata
  Can't add image to active thin pool LV snapper_thinp/POOL_tdata yet. Deactivate first.

[root@host-116 ~]# lvconvert --type raid1 -m 1 snapper_thinp/POOL_tmeta
  Can't add image to active thin pool LV snapper_thinp/POOL_tmeta yet. Deactivate first.

[root@host-116 ~]# lvconvert --type raid10 -m 1 snapper_thinp/POOL_tdata
  Using default stripesize 64.00 KiB.
  Conversion operation not yet supported.

[root@host-116 ~]# lvconvert --type raid10 -m 1 snapper_thinp/POOL_tmeta
  Using default stripesize 64.00 KiB.
  Conversion operation not yet supported.

[root@host-116 ~]# vgchange -an snapper_thinp
  0 logical volume(s) in volume group "snapper_thinp" now active

[root@host-116 ~]# lvconvert --type raid1 -m 1 snapper_thinp/POOL_tdata
  Logical volume snapper_thinp/POOL_tdata successfully converted.




# Attempt down conversion of active raid10 pool sub volume

[root@host-116 ~]# lvconvert --type raid1 snapper_thinp/POOL_tdata
  Unable to convert LV snapper_thinp/POOL_tdata from raid10 to raid1.
  Direct conversion of raid10 LV snapper_thinp/POOL_tdata is not possible.

[root@host-116 ~]# lvconvert --type raid5 snapper_thinp/POOL_tdata
  Using default stripesize 64.00 KiB.
  Unable to convert LV snapper_thinp/POOL_tdata from raid10 to raid5.
  Direct conversion of raid10 LV snapper_thinp/POOL_tdata is not possible.

Comment 13 errata-xmlrpc 2016-11-04 04:16:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

