Bug 1132547 - Thin pool incorrectly updates transaction_id when it is skipped via volume_list
Summary: Thin pool incorrectly updates transaction_id when it is skipped via volume_list
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 1132512
Blocks:
 
Reported: 2014-08-21 14:23 UTC by Zdenek Kabelac
Modified: 2014-10-14 08:25 UTC (History)
18 users

Fixed In Version: lvm2-2.02.110-1.el6
Doc Type: Bug Fix
Doc Text:
LVM no longer incorrectly assumes messages were sent to the kernel's thin pool driver when such interaction was forbidden due to a configuration parameter such as activation/volume_list. This could lead to future activation failures reporting a transaction ID mismatch.
Clone Of: 1132512
Environment:
Last Closed: 2014-10-14 08:25:57 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1387 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2014-10-14 01:39:47 UTC

Description Zdenek Kabelac 2014-08-21 14:23:05 UTC
+++ This bug was initially created as a clone of Bug #1132512 +++

First I created the physical volume, the volume group, and one thin pool that used almost all of the space in the volume group. Then I created and formatted one thinly provisioned volume within this volume group. About a week later my PC had a power failure and was not shut down cleanly. About two weeks later I opened the LUKS container and created a new thin volume. I tried to activate it, but I got the following error message:

Thin pool transaction_id=5, while expected: 6.

So the creation of the volume was successful, but the activation wasn't. I had to activate it manually because the name of the volume group was not listed in the activation/volume_list setting in lvm.conf when the volume was created.
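For context, volume_list lives in the activation section of /etc/lvm/lvm.conf; a minimal illustration of the situation the reporter describes (the VG name here is a placeholder, not taken from the report):

```
activation {
    # Only LVs in the listed VGs (or matching tags) may be activated.
    # Any VG not listed -- e.g. the VG holding the thin pool -- is skipped,
    # so its LVs must be activated manually after adjusting this list.
    volume_list = [ "vg_system" ]
}
```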

The volume group has only one physical volume, on a LUKS-encrypted hard drive. The hard drive does not have any partitions. It is a Seagate Barracuda 7200.14 (AF). smartctl does not show any signs of hard drive failure.

Kernel: 3.14.14
LVM version:     2.02.108(2) (2014-07-23)
Library version: 1.02.87 (2014-07-23)
Driver version:  4.27.0

--- Additional comment from Zdenek Kabelac on 2014-08-21 10:03:23 EDT ---

Yep, tricky one - we fail to actually pass messages to thin pools that happen to be skipped because of volume_list.


This bug may easily lead to desynchronization of kernel and lvm2 metadata, which requires non-trivial effort to fix.

Comment 1 Peter Rajnoha 2014-08-21 14:26:03 UTC
(In reply to Zdenek Kabelac from comment #0)
> Yep, tricky one - we fail to actually pass messages to thin pools that
> happen to be skipped because of volume_list.
> 
> This bug may easily lead to desynchronization of kernel and lvm2 metadata,
> which requires non-trivial effort to fix.

Requesting blocker for this one.

Comment 2 Zdenek Kabelac 2014-08-26 13:38:43 UTC
Trying to initially address this bug with upstream commit:

https://www.redhat.com/archives/lvm-devel/2014-August/msg00075.html

Comment 4 Zdenek Kabelac 2014-08-27 07:24:21 UTC
A simple reproducer could look like this:

Create a thin pool.
Deactivate the thin pool.
Set volume_list so that activation of the thin pool is not allowed (i.e. put in the name of a nonexistent VG).
Then try to create a thin volume - in the buggy version, transaction_id is moved forward but the kernel target doesn't know about it.
Then, when volume_list is set to allow activation of the thin pool again, a mismatching transaction_id is found, activation of the thin pool is not possible, and the lvm2 metadata requires a manual fix.


With the fixed version this should not be possible.
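The steps above can be sketched as a command sequence. This is a hedged illustration, not the exact reproducer used by QE: the VG name "vg", the pool name "thin_pool", and the sizes are placeholders, and the commands must run as root on a system with a VG that has free space.

```shell
# 1. Create a thin pool, then deactivate it.
lvcreate -T -L 1G vg/thin_pool
lvchange -an vg/thin_pool

# 2. Forbid activation of the pool by listing only a nonexistent VG in
#    activation/volume_list in /etc/lvm/lvm.conf:
#      activation { volume_list = [ "no_such_vg" ] }

# 3. Try to create a thin volume. In the buggy version this advances
#    transaction_id in the lvm2 metadata even though the kernel target
#    never received the message.
lvcreate -T -V 1G -n thin_lv vg/thin_pool

# 4. Restore volume_list so the pool may activate again. Activation now
#    fails with a "Thin pool transaction_id=N, while expected: N+1"-style
#    error, and the metadata needs a manual repair.
lvchange -ay vg/thin_pool
```

With the fixed lvm2, step 3 is refused up front ("Cannot activate thin pool ..., perhaps skipped in lvm.conf volume_list?"), so the metadata never diverges from the kernel state.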

Comment 5 Nenad Peric 2014-09-09 14:13:40 UTC
The creation of the thin LV is prevented when the thin pool is no longer active (and cannot be activated due to a non-matching volume_list).

[root@virt-147 ~]# lvchange -an vg/thin_pool
[root@virt-147 ~]# lvs -a
  LV                VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]   vg         ewi-------   4.00m                                                    
  thin_pool         vg         twi---tz--   1.00g                                                    
  [thin_pool_tdata] vg         Twi-------   1.00g                                                    
  [thin_pool_tmeta] vg         ewi-------   4.00m                                                    
  lv_root           vg_virt147 -wi-ao----   6.71g                                                    
  lv_swap           vg_virt147 -wi-ao---- 816.00m         

[root@virt-147 ~]# lvcreate -T -V1G -n data_LV vg/thin_pool
  Cannot activate thin pool vg/thin_pool, perhaps skipped in lvm.conf volume_list?
[root@virt-147 ~]# lvs -a
  LV                VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]   vg         ewi-------   4.00m                                                    
  thin_pool         vg         twi---tz--   1.00g                                                    
  [thin_pool_tdata] vg         Twi-------   1.00g                                                    
  [thin_pool_tmeta] vg         ewi-------   4.00m                                                    
  lv_root           vg_virt147 -wi-ao----   6.71g                                                    
  lv_swap           vg_virt147 -wi-ao---- 816.00m                       


Marking this VERIFIED with:

lvm2-2.02.111-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
lvm2-libs-2.02.111-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
lvm2-cluster-2.02.111-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
udev-147-2.57.el6    BUILT: Thu Jul 24 15:48:47 CEST 2014
device-mapper-1.02.90-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
device-mapper-libs-1.02.90-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
device-mapper-event-1.02.90-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
device-mapper-event-libs-1.02.90-2.el6    BUILT: Mon Sep  1 13:46:43 CEST 2014
device-mapper-persistent-data-0.3.2-1.el6    BUILT: Fri Apr  4 15:43:06 CEST 2014

Comment 6 errata-xmlrpc 2014-10-14 08:25:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1387.html

