
Bug 1735358

Summary: Seeing "Thin pool <thinpool>(253:14) transaction_id is 1, while expected 9. " when activating thin LV
Product: Red Hat Enterprise Linux 7 Reporter: jhouston
Component: lvm2Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: Thin Provisioning QA Contact: cluster-qe <cluster-qe>
Status: CLOSED DUPLICATE Docs Contact:
Severity: urgent    
Priority: urgent CC: agk, ali.uenlue, bhull, dennis.kaulbars, heinzm, hornbach, jbrassow, jmagrini, markus.reiss, mleimenmeier, msnitzer, prajnoha, thornber, zkabelac
Version: 7.6Flags: jhouston: needinfo-
Target Milestone: rc   
Target Release: ---   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-11-13 14:17:02 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
dd's of _tmeta LV's affected by issue
none
sos report of affected node none

Description jhouston 2019-07-31 19:53:33 UTC
Description of problem:

Seeing "Thin pool <thinpool>(253:14) transaction_id is 1, while expected 9." when attempting to activate thin pool

Version-Release number of selected component (if applicable):

  System:
    Mfr:  Red Hat
    Prod: RHEV Hypervisor
    Vers: 7.6-1.el7ev

$ cat uname
Linux 189tuxosg003.gfi.ihk.de 3.10.0-957.1.3.el7.x86_64 #1 SMP Thu Nov 15 17:36:42 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

[redhat-release] Red Hat Enterprise Linux Server release 7.6 (Maipo)
[redhat-storage-release] Red Hat Gluster Storage Server 3.4

$ grep lvm installed-rpms 
lvm2-2.02.180-10.el7_6.2.x86_64                             Thu Nov 29 12:59:31 2018
lvm2-libs-2.02.180-10.el7_6.2.x86_64                        Thu Nov 29 12:59:31 2018

How reproducible:
N/A

Steps to Reproduce:
N/A

Actual results:

Seeing "Thin pool <thinpool>(253:14) transaction_id is 1, while expected 9." when attempting to activate thin pool

Expected results:

Expect the thin LVs and pools to activate and be usable as normal.

Additional info:

This issue originally manifested itself as full LVs: we could not activate the thin pools because their space was exhausted. After resolving that, the pools/LVs still cannot be activated, failing with the above error.
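
For diagnosis, one quick way to confirm pool fullness is to report data and metadata usage with lvs. A minimal sketch only; the VG name is taken from this report, and the percentage columns are only populated for pools that can be activated:

  # Report data/metadata usage for all LVs (including hidden ones) in the VG
  lvs -a -o lv_name,lv_attr,data_percent,metadata_percent \
      vg_f6c125b35f0df66949a94b69aca48517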

Attempted the solution in:
   "LVM commands fail with error: Thin pool testvg-thinpool-tpool (253:10) transaction_id is 1302, while expected 1303."
   https://access.redhat.com/solutions/2576081

The changes were reverted after they did not resolve the issue.


Full output of issue:

[189tuxosg003:root] ~ > vgchange -ay vg_f6c125b35f0df66949a94b69aca48517
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_58f124053939398031b36cc6e6078ca8-tpool (253:14) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_58f124053939398031b36cc6e6078ca8-tpool (253:14) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_1a609029379241c6838705533161e21f-tpool (253:29) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_1a609029379241c6838705533161e21f-tpool (253:29) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_bb8b15fce4c44a4e8c0d908fcaecb453-tpool (253:29) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_bb8b15fce4c44a4e8c0d908fcaecb453-tpool (253:29) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_47d5ec172faecf1f1e061d740ec71863-tpool (253:34) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_47d5ec172faecf1f1e061d740ec71863-tpool (253:34) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_4a4ad03e3c6715e63bf91ec4633b4a32-tpool (253:39) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_4a4ad03e3c6715e63bf91ec4633b4a32-tpool (253:39) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_aeccf73ec83a3f82f938a934c5716a89-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_aeccf73ec83a3f82f938a934c5716a89-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_54d3fd397b93c07d4e3dd998f3fecd71-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_54d3fd397b93c07d4e3dd998f3fecd71-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_5536bf2b73c6f8952ade9d96839b26e2-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_5536bf2b73c6f8952ade9d96839b26e2-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_23bdc89602f0c94cee9513c07fb90bed-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_23bdc89602f0c94cee9513c07fb90bed-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_10cf7e51be8bdaa8c0e2ea82127efd03-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_10cf7e51be8bdaa8c0e2ea82127efd03-tpool (253:39) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_edd8e1270b0ede7c1824ccf6e41ae459-tpool (253:44) transaction_id is 1, while expected 7.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_edd8e1270b0ede7c1824ccf6e41ae459-tpool (253:44) transaction_id is 1, while expected 7.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_39ec60e189d5b033cf9281e86003879c-tpool (253:49) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_39ec60e189d5b033cf9281e86003879c-tpool (253:49) transaction_id is 1, while expected 5.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_3327f45bb3f3d80a126c0b4493e4dae7-tpool (253:54) transaction_id is 1, while expected 7.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_3327f45bb3f3d80a126c0b4493e4dae7-tpool (253:54) transaction_id is 1, while expected 7.
  device-mapper: reload ioctl on  (253:56) failed: No data available
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_46d9e3415cc356d71cd482673a3a1ad7-tpool (253:63) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_46d9e3415cc356d71cd482673a3a1ad7-tpool (253:63) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_006f0c6f6d2bdf0c5f7c5c95cf48fa29-tpool (253:63) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_006f0c6f6d2bdf0c5f7c5c95cf48fa29-tpool (253:63) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_fe0f7fcbccf4522b2e8401880051b35d-tpool (253:63) transaction_id is 1, while expected 7.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_fe0f7fcbccf4522b2e8401880051b35d-tpool (253:63) transaction_id is 1, while expected 7.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_2f4a70aa033ee3368760c16f612413aa-tpool (253:63) transaction_id is 1, while expected 9.
  Thin pool vg_f6c125b35f0df66949a94b69aca48517-tp_2f4a70aa033ee3368760c16f612413aa-tpool (253:63) transaction_id is 1, while expected 9.
  device-mapper: reload ioctl on  (253:70) failed: No data available
  device-mapper: reload ioctl on  (253:74) failed: No data available
  device-mapper: reload ioctl on  (253:88) failed: No data available
  device-mapper: reload ioctl on  (253:92) failed: No data available
  69 logical volume(s) in volume group "vg_f6c125b35f0df66949a94b69aca48517" now active


Output of thin_checks:

[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_1a609029379241c6838705533161e21f_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_bb8b15fce4c44a4e8c0d908fcaecb453_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_47d5ec172faecf1f1e061d740ec71863_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_4a4ad03e3c6715e63bf91ec4633b4a32_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_aeccf73ec83a3f82f938a934c5716a89_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_54d3fd397b93c07d4e3dd998f3fecd71_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_5536bf2b73c6f8952ade9d96839b26e2_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_23bdc89602f0c94cee9513c07fb90bed_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_10cf7e51be8bdaa8c0e2ea82127efd03_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_edd8e1270b0ede7c1824ccf6e41ae459_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_39ec60e189d5b033cf9281e86003879c_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_3327f45bb3f3d80a126c0b4493e4dae7_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_46d9e3415cc356d71cd482673a3a1ad7_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_006f0c6f6d2bdf0c5f7c5c95cf48fa29_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_fe0f7fcbccf4522b2e8401880051b35d_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts
[189tuxosg003:root] ~ >  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_2f4a70aa033ee3368760c16f612413aa_tmeta
examining superblock
examining devices tree
examining mapping tree
checking space map counts


I will be uploading dd images of the _tmeta LVs that are experiencing issues, as well as a fresh sos report from this node.

Comment 2 jhouston 2019-07-31 19:57:34 UTC
Created attachment 1596536 [details]
dd's of _tmeta LV's affected by issue

Comment 3 jhouston 2019-07-31 20:08:05 UTC
Created attachment 1596538 [details]
sos report of affected node

Comment 4 Zdenek Kabelac 2019-08-01 14:19:33 UTC
This case looks similar to other existing gluster + thin-pool cases.

My impression from a first look:

The thin pools ran out of space.

The user executed the thin_repair command (lvconvert --repair) and tried to activate the thin pool after that repair.

Since an old version of device-mapper-persistent-data (0.7.3-3.el7.x86_64) was used, it was incapable of repairing the metadata after such damage, and the recovered thin-pool metadata is mostly empty, with transaction id == 1. During activation of the thin pool, lvm2 then finds an inconsistency with its own metadata, which expects the thin pool to have transaction id == 9.
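
For comparison, the transaction id that lvm2 expects can be read from its own metadata; transaction_id is a reportable lvs field for thin pools (a sketch, not output from this report):

  # Show the transaction id lvm2 expects for each thin pool in the VG
  lvs -o lv_name,transaction_id vg_f6c125b35f0df66949a94b69aca48517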

For a proper repair I'd suggest installing a newer version of device-mapper-persistent-data (the latest version for RHEL 7, >= 0.8.6).

For recovery, run the repair into a *BIGGER* new metadata volume (something existing lvm2 is not capable of doing by itself). The original metadata is still held in _tmeta0 (make sure you do not delete it).

thin_repair -i *_tmeta0  -o  biggerLV
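
Expanded into a fuller sequence (a sketch only, assuming there is enough free space in the VG; "biggerLV" and "tp_<ID>" are placeholders for the new LV and whichever pool is being repaired, and the _tmeta0 component LV must be activated so it is readable under /dev):

  # Create a new, larger metadata LV in the same VG (size is illustrative)
  lvcreate -n biggerLV -L 4G vg_f6c125b35f0df66949a94b69aca48517
  # Repair from the preserved original metadata into the new LV
  thin_repair -i /dev/vg_f6c125b35f0df66949a94b69aca48517/tp_<ID>_tmeta0 \
              -o /dev/vg_f6c125b35f0df66949a94b69aca48517/biggerLV
  # Verify the repaired metadata passes consistency checks
  thin_check /dev/vg_f6c125b35f0df66949a94b69aca48517/biggerLV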

Afterwards, check that the data was recovered (thin_dump on biggerLV should show substantially more output, and the transaction id should be 9).
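
One way to read the transaction id off the repaired metadata (a sketch; thin_dump emits XML whose leading superblock element carries a transaction attribute):

  # The first line of the dump is the superblock, e.g.
  #   <superblock uuid="" time="..." transaction="9" ...>
  thin_dump /dev/vg_f6c125b35f0df66949a94b69aca48517/biggerLV | head -n 1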

Then swap this recovered metadata back into the thin pool and try activation. Also make sure the repaired metadata is matched with the proper pool (there are lots of pools there with similar names).
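
The swap-in step, sketched with lvconvert (tp_<ID> is again a placeholder; lvm2 swaps the named LV in as the pool's metadata and leaves the previous metadata LV in its place):

  # Swap the repaired LV in as the pool's metadata volume
  lvconvert --thinpool vg_f6c125b35f0df66949a94b69aca48517/tp_<ID> \
            --poolmetadata vg_f6c125b35f0df66949a94b69aca48517/biggerLV
  # Try activating the pool again
  lvchange -ay vg_f6c125b35f0df66949a94b69aca48517/tp_<ID>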

If it still does not work, let's do another round of thinking.

Comment 7 jhouston 2019-11-13 14:17:02 UTC

*** This bug has been marked as a duplicate of bug 1738446 ***