Bug 1846332 - metadata corrupted by double activation, cannot be repaired with repair tools
Summary: metadata corrupted by double activation, cannot be repaired with repair tools
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: LVM and device-mapper development team
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-11 12:06 UTC by nikhil kshirsagar
Modified: 2021-09-03 12:37 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-12 09:46:13 UTC
Target Upstream Version:
Embargoed:


Attachments
Packed metadata tp_64 (7.22 KB, application/octet-stream)
2020-06-11 12:10 UTC, Joe Thornber
Packed metadata tp_6a (76.91 KB, application/octet-stream)
2020-06-11 12:13 UTC, Joe Thornber

Comment 3 Joe Thornber 2020-06-11 12:10:45 UTC
Created attachment 1696744 [details]
Packed metadata tp_64

Packed version of the metadata.  Use thin_metadata_unpack to expand.

Comment 4 Joe Thornber 2020-06-11 12:13:12 UTC
Created attachment 1696745 [details]
Packed metadata tp_6a

Use thin_metadata_unpack to expand.
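
(Not part of the original comment: a minimal sketch of how the packed attachments might be expanded and inspected with the thin-provisioning-tools utilities. The local file names tp_64.pack and tp_64_metadata.bin are assumptions, not names taken from this bug.)

    # expand the packed attachment into a regular thin metadata image
    thin_metadata_unpack -i tp_64.pack -o tp_64_metadata.bin

    # run a consistency check and dump the mappings to XML for inspection
    thin_check tp_64_metadata.bin
    thin_dump tp_64_metadata.bin > tp_64_metadata.xml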

Comment 6 Joe Thornber 2020-06-12 09:46:13 UTC
I've looked at the metadata.  The superblock is inconsistent, which indicates concurrent activation of the pool on multiple machines/VMs.  Typically we can recover from this, but in this case other metadata has been overwritten.  This strongly suggests the pool is actually being used on different machines/VMs (i.e. IO is going to the thins).

I can recover most of the metadata for tp_64..., but I know ~2000 mappings are missing for thin_dev 2.  As yet I haven't been able to recover anything for tp_6a, and am going to stop trying.

I'm closing the bz as thin has been used incorrectly.  It is not cluster aware; activating it on multiple machines is akin to trying to mount a filesystem on multiple machines at the same time.
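
(Not part of the original comment: a minimal sketch of one way to guard against this kind of double activation, using the LVM system ID mechanism so that only the owning host will activate the VG containing the thin pool. The VG name vg00 is an assumption.)

    # /etc/lvm/lvm.conf on every host that can see the shared storage:
    #   global {
    #       system_id_source = "uname"
    #   }

    # tag the VG with this host's system ID; other hosts will then refuse
    # to activate its LVs, including the thin pool and its thin volumes
    vgchange --systemid "$(uname -n)" vg00

    # confirm which host owns the VG
    vgs -o +systemid vg00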

