Description    Jonathan Earl Brassow    2014-11-17 22:24:48 UTC
- Be clearer about what the effects of the damage are,
  e.g. "This thin volume is lost" or "The mappings in this logical range
  are lost; there were 'n' mapped blocks in this region."
  The thin_rmap tool has been written; its output just needs to be
  cross-referenced with the LVM metadata to identify the affected LVs
  (a cross-referencing sketch follows this list).
- Handle running out of data space gracefully.
  - Configure a set of thins that may be purged?
- Handle running out of metadata space gracefully.
- Automate recovery in the case of damaged pool metadata
  (the manual repair steps this would automate are sketched after this list).
  We need to produce a set of scenarios (e.g. transaction id mismatch
  after recovery, differing numbers of thins, missing data).
- Make sure recovery works at all levels of our supported stacks list,
  e.g. RAID recovery if a leg of the data volume is lost.
- Any thin volumes that have been recovered but have potentially
lost blocks should be marked as such.
- Any completely lost thin volumes should be recreated empty, but
  marked as damaged.
  Alternatively, delete them (which is what current LVM does for
  non-thin LVs). Decide which.
- Damaged thins should report how much space was allocated at the
time of recovery.
- The admin should be able to delete all damaged thins with a simple
  command line (similar to --removemissing).
- Work around transient failures of underlying PVs/LVs
  (see the partial-activation sketch after this list).
  - We should be able to activate the pool and any thin volumes
    that are not provisioned on the failed PVs/LVs.
  - We should be able to activate thins that *do* have allocations
    on the failed PVs/LVs at the admin's request. This will mark the
    thins as damaged. (Punch error targets into the pool.)
** It is very likely that this will be broken up further into more bugs.**
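
Cross-referencing sketch for the thin_rmap item above: a minimal outline of how
the pieces could be joined today. The names vg0, pool0, the swapped-out
metadata LV pool0_meta_swap and the damaged data-block range 1000..2000 are all
hypothetical, and the exact thin_rmap invocation and output format may differ
between thin-provisioning-tools versions.

  # 1. Ask thin_rmap which thin device ids have mappings in the damaged
  #    region of the pool's data device (run against an inactive copy of
  #    the pool metadata, e.g. one swapped out for repair).
  thin_rmap --region 1000..2000 /dev/vg0/pool0_meta_swap

  # 2. Cross-reference the reported device ids with LVM. For an active
  #    thin LV the device-mapper table line is
  #    "<start> <len> thin <pool dev> <dev id>", so the id can be read
  #    back and matched to an LV name (dm names assume no dashes here).
  for lv in $(lvs --noheadings -o lv_name -S 'pool_lv=pool0' vg0); do
      printf '%s: thin device id %s\n' "$lv" \
          "$(dmsetup table "vg0-$lv" | awk '$3 == "thin" { print $5 }')"
  done

Only active thins have device-mapper tables; for inactive ones the id would
have to come from the LVM metadata itself, which is exactly the
cross-referencing this item asks us to provide.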
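Sketch for the damaged-pool-metadata item above: the manual flow that the
automation would need to wrap, again with hypothetical names (vg0/pool0 is the
damaged pool); the name LVM gives the preserved copy of the old metadata can
vary by version.

  lvchange -an vg0/pool0        # the pool must be inactive for repair

  # Let lvconvert drive thin_repair, writing repaired metadata onto a
  # spare metadata LV; the damaged copy is normally kept (typically as
  # vg0/pool0_meta0) for later inspection with thin_check / thin_dump.
  lvconvert --repair vg0/pool0

  lvchange -ay vg0/pool0        # reactivate, then compare transaction ids,
                                # thin counts, etc. against expectations

The scenario list above (transaction id mismatch, differing numbers of thins,
missing data) is essentially the set of differences between the repaired
metadata and what the LVM metadata still claims about the pool.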
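Partial-activation sketch for the transient PV/LV failure item, showing how
partial activation already behaves for ordinary LVs. Names are hypothetical
(vg0 has lost one PV backing part of pool0's data LV) and the table output is
illustrative only.

  # Activate whatever survives; in partial mode LVM substitutes error
  # (or zero) segments for extents on missing PVs, controlled by
  # activation/missing_stripe_filler in lvm.conf.
  vgchange -ay --activationmode partial vg0

  # The pool's data LV then carries an error target over the lost range,
  # e.g. (illustrative):
  #   # dmsetup table vg0-pool0_tdata
  #   0 8388608 linear 8:16 2048
  #   8388608 8388608 error

Thins whose allocations fall entirely outside the errored range could then be
activated as-is; thins that touch it would be the ones to mark as damaged.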