Bug 1747468
| Summary: | Unable to restore VG with Thin volumes: "Thin pool rhel-pool00-tpool (XXX:X) transaction_id is X, while expected Y" | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Renaud Métrich <rmetrich> |
| Component: | rear | Assignee: | Pavel Cahyna <pcahyna> |
| Status: | CLOSED ERRATA | QA Contact: | David Jež <djez> |
| Severity: | high | Docs Contact: | Prerana Sharma <presharm> |
| Priority: | high | | |
| Version: | 8.0 | CC: | dconsoli, djez, fkrska, mailinglists35, oliver, pcahyna, presharm, zkabelac |
| Target Milestone: | rc | Keywords: | Reopened, Triaged |
| Target Release: | 8.0 | Flags: | pm-rhel: mirror+ |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | rear-2.6-3.el8 | Doc Type: | Enhancement |
| Story Points: | --- | | |
| Clone Of: | | Clones: | 2004178 (view as bug list) |
| Environment: | | | |
| Last Closed: | 2021-11-09 18:53:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2004178 | | |

Doc Text:

.Errors when restoring LVM with thin pools do not happen anymore
With this enhancement, ReaR now detects when thin pools and other logical volume types with kernel metadata (for example, RAIDs and caches) are used in a volume group (VG) and switches to a mode where it recreates all the logical volumes (LVs) in the VG using lvcreate commands. As a result, LVM layouts with thin pools are restored without errors.
NOTE: This new method does not preserve all LV properties, for example, LVM UUIDs. A restore from the backup should therefore be tested before using ReaR in a production environment, to determine whether the recreated storage layout matches the requirements.
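As an illustration of the lvcreate-based mode described in the Doc Text, here is a minimal sketch; the VG name, device, LV names, and sizes (rhel, /dev/sda2, pool00, root) are hypothetical, and the exact commands ReaR generates depend on the saved layout:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Hypothetical sketch: recreate a thin pool and a thin LV from scratch
# with lvcreate, instead of restoring them via vgcfgrestore.
lvm vgcreate rhel /dev/sda2
lvm lvcreate --type thin-pool -L 8g -n pool00 rhel
lvm lvcreate --type thin -V 20g --thinpool pool00 -n root rhel
# Note: the recreated LVs get new UUIDs; the old LVM UUIDs are not preserved.
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------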
Description
Renaud Métrich
2019-08-30 14:02:28 UTC
The swap issue seems to happen in scenario 1 as well, due to vgcfgrestore not having activated the VG it created:
+++ mkswap -U 487d9f6f-93cc-4c6d-be48-a65d21ab1af8 /dev/mapper/rhel-lvswap
/dev/mapper/rhel-lvswap: No such file or directory
For the swap issue, a vgchange call is missing on line 107 of the upstream code in /usr/share/rear/layout/prepare/GNU/Linux/110_include_lvm_code.sh:
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
95 if lvm vgcfgrestore -f "$VAR_DIR/layout/lvm/${vg}.cfg" $vg >&2 ; then
96 lvm vgchange --available y $vg >&2
97
98 LogPrint "Sleeping 3 seconds to let udev or systemd-udevd create their devices..."
99 sleep 3 >&2
100 create_volume_group=0
101 create_logical_volumes=0
102
103 #
104 # It failed ... restore layout using 'vgcfgrestore --force', but then remove Thin volumes, they are broken
105 #
106 elif lvm vgcfgrestore --force -f "$VAR_DIR/layout/lvm/${vg}.cfg" $vg >&2 ; then
107
108 lvm lvs --noheadings -o lv_name,vg_name,lv_layout | while read lv vg layout ; do
...
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
Indeed, even though vgcfgrestore failed, some volumes have been created and can be safely activated.
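As a sketch of that missing activation (the reporter's suggestion above, not the final upstream patch), the forced-restore branch would gain the same vgchange call that the success branch on line 96 already has:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
elif lvm vgcfgrestore --force -f "$VAR_DIR/layout/lvm/${vg}.cfg" $vg >&2 ; then

    # Missing step (the line 107 mentioned above): activate the restored VG
    # so that /dev/mapper nodes such as rhel-lvswap exist before mkswap runs.
    lvm vgchange --available y $vg >&2

    lvm lvs --noheadings -o lv_name,vg_name,lv_layout | while read lv vg layout ; do
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------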
Apparently RHEL7.5 GA also failed:
+++ lvm lvremove -q -f -y rhel/pool00
Thin pool rhel-pool00-tpool (252:3) transaction_id is 6, while expected 4.
Failed to update pool rhel/pool00.
# lvm version
LVM version: 2.02.177(2)-RHEL7 (2018-01-22)
Library version: 1.02.146-RHEL7 (2018-01-22)
Driver version: 4.37.0
So it looks like it never worked properly ...

Lvm2 (vgcfgrestore) can't be used this way to restore thin volumes. The main purpose of vgcfgrestore (with its mandatory --force option) is to let users fix an existing thin pool. The only proposal for recreating thin LVs on a newly created thin pool is to call individual lvcreate commands, so that the kernel thin-pool metadata has the information about the volumes as well. So if this is meant to be used for 'data recovery', the tool recovering the LVs needs to create them with the proper sizes.

Would it be feasible to avoid using vgcfgrestore and do the same thing as in migration mode always (or at least always with thin pools)? Properties of segments could be preserved by using lvextend, as in bz1732328#c24.

zkabelac, I think it is a bug (in ReaR).

Thanks Zdenek for looking into this. I will be pushing your feedback to the ReaR developers so that we come up with a proper solution. By the way, I think the idea was not to restore thin volumes this way (using vgcfgrestore), but to restore all the other volumes (the non-thin ones), delete the broken thin ones, and recreate them using lvcreate. This is apparently because vgcfgrestore does not have a way to restore only the non-thin volumes.

I'm not familiar with usage of the 'rear' tool - if it's only a 'backup' tool for device content, it might be out of scope for this tool to be able to recreate the same sort of storage layout - i.e. imagine a thinLV on a thin-pool that has cached raid6 thin-pool data & raid1 thin-pool metadata - who would be supposed to restore such a device stack? Anyway, it might be a possible RFE on 'rear' - but it probably needs a different title.

Reopening since the component is ReaR, not LVM2, and ReaR needs a fix. Opened issue https://github.com/rear/rear/issues/2222 for that upstream.

(In reply to Zdenek Kabelac from comment #8)
> I'm not familiar with usage of 'rear' tool - if it's only 'backup' tool for
> device content - it might be out-of-scope for this tool to be able recreate
> same sort of storage layout -
>
> i.e. imagine thinLV on thin-pool this has cached raid6 thin-pool data &
> raid1 thin-pool metadata - who would be supposed to restore such device
> stack ?

It is a recovery tool, so its main responsibility is to back up and recreate the layout, not the content.

Deletion of a thin pool works as designed with a 'double --force', even for damaged thin pools. The problem is 'restoring' the same thin pool with a matching thinID & transactionID. We will likely need to make some 'new lvm2' command to somehow support recreation of all LVs from scratch (as vgcfgbackup/restore relies on the proper disk content).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (rear bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2021:4344
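For reference, a minimal sketch of the 'double --force' deletion mentioned above, reusing the rhel/pool00 names from this report; the pool size is hypothetical:

-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------
# Remove a thin pool left broken by a forced vgcfgrestore; a repeated
# --force works even for damaged thin pools.
lvm lvremove -f -f -y rhel/pool00
# Then recreate it from scratch so the kernel metadata stays consistent.
lvm lvcreate --type thin-pool -L 8g -n pool00 rhel
-------- 8< ---------------- 8< ---------------- 8< ---------------- 8< --------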
Lvm2 (vgcfgrestore) can't be used this way to restore thin-volumes. Main purpose of vgcfgrestore (with mandatory --force option) is to let users fix existing thin-pool. The only proposal for recreating thinLVs on a newly created thin-pool is to call individual lvcreate commands so kernel thin-pool metadata can have the information about volumes as well. So if this is meant to be used for 'data recovery' - the tool recovering LVs need to create them with proper sizes. Would it be feasible to avoid using vgcfgrestore and do the same thing as in migration mode always (or at least always with thin pools)? Properties of segments could be preserved by using lvextend, as in bz1732328#c24. zkabelac, I think it is a bug (in ReaR). Thanks Zdenek for looking into this. I will be pushing your feedback to ReaR developers so that we come with a proper solution. By the way, I think the idea was not to restore thin-volumes this way (using vgcfgrestore), but to restore all other volumes (than thin ones), delete the broken thin ones, and recreate them using lvcreate. This is apparently beacuse vgcfgrestore does not have a way to restore the non-thin volumes only. I'm not familiar with usage of 'rear' tool - if it's only 'backup' tool for device content - it might be out-of-scope for this tool to be able recreate same sort of storage layout - i.e. imagine thinLV on thin-pool this has cached raid6 thin-pool data & raid1 thin-pool metadata - who would be supposed to restore such device stack ? Anyway it might be possible RFE on 'rear' - but it probably needs different title. Reopening since component is ReaR, not LVM2, and ReaR needs a fix. Opened Issue https://github.com/rear/rear/issues/2222 for that Upstream. (In reply to Zdenek Kabelac from comment #8) > I'm not familiar with usage of 'rear' tool - if it's only 'backup' tool for > device content - it might be out-of-scope for this tool to be able recreate > same sort of storage layout - > > i.e. imagine thinLV on thin-pool this has cached raid6 thin-pool data & > raid1 thin-pool metadata - who would be supposed to restore such device > stack ? It is a recovery tool, so its main responsibility is to backup and recreate the layout, not the content. Deletion of thin-pool works as designed with 'double --force' even for damaged thin-pools. The problem is 'restoring' same thin-pool with matching thinID & transactionID. We we likely need to make some 'new lvm2' command to somehow support recreation of all LVs from scratch. (as vgcfgbackup/restore relies on the proper disk content) Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (rear bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2021:4344 |