Bug 2145014
| Summary: | ReaR recovery failing to mount all VG logical volumes | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | eric_hagen <eric.p.hagen.ctr> |
| Component: | rear | Assignee: | Pavel Cahyna <pcahyna> |
| Status: | CLOSED ERRATA | QA Contact: | Jakub Haruda <jharuda> |
| Severity: | medium | Docs Contact: | Šárka Jana <sjanderk> |
| Priority: | unspecified | | |
| Version: | 9.1 | CC: | gfialova, jharuda, pcahyna, sjanderk |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | x86_64 | OS: | Unspecified |
| Fixed In Version: | rear-2.6-19.el9 | Doc Type: | Bug Fix |
| Last Closed: | 2023-11-07 08:37:21 UTC | Type: | Bug |
| Deadline: | 2023-08-28 | | |

Doc Text:

.System recovered by ReaR no longer fails to mount all VG logical volumes

The `/etc/lvm/devices/system.devices` file represents the Logical Volume Manager (LVM) system devices and controls device visibility and usability to LVM. By default, the `system.devices` feature is enabled in RHEL 9 and, when active, it replaces the LVM device filter.

Previously, when you used ReaR to recover a system to disks with hardware IDs different from those the original system used, the recovered system did not find all LVM volumes and failed to boot. With this fix, if ReaR finds the `system.devices` file, ReaR moves this file to `/etc/lvm/devices/system.devices.rearbak` at the end of recovery. As a result, the recovered system does not use the LVM devices file to restrict device visibility, and the system finds the restored volumes at boot.

Optional: If you want to restore the default behavior and regenerate the LVM devices file, use the `vgimportdevices -a` command after booting the recovered system and connecting all disk devices needed for normal operation, in case you disconnected any disks before the recovery process.
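The rename that the fixed ReaR performs, and the optional `vgimportdevices -a` regeneration step from the doc text, can be sketched as a short shell sequence. This is a minimal sketch simulated in a scratch directory so it is safe to run anywhere; on a real recovered system the file is `/etc/lvm/devices/system.devices` and root privileges are required.

```shell
# Simulated in a scratch directory; on a real recovered system, the file
# lives at /etc/lvm/devices/system.devices.
root=$(mktemp -d)
mkdir -p "$root/etc/lvm/devices"
printf 'VERSION=1.1.1\n' > "$root/etc/lvm/devices/system.devices"  # stand-in file

devfile="$root/etc/lvm/devices/system.devices"
if [ -e "$devfile" ]; then
    # The same rename rear-2.6-19.el9 performs at the end of recovery:
    mv "$devfile" "$devfile.rearbak"
fi
ls "$root/etc/lvm/devices"
# To restore the default RHEL 9 behavior later, regenerate the file once all
# disks are connected (run on the real system, not in this simulation):
#   vgimportdevices -a
```

With the file moved aside, LVM falls back to scanning all block devices, so the restored volumes are found at boot even though their hardware IDs changed.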
Created attachment 1926529 [details]
backup log
Created attachment 1926530 [details]
local.conf
Try to move away /etc/lvm/devices/system.devices in the restored system. If that helps, can you please attach its content here? (I suspect the problem is similar to the one reported in bz2135427, but this time it is related to the ReaR restore itself, not to cloning.)

Try with a larger drive.

rear -dD recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 677)
Using log file: /var/log/rear/rear-rhelws.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz'
Calculating backup archive size
Backup archive size is 2.5G /root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz (compressed)
Comparing disks
Device sda has size 568712495104 bytes but 549755551744 bytes is expected (needs manual configuration)
Switching to manual disk layout configuration
Original disk /dev/sda does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sda
Current disk mapping table (source => target):
/dev/sda => /dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds) 2
UserInput: Valid choice number result 'Confirm identical disk mapping and proceed without manual configuration'
User confirmed identical disk mapping and proceeding without manual configuration
Start system layout restoration.
Disk '/dev/sda': creating 'gpt' partition table
Disk '/dev/sda': creating partition number 1 with name ''EFI System Partition''
Disk '/dev/sda': creating partition number 2 with name ''sda2''
Disk '/dev/sda': creating partition number 3 with name ''sda3''
Creating LVM PV /dev/sda3
Restoring LVM VG 'emss'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/emss-root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/emss-home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/emss-tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /var on /dev/mapper/emss-var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/emss-var_log.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/emss-var_log_audit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /var/tmp on /dev/mapper/emss-var_tmp.
Mounting filesystem /var/tmp
Creating filesystem of type xfs with mount point /boot on /dev/sda2.
Mounting filesystem /boot
Creating filesystem of type vfat with mount point /boot/efi on /dev/sda1.
Mounting filesystem /boot/efi
Creating swap on /dev/mapper/emss-swap
Disk layout created.
Restoring from '/root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.677.restore.log) ...
Restoring boot/initramfs-0-rescue-add8c1a3dd4a4488afcfa338a11900e0.img
OK
Restored 4969 MiB in 236 seconds [avg. 21562 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.677.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Migrating disk-by-id mappings in certain restored files in /mnt/local to current disk-by-id mappings ...
Migrating filesystem UUIDs in certain restored files in /mnt/local to current UUIDs ...
Patching symlink etc/sysconfig/grub target /mnt/local/etc/default/grub
Patching filesystem UUIDs in /mnt/local/etc/default/grub to current UUIDs
Skip patching symlink etc/mtab target /mnt/local/proc/7328/mounts on /proc/ /sys/ /dev/ or /run/
Patching filesystem UUIDs in etc/fstab to current UUIDs
Patching filesystem UUIDs in etc/mtools.conf to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/smartmontools to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/redhat/grub.cfg to current UUIDs
Running dracut...
Updated initrd with new drivers for kernel 5.14.0-162.6.1.el9_1.x86_64.
Creating EFI Boot Manager entries...
Creating EFI Boot Manager entry 'RedHatEnterpriseServer 9' for 'EFI\redhat\grubx64.efi' (UEFI_BOOTLOADER='/boot/efi/EFI/redhat/grubx64.efi')
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 677) and its descendant processes ...
Running exit tasks
You should also rm -Rf --one-file-system /root/tmp/rear.yC1xG4LDYGX8LU6

cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command pvcreate pid 4993 at Tue Nov 22 22:38:18 2022
VERSION=1.1.1
IDTYPE=sys_wwid
IDNAME=t10.ATA_____VBOX_HARDDISK___________________________VB1245ca03-e706e93c_
DEVNAME=/dev/sda3
PVID=znR6jx279137Ui3E4io7BcoPEv3wLmnA
PART=3

mv /etc/lvm/devices/system.devices root/.

Still gets stuck on mounted /boot/efi, times out on mounting /dev/mapper/emss-home, emss_var, var_log etc.
> mv /etc/lvm/devices/system.devices root/.
Is that in the recovered system? (If you do it right after "rear recover", you are doing it in the rescue system; you need to run "chroot /mnt/local" first.)
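The manual workaround discussed above (moving the devices file aside inside the recovered system, not the rescue system) can be sketched as follows. This is a hedged sketch, not ReaR's own code: `/mnt/local` is where ReaR mounts the recovered system, the `.bak` suffix is arbitrary, and the guard makes it a no-op outside a rescue environment.

```shell
# Run in the ReaR rescue shell after 'rear recover' finishes and before
# rebooting. The chroot ensures the mv happens in the recovered system's
# filesystem, not the rescue system's.
if [ -f /mnt/local/etc/lvm/devices/system.devices ]; then
    chroot /mnt/local \
        mv /etc/lvm/devices/system.devices /etc/lvm/devices/system.devices.bak
    echo "devices file moved aside in the recovered system"
else
    echo "no devices file under /mnt/local; nothing to do"
fi
workaround_done=yes
```

Running the plain `mv` without the chroot, as in the earlier comment, only alters the rescue system's in-memory filesystem and is lost on reboot.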
If moving away /etc/lvm/devices/system.devices in the recovered system does not help, does it at least improve the output of commands like vgscan, lsblk, and lvdisplay?
Thank you very much. That worked. I had changed to /mnt/local, but forgot to remove the / from the mv command. I was able to successfully boot the recovered system, and all the LVs mounted. Are there any additional logs or data you need to identify what is going wrong?

No, thanks, I think I know what's wrong. In RHEL 9, one cannot simply copy the system to new disks; the LVM devices file needs to be removed or regenerated. (This affects any system cloning method, not only ReaR backup and restore.) The only mysterious thing is that you originally had the problem on a physical machine as well. Did you restore to the same disk as was used when creating the backup? If so, the problem should not have happened there. Can you please show the devices file from the physical machine? You mention two RAID 5 volumes; are those software RAID? (I think not, as I don't see anything RAID-related in the ReaR console messages.)

The HP hardware issue was a ReaR backup being restored to a different system with the exact same RAID 5 setup, so I can see why the problem happened in that case. This VM test was a backup from an existing RHEL 9.1 test system, patched and STIGed, being restored to another VM to simulate disaster recovery to new hardware. Thank you for the quick responses and attention to detail, and for providing a workaround that meets my requirements. Feel free to close this bug with the workaround.

Thank you for the confirmation. I will keep the bug report open, because this needs to be solved in ReaR itself.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (rear bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6571
Created attachment 1926528 [details]
config

Description of problem:
ReaR recovery failing to mount all VG logical volumes

Version-Release number of selected component (if applicable):
rear-2.6-15.el9.x86_64

How reproducible:
Every attempt so far.

Steps to Reproduce:
1. Build and configure the system.
2. Run rear backup.
3. Run rear restore to the same system.
4. Boot up the restored system.

Actual results:
Boot of the restored system fails to mount all logical volumes.

Expected results:
Restored system boots up.

Additional info:
Noticed while testing RHEL 9.1 ReaR backup and recovery on an HP DL380 with two RAID 5 volumes. Backup and restore completed without noted errors. Booting the restored system fails to mount all the logical volumes in the volume group. Able to recreate the issue on a virtual machine running in VirtualBox. DISA Manual STIG applied. rear-2.6-15.el9.x86_64

Notes from recovery to same disk size:

rear -dD recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 701)
Using log file: /var/log/rear/rear-rhelws.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz'
Calculating backup archive size
Backup archive size is 2.5G /root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz (compressed)
Comparing disks
Device sda has size 549755813888 bytes but 549755551744 bytes is expected (needs manual configuration)
Switching to manual disk layout configuration
Original disk /dev/sda does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sda
Current disk mapping table (source => target):
/dev/sda => /dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds) 2
UserInput: Valid choice number result 'Confirm identical disk mapping and proceed without manual configuration'
User confirmed identical disk mapping and proceeding without manual configuration
Start system layout restoration.
Disk '/dev/sda': creating 'gpt' partition table
Disk '/dev/sda': creating partition number 1 with name ''EFI System Partition''
Disk '/dev/sda': creating partition number 2 with name ''sda2''
Disk '/dev/sda': creating partition number 3 with name ''sda3''
Creating LVM PV /dev/sda3
Restoring LVM VG 'emss'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/emss-root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/emss-home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/emss-tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /var on /dev/mapper/emss-var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/emss-var_log.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/emss-var_log_audit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /var/tmp on /dev/mapper/emss-var_tmp.
Mounting filesystem /var/tmp
Creating filesystem of type xfs with mount point /boot on /dev/sda2.
Mounting filesystem /boot
Creating filesystem of type vfat with mount point /boot/efi on /dev/sda1.
Mounting filesystem /boot/efi
Creating swap on /dev/mapper/emss-swap
Disk layout created.
Restoring from '/root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.701.restore.log) ...
Restoring boot/initramfs-0-rescue-add8c1a3dd4a4488afcfa338a11900e0.img
OK
Restored 4969 MiB in 240 seconds [avg. 21203 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.701.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Migrating disk-by-id mappings in certain restored files in /mnt/local to current disk-by-id mappings ...
Migrating filesystem UUIDs in certain restored files in /mnt/local to current UUIDs ...
Patching symlink etc/sysconfig/grub target /mnt/local/etc/default/grub
Patching filesystem UUIDs in /mnt/local/etc/default/grub to current UUIDs
Skip patching symlink etc/mtab target /mnt/local/proc/7298/mounts on /proc/ /sys/ /dev/ or /run/
Patching filesystem UUIDs in etc/fstab to current UUIDs
Patching filesystem UUIDs in etc/mtools.conf to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/smartmontools to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/redhat/grub.cfg to current UUIDs
Running dracut...
Updated initrd with new drivers for kernel 5.14.0-162.6.1.el9_1.x86_64.
Creating EFI Boot Manager entries...
Creating EFI Boot Manager entry 'RedHatEnterpriseServer 9' for 'EFI\redhat\grubx64.efi' (UEFI_BOOTLOADER='/boot/efi/EFI/redhat/grubx64.efi')
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 701) and its descendant processes ...
Running exit tasks
You should also rm -Rf --one-file-system /root/tmp/rear.iIGL6rVHxSBfJSl

df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 728M 0 728M 0% /dev/shm
tmpfs 292M 9.6M 282M 4% /run
10.1.100.226:/redhat 760G 444G 317G 59% /redhat
/dev/mapper/emss-root 70G 4.5G 66G 7% /mnt/local
/dev/mapper/emss-home 100G 751M 100G 1% /mnt/local/home
/dev/mapper/emss-tmp 4.0G 61M 4.0G 2% /mnt/local/tmp
/dev/mapper/emss-var 20G 946M 20G 5% /mnt/local/var
/dev/mapper/emss-var_log 11G 138M 11G 2% /mnt/local/var/log
/dev/mapper/emss-var_log_audit 11G 111M 11G 1% /mnt/local/var/log/audit
/dev/mapper/emss-var_tmp 4.0G 61M 4.0G 2% /mnt/local/var/tmp
/dev/sda2 1014M 247M 768M 25% /mnt/local/boot
/dev/sda1 599M 9.4M 590M 2% /mnt/local/boot/efi

So the restore created the VG and all LVs, and restored data to them. On reboot, the system starts booting, then fails to find and mount all the volumes.
ls -tlr dev/sd*
brw-rw---- 1 root disk 8, 0 Nov 22 20:38 dev/sda
brw-rw---- 1 root disk 8, 3 Nov 22 20:38 dev/sda3
brw-rw---- 1 root disk 8, 2 Nov 22 20:39 dev/sda2
brw-rw---- 1 root disk 8, 1 Nov 22 20:39 dev/sda1

parted /dev/sda print shows the boot fat32 partition, the xfs sda2 partition, and the sda3 LVM partition.

Reboot fails to mount the volume group. vgscan errors out with:
Devices file sys_uuid t10.ATA..... PVID ... last seen on /dev/sda3 not found

ls -tlr /dev/sda* shows disk 0 and the /dev/sda3 block device. lsblk shows:
sda
-sda3
--emss-root
--emss-swap
But none of the other volumes: tmp, var_log, var_log_audit, home, var_tmp.
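The vgscan error above ("last seen on /dev/sda3 not found") is the devices file pinning the old disk's WWID. A minimal diagnostic sketch, assuming lvm2 2.03.12 or later (as shipped in RHEL 9) for the `--devicesfile` option; an empty filename makes a single command ignore `system.devices`:

```shell
# Compare LVM's view with and without the devices file; guarded so it is a
# no-op where lvm2 is not installed.
if command -v vgs >/dev/null 2>&1; then
    vgs                     # honors /etc/lvm/devices/system.devices (may miss the VG)
    vgs --devicesfile ""    # scans all block devices regardless of the file
    # If the VG appears only in the second listing, the devices file is the
    # culprit; activate everything once, then regenerate the file:
    #   vgchange -ay --devicesfile ""
    #   vgimportdevices -a
else
    echo "lvm2 not installed"
fi
lvm_diag_done=yes
```

In this bug's scenario, the first listing would show only what the stale WWID entry still matches, while the second would show the full emss VG with all its LVs.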