Bug 2145014 - ReaR recovery failing to mount all VG logical volumes
Summary: ReaR recovery failing to mount all VG logical volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Deadline: 2023-08-28
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: rear
Version: 9.1
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Pavel Cahyna
QA Contact: Jakub Haruda
Docs Contact: Šárka Jana
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-22 22:15 UTC by eric_hagen
Modified: 2024-01-02 12:30 UTC
CC List: 4 users

Fixed In Version: rear-2.6-19.el9
Doc Type: Bug Fix
Doc Text:
.System recovered by ReaR no longer fails to mount all VG logical volumes

The `/etc/lvm/devices/system.devices` file represents the Logical Volume Manager (LVM) system devices and controls device visibility and usability to LVM. By default, the `system.devices` feature is enabled in RHEL 9 and, when active, it replaces the LVM device filter. Previously, when you used ReaR to recover a system to disks with hardware IDs different from those the original system used, the recovered system did not find all LVM volumes and failed to boot. With this fix, if ReaR finds the `system.devices` file, it moves the file to `/etc/lvm/devices/system.devices.rearbak` at the end of recovery. As a result, the recovered system does not use the LVM devices file to restrict device visibility, and the system finds the restored volumes at boot. Optional: if you want to restore the default behavior and regenerate the LVM devices file, run the `vgimportdevices -a` command after booting the recovered system and connecting all disk devices needed for normal operation, in case you disconnected any disks before the recovery process.
Clone Of:
Environment:
Last Closed: 2023-11-07 08:37:21 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+
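
A sketch of the optional regeneration step from the Doc Text above, run on the recovered system once it has booted with all required disks attached (the .rearbak path applies to the fixed rear-2.6-19.el9 behavior):

ls -l /etc/lvm/devices/system.devices.rearbak    # backup left behind by ReaR at the end of recovery
vgimportdevices -a    # regenerate system.devices from the PVs visible on the current hardware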


Attachments
config (932.08 KB, text/plain), 2022-11-22 22:15 UTC, eric_hagen
backup log (834.28 KB, text/plain), 2022-11-22 22:16 UTC, eric_hagen
local.conf (453 bytes, text/plain), 2022-11-22 22:17 UTC, eric_hagen


Links
Github rear/rear pull 3043 (open): Remove the lvmdevices file at the end of recovery, updated 2023-08-24 17:34:14 UTC
Red Hat Issue Tracker RHELPLAN-140225, updated 2022-11-22 22:22:09 UTC
Red Hat Product Errata RHBA-2023:6571, updated 2023-11-07 08:37:39 UTC

Description eric_hagen 2022-11-22 22:15:24 UTC
Created attachment 1926528 [details]
config

Description of problem:
ReaR recovery failing to mount all VG logical volumes

Version-Release number of selected component (if applicable):
rear-2.6-15.el9.x86_64

How reproducible:
Every attempt so far.

Steps to Reproduce:
1. Build and configure the system.
2. Run rear backup.
3. Run rear restore to the same system.
4. Boot up the restored system.

Actual results:
Boot of restored system fails to mount all logical volumes.

Expected results:
Boot up restored system.

Additional info:
Noticed while testing RHEL 9.1 ReaR backup and recovery on an HP DL380 with two RAID 5 volumes.
Backup and restore complete without noted errors.
Booting the restored system fails to mount all the logical volumes in the volume group.

Able to recreate the issue on a virtual machine running in VirtualBox.
DISA Manual STIG applied.
rear-2.6-15.el9.x86_64

Notes from a recovery to the same disk size:

rear -dD recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 701)
Using log file: /var/log/rear/rear-rhelws.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz'
Calculating backup archive size
Backup archive size is 2.5G	/root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz (compressed)
Comparing disks
Device sda has size 549755813888 bytes but 549755551744 bytes is expected (needs manual configuration)
Switching to manual disk layout configuration
Original disk /dev/sda does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sda
Current disk mapping table (source => target):
  /dev/sda => /dev/sda

UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
2
UserInput: Valid choice number result 'Confirm identical disk mapping and proceed without manual configuration'
User confirmed identical disk mapping and proceeding without manual configuration
Start system layout restoration.
Disk '/dev/sda': creating 'gpt' partition table
Disk '/dev/sda': creating partition number 1 with name ''EFI System Partition''
Disk '/dev/sda': creating partition number 2 with name ''sda2''
Disk '/dev/sda': creating partition number 3 with name ''sda3''
Creating LVM PV /dev/sda3
Restoring LVM VG 'emss'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/emss-root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/emss-home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/emss-tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /var on /dev/mapper/emss-var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/emss-var_log.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/emss-var_log_audit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /var/tmp on /dev/mapper/emss-var_tmp.
Mounting filesystem /var/tmp
Creating filesystem of type xfs with mount point /boot on /dev/sda2.
Mounting filesystem /boot
Creating filesystem of type vfat with mount point /boot/efi on /dev/sda1.
Mounting filesystem /boot/efi
Creating swap on /dev/mapper/emss-swap
Disk layout created.
Restoring from '/root/tmp/rear.iIGL6rVHxSBfJSl/outputfs/rhelws/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.701.restore.log) ...
Restoring boot/initramfs-0-rescue-add8c1a3dd4a4488afcfa338a11900e0.img OK
Restored 4969 MiB in 240 seconds [avg. 21203 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.701.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Migrating disk-by-id mappings in certain restored files in /mnt/local to current disk-by-id mappings ...
Migrating filesystem UUIDs in certain restored files in /mnt/local to current UUIDs ...
Patching symlink etc/sysconfig/grub target /mnt/local/etc/default/grub
Patching filesystem UUIDs in /mnt/local/etc/default/grub to current UUIDs
Skip patching symlink etc/mtab target /mnt/local/proc/7298/mounts on /proc/ /sys/ /dev/ or /run/
Patching filesystem UUIDs in etc/fstab to current UUIDs
Patching filesystem UUIDs in etc/mtools.conf to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/smartmontools to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/redhat/grub.cfg to current UUIDs
Running dracut...
Updated initrd with new drivers for kernel 5.14.0-162.6.1.el9_1.x86_64.
Creating EFI Boot Manager entries...
Creating  EFI Boot Manager entry 'RedHatEnterpriseServer 9' for 'EFI\redhat\grubx64.efi' (UEFI_BOOTLOADER='/boot/efi/EFI/redhat/grubx64.efi') 
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 701) and its descendant processes ...
Running exit tasks
You should also rm -Rf --one-file-system /root/tmp/rear.iIGL6rVHxSBfJSl

df -h
Filesystem                      Size  Used Avail Use% Mounted on
devtmpfs                        4.0M     0  4.0M   0% /dev
tmpfs                           728M     0  728M   0% /dev/shm
tmpfs                           292M  9.6M  282M   4% /run
10.1.100.226:/redhat            760G  444G  317G  59% /redhat
/dev/mapper/emss-root            70G  4.5G   66G   7% /mnt/local
/dev/mapper/emss-home           100G  751M  100G   1% /mnt/local/home
/dev/mapper/emss-tmp            4.0G   61M  4.0G   2% /mnt/local/tmp
/dev/mapper/emss-var             20G  946M   20G   5% /mnt/local/var
/dev/mapper/emss-var_log         11G  138M   11G   2% /mnt/local/var/log
/dev/mapper/emss-var_log_audit   11G  111M   11G   1% /mnt/local/var/log/audit
/dev/mapper/emss-var_tmp        4.0G   61M  4.0G   2% /mnt/local/var/tmp
/dev/sda2                      1014M  247M  768M  25% /mnt/local/boot
/dev/sda1                       599M  9.4M  590M   2% /mnt/local/boot/efi

So the restore created the VG and all LVs, and restored data to them.
On reboot, the system starts booting but fails to find and mount all the volumes.


ls -tlr dev/sd*
brw-rw---- 1 root disk 8, 0 Nov 22 20:38 dev/sda
brw-rw---- 1 root disk 8, 3 Nov 22 20:38 dev/sda3
brw-rw---- 1 root disk 8, 2 Nov 22 20:39 dev/sda2
brw-rw---- 1 root disk 8, 1 Nov 22 20:39 dev/sda1

parted /dev/sda print
Shows the FAT32 boot partition, the XFS sda2 partition, and the sda3 LVM partition.



On reboot, the system fails to mount the volume group:

vgscan # errors out
Devices file sys_uuid t10.ATA..... PVID ... last seen on /dev/sda3 not found
ls -tlr /dev/sda*
shows disk 0, /dev/sda3 block device.

lsblk shows
sda
└─sda3
  ├─emss-root
  └─emss-swap
But none of the other /tmp, var_log, var_log_audit, home, var_tmp volumes.
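
For what it's worth, LVM commands accept a --devicesfile override, so the missing LVs can be activated manually from the emergency shell to confirm the data is intact. A minimal sketch, assuming the VG name 'emss' from the listing above:

vgchange -ay --devicesfile ""    # an empty value tells LVM to ignore system.devices and scan all devices
lvs --devicesfile "" -o lv_name,lv_attr emss    # all eight LVs should now be listed and active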

Comment 1 eric_hagen 2022-11-22 22:16:54 UTC
Created attachment 1926529 [details]
backup log

Comment 2 eric_hagen 2022-11-22 22:17:49 UTC
Created attachment 1926530 [details]
local.conf

Comment 3 Pavel Cahyna 2022-11-22 22:20:47 UTC
Try to move away /etc/lvm/devices/system.devices in the restored system. If that helps, can you please attach its content here?

(I suspect the problem is similar to the one reported in bz2135427, but this time it is related to the ReaR restore itself, not to cloning).

Comment 4 eric_hagen 2022-11-22 22:48:41 UTC
Tried with a larger drive.

rear -dD recover 
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 677)
Using log file: /var/log/rear/rear-rhelws.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz'
Calculating backup archive size
Backup archive size is 2.5G	/root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz (compressed)
Comparing disks
Device sda has size 568712495104 bytes but 549755551744 bytes is expected (needs manual configuration)
Switching to manual disk layout configuration
Original disk /dev/sda does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sda
Current disk mapping table (source => target):
  /dev/sda => /dev/sda

UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
2
UserInput: Valid choice number result 'Confirm identical disk mapping and proceed without manual configuration'
User confirmed identical disk mapping and proceeding without manual configuration
Start system layout restoration.
Disk '/dev/sda': creating 'gpt' partition table
Disk '/dev/sda': creating partition number 1 with name ''EFI System Partition''
Disk '/dev/sda': creating partition number 2 with name ''sda2''
Disk '/dev/sda': creating partition number 3 with name ''sda3''
Creating LVM PV /dev/sda3
Restoring LVM VG 'emss'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/emss-root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/emss-home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/emss-tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /var on /dev/mapper/emss-var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/emss-var_log.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/emss-var_log_audit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /var/tmp on /dev/mapper/emss-var_tmp.
Mounting filesystem /var/tmp
Creating filesystem of type xfs with mount point /boot on /dev/sda2.
Mounting filesystem /boot
Creating filesystem of type vfat with mount point /boot/efi on /dev/sda1.
Mounting filesystem /boot/efi
Creating swap on /dev/mapper/emss-swap
Disk layout created.
Restoring from '/root/tmp/rear.yC1xG4LDYGX8LU6/outputfs/rhelws/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.677.restore.log) ...
Restoring boot/initramfs-0-rescue-add8c1a3dd4a4488afcfa338a11900e0.img OK
Restored 4969 MiB in 236 seconds [avg. 21562 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.677.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Migrating disk-by-id mappings in certain restored files in /mnt/local to current disk-by-id mappings ...
Migrating filesystem UUIDs in certain restored files in /mnt/local to current UUIDs ...
Patching symlink etc/sysconfig/grub target /mnt/local/etc/default/grub
Patching filesystem UUIDs in /mnt/local/etc/default/grub to current UUIDs
Skip patching symlink etc/mtab target /mnt/local/proc/7328/mounts on /proc/ /sys/ /dev/ or /run/
Patching filesystem UUIDs in etc/fstab to current UUIDs
Patching filesystem UUIDs in etc/mtools.conf to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/smartmontools to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/redhat/grub.cfg to current UUIDs
Running dracut...
Updated initrd with new drivers for kernel 5.14.0-162.6.1.el9_1.x86_64.
Creating EFI Boot Manager entries...
Creating  EFI Boot Manager entry 'RedHatEnterpriseServer 9' for 'EFI\redhat\grubx64.efi' (UEFI_BOOTLOADER='/boot/efi/EFI/redhat/grubx64.efi') 
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 677) and its descendant processes ...
Running exit tasks
You should also rm -Rf --one-file-system /root/tmp/rear.yC1xG4LDYGX8LU6

cat /etc/lvm/devices/system.devices
# LVM uses devices listed in this file.
# Created by LVM command pvcreate pid 4993 at Tue Nov 22 22:38:18 2022
VERSION=1.1.1
IDTYPE=sys_wwid IDNAME=t10.ATA_____VBOX_HARDDISK___________________________VB1245ca03-e706e93c_ DEVNAME=/dev/sda3 PVID=znR6jx279137Ui3E4io7BcoPEv3wLmnA PART=3

mv /etc/lvm/devices/system.devices root/.

Still gets stuck after mounting /boot/efi;
times out on mounting /dev/mapper/emss-home, emss-var, var_log, etc.

Comment 5 Pavel Cahyna 2022-11-22 23:10:23 UTC
> mv /etc/lvm/devices/system.devices root/.

Is that in the recovered system? (If you do it right after "rear recover", you are doing it in the rescue system; you need to run "chroot /mnt/local" first.)

If moving away /etc/lvm/devices/system.devices in the recovered system does not help, does it at least result in an improvement in commands like vgscan, lsblk, and lvdisplay?
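
Spelled out as a sketch (the backup filename is illustrative; the point is that the move must happen inside the restored root, not in the rescue system's own /etc):

chroot /mnt/local    # enter the restored system while it is still mounted at /mnt/local
mv /etc/lvm/devices/system.devices /etc/lvm/devices/system.devices.bak
exit    # leave the chroot, then reboot into the recovered system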

Comment 6 eric_hagen 2022-11-22 23:26:38 UTC
Thank you very much.
That worked. I had changed into /mnt/local but forgot to remove the leading / from the mv command.
I was able to successfully boot the recovered system, and all the LVs mounted.
Is there any additional logs or data collection you need to identify what is going wrong?

Comment 7 Pavel Cahyna 2022-11-22 23:33:57 UTC
No, thanks, I think I know what's wrong. In RHEL 9, one cannot simply copy the system to new disks; the LVM devices file needs to be removed or regenerated. (This affects any system cloning method, not only ReaR backup and restore.)

The only mysterious thing is that you originally had the problem on a physical machine as well. Did you restore to the same disk as was used when creating the backup? If so, the problem should not have happened there. Can you please show the devices file from the physical machine? You mention two RAID 5 volumes; are those software RAID? (I think not, as I don't see anything RAID-related in the ReaR console messages.)
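
As an aside, such a mismatch can be inspected directly with the lvmdevices tool from the lvm2 package (a sketch):

lvmdevices    # print each devices-file entry as LVM resolves it now
lvmdevices --check    # flag entries whose recorded hardware ID no longer matches any attached device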

Comment 8 eric_hagen 2022-11-22 23:51:13 UTC
The HP hardware issue was a ReaR backup being restored to a different system with the exact same RAID 5 setup, so I can see why the problem happened in that case.
This VM test was a backup from an existing patched and STIGed RHEL 9.1 test system, restored to another VM to simulate a disaster recovery to new hardware.
Thank you for the quick responses and attention to detail, and for providing a workaround that meets my requirements.
Feel free to close this bug with the workaround.

Comment 9 Pavel Cahyna 2022-12-08 16:32:57 UTC
Thank you for the confirmation, I will keep the bug report open, because this needs to be solved in ReaR itself.

Comment 18 errata-xmlrpc 2023-11-07 08:37:21 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (rear bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6571

