Bug 2061521 - Back-port fix to include multipath disks in the backup (upstream bugs 2236, 2237) [rhel-7.9.z]
Summary: Back-port fix to include multipath disks in the backup (upstream bugs 2236, 2237) [rhel-7.9.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rear
Version: 7.9
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Pavel Cahyna
QA Contact: David Jež
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-07 18:16 UTC by Carlos Santos
Modified: 2022-05-18 16:17 UTC
CC List: 6 users

Fixed In Version: rear-2.4-16.el7_9
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-18 16:16:23 UTC
Target Upstream Version:
Embargoed:


Attachments
patched /usr/share/rear/lib/layout-functions.sh (34.83 KB, text/plain), attached 2022-03-07 18:16 UTC by Carlos Santos


Links
GitHub rear issue 2236 (closed): Multipath disk incorrectly excluded from backup (last updated 2022-03-15 14:41:07 UTC)
GitHub rear pull 2237 (merged): Fix including of multipath disks in backup (last updated 2022-03-15 14:41:07 UTC)
Red Hat Issue Tracker RHELPLAN-114745 (last updated 2022-03-07 18:18:32 UTC)
Red Hat Product Errata RHBA-2022:4646 (last updated 2022-05-18 16:16:26 UTC)

Description Carlos Santos 2022-03-07 18:16:48 UTC
Created attachment 1864406: patched /usr/share/rear/lib/layout-functions.sh

Description of problem:

ReaR ignores filesystems on multipath devices even if AUTOEXCLUDE_MULTIPATH=n is added
to /etc/rear/local.conf.
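For reference, the option named above goes into /etc/rear/local.conf. A minimal sketch, written to a temporary file here so as not to touch a real configuration (on an affected system you would edit /etc/rear/local.conf directly):

```shell
# Stand-in for /etc/rear/local.conf (scratch file; illustrative only)
conf=$(mktemp)
cat >> "$conf" <<'EOF'
# Keep filesystems on multipath devices in the backup
AUTOEXCLUDE_MULTIPATH=n
EOF
# Verify the option was recorded; prints: AUTOEXCLUDE_MULTIPATH=n
grep '^AUTOEXCLUDE_MULTIPATH' "$conf"
```

The bug is precisely that ReaR 2.4-15.el7_9 ignores this setting when deciding which filesystems to back up.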

Version-Release number of selected component (if applicable):

rear-2.4-15.el7_9.x86_64

How reproducible:

Always.

Steps to Reproduce:
1. Install RHEL 7.9 with the "system" filesystems on a local (non-multipath) disk and
   with some filesystems on multipath devices (directly or via LVM2).

   Test system fstab:

   UUID=c55b8336-3ca1-42c5-8e97-8216d0a4ced6 /boot       ext4 defaults                   1 2
   UUID=4D1B-44E9                            /boot/efi   vfat umask=0077,shortname=winnt 0 0
   /dev/mapper/vg_root-lv_root               /           ext4 defaults                   1 1
   /dev/mapper/vg_root-lv_var                /var        ext4 defaults                   0 0
   /dev/mapper/vg_root-lv_swap               swap        swap defaults                   0 0
   /dev/mapper/appvg-lv_netbackup            /netbackup  ext4 defaults                   0 0
   /dev/mapper/appvg-lv_opt                  /opt        ext4 defaults                   0 0
   /dev/mapper/appvg-lv_opt_app              /opt/app    ext4 defaults                   0 0
   /dev/mapper/nbuvg-ntbackup_lv             /NetBackup1 ext4 defaults                   1 2
   UUID=2f0ca534-6702-4c9b-9183-c53357db7144 /NetBackup2 ext4 defaults                   1 2

   Test system lsblk output:

   NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
   sda                       8:0    0    20G  0 disk  
   ├─sda1                    8:1    0    64M  0 part  /boot/efi
   ├─sda2                    8:2    0   200M  0 part  /boot
   ├─sda3                    8:3    0   194M  0 part  
   │ ├─appvg-lv_netbackup  253:13   0    64M  0 lvm   /netbackup
   │ ├─appvg-lv_opt        253:14   0    64M  0 lvm   /opt
   │ └─appvg-lv_opt_app    253:15   0    64M  0 lvm   /opt/app
   └─sda4                    8:4    0  19.6G  0 part  
     ├─vg_root-lv_root     253:0    0     8G  0 lvm   /
     ├─vg_root-lv_swap     253:1    0     2G  0 lvm   [SWAP]
     └─vg_root-lv_var      253:16   0   9.6G  0 lvm   /var
   sdb                       8:16   0     1G  0 disk  
   sdc                       8:32   0   256M  0 disk  
   └─mpathe                253:6    0   256M  0 mpath 
     └─mpathe1             253:11   0   255M  0 part  /NetBackup2
   sdd                       8:48   0   256M  0 disk  
   └─mpathe                253:6    0   256M  0 mpath 
     └─mpathe1             253:11   0   255M  0 part  /NetBackup2
   sde                       8:64   0   256M  0 disk  
   └─mpathe                253:6    0   256M  0 mpath 
     └─mpathe1             253:11   0   255M  0 part  /NetBackup2
   sdf                       8:80   0   256M  0 disk  
   └─mpathe                253:6    0   256M  0 mpath 
     └─mpathe1             253:11   0   255M  0 part  /NetBackup2
   sdg                       8:96   0   256M  0 disk  
   └─mpatha                253:2    0   256M  0 mpath 
     └─mpatha1             253:9    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdh                       8:112  0   256M  0 disk  
   └─mpatha                253:2    0   256M  0 mpath 
     └─mpatha1             253:9    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdi                       8:128  0   256M  0 disk  
   └─mpatha                253:2    0   256M  0 mpath 
     └─mpatha1             253:9    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdj                       8:144  0   256M  0 disk  
   └─mpatha                253:2    0   256M  0 mpath 
     └─mpatha1             253:9    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdk                       8:160  0   256M  0 disk  
   └─mpathb                253:3    0   256M  0 mpath 
     └─mpathb1             253:8    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdl                       8:176  0   256M  0 disk  
   └─mpathb                253:3    0   256M  0 mpath 
     └─mpathb1             253:8    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdm                       8:192  0   256M  0 disk  
   └─mpathb                253:3    0   256M  0 mpath 
     └─mpathb1             253:8    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdn                       8:208  0   256M  0 disk  
   └─mpathb                253:3    0   256M  0 mpath 
     └─mpathb1             253:8    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdo                       8:224  0   256M  0 disk  
   └─mpathc                253:4    0   256M  0 mpath 
     └─mpathc1             253:7    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdp                       8:240  0   256M  0 disk  
   └─mpathc                253:4    0   256M  0 mpath 
     └─mpathc1             253:7    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdq                      65:0    0   256M  0 disk  
   └─mpathc                253:4    0   256M  0 mpath 
     └─mpathc1             253:7    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdr                      65:16   0   256M  0 disk  
   └─mpathc                253:4    0   256M  0 mpath 
     └─mpathc1             253:7    0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sds                      65:32   0   256M  0 disk  
   └─mpathd                253:5    0   256M  0 mpath 
     └─mpathd1             253:10   0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdt                      65:48   0   256M  0 disk  
   └─mpathd                253:5    0   256M  0 mpath 
     └─mpathd1             253:10   0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdu                      65:64   0   256M  0 disk  
   └─mpathd                253:5    0   256M  0 mpath 
     └─mpathd1             253:10   0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1
   sdv                      65:80   0   256M  0 disk  
   └─mpathd                253:5    0   256M  0 mpath 
     └─mpathd1             253:10   0   255M  0 part  
       └─nbuvg-ntbackup_lv 253:12   0  1008M  0 lvm   /NetBackup1

2. Backup the system:

   # rear -dD mkbackup

Actual results:

All filesystems on multipath devices are ignored (marked as done in
/var/lib/rear/layout/disktodo.conf).
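As a quick check on an affected system, one can look for multipath entries in the layout files this report mentions. A sketch, with assumptions: the helper name `check_layout` is mine, the `mpath` pattern matches the device names from the lsblk output above, and machines without a ReaR layout simply report the files as absent:

```shell
# Report whether a ReaR layout file mentions multipath devices.
check_layout() {
    f=$1
    if [ -r "$f" ]; then
        # Show any multipath entries ReaR recorded for this file
        grep -i mpath "$f" || echo "$f: no mpath entries"
    else
        echo "$f: not present"
    fi
}
check_layout /var/lib/rear/layout/disktodo.conf
check_layout /var/lib/rear/layout/disklayout.conf
```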

Expected results:

All filesystems should be included in the backup.

Additional info:

The problem is the same reported upstream in

   Multipath disk incorrectly excluded from backup
   https://github.com/rear/rear/issues/2236

   Fix including of multipath disks in backup
   https://github.com/rear/rear/pull/2237

The attachment contains a proposed patch for ReaR 2.4.

After applying it, back up the system with

   # rear -dD mkbackup

And restore with

   # export MIGRATION_MODE=true
   # rear recover -dD

Comment 4 Pavel Cahyna 2022-03-15 14:41:08 UTC
Thank you for the detailed report. Does the problem happen only in migration mode? (It does not seem so according to the upstream description.)

Comment 5 Carlos Santos 2022-03-15 16:05:58 UTC
(In reply to Pavel Cahyna from comment #4)
> Thank you for the detailed report. Does the problem happen only in migration
> mode? (It does not seem so according to the upstream description.)

It does not need to be in migration mode to happen.

Comment 6 Carlos Santos 2022-03-15 16:11:40 UTC
I also found that if I replace all disks or reset them by booting from the
rescue media and running

    # for dev in /dev/sd?; do dd if=/dev/zero of=$dev count=100; done
    # reboot

then the filesystems on multipath devices are not recreated even if I use
the proposed patch and recover with MIGRATION_MODE=true.

Comment 7 Pavel Cahyna 2022-03-15 17:20:25 UTC
> then the filesystems on multipath devices are not recreated

Does this mean that the previous experiments used the existing filesystems on the disks, instead of recreating them?

If so, it looks like we have two bugs: one with backup, the other with layout recreation.

Comment 8 Carlos Santos 2022-03-15 18:30:48 UTC
(In reply to Pavel Cahyna from comment #7)
> > then the filesystems on multipath devices are not recreated
> 
> Does this mean that the previous experiments used the existing filesystems
> on the disks, instead of recreating them?

Yes.

> If so, it looks like we have two bugs: one with backup, the other with
> layout recreation.

Probably yes, but we'll need to investigate more deeply to confirm.

Comment 31 errata-xmlrpc 2022-05-18 16:16:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rear bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:4646

