Bug 2131948

Summary: ReaR fails to restore non-LVM XFS filesystems (e.g. /boot) when disk mapping happens
Product: Red Hat Enterprise Linux 7
Component: rear
Version: 7.9
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: Renaud Métrich <rmetrich>
Assignee: Pavel Cahyna <pcahyna>
QA Contact: Jakub Haruda <jharuda>
CC: fkrska, jharuda, ovasik, pcahyna
Keywords: Triaged
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: rear-2.4-17.el7_9
Last Closed: 2023-07-18 07:48:37 UTC
Bug Depends On: 2131946, 2160748

Description Renaud Métrich 2022-10-04 08:51:05 UTC
This bug was initially created as a copy of Bug #2131946

I am copying this bug because: 

Also happens with rear-2.4-16.el7_9.
Prevents unattended restore (when automatic mapping happens).

Description of problem:

The issue happens when having the following conditions:
1. Non-LVM XFS filesystem is restored (e.g. `/boot`)
2. XFS filesystem has `sunit!=0` and `swidth!=0`
3. Disk mapping occurs (e.g. vda -> vdb)

When all three conditions are met, ReaR recreates the filesystem with default settings: the saved XFS options only exist under the original device name (`/var/lib/rear/layout/xfs/vda1.xfs`), so the lookup for the mapped device (`vdb1.xfs`) fails and `mkfs.xfs` falls back to its defaults:
~~~
2022-10-04 09:19:32.261221631 Can't read /var/lib/rear/layout/xfs/vdb1.xfs, falling back to mkfs.xfs defaults.
...
+++ mkfs.xfs -f -m uuid=17c2c80a-afad-4b64-a31b-55b21ced3740 /dev/vdb1
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
~~~
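
The lookup-and-fallback behaviour seen above can be sketched roughly as follows (illustrative bash only, not the actual ReaR restore code; device names, paths and geometry values are taken from this report):

~~~
# Illustrative sketch only -- not the actual ReaR code.
# The XFS options were saved under the *source* device name (vda1.xfs),
# but after disk mapping the lookup happens under the *target* name (vdb1.xfs).
device=/dev/vdb1
uuid=17c2c80a-afad-4b64-a31b-55b21ced3740
xfs_opts_file="/var/lib/rear/layout/xfs/$(basename "$device").xfs"

if [ -r "$xfs_opts_file" ]; then
    # would recreate the filesystem with the saved geometry; how the saved
    # file is parsed is simplified away here, values are from the reproducer
    mkfs.xfs -f -m uuid="$uuid" -d sunit=512,swidth=512 "$device"
else
    # only vda1.xfs exists, so this branch is taken and defaults are used
    echo "Can't read $xfs_opts_file, falling back to mkfs.xfs defaults."
    mkfs.xfs -f -m uuid="$uuid" "$device"
fi
~~~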

Later, ReaR mounts the filesystem with the recorded XFS-specific options (`sunit=xxx,swidth=xxx`), which are incompatible with the filesystem that was just created with default options, so the mount fails:
~~~
+++ mount -o rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota /dev/vdb1 /mnt/local/boot
mount: /mnt/local/boot: wrong fs type, bad option, bad superblock on /dev/vdb1, missing codepage or helper program, or other error.
~~~
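
The mismatch is easy to confirm from the rescue shell (ordinary diagnostics, not part of ReaR; a plain mount without the stripe options should succeed):

~~~
# Illustrative diagnostics from the Relax-and-Recover shell after the failure
ls /var/lib/rear/layout/xfs/              # only vda1.xfs exists, no vdb1.xfs
mount /dev/vdb1 /mnt/local/boot           # mounting without sunit/swidth works
xfs_info /mnt/local/boot | grep sunit     # the data section reports sunit=0, swidth=0
umount /mnt/local/boot
~~~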

This was fixed in upstream ReaR by commit a8fd431fda6c47ee5d1f4e24065bc28280a1f46d.
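
The general idea of that fix, paraphrased as a rough sketch (this is not the commit's actual code): when a disk mapping is in effect, translate the target device back to its original name, using the mapping file shown in the reproducer below, before looking up the saved XFS options:

~~~
# Rough sketch of the idea only -- not the upstream commit's code.
# /var/lib/rear/layout/disk_mappings holds lines like "/dev/vda /dev/vdb" (source target).
target_disk=/dev/vdb
source_disk=$(awk -v t="$target_disk" '$2 == t { print $1 }' /var/lib/rear/layout/disk_mappings)
# simplified partition-name handling: /dev/vdb1 -> /dev/vda1
xfs_opts_file="/var/lib/rear/layout/xfs/$(basename "${source_disk}1").xfs"
~~~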

Version-Release number of selected component (if applicable):

rear-2.6-4.el8

How reproducible:

Always

Steps to Reproduce:
1. Make sure `/boot` is not using the default `sunit=0` setting

    ~~~
    # BOOT_DEV=$(findmnt -o source --noheadings /boot)
    # BOOT_UUID=$(blkid -s UUID -o value $BOOT_DEV)
    # tar cf /root/boot.tar -C /boot .
    # umount /boot
    # mkfs.xfs -f -m uuid=$BOOT_UUID -d sunit=512,swidth=512 $BOOT_DEV
    # mount /boot
    # tar xf /root/boot.tar -C /boot
    ~~~
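
    As an optional sanity check (not part of the original steps), confirm that the recreated `/boot` no longer uses the default geometry:

    ~~~
    # optional check: the data section should now report a non-zero sunit/swidth
    xfs_info /boot | grep sunit
    ~~~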

2. Create the rescue

3. Add a disk of the same size (to force migration mode)
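
    For a KVM/libvirt guest (which the `vda`/`vdb` device names suggest), the extra disk could be created and attached roughly like this; the domain name, image path and size are placeholders:

    ~~~
    # hypothetical example for a libvirt guest named "vm1"; size must match vda
    qemu-img create -f qcow2 /var/lib/libvirt/images/vm1-disk2.qcow2 20G
    virsh attach-disk vm1 /var/lib/libvirt/images/vm1-disk2.qcow2 vdb \
        --driver qemu --subdriver qcow2 --persistent
    ~~~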

4. Boot the rescue image and recover onto the other disk, after wiping both disks (to avoid duplicate UUIDs)

    ~~~
    # wipefs -a /dev/vda
    # wipefs -a /dev/vdb
    # rear recover
    ...
    Current disk mapping table (source => target):
      /dev/vda => /dev/vda
    ...
    3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
    4) Use Relax-and-Recover shell and return back to here
    5) Abort 'rear recover'
    (default '1' timeout 300 seconds)
    3
    
      /dev/vda /dev/vdb
    ...
    ~~~

Actual results:

Failure mounting `/boot`

Expected results:

No mount failure; the recorded XFS options are used when recreating the filesystem
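
With the fixed package, the recovery should instead recreate `/boot` with the recorded geometry and mount it successfully; a quick post-recovery check (illustrative, not part of the original report) could be:

~~~
# illustrative post-recovery check
findmnt /boot                      # /boot is mounted from the recovered target device
xfs_info /boot | grep sunit        # non-zero sunit/swidth, matching the original layout
~~~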

Comment 15 errata-xmlrpc 2023-07-18 07:48:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rear bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:4122