Bug 2131946 - ReaR fails to restore non-LVM XFS filesystems (e.g. /boot) when disk mapping happens
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: rear
Version: 8.6
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Pavel Cahyna
QA Contact: David Jež
Docs Contact: Šárka Jana
URL:
Whiteboard:
Depends On: 2160748
Blocks: 2131948
 
Reported: 2022-10-04 08:48 UTC by Renaud Métrich
Modified: 2023-10-26 13:56 UTC (History)
CC List: 6 users

Fixed In Version: rear-2.6-8.el8
Doc Type: Bug Fix
Doc Text:
.ReaR no longer fails to restore non-LVM XFS filesystems
Previously, when you used ReaR to restore a non-LVM XFS filesystem with certain settings and disk mapping, ReaR created the file system with the default settings instead of the specified settings. For example, if a file system had the `sunit` and `swidth` parameters set to non-zero values and you restored it using ReaR with disk mapping, the file system was created with the default `sunit` and `swidth` parameters, ignoring the specified values. As a consequence, ReaR failed while mounting the file system with the specific XFS options. With this update, ReaR correctly restores the file system with the specified settings.
Clone Of:
: 2160748 (view as bug list)
Environment:
Last Closed: 2023-05-16 08:42:15 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-142258 0 None None None 2022-12-14 15:47:28 UTC
Red Hat Product Errata RHBA-2023:2900 0 None None None 2023-05-16 08:42:31 UTC

Description Renaud Métrich 2022-10-04 08:48:44 UTC
Description of problem:

The issue happens when having the following conditions:
1. Non-LVM XFS filesystem is restored (e.g. `/boot`)
2. XFS filesystem has `sunit!=0` and `swidth!=0`
3. Disk mapping occurs (e.g. vda -> vdb)

When all three conditions are met, ReaR creates the file system with default settings: after the disk mapping it looks for a `/var/lib/rear/layout/xfs/vdb1.xfs` file, but the layout was saved as `vda1.xfs` before the mapping, so the file is not found and `mkfs.xfs` falls back to its default options:
~~~
2022-10-04 09:19:32.261221631 Can't read /var/lib/rear/layout/xfs/vdb1.xfs, falling back to mkfs.xfs defaults.
...
+++ mkfs.xfs -f -m uuid=17c2c80a-afad-4b64-a31b-55b21ced3740 /dev/vdb1
meta-data=/dev/vdb1              isize=512    agcount=4, agsize=65536 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
~~~
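For illustration only (this snippet is not part of ReaR), the default geometry that `mkfs.xfs` fell back to can be pulled out of the captured output above; the `info` variable below holds the two `data` lines copied from the log:

```shell
# Two "data" lines of the mkfs.xfs output captured in the log above
info='data     =                       bsize=4096   blocks=262144, imaxpct=25
         =                       sunit=0      swidth=0 blks'

# Extract the data-section sunit/swidth values
sunit=$(printf '%s\n' "$info" | grep -o 'sunit=[0-9]*' | cut -d= -f2)
swidth=$(printf '%s\n' "$info" | grep -o 'swidth=[0-9]*' | cut -d= -f2)
echo "sunit=$sunit swidth=$swidth"
```

Both values come out as 0, i.e. the defaults, instead of the non-zero geometry recorded from the original filesystem.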

Later, ReaR mounts the filesystem with XFS-specific options (`sunit=xxx,swidth=xxx`) that are incompatible with the default options used at file system creation time, and this fails:
~~~
+++ mount -o rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota /dev/vdb1 /mnt/local/boot
mount: /mnt/local/boot: wrong fs type, bad option, bad superblock on /dev/vdb1, missing codepage or helper program, or other error.
~~~
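Note that the `sunit=`/`swidth=` mount options are given in 512-byte sectors (per xfs(5)), so the failing mount above requests a 256 KiB stripe unit from a filesystem that was recreated with `sunit=0`. A small sketch of that arithmetic, using the option string copied from the log:

```shell
# Mount options copied from the failing mount command above
opts='rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota'

# sunit=/swidth= are in 512-byte sectors; convert to KiB
sunit=$(printf '%s\n' "$opts" | tr ',' '\n' | awk -F= '$1 == "sunit" {print $2}')
echo "requested stripe unit: $((sunit * 512 / 1024)) KiB"
```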

This was fixed in ReaR Upstream through commit a8fd431fda6c47ee5d1f4e24065bc28280a1f46d.

Version-Release number of selected component (if applicable):

rear-2.6-4.el8

How reproducible:

Always

Steps to Reproduce:
1. Make sure `/boot` is not using the default `sunit=0` setting

    ~~~
    # BOOT_DEV=$(findmnt -o source --noheadings /boot)
    # BOOT_UUID=$(blkid -s UUID -o value $BOOT_DEV)
    # tar cf /root/boot.tar -C /boot .
    # umount /boot
    # mkfs.xfs -f -m uuid=$BOOT_UUID -d sunit=512,swidth=512 $BOOT_DEV
    # mount /boot
    # tar xf /root/boot.tar -C /boot
    ~~~
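After step 1 you can confirm that the non-default geometry took effect by inspecting the `data` section of the `xfs_info /boot` output; a minimal check, fed here from a captured sample line rather than a live `/boot` (on a real system, `line=$(xfs_info /boot | grep sunit)`):

```shell
# Sample data-section line as xfs_info /boot prints it after step 1
# (sunit=512 sectors corresponds to 64 blocks of 4096 bytes)
line='         =                       sunit=64     swidth=64 blks'

sunit=$(printf '%s\n' "$line" | grep -o 'sunit=[0-9]*' | cut -d= -f2)
[ "$sunit" != "0" ] && echo "non-default sunit: $sunit"
```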

2. Create the rescue

3. Add a disk of the same size (to force migration mode)

4. Boot the rescue image and recover onto the other disk after wiping the disks (to avoid duplicates)

    ~~~
    # wipefs -a /dev/vda
    # wipefs -a /dev/vdb
    # rear recover
    ...
    Current disk mapping table (source => target):
      /dev/vda => /dev/vda
    ...
    3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
    4) Use Relax-and-Recover shell and return back to here
    5) Abort 'rear recover'
    (default '1' timeout 300 seconds)
    3
    
      /dev/vda /dev/vdb
    ...
    ~~~

Actual results:

Failure mounting `/boot`

Expected results:

No failure + proper XFS options used

Additional info:

Backporting the commit works fine.

Comment 13 errata-xmlrpc 2023-05-16 08:42:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (rear bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2900

Comment 14 Pavel Cahyna 2023-10-26 13:56:50 UTC
(In reply to Renaud Métrich from comment #0)
> Later, ReaR mounts the filesystem with XFS specific options
> (`sunit=xxx,swidth=xxx`) which are incompatible (due to having use default
> options at file system creation time) and this fails:
> ~~~
> +++ mount -o
> rw,relatime,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,
> noquota /dev/vdb1 /mnt/local/boot
> mount: /mnt/local/boot: wrong fs type, bad option, bad superblock on
> /dev/vdb1, missing codepage or helper program, or other error.

FYI, this failure is due to logbsize= being incompatible with sunit=. In some cases, the change of the recreated XFS parameters is intentional (MKFS_XFS_OPTIONS), so this fix won't help there. This was reported as https://issues.redhat.com/browse/RHEL-10478. I submitted a fix for it in upstream PR https://github.com/rear/rear/pull/3058.

