This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", will have a small "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2093377 - xfsdump fails to dump root filesystem (via device mapper) on hosts with bind-chroot
Summary: xfsdump fails to dump root filesystem (via device mapper) on hosts with bind-chroot
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: xfsdump
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Eric Sandeen
QA Contact: Murphy Zhou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-03 14:37 UTC by Dan Astoorian
Modified: 2023-09-23 11:07 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-23 11:07:31 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-7877 0 None Migrated None 2023-09-23 11:07:27 UTC
Red Hat Issue Tracker RHELPLAN-124221 0 None None None 2022-06-03 14:49:15 UTC

Description Dan Astoorian 2022-06-03 14:37:20 UTC
Description of problem:
After upgrading from xfsdump-3.1.8-2.el8.x86_64 to xfsdump-3.1.8-4.el8.x86_64, attempting to xfsdump the root filesystem via its device-mapper path fails on hosts with the BIND chroot runtime (named-chroot) enabled.

Version-Release number of selected component (if applicable):
xfsdump-3.1.8-4.el8.x86_64

How reproducible:
Always?

Steps to Reproduce:
1. With named-chroot.service active, run
    /sbin/xfsdump -F -J -l 0 - /dev/dm-0 > /dev/null
where /dev/dm-0 is the device for the root filesystem; or, equivalently,
    /sbin/xfsdump -F -J -l 0 - /dev/mapper/cl_hostname-root > /dev/null
where /dev/mapper/cl_hostname-root is the device name as per /etc/fstab.


Actual results:
xfsdump fails with a message of the form:

/sbin/xfsdump: version 3.1.8 (dump format 3.0)
/sbin/xfsdump: level 0 dump of hostname:/var/named/chroot/var/named
/sbin/xfsdump: dump date: Fri Jun  3 10:05:08 2022
/sbin/xfsdump: session id: 29d5cf9e-951d-4077-966c-3db918668768
/sbin/xfsdump: session label: ""
/sbin/xfsdump: ERROR: /var/named/chroot/var/named is not the root of the filesystem (bind mount?) - use primary mountpoint
/sbin/xfsdump: Dump Status: ERROR

(Note that xfsdump is attempting to dump the wrong pathname, i.e., /var/named/chroot/var/named instead of /.)

Expected results:
Dump should proceed, although note that in previous releases (e.g., 3.1.8-2.el8):
- xfsdump still reported "dump of hostname:/var/named/chroot/var/named" (although it appears that the backup contained the full root filesystem), and
- a warning of the form "NOTE: root ino 128 differs from mount dir ino 134955734, bind mount?" would be emitted.

Additional info:

In my testing, the problem did not occur using
    /sbin/xfsdump -F -J -l 0 - / > /dev/null
but rather only when using a path to the device through /dev/mapper/.  However, there's no obvious way to use this as a workaround when dumping with Amanda, which appears to map the filesystem to the device according to /etc/fstab.

The issue is almost certainly not specific to bind-chroot, and likely occurs in other cases where bind mounts are present, but bind-chroot is a common reason for such bind mounts to exist.

Comment 1 Eric Sandeen 2022-06-03 18:39:45 UTC
I'm not sure how all the moving parts here interact, but xfsdump does need to be pointed at the true root directory of a filesystem, not a bind-mounted directory below the root; it won't work properly otherwise.

i.e. if the "/" you are pointing at is actually a bind-mounted sub-directory of a parent filesystem, it's going to fail.  Is that the case?

Comment 2 Dan Astoorian 2022-06-03 19:18:32 UTC
See the command line under "Steps to reproduce"; the problem happens when the device is specified to xfsdump by the device name (e.g., "/dev/mapper/cl_hostname-root"), but not when specified by mount point (e.g., "/").

I'm guessing that xfsdump is trying to map the device name back to a mount point, and is getting the bind mount instead of the root.  I'm not sure what method xfsdump would use to try to do that mapping, but I note that /proc/mounts contains indistinguishable entries for each of the bind mounts in addition to the filesystem root, e.g.:

/dev/mapper/cl_hostname-root / xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/mapper/cl_hostname-root /var/named/chroot/etc/localtime xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/mapper/cl_hostname-root /var/named/chroot/etc/named.root.key xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
[...]
/dev/mapper/cl_hostname-root /var/named/chroot/var/named xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0


If there's no better way for xfsdump to detect bind mounts, perhaps it could assume that entries for a device after the first one are bind mounts, and use the first match for the device in the mount table rather than the last one?  (I have no idea whether the order of entries in the mount table is deterministic.)
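
To make that "first match wins" idea concrete, here is a minimal standalone sketch (not xfsdump's actual lookup code) using the standard getmntent(3) interface; the function name first_mount_for_device is purely illustrative:

/* Minimal sketch of "use the first mount-table match for the device".
 * Not xfsdump code; reads /proc/mounts in order and stops at the
 * first entry whose source device matches. */
#include <mntent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a malloc'd copy of the mount point of the first /proc/mounts
 * entry whose source device equals `device`, or NULL if none matches. */
static char *first_mount_for_device(const char *device)
{
    FILE *mtab = setmntent("/proc/mounts", "r");
    struct mntent *ent;
    char *mntdir = NULL;

    if (!mtab)
        return NULL;

    while ((ent = getmntent(mtab)) != NULL) {
        if (strcmp(ent->mnt_fsname, device) == 0) {
            mntdir = strdup(ent->mnt_dir);   /* first match wins */
            break;
        }
    }
    endmntent(mtab);
    return mntdir;
}

int main(int argc, char **argv)
{
    const char *dev = argc > 1 ? argv[1] : "/dev/mapper/cl_hostname-root";
    char *dir = first_mount_for_device(dev);

    printf("first mount point for %s: %s\n", dev, dir ? dir : "(none)");
    free(dir);
    return 0;
}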

If running xfsdump on a device name without providing the filesystem root is not going to be supported, then at a minimum, Amanda would need to be updated not to map the mount point to a device when invoking xfsdump.

Comment 3 Eric Sandeen 2022-06-03 19:47:02 UTC
Oh, I see. Sorry for my confusion.

TBH, xfsdump is an ancient, tangled mess. You may be right that it's doing something odd when the device you've specified is found multiple times in /proc/mounts. I'll see what I can figure out.

Comment 4 Dan Astoorian 2022-06-04 15:05:49 UTC
It looks like fs_tab_lookup_blk() in xfsdump-3.1.8/common/fs.c returns the first entry from the list constructed by fs_tab_ent_build() (which holds the entries from /etc/mtab in reverse order) whose block device name matches or has the same st_rdev value.

Would it be practical for fs_tab_lookup_blk() to invoke (or duplicate the logic of) check_rootdir() from 0002-xfsdump-intercept-bind-mount-targets.patch to reject potential mount points that don't satisfy the root inode requirement?

Failing that, would changing fs_tab_lookup_blk() to return the last match from fs_tabp instead of the first one (i.e., declare
   fs_tab_ent_t *rtep = 0;
then change "return tep" to "rtep = tep" throughout, and change "return 0" to "return rtep" after the loop) be a practical workaround?  Again, this makes the assumption that bind mounts will always follow the real ones in /etc/mtab.
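
Purely as an illustration of that variant (the real fs_tab_ent_t in common/fs.c has more fields and also matches on st_rdev, which this sketch ignores), the "return the last match" loop would look roughly like:

/* Schematic only: simplified stand-in types, not the real definitions
 * from xfsdump's common/fs.c. */
#include <stddef.h>
#include <string.h>

struct ent {
    const char *blkpath;     /* block device path, e.g. /dev/mapper/... */
    const char *mntpnt;      /* mount point */
    struct ent *next;
};

/* Walk the whole list and remember the last matching entry instead of
 * returning the first one. */
static struct ent *lookup_blk_last_match(struct ent *head, const char *dev)
{
    struct ent *rtep = NULL;                 /* the proposed rtep */

    for (struct ent *tep = head; tep != NULL; tep = tep->next) {
        if (strcmp(tep->blkpath, dev) == 0)
            rtep = tep;                      /* was "return tep" */
    }
    return rtep;                             /* was "return 0" after the loop */
}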

Comment 5 Eric Sandeen 2022-06-06 14:34:31 UTC
That assumption sounds pretty reasonable, but I always worry about coding to behavior that's not actually specified as a standard. I'll give this some thought, thanks.

Comment 6 Dan Astoorian 2022-06-06 15:15:47 UTC
In fact, I've found that the assumption does not necessarily hold, since it's technically possible to unmount and remount the underlying filesystem without removing the bind mount:

[djast@test djast]# mount --bind /boot/grub /mnt
[djast@test djast]# grep sda1 /etc/mtab
/dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /mnt xfs rw,relatime,attr2,inode64,noquota 0 0
[djast@test djast]# umount /boot
[djast@test djast]# mount /boot
[djast@test djast]# grep sda1 /etc/mtab
/dev/sda1 /mnt xfs rw,relatime,attr2,inode64,noquota 0 0
/dev/sda1 /boot xfs rw,relatime,attr2,inode64,noquota 0 0

Perhaps fs_tab_lookup_blk() should read /proc/self/mountinfo to identify and omit bind mounts of subdirectories, but a bind mount at the root would still be indistinguishable based on /proc/self/mountinfo.  However, that might be good enough for xfsdump's purposes--presumably the root inode would still be the same, so if one does "mount --bind / /mnt", would there be any difference between "xfsdump /" and "xfsdump /mnt"?
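
A rough sketch of that filtering, assuming only the documented /proc/self/mountinfo format from proc(5) (field 4 is the root of the mount within its filesystem, field 5 is the mount point); entries whose root is not "/" are subdirectory bind mounts and would be skipped (the helper name below is hypothetical):

/* Sketch: list mount points from /proc/self/mountinfo, skipping
 * entries whose "root" field is not "/" (i.e. bind mounts of a
 * subdirectory).  A bind mount of a filesystem's root still shows
 * root "/" and is indistinguishable here, as noted above. */
#include <stdio.h>
#include <string.h>

static void list_non_subdir_bind_mounts(void)
{
    FILE *fp = fopen("/proc/self/mountinfo", "r");
    char line[4096];

    if (!fp)
        return;

    while (fgets(line, sizeof(line), fp)) {
        char root[1024], mntpnt[1024];

        /* Fields: mount ID, parent ID, major:minor, root, mount point, ... */
        if (sscanf(line, "%*d %*d %*s %1023s %1023s", root, mntpnt) != 2)
            continue;

        if (strcmp(root, "/") == 0)          /* not a subdirectory bind mount */
            printf("%s\n", mntpnt);
    }
    fclose(fp);
}

int main(void)
{
    list_non_subdir_bind_mounts();
    return 0;
}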

I know that xfsdump, unlike dump for ext2/3/4, requires that the filesystem be mounted in order to dump it (although I don't know why that restriction exists), but if a filesystem is mounted multiple times via bind mounts, does xfsdump need to know that in order to do its job, or is it sufficient for xfsdump to identify any suitable mount point?

Comment 7 Christopher 2022-06-13 22:14:19 UTC
(In reply to Dan Astoorian from comment #6)
> 
> I know that xfsdump, unlike dump for ext2/3/4, requires that the filesystem
> be mounted in order to dump it (although I don't know why that restriction
> exists), but if a filesystem is mounted multiple times via bind mounts, does
> xfsdump need to know that in order to do its job, or is it sufficient for
> xfsdump to identify any suitable mount point?

  I am interested in the answer to this question as we have also encountered
this issue.

  If the requirement exists that xfsdump _must_ use the mount point, then
perhaps the caller (in this case Amanda) should be adjusted to use the mount
point in place of the device, to avoid the case where xfsdump is unable to
locate the root mount point automatically.

  The following single-line change works for us, as this section of code
already applies only to xfsdump.

diff -uNrp amanda-3.5.1/client-src/sendbackup-dump.c amanda-3.5.1-p/client-src/sendbackup-dump.c
--- amanda-3.5.1/client-src/sendbackup-dump.c   2017-12-01 07:26:32.000000000 -0600
+++ amanda-3.5.1-p/client-src/sendbackup-dump.c 2022-05-13 10:37:50.029125898 -0500
@@ -308,7 +308,7 @@ start_backup(
                            "-F",
                            "-l", dumpkeys,
                            "-",
-                           device,
+                           dle->device,
                            NULL);
     }
     else

Comment 8 RHEL Program Management 2023-09-23 11:06:14 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 9 RHEL Program Management 2023-09-23 11:07:31 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

