Bug 1169724 - [Hyper-V][RHEL7.1] The backup fails when a partition is mounted under multiple paths.
Summary: [Hyper-V][RHEL7.1] The backup fails when a partition is mounted under multiple paths.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: hyperv-daemons
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Vitaly Kuznetsov
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-02 10:02 UTC by lijing
Modified: 2019-08-15 04:06 UTC
18 users

Fixed In Version: hyperv-daemons-0-0.26.20150402git.el7
Doc Type: Bug Fix
Doc Text:
Cause: The hypervvssd daemon did not handle partitions that are mounted more than once (e.g. mounted simultaneously at several different mount points). Consequence: All backups from the Windows Backup tool for such VMs failed. Fix: A special workaround for partitions mounted more than once was added to the hypervvssd daemon. Result: Backups from the Windows Backup tool for such VMs succeed.
Clone Of:
Environment:
Last Closed: 2015-11-19 08:22:05 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2845961 0 None None None 2017-01-05 08:24:58 UTC
Red Hat Product Errata RHBA-2015:2234 0 normal SHIPPED_LIVE hyperv-daemons bug fix update 2015-11-19 08:48:25 UTC

Description lijing 2014-12-02 10:02:56 UTC
Description of problem:
Added a device (IDE or SCSI) to the guest, formatted it, and mounted the partition at two different paths. The backup then fails to back up the guest.

The log is as follows:
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /boot: Success
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /mnt/sdb1: Success
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /mnt/sdb1_bak: Device or resource busy
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /: Device or resource busy
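For context, the per-mountpoint freeze the daemon performs is the FIFREEZE ioctl, and both mount points share one superblock, so the second freeze fails. The following minimal C sketch reproduces the "Device or resource busy" error from the log above; it is an illustration only, not hypervvssd's actual code, and it assumes /dev/sdb1 is mounted at /mnt/sdb1 and /mnt/sdb1_bak as in the log:

/* freeze_demo.c - reproduce the EBUSY from the log above.
 * Sketch only; assumes /dev/sdb1 is mounted at both /mnt/sdb1
 * and /mnt/sdb1_bak. FIFREEZE needs CAP_SYS_ADMIN (run as root).
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

int main(void)
{
    int fd1 = open("/mnt/sdb1", O_RDONLY);
    int fd2 = open("/mnt/sdb1_bak", O_RDONLY);

    if (fd1 < 0 || fd2 < 0) {
        perror("open");
        return 1;
    }

    /* First freeze succeeds. */
    printf("freeze of /mnt/sdb1: %s\n",
           ioctl(fd1, FIFREEZE, 0) ? strerror(errno) : "Success");

    /* Both mount points share one superblock, which is already
     * frozen, so the second freeze fails with EBUSY. */
    printf("freeze of /mnt/sdb1_bak: %s\n",
           ioctl(fd2, FIFREEZE, 0) ? strerror(errno) : "Success");

    /* One thaw is enough; a second thaw would fail with EINVAL. */
    ioctl(fd1, FITHAW, 0);

    close(fd1);
    close(fd2);
    return 0;
}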

Version-Release number of selected component (if applicable):
Host: Hyper-V 2012R2
hyperv-daemons: 0-0.25.20141008git.el7

How reproducible:
100%

Steps to Reproduce:
1. Add a device (IDE or iSCSI) to the guest with Hyper-V Manager.
2. Partition the device, create the filesystem, and create the mount points:
#fdisk /dev/sdb
#mkfs.xfs /dev/sdb1; mkdir -p /mnt/sdb1 /mnt/sdb1_bak
3. Mount /dev/sdb1 at both /mnt/sdb1 and /mnt/sdb1_bak:
#mount /dev/sdb1 /mnt/sdb1
#mount /dev/sdb1 /mnt/sdb1_bak
4. Run the backup on the host side:
#wbadmin start backup -backupTarget:K:\ -hyperv:jingli-test

Actual results:
The backup fails when a partition is mounted under multiple paths.

Expected results:
The backup succeeds when a partition is mounted under multiple paths.

Additional info:
Upstream appears to have fixed this issue already; note, however, that the issue does not occur on a RHEL 6 guest.
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/tools/hv?id=4f689190bb55d171d2f6614f8a6cbd4b868e48bd

Comment 1 Ronen Hod 2014-12-02 13:20:04 UTC
I wouldn't expect it to work if one of the mounts is R/W. Unrelated to HV
What are the exact mount commands?

Comment 2 Stefan Hajnoczi 2014-12-03 15:18:26 UTC
(In reply to Ronen Hod from comment #1)
> I wouldn't expect it to work if one of the mounts is R/W. Unrelated to HV
> What are the exact mount commands?

I agree, mounting a read-write XFS file system more than once can lead to file system corruption and should not be done.  On my Fedora 21 system mounting an XFS file system twice fails with this error from mount(8):

  $ sudo mount /var/tmp/test.img /tmp/a
  $ sudo mount /var/tmp/test.img /tmp/b
  mount: wrong fs type, bad option, bad superblock on /dev/loop1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  [1718427.293015] XFS (loop1): Filesystem has duplicate UUID a92e7988-88cc-4e6f-ac2b-0c728e5976ee - can't mount

That said, the upstream patch may be useful.  I'm not sure which scenarios it helps, maybe the more exotic mount modes (bind mounts or shared subtrees, see man mount) where you can have a file system visible in several places in the namespace.

lijing: If you can't find a valid scenario where multiple mounts of the same file system exist, try emailing the author of the upstream patch: Dexuan Cui <decui>.

Comment 3 Dexuan Cui 2014-12-04 02:55:33 UTC
The comment in commit https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/commit/tools/hv?id=4f689190bb55d171d2f6614f8a6cbd4b868e48bd
gives some scenarios:

+	/*
+	 * If a partition is mounted more than once, only the first
+	 * FREEZE/THAW can succeed and the later ones will get
+	 * EBUSY/EINVAL respectively: there could be 2 cases:
+	 * 1) a user may mount the same partition to differnt directories
+	 *  by mistake or on purpose;
Dexuan: e.g., we can mount an ext4 partition to 2 different mount points.

+	 * 2) The subvolume of btrfs appears to have the same partition
+	 * mounted more than once.
Dexuan: some distros, such as SUSE 12, create the root filesystem on btrfs; FYI, an example of such an fstab:

# cat /etc/fstab
UUID=7798e6be-783c-4de9-b561-27c4d487115e swap swap defaults 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df / btrfs defaults 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /boot/grub2/i386-pc btrfs subvol=@/boot/grub2/i386-pc 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /boot/grub2/x86_64-efi btrfs subvol=@/boot/grub2/x86_64-efi 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /opt btrfs subvol=@/opt 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /srv btrfs subvol=@/srv 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /tmp btrfs subvol=@/tmp 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /usr/local btrfs subvol=@/usr/local 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/crash btrfs subvol=@/var/crash 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/lib/mailman btrfs subvol=@/var/lib/mailman 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/lib/named btrfs subvol=@/var/lib/named 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/lib/pgsql btrfs subvol=@/var/lib/pgsql 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/log btrfs subvol=@/var/log 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/opt btrfs subvol=@/var/opt 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/spool btrfs subvol=@/var/spool 0 0
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /var/tmp btrfs subvol=@/var/tmp 0 0
UUID=63891d6e-673d-4f0f-9435-b4f339414253 /home xfs defaults 1 2
UUID=78d21213-83a8-423c-afc8-f87a1d4be4df /.snapshots btrfs subvol=@/.snapshots 0 0
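To connect the two quoted cases to the fix: as I read commit 4f689190bb55, the workaround simply treats the "late" errors as success, since the file system is already in the requested state. Below is a paraphrased C sketch of the freeze/thaw helper; names and structure are approximate, not a verbatim copy of the patch:

/* Paraphrased from tools/hv/hv_vss_daemon.c as changed by commit
 * 4f689190bb55; approximate, not a verbatim copy of the patch.
 */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIFREEZE, FITHAW */

static int vss_do_freeze(const char *dir, unsigned int cmd)
{
    int ret, fd = open(dir, O_RDONLY);

    if (fd < 0)
        return 1;

    ret = ioctl(fd, cmd, 0);

    /*
     * If a partition is mounted more than once, only the first
     * FREEZE/THAW succeeds; later ones get EBUSY (freeze) or
     * EINVAL (thaw). The file system is already in the desired
     * state then, so report success for those two errors.
     */
    if (ret && ((cmd == FIFREEZE && errno == EBUSY) ||
                (cmd == FITHAW && errno == EINVAL)))
        ret = 0;

    close(fd);
    return !!ret;
}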

Comment 4 lijing 2014-12-04 04:40:41 UTC
Hi Stefan,

I tried the approach you described above with the commands below:
#dd if=/dev/zero of=/tmp/test.img bs=1024M count=2
#fdisk /tmp/test.img; mkfs.xfs /tmp/test.img
#mount /tmp/test.img /mnt/sdb1
#mount /tmp/test.img /mnt/sdb2

The error log is the same as the one above; the image cannot be mounted at a second directory. However, when I add a device (5 GB) with Hyper-V Manager, the second mount succeeds without error:

[root@rhel7 ~]# cat /proc/mounts |grep sdb1
/dev/sdb1 /mnt/sdb1 xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0
/dev/sdb1 /mnt/sdb1_bak xfs rw,seclabel,relatime,attr2,inode64,noquota 0 0

I tried scenario 1 above; hypervvssd outputs the same error and the backup fails.
[root@dhcp-66-106-190 ~]# cat /proc/mounts |grep -i sdb1
/dev/sdb1 /mnt/sdb1 ext4 rw,seclabel,relatime,data=ordered 0 0
/dev/sdb1 /mnt/sdb1_bak ext4 rw,seclabel,relatime,data=ordered 0 0

Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /boot: Success
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /mnt/sdb1: Success
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /mnt/sdb1_bak: Device or resource busy
Dec 02 14:43:50 dhcp-66-106-190.nay.redhat.com hypervvssd[790]: Hyper-V VSS: VSS: freeze of /: Device or resource busy

Anyway, with the patch given by Dexuan Cui applied, the backup can back up the VM successfully.

Comment 5 Dexuan Cui 2014-12-04 04:46:55 UTC
(In reply to Ronen Hod from comment #1)
> I wouldn't expect it to work if one of the mounts is R/W. Unrelated to HV
> What are the exact mount commands?

What's the reason?
Is this an XFS-specific characteristic?

I think one ext4 partition can safely be mounted r/w at two different mount points.

Comment 6 Ronen Hod 2014-12-04 10:01:18 UTC
(In reply to Dexuan Cui from comment #5)

I am not an expert, but I strongly believe that every file system runs under the assumption that it has exclusive access to the data and metadata on the disk, and that they can be cached. It would be crazy to always assume that another instance of the file system can be making modifications to the disk.

Ronen.

Comment 7 Dexuan Cui 2014-12-04 10:26:53 UTC
Thanks for the explanation, Ronen!

I'm not familiar with the internals of file system drivers at all.
Your explanation makes sense, at least to me.

I thought there would be some kind of sync mechanism between two 'instances' of the same file system, while you implied there isn't.

So, a new question comes to mind:
Why does the 'mount' utility allow this unsafe 'r/w mount the same partition at 2 different mount points'? Isn't it easy to detect this unsafe operation and prevent the user from doing it? :-)

I'll need to further look into this.

Comment 8 Dexuan Cui 2014-12-04 10:36:51 UTC
Found the answer: https://bugzilla.redhat.com/show_bug.cgi?id=209487

"The mount(2) man page:
 Since Linux 2.4 a single filesystem can be visible at  multiple  mount  points,
 and multiple mounts can be stacked on the same mount point."

So it should be r/w mount the same file systems to multiple mountpoints. :-)

Comment 9 Dexuan Cui 2014-12-04 10:37:55 UTC
(In reply to Dexuan Cui from comment #8)
> Found the answer: https://bugzilla.redhat.com/show_bug.cgi?id=209487
> 
> "The mount(2) man page:
>  Since Linux 2.4 a single filesystem can be visible at  multiple  mount 
> points,
>  and multiple mounts can be stacked on the same mount point."
> 
> So it should be r/w mount the same file systems to multiple mountpoints. :-)
I meant "should be *safe* to r/w mount ..."

Comment 10 Ronen Hod 2014-12-04 15:53:07 UTC
(In reply to Dexuan Cui from comment #8)
> Found the answer: https://bugzilla.redhat.com/show_bug.cgi?id=209487
> 
> "The mount(2) man page:
>  Since Linux 2.4 a single filesystem can be visible at  multiple  mount 
> points,
>  and multiple mounts can be stacked on the same mount point."
> 
> So it should be r/w mount the same file systems to multiple mountpoints. :-)

I assume that "visible" means R/O in this case. Anyhow, you will probably have to consult with the file-system people.

Comment 11 Dexuan Cui 2014-12-05 03:02:58 UTC
The book Understanding the Linux Kernel (ULK), section "12.4.2. Filesystem Mounting" (http://gauss.ececs.uc.edu/Courses/c4022/code/memory/understanding.pdf), says:

"
In most traditional Unix-like kernels, each filesystem can be mounted only once. Suppose that an Ext2
filesystem stored in the /dev/fd0 floppy disk is mounted on /flp by issuing the command:
 mount -t ext2 /dev/fd0 /flp
Until the filesystem is unmounted by issuing a umount command, every other mount command acting on
/dev/fd0 fails.
However, Linux is different: it is possible to mount the same filesystem several times. Of course, if a
filesystem is mounted n times, its root directory can be accessed through n mount points, one per mount
operation. Although the same filesystem can be accessed by using different mount points, it is really unique.
Thus, there is only one superblock object for all of them, no matter of how many times it has been mounted.
"
So it looks like it is not R/O only. Maybe the single shared superblock is what makes "multiple r/w mounting" possible.

Comment 12 Stefan Hajnoczi 2015-01-08 16:38:58 UTC
(In reply to Dexuan Cui from comment #11)
> The book ULK's "12.4.2. Filesystem Mounting"
> (http://gauss.ececs.uc.edu/Courses/c4022/code/memory/understanding.pdf) says:
> 
> "
> In most traditional Unix-like kernels, each filesystem can be mounted only
> once. Suppose that an Ext2
> filesystem stored in the /dev/fd0 floppy disk is mounted on /flp by issuing
> the command:
>  mount -t ext2 /dev/fd0 /flp
> Until the filesystem is unmounted by issuing a umount command, every other
> mount command acting on
> /dev/fd0 fails.
> However, Linux is different: it is possible to mount the same filesystem
> several times. Of course, if a
> filesystem is mounted n times, its root directory can be accessed through n
> mount points, one per mount
> operation. Although the same filesystem can be accessed by using different
> mount points, it is really unique.
> Thus, there is only one superblock object for all of them, no matter of how
> many times it has been mounted.
> "
> So it looks not R/O only. Maybe the only superblock makes "multiple r/w
> mounting"  possible.

There are a couple of different points of discussion.  We need to separate them:

1. The test case proposed in this bugzilla is wrong.  XFS and ext4 both do not support mounting the file system at the same time using mount /dev/sdb1 /a; mount /dev/sdb1 /b.  This will fail or lead to file system corruption - don't do it.

2. Dexuan mentioned btrfs subvolumes and I mentioned bind mounts.  In these cases there will really be multiple lines in /proc/mounts for the same file system.  For example:
  # mount --bind /home /mnt
  # grep home /proc/mounts
/dev/mapper/lv_home /home ext4 rw,relatime,data=ordered 0 0
/dev/mapper/lv_home /mnt ext4 rw,relatime,data=ordered 0 0

Note that there is only one superblock here, so there is no risk of file system corruption.

This is the case that needs to be tested.
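One quick way to confirm the single-superblock point is to compare the file system IDs that statfs(2) returns for the two mount points. A small C sketch follows; the /home and /mnt paths assume the bind-mount example above:

/* fsid_check.c - matching f_fsid values indicate one superblock.
 * Illustration only; /home and /mnt follow the example above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/vfs.h>

int main(void)
{
    struct statfs a, b;

    if (statfs("/home", &a) || statfs("/mnt", &b)) {
        perror("statfs");
        return 1;
    }
    printf("same superblock: %s\n",
           memcmp(&a.f_fsid, &b.f_fsid, sizeof(a.f_fsid)) == 0 ?
           "yes" : "no");
    return 0;
}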


Summary: We should backport the Hyper-V VSS tool fix and the test plan should be updated to use a bind mount.

Comment 13 lijing 2015-01-13 10:40:59 UTC
I tried the approach above; the backup fails. However, the backup runs successfully with the patches from upstream, so we in QE also think the patches should be backported.

Jan 13 18:12:42 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /boot: Success
Jan 13 18:12:42 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /: Success
Jan 13 18:11:48 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: thaw of /boot: Success
Jan 13 18:11:48 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: thaw of /: Success
Jan 13 18:18:59 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /boot: Success
Jan 13 18:18:59 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /mnt/sdb1: Success
Jan 13 18:18:59 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /mnt/sdb1_bak: Device or resource busy
Jan 13 18:18:59 rhel7 hypervvssd[748]: Hyper-V VSS: VSS: freeze of /: Device or resource busy

Comment 20 ldu 2015-09-14 09:00:16 UTC
Verified the bug using the method from comment 12.

Host version: Hyper-V Server 2012 R2

Guest version: kernel-3.10.0-313.el7.x86_64

Verification steps:
1. Start the RHEL 7.2 guest and make sure the hypervvssd service is running.
2. mount --bind /home /mnt
3. Run the backup on the host side.

Actual result:
The backup runs successfully.

Test result: Verified.

Comment 22 errata-xmlrpc 2015-11-19 08:22:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2234.html

