Bug 237544 - mount.gfs2 doesn't play well with local fs on loopback devices
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs2-utils
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: David Teigland
QA Contact: Dean Jansa
Depends On: 237538
Reported: 2007-04-23 14:28 EDT by David Teigland
Modified: 2010-01-11 22:38 EST
CC List: 1 user

Fixed In Version: RHBA-2007-0579
Doc Type: Bug Fix
Last Closed: 2007-11-07 13:04:51 EST

Attachments: None

Description David Teigland 2007-04-23 14:28:22 EDT
+++ This bug was initially created as a clone of Bug #237538 +++

make a local gfs2 fs and try to mount it via loopback:

[root@neon tmp]# dd if=/dev/zero of=fsfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.239597 seconds, 438 MB/s
[root@neon tmp]# mkfs.gfs2 -p lock_nolock -j 1 fsfile 
This will destroy any data on fsfile.

Are you sure you want to proceed? [y/n] y

Device:                    fsfile
Blocksize:                 4096
Device Size                0.10 GB (25600 blocks)
Filesystem Size:           0.10 GB (25599 blocks)
Journals:                  1
Resource Groups:           1
Locking Protocol:          "lock_nolock"
Lock Table:                ""

[root@neon tmp]# dmesg -c > /dev/null
[root@neon tmp]# mount -o loop fsfile mnt/
/sbin/mount.gfs2: can't find /proc/mounts entry for directory mnt
[root@neon tmp]# dmesg
GFS2: fsid=: Trying to join cluster "lock_nolock", "loop0"
GFS2: fsid=loop0.0: Joined cluster. Now mounting FS...
GFS2: fsid=loop0.0: jid=0, already locked for use
GFS2: fsid=loop0.0: jid=0: Looking at journal...
GFS2: fsid=loop0.0: jid=0: Done

loop0 is still set up though:

[root@neon tmp]# losetup /dev/loop0
/dev/loop0: [0802]:33847258 (fsfile)
[root@neon tmp]# losetup /dev/loop1
loop: can't get info on device /dev/loop1: No such device or address

now bypass mount.gfs2:

[root@neon tmp]# mount -i -o loop fsfile mnt/

mounts and sets up another loopback device:

[root@neon tmp]# losetup /dev/loop1
/dev/loop1: [0802]:33847258 (fsfile)

umount fails too:

[root@neon tmp]# umount mnt/
/sbin/umount.gfs2: file system mounted on /tmp/mnt not found in mtab

works if you bypass umount.gfs2:

[root@neon tmp]# umount -i mnt/

original failed mount never cleaned up loop0:

[root@neon tmp]# losetup /dev/loop0
/dev/loop0: [0802]:33847258 (fsfile)
[root@neon tmp]# losetup /dev/loop1
loop: can't get info on device /dev/loop1: No such device or address

this was all on a reasonably up-to-date FC6 box

-- Additional comment from esandeen@redhat.com on 2007-04-23 14:10 EST --
mount -v output at dct's request:

[root@neon tmp]# mount -v -o loop fsfile mnt/
mount: going to use the loop device /dev/loop0
mount: you didn't specify a filesystem type for /dev/loop0
       I will try type gfs2
/sbin/mount.gfs2: mount /dev/loop0 mnt
/sbin/mount.gfs2: parse_opts: opts = "rw"
/sbin/mount.gfs2:   clear flag 1 for "rw", flags = 0
/sbin/mount.gfs2: parse_opts: flags = 0
/sbin/mount.gfs2: parse_opts: extra = ""
/sbin/mount.gfs2: parse_opts: hostdata = ""
/sbin/mount.gfs2: parse_opts: lockproto = ""
/sbin/mount.gfs2: parse_opts: locktable = ""
/sbin/mount.gfs2: mount(2) ok
/sbin/mount.gfs2: can't find /proc/mounts entry for directory mnt

-- Additional comment from teigland@redhat.com on 2007-04-23 14:23 EST --
This is caused by the missing leading "/" before "mnt":
/proc/mounts always shows the leading "/" in the mount point,
so mount.gfs2 fails to match "mnt" against "/mnt".
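
For illustration only (this is not the mount.gfs2 source), a minimal C
sketch of why a literal string comparison against /proc/mounts misses the
entry when the caller passes a relative directory such as "mnt/":

/* Illustrative sketch: a naive string compare of the user-supplied
 * directory against /proc/mounts entries never matches when the user
 * passed a relative path such as "mnt", because the kernel always
 * records the absolute mount point (e.g. "/tmp/mnt"). */
#include <mntent.h>
#include <stdio.h>
#include <string.h>

static int find_mount(const char *dir)
{
    FILE *fp = setmntent("/proc/mounts", "r");
    struct mntent *me;
    int found = 0;

    if (!fp)
        return 0;

    while ((me = getmntent(fp)) != NULL) {
        /* "mnt" never compares equal to "/tmp/mnt", so the entry is missed */
        if (strcmp(me->mnt_dir, dir) == 0) {
            found = 1;
            break;
        }
    }
    endmntent(fp);
    return found;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <dir>\n", argv[0]);
        return 1;
    }
    printf("%s: %s in /proc/mounts\n", argv[1],
           find_mount(argv[1]) ? "found" : "not found");
    return 0;
}

Run from /tmp with the filesystem mounted as above, passing "mnt" reports
"not found" while passing "/tmp/mnt" reports "found".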
Comment 1 David Teigland 2007-04-23 15:20:01 EDT
Use realpath(3) to canonicalize path names for device and mount point.

Checking in mount.gfs2.c;
/cvs/cluster/cluster/gfs2/mount/mount.gfs2.c,v  <--  mount.gfs2.c
new revision:; previous revision: 1.20
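
A hedged sketch of that canonicalization step, assuming nothing about the
committed patch beyond its use of realpath(3); the helper name and the
fallback behaviour here are my own:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a malloc'd canonical path; fall back to a copy of the original
 * string if realpath() fails (e.g. the path does not exist). */
static char *canonicalize(const char *path)
{
    char resolved[PATH_MAX];

    if (realpath(path, resolved))
        return strdup(resolved);
    return strdup(path);
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <device> <dir>\n", argv[0]);
        return 1;
    }

    char *dev = canonicalize(argv[1]);
    char *dir = canonicalize(argv[2]);

    /* these canonical strings are what would then be matched against
     * the fields read back from /proc/mounts */
    printf("device: %s\ndir:    %s\n", dev, dir);

    free(dev);
    free(dir);
    return 0;
}

Run from /tmp, canonicalizing "fsfile" and "mnt/" yields "/tmp/fsfile" and
"/tmp/mnt", which then match the entries in /proc/mounts.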
Comment 3 RHEL Product and Program Management 2007-05-01 12:19:40 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update release.
Comment 6 errata-xmlrpc 2007-11-07 13:04:51 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

