Cause: Incorrect logic in mount.gfs2 when trying to find mtab entries to delete; the exact device/mount paths were not being compared.
Consequence: During remounts, the original entry is not found and hence not deleted. This results in double mtab entries.
Fix: The patch uses realpath() on the device/mount paths so that they match what is in mtab.
Result: The correct original mtab entry is deleted during a remount and a replacement entry with the new mount options is inserted in its place.
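The following is a minimal sketch of the kind of realpath()-based comparison the note describes; it is not the gfs2-utils patch itself, and the helper names (same_path, mtab_has_entry) are invented here for illustration. The idea is to canonicalize both the requested device/mount-point paths and the paths stored in mtab before comparing them, so that symlinked device names (e.g. /dev/vg/lv vs /dev/mapper/vg-lv) and trailing slashes no longer prevent a match.

#include <limits.h>
#include <mntent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if the two paths refer to the same object once canonicalized. */
static int same_path(const char *a, const char *b)
{
        char ra[PATH_MAX], rb[PATH_MAX];

        if (!realpath(a, ra) || !realpath(b, rb))
                return strcmp(a, b) == 0;   /* fall back to a literal compare */
        return strcmp(ra, rb) == 0;
}

/* Return 1 if mtab already has an entry for this mount point, tolerant of
 * path aliases and trailing slashes (e.g. /mnt/gfs2/ vs /mnt/gfs2). */
static int mtab_has_entry(const char *mountpoint)
{
        FILE *fp = setmntent("/etc/mtab", "r");
        struct mntent *ent;
        int found = 0;

        if (!fp)
                return 0;
        while ((ent = getmntent(fp)) != NULL) {
                if (same_path(ent->mnt_dir, mountpoint)) {
                        found = 1;
                        break;
                }
        }
        endmntent(fp);
        return found;
}

The same canonicalized comparison can be applied to the device field (mnt_fsname) when deciding which stale entry to drop during a remount.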
Description of problem:
----------------------
With two gfs2 mounts defined in /etc/fstab, after a system boot the 'df' command reflects two of each of these mounts.
Component Version-Release:
-------------------------
kernel 2.6.32-125.el6.x86_64
gfs2-utils-3.0.12.35.el6 x86_64
How reproducible:
----------------
consistent
Steps to Reproduce:
------------------
1. define a gfs2 mount in /etc/fstab
2. start the gfs2 service
Actual results:
--------------
df reports double mounts ...
# df
Filesystem 1K-blocks Used Available Use% Mounted on
...
/dev/mapper/vg_vol1-lv_oracleHome
83877056 5483600 78393456 7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
83877056 132436 83744620 1% /RHTSspareLUN1
/dev/mapper/vg_vol1-lv_oracleHome
83877056 5483600 78393456 7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
83877056 132436 83744620 1% /RHTSspareLUN1
Expected results:
----------------
df reports a single entry for each gfs2 mount ...
# df
Filesystem 1K-blocks Used Available Use% Mounted on
...
/dev/mapper/vg_vol1-lv_oracleHome
83877056 5483600 78393456 7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
83877056 132436 83744620 1% /RHTSspareLUN1
Workaround:
----------
Unmount the file systems by hand twice. If they are unmounted only once, a remount attempt fails, reporting that the mounts are already present even though df does not show them. The second umount attempt fails, after which the mount succeeds and df reflects a single instance of each gfs2 mount.
Additional info:
---------------
While /proc/mounts shows only single entries of these mounts, /etc/mtab lists the double entries ... one set of which includes the nobarrier option which was NOT specified as an option in the fstab file.
# grep gfs2 /etc/fstab
/dev/mapper/vg_vol1-lv_oracleHome /ora/db gfs2 defaults,noatime,nodiratime 0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1 gfs2 defaults,noatime,nodiratime 0 0
# grep gfs2 /proc/mounts
/dev/dm-23 /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1,nobarrier 0 0
/dev/dm-20 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching,nobarrier 0 0
# grep gfs2 /etc/mtab
/dev/mapper/vg_vol1-lv_oracleHome /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1 0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching 0 0
/dev/mapper/vg_vol1-lv_oracleHome /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1,nobarrier 0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching,nobarrier 0 0
tuned-adm active profile: enterprise-storage
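As an illustrative aside (not part of the original report), the discrepancy above can be inspected programmatically with the standard getmntent() interface. The small standalone program below prints the gfs2 entries from both tables; on an affected system /proc/mounts lists each file system once while /etc/mtab lists it twice.

#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Print every gfs2 entry found in the given mount table. */
static void dump_gfs2(const char *table)
{
        FILE *fp = setmntent(table, "r");
        struct mntent *ent;

        if (!fp) {
                perror(table);
                return;
        }
        printf("%s:\n", table);
        while ((ent = getmntent(fp)) != NULL) {
                if (strcmp(ent->mnt_type, "gfs2") == 0)
                        printf("  %s on %s (%s)\n",
                               ent->mnt_fsname, ent->mnt_dir, ent->mnt_opts);
        }
        endmntent(fp);
}

int main(void)
{
        dump_gfs2("/proc/mounts");
        dump_gfs2("/etc/mtab");
        return 0;
}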
Comment 2 - RHEL Program Management - 2011-05-26 16:29:37 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
This was discovered on a cluster where each node had mutually exclusive gfs2 file systems in its fstab file. At boot, tuned was also installed and configured with the enterprise-storage profile.
The 'mount -o remount' is the culprit: /proc/mounts shows only a single entry, but the mtab file shows both the original mount and the tuned remount.
Barry
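For illustration, here is a hypothetical sketch of the mtab update a remount is expected to perform (the function name rewrite_mtab_for_remount and the temporary path are invented; this is not the gfs2-utils source): copy every entry except the one for the remounted mount point into a new table, append a single entry carrying the new options, and replace /etc/mtab. If the stale entry is never matched, as in this bug, the operation degenerates into a plain append and mtab grows a duplicate line.

#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Replace the mtab entry for 'mountpoint' with 'updated' (locking omitted
 * for brevity). */
static int rewrite_mtab_for_remount(const char *mountpoint,
                                    const struct mntent *updated)
{
        FILE *in = setmntent("/etc/mtab", "r");
        FILE *out = setmntent("/etc/mtab.tmp", "w");
        struct mntent *ent;

        if (!in || !out)
                return -1;
        while ((ent = getmntent(in)) != NULL) {
                /* Drop the stale entry for the remounted mount point; a robust
                 * version would canonicalize both paths first, as in the
                 * realpath() sketch above. */
                if (strcmp(ent->mnt_dir, mountpoint) == 0)
                        continue;
                addmntent(out, ent);
        }
        addmntent(out, updated);        /* entry with the new mount options */
        endmntent(in);
        endmntent(out);
        return rename("/etc/mtab.tmp", "/etc/mtab");
}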
The patch in comment #6 fixes the issue.
Without the patch I see the following:
[root@smoke-02 ~]# mount -t gfs2 /dev/smoke_vg/100g /mnt/gfs2/
[root@smoke-02 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0
[root@smoke-02 ~]# mount -o remount,noatime /mnt/gfs2
file system mounted on /mnt/gfs2 not found in mtab
[root@smoke-02 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,noatime,hostdata=jid=0 0 0
I unmounted twice to get rid of the mtab entries and tried on another node in the cluster that has the patch applied to mount.gfs2:
[root@smoke-01 ~]# mount -t gfs2 /dev/smoke_vg/100g /mnt/gfs2/
[root@smoke-01 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0
[root@smoke-01 ~]# mount -o remount,noatime /mnt/gfs2
[root@smoke-01 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,noatime,hostdata=jid=0 0 0
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
Cause: Incorrect logic in mount.gfs2 when trying to find mtab entries to delete; the exact device/mount paths were not being compared.
Consequence: During remounts, the original entry is not found and hence not deleted. This results in double mtab entries.
Fix: The patch uses realpath() on the device/mount paths so that they match what is in mtab.
Result: The correct original mtab entry is deleted during a remount and a replacement entry with the new mount options is inserted in its place.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2011-1516.html